“We don’t need no …” On linguistic inequality

Deviations from so-called standard forms of language (such as the double negative) make you stand out immediately. Try and use double negatives consistently in your university courses or at the next job interview and see how people react. Even if people won’t correct you explicitly, many will do so tacitly. Such features of language function as social markers and evoke pertinent gut reactions. Arguably, this is not only true of grammatical or lexical features, but also of broader stylistic features in writing, speech and even non-linguistic conduct. Some ways of phrasing may sound like heavy boots. Depending on our upbringing, we are familiar with quite different linguistic features. While none of this might be news, it raises crucial questions about teaching that I rarely see addressed. How do we respond to linguistic and stylistic diversity? When we say that certain students “are struggling”, we often mean that they deviate from our stylistic expectations. A common reaction is to impart techniques that help them conform to such expectations. But should we perhaps respond by trying to understand the “deviant” style?

Reading the double negative “We don’t need no …”, you might see quite different things: (1) a grammatically incorrect phrase in English; (2) a grammatically correct phrase in English; (3) part of a famous song by Pink Floyd. Assuming that many of us recognise these things, some will want to hasten to add that (2) contradicts (1). A seemingly obvious way to resolve this is to say that reading (1) applies to what is called the standard dialect of English (British English), while (2) applies to some dialects of English (e.g. African-American Vernacular English). This solution prioritises one standard over other “deviant” forms that are deemed incorrect, informal and so on. It is obvious that this hierarchy goes hand in hand with social tensions. At German schools and universities, for instance, you can find numerous students and lecturers who hide their dialects or accents. In linguistics, the disadvantages of regional dialect speakers have long been acknowledged. Even though the prescriptive approach has long been challenged, it still drives much of the implicit culture in education.

But the distinction between standard and deviant forms of language ignores the fact that the latter often come with long-standing rules of their own. Adjusting to the style of your teacher might then require you to deviate from the language of your parents. Thus another solution is to say that there are different English languages. Accordingly, we can acknowledge reading (2) and call African-American Vernacular English (AAVE) a language. Its precise status and genealogy are a matter of linguistic controversy. However, the social and political repercussions of this solution come most clearly into view when we consider the public debate about teaching what is called “Ebonics” at school in the 90s (Here is a very instructive video about this debate). If we acknowledge reading (2), it means, mutatis mutandis, that many English speakers raised with AAVE can be considered bilingual. Educators realised that teaching standard forms of English can be aided greatly by using AAVE as the language of instruction. Yet, trying to implement this as a policy at school soon resulted in a debate about a “political correctness exemplar gone out of control” and abandoning the “language of Shakespeare”. The bottom line is: Non-hierarchical acknowledgement of different standards quickly spirals into defences of the supposed status quo by the dominant social group.

Supposed standards and deviations readily extend to styles of writing and conduct in academic philosophy. We all have a rough idea what a typical lecture looks like, how a discussion goes and how a paper should be structured. Accordingly, attempts at diversification are met with suspicion. Will they be as good as our standards? Won’t they undermine the clarity we have achieved in our styles of reasoning? A more traditional division is that between so-called analytic and continental philosophy. Given the social gut reactions to diversifying linguistic standards, it might not come as a surprise that we find similar responses among philosophers: Shortly before the University of Cambridge awarded an honorary degree to Derrida in 1992, a group of philosophers published an open letter protesting that “Derrida’s work does not meet accepted standards of clarity and rigour.” (Eric Schliesser has a succinct analysis of the letter.) Rather than acknowledging that there might be various standards emerging from different traditions, the supposedly dominant standard of clarity is often defended like an eternal Platonic idea.

While it is easy to see and criticise this, it is much more difficult to find a way of dealing with it in the messy real world. My historically minded self has had, and still has, the luxury of engaging with a variety of styles without having to pass judgment, at least not explicitly. More importantly, when teaching students I have to strike a balance between acknowledging variety and preparing them for situations in which such acknowledgement won’t be welcome. In other words, I try to teach “the standard”, while trying to show its limits within an array of alternatives. My goal in teaching, then, would not be to drive out “deviant” stylistic features, but to point to various resources required in different contexts. History (of philosophy) clearly helps with that. But the real resources are provided by the students themselves. Ultimately, I would hope not so much to teach them how to write as to help them find their own voices within their various backgrounds and learn to gear them towards different purposes.

But to do so, I have to learn, to some degree, the idioms of my students and try to understand the deep structure of their ways of expression. Not as superior, not as inferior, but as resourceful within contexts yet unknown to me. On the other hand, I cannot but also lay open my own reactions and those of the traditions I am part of. – Returning to the fact that language comes with social markers, perhaps one of the most important aspects of teaching is to convey a variety of means to understand and express oneself through language. Our gut reactions run very deep, and what is perceived as linguistic ‘shortcomings’ will move people, one way or another. But there is a double truth: Although we often cannot but go along with our standards, they will very soon be out of date. New standards and styles will emerge. And we, or I should say “I”, will just sound old-fashioned at best. Memento mori.

You don’t get what you deserve. Part II: diversity versus meritocracy?

“I’m all for diversity. That said, I don’t want to lower the bar.” – If you have been part of a hiring committee, you will probably have heard some version of that phrase. The first sentence expresses a commitment to diversity. The second sentence qualifies it: diversity shouldn’t get in the way of merit. Interestingly, the same phrase can be heard in opposing ways. A staunch defender of meritocracy will find the second sentence (about not lowering the bar) disingenuous. He will argue that, if you’re committed to diversity, you might be disinclined to hire the “best candidate”. By contrast, a defender of diversity will find the first sentence disingenuous. If you’re going in for meritocratic principles, you will just follow your biases and ultimately take the properties of “white” and “male” as a proxy of merit. – This kind of discussion often runs into a stalemate. As I see it, the problem is to treat diversity and meritocracy as an opposition. I will suggest that this kind of discussion can be more fruitful if we see that diversity is not a property of job candidates but of teams, and thus should not be seen in opposition to meritocratic principles.

Let’s begin with a clarification. I assume that it’s false and harmful to believe that we live in a meritocracy. But that doesn’t mean that meritocratic ideas themselves are bad. If meritocracy is simply taken as the idea that people get a job based on their pertinent qualifications, then I am all for meritocratic principles. However, a great problem in applying such principles is that, arguably, the structure of hiring processes makes it difficult to discern qualifications. Why? Because qualifications are often taken to be indicated by other factors such as prestige etc. But prestige, in turn, might be said to correlate with race, gender, class or whatever, rather than with qualifications. At the end of the day, an adherent of diversity can accuse adherents of meritocracy of the same vices that she finds herself accused of. So when merit and diversity are taken as being in opposition, we tend to end up in the following tangle:

  • Adherents of diversity think that meritocracy is ultimately non-meritocratic, racist, sexist, classist etc.
  • Adherents of meritocracy think that diversity is non-meritocratic, racist, sexist, classist etc.*

What can we do in such a stalemate? How can the discussion be decided? Something that typically gets pointed out is homogeneity. The adherent of diversity will point to the homogeneity of people. Most departments in my profession, for instance, are populated with white men. The homogeneity points to a lack of diversity. Whether this correlates to a homogeneity of merit is certainly questionable. Therefore, the next step in the discussion is typically an epistemological one: How can we know whether the candidates are qualified? More importantly, can we discern quality independently from features such as race, gender or class? – In this situation, adherents of diversity typically refer to studies that reveal implicit biases. Identical CVs, for instance, have been shown to be treated more or less favourably depending on the features of the name on the CV. Meritocratists, by contrast, will typically insist that they can discern quality objectively or correct for biases. Again, both sides seem to have a point. We might be subject to biases, but if we don’t leave decisions to individuals but to, say, committees, then we can perhaps correct for biases. At least if these committees are sufficiently diverse, one might add. – However, I think the stalemate will just get passed on to ever different levels, as long as we treat merit and diversity as an opposition. So how can we move forward?

We try to correct for biases, for instance, by making a committee diverse. While this is a helpful step, it also reveals a crucial feature about diversity that is typically ignored in such discussions. Diversity is a feature of a team or group, not of an individual. The merit or qualification of a candidate is something pertaining to that candidate. If we look for a Latinist, for instance, knowledge of Latin will be a meritorious qualification. Diversity, by contrast, is not a feature to be found in the candidate. Rather, it is a feature of the group that the candidate will be part of. Adding a woman to an all-male team will make the team more diverse, but that is not a feature of the candidate. Therefore, accusing adherents of diversity of sexism or racism is fallacious. Trying to build a more diverse team rather than favouring one category strikes me as a means to counter such phenomena.

Now if we accept that there is such a thing as qualification (or merit), it makes sense to say that in choosing a candidate for a job we will take qualifications into account as a necessary condition. But one rarely merely hires a candidate; one builds a team, and thus further considerations apply. One might end up with a number of highly qualified candidates. But then one has to consider other questions, such as the team one is trying to build. And then it seems apt to consider the composition of the team. But that does not mean that merit and diversity are opposed to one another.

Nevertheless, prioritising considerations about the team over considerations about the candidates is often met with suspicion. “She only got the job because …” Such an allegation is indeed sexist, because it construes a diversity consideration applicable to a team as the reason for hiring, as if it were the qualification of an individual. But no matter how suspicious one is, qualification and diversity are not on a par, nor can they be opposing features.

Compare: A singer might complain that the choir hired a soprano rather than him, a tenor. But the choir wasn’t merely looking for a singer but for a soprano. Now that doesn’t make the soprano a better singer than the tenor, nor does it make the tenor better than the soprano. Hiring a soprano is relevant to the quality of the group; it doesn’t reflect the quality of the individual.

____

* However, making such a claim, an adherent of meritocracy will probably rely on the assumption that there is such a thing as “inverted racism or sexism”. In the light of our historical situation, this strikes me as very difficult to argue, at least with regard to institutional structures. It seems like saying that certain doctrines and practices of the Catholic Church are not sexist, simply because there are movements aiming at reform.

Fit. A Note on Aristotle’s Presence in Academia

With the so-called Scientific Revolution and the birth of modern science, our Western approach towards the world became quantitative. The previously dominant qualitative Aristotelian worldview of the Scholastics was replaced by a quantitative one: everything around us was supposed to be quantifiable and quantified. This, of course, seems to affect today’s academia, too. We often hear “do this, it will be one more line in your CV!”

Many will reply “This is not true, quality matters just as much!” Yes, it (sometimes) matters in which journal one publishes; it has to be a good journal; one needs to make sure that the quality of the article is good. And how do we know whether the journal is good or not? Because of its ranking. So if you thought I would argue that this is Aristotle’s presence in Academia… you were wrong. The criterion is still quantitative. Of course, we are more inclined to trust that an article in a respectable (i.e., highly ranked) journal is a good one, but we all know this is not always the case.

Bringing the distinction between the qualitative and the quantitative into the discussion is crucial for assessing job applications and the ensuing hiring process. While it used to be easier for those in a position of power to hire whomever they wanted, it has become a bit more difficult. Imagine you really want to hire someone because he (I will use this pronoun for certain reasons) is brilliant. But his brilliance is not reflected in his publications, presentations, teaching evaluations, grants (the latter because he did not get any)… You cannot even say he is a promising scholar, since that should be visible in something. At the same time, there are a lot of competing applicants with impressive records. So what can one do? Make use of the category ‘better fit’, ‘better fit’ for the position, ‘better fit’ for the department.[1] But when is someone a ‘better fit’, given that the job description did not mention anything to this effect? When their research is in line with that of the department? No, too much overlap! When it complements the existing areas of research? No, way too different!

And here is where Aristotle comes into the picture. It is not the research that has to fit, but the person. And we know from Aristotle and his followers that gender, race and nationality are the result of the (four elemental) qualities. Who can be more fit for a department mostly composed of men from Western Europe than another man from Western Europe? As a woman coming from Eastern Europe, I have no chance. And Eastern Europe is not even the worst place to come from in this respect. 

There is a caveat though. When several people who fit the department apply, the committee seeks refuge in positing some ‘occult qualities’ to choose the ‘right’ person. ‘Occult’ in the Scholastic sense means that the quality is not manifest in any way in the person’s profile.[2]

How much is this different from days when positions were just given away on the basis of personal preference? The difference lies in the charade.[3] The difference is that nowadays a bunch of other people, devoid of occult qualities, though with an impressive array of qualities manifest in their CVs and international recognition, spend time and energy to prepare an application, get frustrated, maybe even get sick, just so that the person with the ‘better fit’ can have the impression that he is much better than all the rest who applied.

So when are we going to give up the Aristotelian-Scholastic elementary and occult qualities and opt for a different set of more inclusive qualities?


[1] Aristotle probably put it in his Categories, but it got lost.

[2] I am being rather unfair to this term, because the occult qualities did make themselves present through certain effects.

[3] The Oxford dictionary indeed defines charade as “an absurd pretence intended to create a pleasant or respectable appearance.”

On being a first-generation student

First off: the following is not to be taken as a tale of woe. I am grateful for whatever life has had on offer for me so far, and I am indebted to my teachers – from primary school to university and beyond – in many ways. But I felt that, given that Martin invited me to do so, I should probably provide some context to my comment on his recent post on meritocracy, in which I claimed that my being a first-generation student has had a “profound influence on how I conceive of academia”. So here goes.

I am a first-generation student from a lower-middle-class family. My grandparents on the maternal side owned and operated a small farm, my grandfather on the paternal side worked in a foundry, and his wife – my father’s mother – did off-the-books work as a cleaning woman in order to make ends meet.

When I got my first job as a lecturer in philosophy, my monthly income already exceeded that of my mother, who has worked a full-time job in a hospital for more than thirty years. My father, a bricklayer by training, is by now divorced from my mother and declutters homes for a living. Sometimes he calls me in order to tell me about a particularly good bargain he struck at the flea market.

My parents did not save money for my education. As an undergraduate I was lucky to receive close to the maximum amount of financial assistance afforded by the German Federal Law on Support in Education (BAföG) – still, I had to work in order to be able to fully support myself (tuition fees, which had just been introduced when I began my studies, did not help). At the worst time, I juggled three jobs on the side. I have work experience as a call center agent (bad), cleaning woman (not as bad), fitness club receptionist (strange), private tutor (okay), and teaching assistant (by far the nicest experience).

Not every middle-class family is the same, of course. Nor is every family in which both parents are non-academics. Here is one way in which the latter differ: There are those parents who encourage – or, sometimes, push – their children to do better than themselves, who emphasize the value of higher education, who make sure their children acquire certain skills that are tied to a particular habitus (like playing the piano), who provide age-appropriate books and art experiences. My parents were not like that. “Doing well”, especially for my father, meant having a secure and “down-to-earth” job, ideally for a lifetime. For a boy, this would have been a craft. Girls, ostensibly being less well-suited for handiwork, should strive for a desk job – or aim “to be provided for”. My father had strong reservations about my going to grammar school, even though I did well in primary school and despite my teacher’s unambiguous recommendation. I think it never occurred to him that I could want to attend university – academia was a world too far removed from his own to even consider that possibility.

I think that my upbringing has shaped – and shapes – my experience of academia in many ways. Some of these I consider good, others I have considered stifling at times. And some might even be loosely related to Martin’s blogpost about meritocracy. Let me mention a few points (much of what follows is not news, and has been put more eloquently by others):

  • Estrangement. An awareness of the ways in which the experiences of my childhood and youth, my interests and preferences, my habits and skills differ from what I consider a prototypical academic philosopher – and I concede that my picture of said prototype might be somewhat exaggerated – has often made me feel “not quite at home” in academia. At the same time, my “professional advancement” has been accompanied by a growing estrangement from my family. This is something that, to my knowledge, many first-generation students testify to, and which can be painful at times. My day-to-day life does not have much in common with my parents’ life, my struggles (Will this or that paper ever get published?) must seem alien, if not ridiculous, to them. They have no clear idea of what it is that I do, other than that it consists of a lot of desk-sitting, reading, and typing. And I think it is hard for them to understand why anyone would even want to do something like this. One thing I am pretty sure of is that academia is, indeed, or in one sense at least, a comparatively cozy bubble. And while I deem it admirable to think of ways of how to engage more with the public, I am often unsure about how much of what we actually do can be made intelligible to “the folk”, or justified in the face of crushing real-world problems.
  • Empathy. One reason why I am grateful for my experiences is that they help me empathize with my students, especially those who seem to be afflicted by some kind of hardship – or so I think. I believe that I am a reasonably good and well-liked teacher, and I think that part of what makes my teaching good is precisely this: empathy. Also, I believe that my experiences are responsible for a heightened sensibility to mechanisms of inclusion and exclusion, and privilege. I know that – being white, having grown up in a relatively secure small town, being blessed with resilience and a certain kind of stubbornness, and so on – I am still very well-off. And I do not want to pretend that I know what it is like to come from real poverty, or how it feels to be a victim of racism or constant harassment. But I hope that I am reasonably open to others’ stories about these kinds of things.
  • Authority. In my family of origin, the prevailing attitude towards intellectuals was a strange mixture of contempt and reverence. Both sentiments were probably due to a sense of disparity: intellectuals seemed to belong to a kind of people quite different from ourselves. This attitude has, I believe, shaped how I perceived my teachers when I was a philosophy student. I noticed that our lecturers invited us – me – to engage with them “on equal terms”, but I could not bring myself to do so. I had a clear sense of hierarchy; to me, my teachers were authorities. I did eventually manage to speak up in class, but I often felt at a loss for words outside of the classroom setting with its relatively fixed and easily discernable rules. I also struggled with finding my voice in class papers, with taking up and defending a certain position. I realize that this struggle is probably not unique to first-generation students, or to students from lower-class or lower-middle-class backgrounds, or to students whose parents are immigrants, et cetera – but I believe that the struggle is often aggravated by backgrounds like these. As teachers, I think, we should pay close attention to the different needs our students might have regarding how we engage with them. It should go without saying, but if someone seems shy or reserved, don’t try to push them into a friendly and casual conversation about the model of femininity and its relation to sexuality in the novel you recently read.
  • Merit. Now, how does all this relate to the idea of meritocracy? I think there is a lot to say about meritocracy, much more than can possibly be covered in a (somewhat rambling) blogpost. But let me try to point out at least one aspect. Martin loosely characterizes the belief in meritocracy as the belief that “if you’re good or trying hard enough, you’ll get where you want”. But what does “being good enough” or “trying hard enough” amount to in the first place? Are two students who write equally good term papers working equally hard? What if one of them has two children to care for while the other one still lives with and is supported by her parents? What if one struggles with depression while the other does not? What if one comes equipped with “cultural capital” and a sense of entitlement, while the other feels undeserving and stupid? I am not sure about how to answer these questions. But one thing that has always bothered me is talk of students being “smart” or “not so smart”. Much has been written about this already. And yet, some people still talk that way. Many of the students I teach struggle with writing scientific prose, many of them struggle with understanding the assigned readings, many of them struggle with the task of “making up their own minds” or “finding their voice”. And while I agree that those who do not struggle, or who do not struggle as much, should, of course, be encouraged and supported – I sometimes think that the struggling students might be the ones who benefit the most from our teaching philosophy, and for whom our dedication and encouragement might really make a much-needed difference. It certainly did so for me.

You don’t get what you deserve. Part I: Meritocracy in education vs employment relations

The belief that we live in a meritocracy is the idea that people get what they deserve. At school you don’t get a good grade because of your skin colour or because you have a nice smile but because you demonstrate the required skills. When I was young, the idea helped me to gain a lot of confidence. Being what is now called a first-generation student, I thought I owed my opportunity to study to a meritocratic society. I had this wonderfully confident belief that, if you’re good or trying hard enough, you’ll get where you want. Today, I think that there is so much wrong with this idea that I don’t really know where to start. Meritocratic beliefs are mostly false and harmful. In the light of our sociological knowledge, still believing that people get what they deserve strikes me as on a par with being a flat earther or a climate change denialist. At the same time, beliefs in meritocratic principles are enormously widespread and deep-rooted, even among those who should and do know better. In what follows, I attempt nothing more than a beginning: a look at this pernicious idea and why it has so much currency.

Perhaps one of the greatest problems of meritocratic ideas is that they create a normative link between possibly unrelated things: There is no intrinsic relation between displaying certain qualities, on the one hand, and getting a job, on the other hand. Of course, they might be related; in fact, displaying certain qualities might be one of the necessary conditions for getting the job. But the justification structure suggested by meritocratic beliefs clearly obscures countless other factors, such as being in the right place at the right time etc. Here are two variants of how this plays out:

  • “I’m not good enough.” – This is a common conclusion drawn by most people. That is, by those who don’t get the job or grant or promotion they have applied for. If there is one job and a hundred applicants, you can guess that a large number of people will think they were not good enough. Of course, that’s nonsense for many reasons. But if the belief is that people get what they deserve, then those not getting anything might conclude that they are undeserving. A recent piece by a lecturer leaving academia, for instance, contends that part of the problem is that one always has to show that one is “better than the rest”, insinuating that people showing just that might get the job in the end. But apart from the fact that the imbalance between available jobs and applicants pushes such demands to absurd heights, the question arises whether any employer could be sufficiently good to be able to recognise the enormously refined qualities of the applicants.
  • “My qualities are not recognised.” –  The more confident applicants among us might thus draw quite another conclusion, namely that they are good enough, but that their qualities are simply not seen. The counterfactual behind this reasoning seems to be the following: Had my prospective employer seen how good I am, she would have hired me. As I see it, both kinds of reasoning are fallacious in that they construe the relation between performance and getting the job / grant etc. too tightly. Of course, most people know that. But this knowledge does not prevent one from going along with the fallacious reasoning. Why is that? Well, my hunch is that meritocratic beliefs are deeply ingrained in our educational system and spill over to other contexts, such as employment relations. Let me explain.

Most education systems hold a simple promise: If you work hard enough, you’ll get a good grade. While this is a problematic belief in itself, it is a feasible idea in principle. The real problem begins with the transition from education to employment relations in academia. If you have a well-performing course, you can give all of your thirty students a high grade. But you can’t give thirty applicants for the same position the job you’ve advertised, even if all the applicants are equally brilliant. Now the problem in higher education is that the transition from educational rewards to employment rewards is often rather subtle. Accordingly, someone not getting a job might draw the same conclusion as someone not getting a good grade.

It is here that we are prone to fallacious reasoning and it is here that academic employers, especially, need to behave more responsibly: Telling people that “the best candidate” will get the job might too easily come across as telling your first-year students that the best people will get a top grade. But the job market is a zero sum game, while studying is not. (It might be that there is more than just one best candidate or it might be impossible for the employer to determine who the best candidate is.) So a competition among students is of a completely different kind than a competition between job candidates. But this fact is often obscured. An obvious indicator of this is that for PhD candidates it is often unclear whether they are employees or students. Yet, it strikes me as a category mistake to speak about (not) “deserving” a job in the same way as about deserving a certain grade or diploma. So while, at least in an ideal world, a bad grade is a reflection of the work you’ve done, not getting a job is not a reflection of the work you’ve done. There is no intrinsic relation between the latter two things. Now that doesn’t mean that (the prospect of doing) good work is not a condition for getting a job, it just means that there is no relation of being deserving or undeserving.

Or to put the same point somewhat differently, while not every performance deserves a good grade, everyone deserves a job.

Two kinds of philosophy? A response to the “ex philosopher”

Arguably, there are at least two different kinds of philosophy: The first kind is what one might call a spiritual practice, building on exercises or forms of artistic expression and aiming at understanding oneself and others. The second kind is what one might call a theoretical endeavour, building on concepts and arguments and aiming at explaining the world. The first kind is often associated with traditions of mysticism, meditation and therapy; the second is related to theory-building, the formation of schools (scholasticism) and disciplines in the sciences (and humanities). If you open any of the so-called classics, you’ll find representations of both forms. Descartes’ Meditations offer you meditative exercises that you can try at home alongside a battery of arguments engaging with rival theories. Wittgenstein’s Tractatus closes with the mystical and the advice to shut up about the things that matter most after opening with an account of how language relates to the world. However, while both kinds are present in many philosophical works, only the second kind gets recognition in professional academic philosophy. In what follows, I’d like to suggest that this lopsided focus might undermine our discipline.

Although I think that these kinds of philosophy are ultimately intertwined, I’d like to begin by trying to make the difference more palpable. Let’s start with a contentious claim: I think that most people are drawn into philosophy by the first kind, that is, by the desire to understand themselves, while academic philosophy trains people in the second kind, that is, in handling respectable theories. People enter philosophy with a first-person perspective and leave or become academics through mastering the third-person perspective. By the way, this is why most first-year students embrace subjectivism of all kinds and lecturers regularly profess to be “puzzled” by this. Such situations thrive on misunderstandings: for the most part, students don’t mean to endorse subjectivism as a theory; they simply and rightly think that perspective matters.* Now, this is perhaps all very obvious. But I do think that this transition from the one kind to the other kind could be made more transparent. The problem I see is not the transition itself, but the dismissal of the first kind of philosophy. As I noted earlier, the two kinds of philosophy require one another. We shouldn’t rip the Tractatus apart to exclude either the mysticism or the theory. Whether you are engaging in the first or second kind is more a matter of emphasis. However, interests in gatekeeping and unfounded convictions about what is and what isn’t philosophy often entail practices of exclusion, often with pernicious effects.

Such sentiments were stirred when I read the confessions of an ex philosopher that are currently making the rounds on social media. The piece struck many chords, quite different ones. I thought it was courageous and truthful as well as heart-breaking and enraging. Some have noted that the piece is perhaps more the complacent rant of someone who was never interested in philosophy and fellow philosophers to begin with. Others saw its value in highlighting what might be called a “phenomenology of failure” (as Dirk Koppelberg put it). These takes are not mutually exclusive. It’s not clear to me whether the author had the distinction between the two kinds of philosophy in mind, but it surely does invoke something along these lines:

“Philosophy has always been a very personal affair. Well, not always. When it stopped being a personal affair, it also stopped being enjoyable. It became a performance.

… Somewhat paradoxically, academia made me dumber, by ripening an intellectual passion I loved to engage with into a rotten performance act I had to dread, and that I hurried to wash out of my mind (impossible ambition) when clocking out. Until the clocking out became the norm. Now I honestly do not have insightful opinions about anything — not rarefied philosophical problems nor products nor popular culture nor current events.”

What the author describes is not merely the transition from one approach to another; it is transition plus denial. It’s the result of the professional academic telling off the first-year student for being overly enthusiastic in their commitment to “subjectivism”. While we can sometimes observe this happening in the lecture hall, most of this denial happens within the same person, the supposed adult telling off themselves, that is, the playful child within. No doubt, sometimes such a transition is necessary and called for. But the denial can easily kill the initial motivation. – That said, the author also writes that he has “never enjoyed doing philosophy.” It is at this point (and other similar ones) that I am torn between different readings, but according to the reading I am now proposing, the “philosophy” he is talking about is a widespread type of academic philosophy.** What he is talking about, then, is that he never had an interest in a kind of philosophy that would deny the initial enthusiasm and turn it into a mere performance.

Now you might say that this is just the course of a (professionalised) life. But I doubt that we should go along with this dismissal too readily. Let me highlight two problems, unfounded gatekeeping and impoverished practices:

  • The gatekeeping has its most recognisable expression in the petulant question “Is this philosophy?” Of course, it depends on who is asking, but the fact that most texts from the mystic tradition or many decidedly literary expressions of philosophy are just ignored bears witness to the ubiquitous exclusion of certain philosophers. It certainly hit Hildegard of Bingen, parts of Nietzsche and bits of Wittgenstein. But if an exaggerated remark is in order, soon anything that doesn’t follow the current style of paper writing will be considered more or less “weird”. In this regard, the recent attempts at “diversifying the canon” often strike me as enraging. Why do we need to make a special case for re-introducing work that is perfectly fine? In any case, the upshot of dismissing the first kind of philosophy is that a lot of philosophy gets excluded, for unconvincing reasons.
  • You might think that such dismissal only concerns certain kinds of content or style. But in addition to excluding certain traditions of philosophy, there is a subtler sort of dismissal at work: As I see it, the denial of philosophy as a (spiritual) practice or a form of life (as Pierre Hadot put it) pushes personal involvement to the fringes. Arguably, this affects all kinds of philosophy. Let me give an example: Scepticism can be seen as a kind of method that allows us to question knowledge claims and eventually advances our knowledge. But it can also be seen as a personal mental state that affects our decisions. As I see it, the methodological approach is strongly continuous with, if not rooted in, the mental state. Of course, sometimes it is important to decouple the two, but a complete dismissal of the personal involvement cuts the method off from its various motivations. Arguably, the dismissal of philosophy as a spiritual (and also political) practice creates a fiction of philosophy. This fiction might be continuous with academic rankings and pseudo-meritocratic beliefs, but it is dissociated from the involvement that motivates all kinds of philosophical exchange.

In view of these problems, I think it is vital to keep a balance between what I called two kinds but what is ultimately one encompassing practice. Otherwise we undermine what motivates people to philosophise in the first place.

____

* Liam Bright has a great post discussing the often lame counterarguments to subjectivism, making the point that I want to make in a different way by saying that the view is more substantial than it is commonly given credit for: “The objection [to subjectivism] imagines a kind of God’s-eye-perspective on truth and launches their attack from there, but the kind of person who is attracted to subjectivism (or for that matter relativism) is almost certainly the kind of person who is suspicious of the idea of such a God’s eye perspective. Seen from within, these objections simply lose their force, they don’t take seriously what the subjectivist is trying to do or say as a philosopher of truth.”

Eric Schliesser provides a brief discussion of Liam’s post, hitting the nail on the following head: “Liam’s post (which echoes the loveliest parts of Carnap’s program with a surprisingly Husserlian/Levinasian sensibility) opens the door to a much more humanistic understanding of philosophy. The very point of the enterprise would be to facilitate mutual understanding. From the philosophical analyst’s perspective the point of analysis or conceptual engineering, then, is not getting the concepts right (or to design them for ameliorative and feasible political programs), but to find ways to understand, or enter into, one’s interlocutor life world.”

** Relatedly, Ian James Kidd distinguishes between philosophy and the performative craft of academic philosophy in his post on “Being good at being good at philosophy”.

Transgression and playfulness in academic exchange

Imagine that you are about to enter one of these hip clothing stores that play fairly loud pop music. Imagine that they play, say, Abba’s “dancing queen” and imagine further that you start to sing very loudly and dance most expressively along to the music as you enter. For how long, do you think, could you go on doing this? – I guess I couldn’t do it at all, because I’d feel embarrassed. It’s just not done, or is it? – I think discussing philosophy is a bit like this. If originality really has such a high status in philosophy, then you should sing and dance in a shop. No? It’s trickier than that. You can see this if you realise that, for some people, uttering a plain sentence in an academic setting feels exactly like singing and dancing in a clothing store. Embarrassing. – Now imagine you’re a bystander: What can you do with this? (1) Well, you can of course expose them, for instance by correcting their behaviour instantly. (2) Or you can make them feel at home a bit by at least humming along with the tune. They’ll probably feel less alone. And you’ll make the others see that it is actually purposeful behaviour. (3) If you’re in the position of the shopkeeper, you could even try and clear the aisles of obstacles to open space for further dancing and join in. There are certainly more options, but the point is: you have a choice and what you choose will partly determine how things develop and how things are judged. But let’s add some context first.

Many of the current discussions about academic exchange are haunted by accusations. On the one hand, there are those who accuse others of censorship in the name of dubious political correctness. On the other, there are those who accuse the accusers of violating safe spaces. What I find particularly sad is that these camps (if there are camps) recurrently run into a hopeless stalemate. I have seen many people attempting to intervene with the best intentions and yet being called out relentlessly. The stalemate seems to arise whenever people pick option one and tell others off for dancing in the shop. – I feel not one jot cleverer than all the people already enmeshed in this mess. But today seems as good as any day, so I’ve set aside time to write a bit about this issue. Before I go through the motions, let me articulate my thesis: I believe that the common distinction between the two camps of “safe space endorsement” and “free speech endorsement” is totally misleading. Both “camps” ultimately stem from the same problem: the problem of an intersection between educational and professional issues. Let me explain.

Transgression and types of exchange. As I see it, the accusations between the two camps often have a paradoxical air, because the two camps in question share the same goals: Everyone wants open academic exchange, but also wants to prevent harm. Thus, there is always the problem of drawing lines between freedom and harm. One person’s frankness is an insult to someone else, and vice versa. People draw these lines differently anyway. But what makes academic or philosophical exchange special is that it partly thrives on transgressing such lines. Most might smile at this today, but many people did worry in debates about the immortality of the soul or personal identity in view of the afterlife. One person’s progress is another’s loss of everything they hold dear. We allow for such transgressions because we (and this includes those who might suffer offence) think that discursive openness might lead to insights that benefit us all. At the same time, it should be clear that potential transgressions require special conditions that protect all those involved both from external repercussions and from internal conflicts.

But here is the catch: In academia we encounter each other in two contexts at once. On the one hand, we are part of an educational exchange in which we learn from each other and help and criticise each other freely. On the other hand, we are part of a professional exchange in which we judge each other from different (hierarchical) positions of power. (By the way, the idea of meritocracy has it that these levels are aligned, but they are not, because the former is way more fluid.) Now these contexts often play out against one another: Your supervisor might say that she wants you to speak up freely, but you might fear that if you speak your mind you’ll be punished professionally.

As I see it, the merging of the two contexts is what creates antagonising camps. No matter which side you take me to be on: if you fear that I might retaliate professionally, it will poison our educational exchange and turn me into an enemy. Conversely, if you trust me to speak in good faith and you don’t hold a professional grudge, I am sure I can utter whatever blather. You might not think very highly of me, but you might still just try and help me see some sense. After all, we all make mistakes. And next time it might be you. Seen in this light, then, I think the two camps boil down to something that has not much to do with the particular political convictions driving either side, but with the merging of contexts.

Playfulness. Where can we go from here? Now, there is no general solution for the merging of contexts. This is why I think that we should assign as much space as possible to educational exchange in academia. We are always different personae at once, and the way to go is to keep the problematic ones in check. How? Through establishing exchange in a more playful manner. Here are some considerations about that (and here is an attempt at playful considerations).* Some of you might remember how philosophical discussions work among friends: You might try out the strangest ideas and see that they end up turning into something surprisingly sustainable. If your interlocutor can’t think on, you make suggestions to help. If it turns out to be nonsense, you laugh and move on. – Why does this work? Because you trust one another. Does it always go well? No, but your friend will be looking out for signs of disagreement and be considerate of your feelings. If you tell them to shut up about a sensitive topic they won’t call you a censor, but shut up. Next time you’ll look into it again and sort out what went wrong, “go meta” or whatever. – Now, you don’t need to make friends with all your interlocutors, but arguing in good faith works like that. We try and fail and laugh and have someone else try. The crucial idea is that such dialogues will be fluid and change the norms as we go along. Is it ok to sing in a shop? Well, let’s see where it gets us. The whole thing is more like a jazz improvisation where the tune is not fixed. The point is this: everyone’s job is just to make everyone else sound good.

Controversial ideas and conformity. But while the trust of friendship might be a helpful regulative ideal, we have to tackle the interference of the professional level and other group dynamics. This is why I want to consider the question of embarrassment again. Of course, we might also feel occasionally embarrassed among friends, but in professional contexts, that is: in contexts in which we feel judged (be it as students or peers), embarrassment might be outright paralysing. And although some recent articles try to tone down the issue of self-censorship, I would assume that it is fairly pervasive and also problematic, if it stops us from considering what is called “controversial ideas”.

We might begin again by imagining the dance in the shop or, if you like a change of setting, in a philosophy seminar. Would that be ok? Few will think so. It would be a transgression of social norms. While it might not be outright politically incorrect to dance and sing in class, it would certainly put the dancer on the spot. The dancer would be discouraged and perhaps feel embarrassed. Now while making philosophical claims is not exactly like dancing, controversial claims might have the same or worse social effects, to put it mildly. In Descartes’ day, “Everything is material” or “Everything boils down to motion” might just have done the trick. Today, we have other issues, but the shaming of people in professional contexts is said to have become somewhat fashionable. On the whole, shaming is not very resourceful and reduces to option (1) above: If someone says something that sounds off, the common response is to say that this is false. In professional terms this quickly translates into a downgrading of status (unless the person is so established that judgement is outweighed).

At this point, a pattern emerges: Accusing one another, one group will call for safe spaces, the other will call for free speech. But what’s at stake is the embarrassment and fear of bad effects. Unless there are very vocal proponents, people in both camps will fear being put on the spot and thus try to conform to given behavioural standards. The effect is often exposed as self-censorship, but it seems to be a fairly widespread phenomenon sometimes called the Bandwagon Effect: We try to align our views and behaviour with what are the perceived standards. A particularly stunning exposure of our drive to conformity is the Asch experiment (a video is here), in which study participants will align even their own correct perceptual judgments with the obviously wrong judgment of others. However, the experiment has also shown that this effect reduces as soon as there is one ally who also utters the correct judgment. Whatever the intricacies of the social mechanisms at work, the take-home message seems to be that isolation creates embarrassment, while allies help dissolve embarrassment. If this is correct, we can use this to find resources for at least softening the impact of paralysing norms in academic exchange.

Standing by. My hunch is that, at least in the confines of seminars and other philosophical (online) discussions, we should seek to establish more roles than those of proponent and critic. The so-called bystanders are crucial when it comes to demonstrating the normative weight underlying the discussion in question. If you see that someone or some group is isolated because of a controversial position, you might at least try to support their case. Most of us are trained to play devil’s advocate, so we might as well manage to help our peers. The point is not ultimate endorsement but giving space to the idea, ideally in a playful manner such that it can come out as sounding good. This would restore some of the educational context: firstly because the proponent would hopefully feel less threatened through professional isolation; secondly, because it would ensure that we’d be discussing improved versions of ideas rather than strawmen. This would mean something like humming along with “dancing queen” or clearing the space to dance. It might of course also mean leaving. It dispels shame and hopefully even creates some much needed trust.

In my mind’s ear, I can now hear some people objecting that there are really harmful transgressions that should not be endorsed in universities, not even for the sake of argument. I agree that there are such positions. But I also think that these are exceptions. They should be treated as such, as exceptions. If people threaten others, they have left the grounds of academic exchange. For those who remain, it is vital to restore trust and argue resourcefully. This might require more than calling out falsehoods. (Online discussions are not all that different from offline discussions, except for the fact that they have massively increased means of signalling approval or disapproval by bystanders. So “like” with care and don’t pile on!) It might help more to enhance and play around with positions, and forgive each other when we fail. Something which, I am told, we do much of the time. Rather than trying to optimise our positions, it might be better to attempt exchanges by looking for cues to move or stop, try and fail. We have to improvise our way through these conversations; there is no score, and no set of rules will help us make progress.

If we want to make progress, we need transgression of norms, and this is sometimes a risky business. We might choose friendly playfulness to keep possible harms in check and prioritise educational over professional exchange.

___

* Many thanks to Lonneke Oostland, who emphasised the importance of playfulness in philosophical exchange, and to Ilona de Jong, who hinted at Asch’s experiments (referred to further down).

Framing employment in higher education, and father’s day

If you work in (higher) education, you will know some version of the following paradox: It takes the ‘best’ candidates to educate people for a life in which there is no time for education. – What I mean is that, while we pretend to apply meritocratic principles in hiring (of researchers and instructors), there is not even a glimpse of such pretence when it comes to the education of our children. If we were to apply such principles, we would probably expect parents (or others who take care of children) to invest at least some amount of time in the education of their children. But in fact we expect people to disguise time spent with or for their children. So much so that one might say: your children live in competition with your CV. – There are many problems when it comes to issues of care and employment, but in what follows I’d like to focus especially on the role of time and timing.

A few days ago I read a timely blog post over at the Philosophers’ Cocoon: “Taking time off work / the market for motherhood?”. The crucial question asked is whether and, if so, how to explain “the gap” in productivity. Go and read the post along with the comments (on this blog they tend to be worth reading, too) first.

For what it’s worth, let me begin with my own more practical piece of advice. If a gap is visible, I would tend to address it in the letter and say that a certain amount of time was spent on childcare. Why? I’m inclined to think of cover letters in terms of providing committee members with arguments in one’s favour. If someone says, “look, since his PhD, this candidate has written three rather than two papers”, someone else can reply with “yes, but this difference can be explained by the time spent on childcare”. Yet, this advice might not be sufficient. If candidates are really compared like that, people might not sufficiently care about explanations. All I would hope for is that providing arguments or explanations for gaps should at least not hurt your chances.

However, this does not counter the structural disadvantages for women and mothers in our institutions. You might object that there are now many measures against such disadvantages. While this might be true, it also leads to problematic assumptions. Successful women now often face the suspicion of being mere beneficiaries of affirmative action. This could entail that awards or other successes for women might be assessed as less significant by their peers. (Paradoxically, this could increase the prestige of awards for male peers since they count as harder to get in a climate of suspicion.) But the problems start before any committee member ever sets eyes on an application. What strikes me as crucial is the idea that childcare is construed as a gap. Let me mention just three points:

  • Construing childcare as a gap incentivises treating it as a waste of time (for the stakeholders). But this approach ignores that employees in higher education are representatives of educational values. Treating childcare and, by extension, education as a waste of time undermines the grounds that justify efforts in education in the first place.
  • You would expect that work in higher education requires certain skills, some of which are actually trained by taking care of children. Attentiveness, constant interpretational efforts, openness to failure, patience, time management, dealing with rejection, you name it. While I’m not saying that parents are necessarily better teachers or researchers, it’s outright strange to play off one activity against the other.
  • At least in the field of philosophy, most work products are intrinsically tied to the producer. It’s not like you could have hired Davidson to write the work published by Anscombe. Unlike in certain examination practices, our texts are not crafted such that someone’s work could be replaced by anyone else’s. So all the prestige and quantification cannot stand in for what they are taken to indicate. Thus, comparing products listed on a CV is of limited value when you want to assess someone’s work.

That said, the positive sides of parenthood are often seen and even acknowledged. At least some fathers get a lot of credit. Strangely, this credit is rarely extended to mothers, even less so in questions of employment conditions. Ultimately, the situation reminds me of the cartoon of a sinking boat: the people on the side that is still up and out of the water shout in relief that they are lucky not to be on the side that sank. Yet educating children is a joint responsibility of our society. If we leave vital care work to others, it’s more than cynical to claim that they didn’t keep up to speed with those who didn’t do any of the care work. Comparing CVs obscures joint responsibilities, incentivising competition where solidarity is due. Such competition penalises (potential) mothers in particular by excluding them from jobs in higher education or from the secure spots on what might turn out to be the Titanic.

Networks and friendships in academia

Recently, I came across an unwelcome reminder of my time as a graduate student and my early-career days. It came in the shape of a conference announcement that carries all the signs of a performative contradiction: it invites you by exclusion. What can we learn from such contradictions?

The announcement invites early-career people to attend seminars that run alongside a conference whose line-up is already fixed and seems to consist mainly of a circle of quite established philosophers who have been collaborating closely for a long time. Since the invitation is not presented as a “call”, it’s hard to feel invited in the first place. Worse still, you’re not asked to present at the actual conference but to attend “seminars” that are designed “to motivate students and young scholars from all over the world to do research in the field of medieval philosophy and to help them learn new scientific methodology and develop communication skills.” If you’re still interested in attending, you’ll look in vain for time slots dedicated to such seminars. Instead, there is a round table on the last day, scheduled for the same time at which the organising body holds their annual meeting, thus probably without the established scholars.* You might say there is a sufficient number of events around, so just go somewhere else. But something like the work on the “Dionysian Traditions” is rarely presented. In fact, medieval philosophy is often treated as a niche unto itself, so the choice is not as vast as for, say, analytic metaphysics.

If you think this is problematic, I’ll have to disappoint you. There is no scandal lurking here. Alongside all the great efforts within an increasingly inclusive infrastructure of early career support, things like that happen all the time, and since becoming a professor I have myself been accused of organising events that at least sound “clubby”. Of course, I’m not saying that the actual event announced is clubby like that; it’s just that part of the description triggers old memories. When I was a graduate student, in the years before 2000, the academic culture in Germany, at least, seemed to be structured in a clubby fashion. By “structured” I mean that academic philosophy often seemed to function as a simple two-class system, established and not-established, and the not-established people had the status of onlookers. They were, it seemed, invited to kind of watch the bigger figures and learn by exposure to greatness. But make no mistake; this culture did not (or not immediately) come across as exclusionary. The onlookers could feel privileged for being around. For firstly, even if this didn’t feel like proper participation, it still felt like the result of a meritocratic selection. Secondly, the onlookers could feel elated, for there was an invisible third class, i.e. the class of all those who either were not selected or didn’t care to watch. The upshot is that part of the attraction of academia worked by exclusion. As an early career person, you felt like you might belong, but you were not yet ready to participate properly.

Although this might come across as a bit negative, it is not meant that way. Academia never was a utopian place outside the structures that apply in the rest of the world. More to the point, the whole idea of what is now called “research-led teaching” grew out of the assumption that certain skills cannot be taught explicitly but have to be picked up by watching others, preferably advanced professionals, at work. Now my point is not to call out traditions of instructing scholars. Rather, this memory triggers a question that keeps coming back to me when advising graduate students. I doubt that research-led teaching requires the old class system. These days, we have a rich infrastructure that, at least on the surface, seems to counter exclusion. But have we overcome this two-class system, and if not, what lesson could it teach us?

Early career people are constantly advised to advance their networking skills and their network. On the whole, I think this is good advice. However, I also fear that one can spend a quarter of a lifetime on proper networking without realising that a network as such does not help. Networks are part of a professionalised academic environment. But while they might help in exchanging ideas and even offer frameworks for collaborative projects, they are not functional by themselves. They need some sort of glue that keeps them together. Some people believe that networks are selective by being meritocratic. But while merit or at least prestige might often belong to the necessary conditions of getting in, it’s probably not sufficient. My hunch is that this glue comes in the shape of friendship. By that I don’t necessarily mean deeply personal friendships but “academic friendships”: people like and trust each other to some degree, and build on that professionally. If true, this might be an unwelcome fact because it runs counter to our policies of inclusion and diversity. But people need to trust each other and thus also need something stronger than policies.

Therefore, the lesson is twofold: On the one hand, people need to see that sustainable networks require trust. On the other hand, we need functional institutional structures both to sustain such networks and to counterbalance the threat of nepotism that might come with friendship. We have or should have such structures in the shape of laws, universities, academic societies and a growing practice of mentoring. To be sure, saying that networks are not meritocratic does not mean that there is no such thing as merit. Thus, such institutions need to ensure that processes of reviewing are transparent and in keeping with commitments to democratic values as well as to the support of those still underrepresented, no matter whether this concerns written work, conferences or hiring. But the idea that networks as such are meritocratic makes their reliance on friendships invisible.

Now while friendships cannot be forced, they can be cultivated. If we wish to counter the pernicious class system and stabilise institutional remedies against it, we should advise people to extend (academic) friendships rather than competition. Competition fosters the false idea that getting into a network depends on merit. The idea of extending and cultivating academic friendship rests on the idea that merit in philosophy is a collective effort to begin with and that it needs all the people interested to keep weaving the web of knowledge. If meritocratic practices can be promoted at all, it is in this way, not by exclusion. You might object that we are operating with limited resources, but if the demand is on the rise, we have to demand more resources rather than competing for less and less. That said, cultivating academic friendships needs to be counterbalanced by transparency. And for as long as we continue to fall short of that, friendships are not only the glue of networks, but might be what keeps you sane when academia seems to fall apart.

Postscriptum I: So what about the conference referred to above? The event is a follow-up to a conference in 1999, and quite a few of the former participants are present again. If it was, as it seems, based on academic friendships, isn’t that a reason to praise it? As I said and wish to emphasise again, academic friendships without institutional control do not foster the kinds of inclusive environments we should want. For neither can there be meritocratic procedures without the inclusion of underrepresented groups, nor can a two-class separation of established and not-established scholars lead to the desired extension of academic friendships. In addition to the memories triggered, one might note other issues. Given that there are comparatively many women working in this field, it is surprising that only three women are among the invited speakers. That said, the Gendered Conference Campaign has of course identified understandable reasons for such imbalances. A further point is the fact that early career people wishing to attend have roughly two weeks after the announcement to register and apply. There is no reimbursement of costs, but one can apply for financial support after committing oneself to participate. – In view of these critical remarks, it should be noted again that this conference represents the status quo rather than the exception. The idea is not to criticise that academic friendships lead to such events, but rather to stress the need for rethinking how these can be joined with institutional mechanisms that counterbalance the downsides of tightening our networks.

___

* Postscriptum II (14 March 2019): Yes. Before writing this post, I sent a mail to S.I.E.P.M. inquiring about the nature of the seminars for early career people. I asked:

(1) Are there any time slots reserved for this or are the seminars held parallel to the colloquium?
(2) What is the “new scientific methodology” referred to in the call?
(3) And is there any sort of application procedure?

The mail was forwarded to the local organisers and prompted the following reply:

“Thank you for interest in the colloquium on the Dionysian Traditions!

The time for the seminars is Friday morning. The papers should not be longer than 20 minutes. You should send us a list with titles, and preferably – with abstracts too. We have a strict time limit and not everyone may have the opportunity to present. Travel and accommodation costs are to be covered by the participants.

The new scientific methodology is the methodology you deem commensurate with the current knowledge about the Corpus.”

Apart from the fact that the event runs from a Monday to a Wednesday, the main question about the integration and audience of these seminars remains unanswered. Assuming that “Friday” means the Wednesday, the seminars coincide with the announced round table, to be held at the same time at which the bureau of S.I.E.P.M. holds their meeting. (This was confirmed by a further exchange of mails.) But unlike the announcement itself, the mail now speaks of “papers” that the attendees may present.

The competition fallacy

“We asked for workers. We got people instead.” Max Frisch

 

Imagine that you want to buy an album by the composer and singer Caroline Shaw, but the shop sells you one by Luciano Pavarotti instead, arguing that Pavarotti is clearly the more successful and better singer. Well, philosophers often make similar moves. They will say things like “Lewis was a better philosopher than Arendt” and even make polls to see how the majority sees the matter. Perhaps you agree with me in thinking that something has gone severely wrong in such cases. But what exactly is it? In the following I’d like to suggest that competitive rankings are not applicable when we compare individuals in certain respects. This should have serious repercussions for how we think about the job market in academia.

Ranking two individual philosophers who work in fairly different fields and contexts strikes me as pointless. Of course, you can compare them, see differences and agreements, ask about their respective popularity and so forth. But what would Lewis have said about the banality of evil? Or Arendt about modal realism? – While you might have preferences for one kind of philosophy over another, you would have a hard time explaining who the “better” or more “important” philosopher is (irrespective of said preferences). There are at least three reasons for this: Firstly, Arendt and Lewis have very few points of contact, i.e. no straightforward common ground on which to plot a comparison of their philosophies. Secondly, even if they had more commonalities or overlaps, their respective understandings of what philosophy is and what good philosophy should accomplish can be fairly different. Thirdly and perhaps most importantly, philosophies are always individual and unique accomplishments. Unique creations are not something one can have a competition about. If we assume that there is a philosophical theory T1, then T1 is not the kind of thing that you can compete over being better at. Of course, you can refine T1, but then you’ve created a refined theory T2. Now you might want to claim that T2 can be called better than T1. But what would T2 be, were it not for T1? Relatedly, philosophers are unique. The assumption that what one philosopher does can be done better or equally well by another philosopher is an illusion fostered by professionalised environments. People are always unique individuals and their ideas cannot be exchanged salva veritate.*

Now since there are open job searches (sometimes even without a specification of the area of specialisation) you could imagine a philosophy department in 2019 having to decide whether they hire Lewis or Arendt. I can picture the discussions among the committee members quite vividly. But in doing such a search they are doing the same thing as the shop assistant who ends up arguing for Pavarotti over Shaw. Then words like “quality”, “output”, “grant potential”, “teaching evaluations”, “fit” … oh, and “diversity” will be uttered. “Arendt will pull more students!” – “Yeah, but what about her publication record? I don’t see any top journals!” – “Well, she is a woman.” In a good world both of them would be hired, but we live in a world where many departments might rather hire two David Lewises. So what’s going on?

It’s important to note that the competition is not about their philosophies: Despite the word “quality”, for the three reasons given above, the committee members cannot have them compete as philosophers. Rather, the department has certain “needs” that the competition is about.** The competition is about functions in the department, not about philosophy. As I see it, this point generalises: competitions are never about philosophy but always about work and functions in a department.*** Now, the pernicious thing is that departments and search committees and even candidates often pretend that the search is about the quality of their philosophy. But in the majority of cases that cannot be true, simply because the precise shape, task and ends of philosophy are a matter of dispute. What carries weight are functions, not philosophy.

Arguably, there can be no competition between philosophers qua philosophers. Neither between Arendt and Lewis, nor between Arendt and Butler, nor between Lewis and Kripke. Philosophers can discuss and disagree but they cannot compete. What should they compete about? If they compete about jobs, it’s the functions in departments that are at stake. (That is also the reason why we allow prestige to serve as a quality indicator.) If they take themselves to be competing about who is the better philosopher, they mistake what they are doing. Of course, one philosopher might be preferred over another, but this is subject to change and chance, and owing to the dominant committee member’s notion of philosophy. The idea that there can be genuinely philosophical competition is a fallacy.

Does it follow, then, that there is no such thing as good or better philosophy? Although this seems to follow, it doesn’t. In a given context and group, things will count as good or better philosophy. But here is another confusion lurking. “Good” philosophy is not the property of an individual person. Rather, it is a feature of a discussion or of interacting texts. Philosophy is good if the discussion “works well”. It takes good interlocutors on all sides. If I stammer out aphorisms or treatises, they are neither good nor bad. What turns them into something worthwhile are those listening, understanding and responding. To my mind, good quality is the resourcefulness of a conversation. The more notions and styles of philosophy a conversation can integrate, the more resources it has to tackle what is at stake. In philosophy, there is no competition, just conversation.

Therefore, departments and candidates should stop assuming that the competition is about the quality of philosophy. Moreover, we should stop claiming that competitiveness is an indicator of being a good philosopher.**** Have you ever convinced an interlocutor by shouting that you’re better or more excellent than them?

___

* At the end of the day, philosophies are either disparate or they are in dialogue. In the former case, rivalry would be pointless; in the latter case, the rivalry is not competitive but a form of (disagreeing or agreeing) refinement. If philosophers take themselves to be competing about something like the better argument, they are actually not competing but discussing and thus depend on one another.

** This does not mean that these needs or their potential fulfilment ultimately decide the outcome of the competition. Often there is disagreement or ignorance about what these needs are or how they are to be prioritised. With regard to committees, I find this article quite interesting.

*** In a recent blog post, Ian James Kidd distinguishes between being good at philosophy and being good at academic philosophy. It’s a great post. (My only disagreement would be that being good at philosophy is ultimately a feature of groups and discussions, not individuals.) Eric Schliesser made similar points in an older, gloomier post.

**** On FB, Evelina Miteva suggests that we need “fair trade philosophy, like the fair trade coffee. Fair trade coffee is not necessarily of a better taste or quality, it ‘only’ makes sure that the producers will get something out of their work.” – I think this is exactly right: On some levels, this already seems to be happening, for instance, in the open access movement. Something similar could be applied to recruiting and employment conditions in academia. In fact, something like this seems to be happening, in that some universities are awarded for being family friendly or being forthcoming in other ways (good hiring practice e.g.). – My idea is that we could remedy many problems (the so-called mental health crisis etc.), if we were to stop incentivising competitiveness on the wrong levels and promote measures of solidarity instead. – The message should be that the community no longer tolerates certain forms of wrong competition and exploitation.

Relatedly, this also makes for an argument in favour of affirmative action against the discrimination of underrepresented groups: People who believe in meritocracy often say that affirmative action threatens quality. But affirmative action is not about replacing “good” with “underrepresented” philosophers. Why? Because the quality of philosophy is not an issue in competitive hiring in the first place.