Is criticism of mismanagement and misconduct taken as snitching? How academia maintains the status quo

Recently, I became interested (again) in the way our upbringing affects our values. Considering how groups, especially in academia, often manage to suppress criticism of misconduct, I began to wonder which values we associate with criticism more generally. First, I noticed a strange ambivalence. Just think about the ambivalent portrayal of whistleblowers like Edward Snowden! The ambivalence is captured in values like loyalty that mostly pertain to a group and are not taken to be universal. Then it hit me. Yes, truth-telling is nice. But in-groups ostracise you as a snitch, a rat or a tattletale! Denouncing “virtue signalling” or “cancel culture” seems to be on a par with this verdict. So while criticism of mismanagement or misconduct is often invited as an opportunity for improvement, it is mostly received as a cause of reputational damage.

Now I wrote up a small piece for Zoon Politikon.* In this blog post, I just want to share what I take to be the main idea.

The ambivalence of criticism in academia seems to be rooted in an ongoing tension between academic and managerial hierarchies. While they are intertwined, they are founded on very different lines of justification. If I happen to be your department chair, this authority weighs nothing in the setting of, say, an academic conference. Such hierarchies might be justifiable in principle. But while the goals of academic work, and thus academic hierarchies, are to some degree in the control of the actual agents involved, managerial hierarchies cannot be justified in the same way. A helpful illustration is the way qualitative and quantitative assessments of our work come apart: A single paper might take years of research and end up being a game-changer in its field of specialisation, but if it happens to be the only paper published in the course of three years, it won’t count as sufficient output. So while my senior colleague might have great respect for my work as an academic, she might find herself confronted with incentives to admonish and perhaps even fire me.

What does this mean for the status of criticism? The twofold nature of hierarchies leaves us with two entirely disparate justifications of criticism. But these disparate lines of justification are themselves a constant reason for criticism. The fact that a field-changing paper and a mediocre report both make one single line in a CV bears testimony to this. But here’s the thing: we seemingly delegitimise such criticism by tolerating and ultimately accepting the imperfect status quo. Of course, most academics are aware of a tension: The quantification of our work is an almost constant reason for shared grievance. But as employees we often enough find ourselves buying into it as a “necessary evil”. Now, if we accept it as a necessary evil, we seem to give up on our right to criticise it. Or do we? Of course not, and the situation is a lot more dynamic than I can capture here. To understand how “buying into” an imperfect situation (a necessary evil) might seemingly delegitimise criticism, it is crucial to pause and briefly zoom in on the shared grievance I just mentioned.

Let me begin by summarising the main idea: The shared grievance constitutes our status quo and, in turn, provides social cohesion among academics. Criticism will turn out to be a disturbance of that social cohesion. Thus, critics of the status quo will likely be ostracised as “telling on” us.

One might portray the fact that we live with an academic and a managerial hierarchy simply as unjust. One hierarchy is justified, the other isn’t (isn’t really, that is). Perhaps, in a perfect world, the two hierarchies would coincide. But in fact we accept that, with academia being part of the capitalist world at large, they will never coincide. This means that both hierarchies can be justified: one as rooted in academic acclaim; the other as a necessary evil of organising work. If this is correct and if we accept that the world is never perfect, we will find ourselves in an ongoing oscillation and vacillation. We oscillate between the two hierarchies. And we vacillate between criticising and accepting the imperfection of this situation. This vacillation is, I submit, what makes criticism truly ambivalent. On the one hand, we can see our work-relations from different perspectives; on the other hand, we have no clear means to decide which side is truly justified. The result of this vacillation is thus not some sort of solution but a shared grievance. A grievance acknowledging both the injustices and the persisting imperfection. There are two crucial factors in this: The fact that we accept the imperfect situation to some degree; and the fact that this acceptance is a collective status: it is our status quo. Now, I alone could not accept ongoing injustices in that status quo, if my colleagues were to continuously rebel against it. Thus, one might assume that, in sharing such an acceptance, we share a form of grievance about the remaining vacillation.

It is of course difficult to pin down such a phenomenon, as it obtains mostly tacitly. But we might notice it in our daily interactions when we mutually accept that we see a tension, for instance, between the qualitative and quantitative assessment of our work. This shared acceptance, then, gives us some social cohesion. We form a group that is tied together neither by purely academic nor by purely managerial hierarchies and relations. There might be a growing sense of complicity in dynamic structures that are and aren’t justified but continue to obtain. So what forms social cohesion between academics is not merely formal appraisal or informal friendship. Rather, a further crucial factor is the shared acceptance of the imperfection of the status quo. This acceptance is crucial in that it acknowledges the vacillation and informs what one might call the “morale” of the group.

If this is correct, academics do indeed form a kind of group through acceptance of commonly perceived imperfections. Now if we form such a group, it means that criticism will be seen as justified but also as threatening the shared acceptance. We know that a critic of quantitative work measures is justified. But we also feel that we gave in and accepted this imperfection a while ago. The critic seemingly breaks with this tacit consent and will be seen like someone snitching or “telling on us”. As I see it, it is this departure from an in-group consensus that makes criticism appear as snitching. And while revealing a truth about the group might count as virtuous, it makes the critic seemingly depart from the in-group. Of course, companies and universities also enjoy some legal protection. Even if you find out about something blameworthy, you might be bound by rules about confidentiality. This is why whistleblowers do indeed have an ambivalent reputation, too. But I guess that the legal component alone does not account for the force of the in-group mentality at work in suppressing criticism.

This mode of suppressing criticism has pernicious effects. The intertwined academic and managerial hierarchies often come with inverse perceptions of criticism: your professorial colleague might be happy to learn from your objections, while your department chair might shun your criticism and even retaliate against you. Yet, they might be the same person. Considering the ubiquitous histories of suppressing critics of sexism, racism and other kinds of misconduct, we do not need to look far to find evidence for ostracism or retaliation against critics. I think that it’s hard to explain this level of complicity with wrongdoers merely by referring to bad intentions, on the one hand, or formal agreements such as confidentiality, on the other. Rather, I think, it is worthwhile to consider the deep-rooted in-group consensus that renders criticism as snitching. One reason is that snitching counts, at least in a good number of cultures, as a bad action. But while this might be explained with concerns about social cohesion, it certainly remains a morally dubious verdict, given that snitching is truth-conducive and should thus be aligned with values such as transparency. Going by personal anecdotes, however, I witnessed that snitching was often condemned even by school teachers, who seemed to worry about social cohesion no less than about truthfulness. In other words, we don’t seem to like that the truth be told when it threatens our status quo.

In sum, we see that the ambivalent status of criticism is rooted in a twofold hierarchy that, in turn, comes with disparate sets of values. Shared acceptance of these disparate sets as an unavoidable imperfection binds together an in-group that will sanction explicit criticism of this imperfection as a deviation from the consensus. The current charges against so-called “virtue signalling”, a “call out culture” or “cancel culture” on social media strike me as instances of such sanctions. If we ask what makes the inclinations to sanction in-group norm violations so strong, it seems helpful to consider the deep-rooted code against snitching. While the moral status of sanctioning snitching is certainly questionable, it can shed light on the pervasive motivation and strikingly ready acceptance of such behaviour.

______

* Following a discussion of a blog post on silence in academia, Izabela Wagner kindly invited me to contribute to a special issue in Zoon Politikon. I am enormously grateful to her for the exchanges and for providing this opportunity. Moreover, I have benefitted greatly from advice by Lisa Herzog, Pietro Ingallina, Mariya Ivancheva, Christopher Quinatana, Rineke Verbrugge, and Justin Weinberg.

On self-censorship

For a few years during the 80s, Modern Talking was one of the best-known pop bands in Germany. But although their first single “You’re my heart, you’re my soul” sold over eight million copies, no one admitted to having bought it. Luckily, my dislike of their music was authentic, so I never had to suffer that particular embarrassment. Yet, imagine all these people alone in their homes, listening to their favourite tune but never daring to acknowledge it openly. Enjoying kitsch of any sort brings the whole drama of self-censorship to the fore. You might be moved deeply, but the loss of face is more unbearable than remaining in hiding. What’s going on here? Depending on what precisely is at stake, people feel very differently about this phenomenon. Some will say that self-censorship just maintains an acceptable level of decency or tact; others will say that it reflects political oppression or, ahem, correctness. At some point, however, you might let go of all shame. Perhaps you’ve got tenure and start blogging or something like that … While some people think it’s a feature of the current “cancel culture”, left or right, I think it’s more important to see the different kinds of reasons behind self-censorship. In some cases, there really is oppression at work; in other cases, it’s peer pressure. Neither is fun. In any case, it’s in the nature of this phenomenon that it is hard to track in a methodologically sound way. So rather than draw a general conclusion, it might be better to go through some very different stories.

Bad thoughts. – Do you remember how you, as a child, entertained the idea that your thoughts might have horrible consequences? My memory is faint, but I still remember assuming that thinking of swear words might entail my parents having an accident. So I felt guilty for thinking these words, and tried to break the curse by uttering them to my parents. But somehow I failed to convince them of the actual function of my utterance, and so they thought I was just calling them names. Today, I know that this is something that happens to occur in children, sometimes even in pathologically strong form, known as “intrusive thoughts” within an “obsessive-compulsive disorder”. Whatever the psychological assessment, my experience was that of “forbidden” thoughts and, simultaneously, the inability to explain myself properly. Luckily, it didn’t haunt me, but I can imagine it becoming problematic.

One emergence of the free speech debate. – When I was between 7 and 10 years old (thus in the 1970s), I sometimes visited a lonely elderly woman. She was an acquaintance of my mother, well in her 70s and happy to receive some help. When no one else was around she often explained her political views to me. She was a great admirer of Franz Josef Strauß whom she described to me as a “small Hitler – something that Germany really needs again”. She hastened to explain that, of course, the real Hitler would be too much, but a “small” one would be quite alright. She then praised how, back in the day, women could still go for walks after dark etc. Listening to other people of that generation, I got the impression that many people in Germany shared these ideas. In 2007, the news presenter Eva Herman explicitly praised the family values of Nazi Germany and was dismissed from her position. The current rise of fascism in Germany strikes me as continuous with the sentiments I found around me early on. And if I’m not mistaken these sentiments date back at least to the 1930s and 1940s. In my experience, Nazism was never just an abstract political view. Early on, I realised that otherwise seemingly “decent” people could be taken by it. But this concrete personal dimension made the sweaty and simplistic attitude to other people all the more repulsive. In any case, I personally found that people in the vicinity of that ideology are the most vocal in portraying themselves as “victims” of censorship, though they are certainly not censoring themselves. (When it comes to questions of free speech, I am always surprised that whistleblowers such as Snowden are not mentioned.)

Peer pressure and classism. – I recently hosted a guest post on being a first-generation student that really made me want to write about this issue myself. But often when I think about this topic, I still feel uncomfortable writing about it. In some ways, it’s all quite undramatic in that the transition to academia was made very easy by my friends. For what shouldn’t be forgotten is that it’s not only your parents and teachers who educate you. In my case at least, I tacitly picked up many of the relevant habits from my friends and glided into being a new persona. Although I hold no personal grudges, I know that “clothes make people”, or “the man”, as Gottfried Keller’s story is sometimes translated. What I noticed most is that people from other backgrounds often have a different kind of confidence being around academics. Whether that is an advantage across the board I don’t know. What I do know is that I took great care to keep my own background hidden from most colleagues, at least before getting a tenured job.

Opportunism and tenure. – Personally, I believe that I wouldn’t dare publish this very post or indeed any of my posts, had I not obtained a tenured position. Saying this, I don’t want to impart advice. All I want to say is that getting this kind of job is what personally freed me to speak openly about certain things. But the existential weight of this fact makes me think that the greatest problem about self-censorship lies in the different socio-economic situations that people find themselves in. This is just my experience, but perhaps it’s worth sharing. So what is it about, you might wonder? There is no particular truth that I would not have told before but would tell now. It’s not a matter of any particular opinion, be it left or right. Rather, it affects just about everything I say. The fact that I feel free to talk about my tastes, about the kitsch I adore, about the music I dislike, about the artworks I find dull, alongside the political inclinations I have – talking about all of this openly, not just politics, is affected by the fact that I cannot be fired just so and that I do not have to impress anyone I don’t want to impress. It is this freedom that, I think, not only allows us to speak but also requires us to speak up when others remain silent out of fear.

The myth of authenticity. – The fact that many of us feel we have to withhold something creates the idea that there might be a vast amount of unspoken truths under the surface. “Yes”, you might be inclined to ask, “but what do you really think?” This reminds me of the assumption that, in our hearts, we speak a private language that we cannot make intelligible to others. Or of the questions immigrants get to hear when people inquire where they really come from. It doesn’t really make sense. While it is likely that many people do not say what they would say if their situation were different, I don’t think it’s right to construe this as a situation of hidden truths or lies. (Some people construe the fact that we might conceal our opinions as lying. But I doubt that’s a pertinent description.) For better or worse, the world we live in is all we have when it comes to questions of authenticity. If you choose to remain silent, there is no hidden truth left unspoken. It just is what it is: you’re not speaking up and you might be in agony about that. You might conceal what you think. But then it is the concealing that shapes the world and yourself, not the stuff left unspoken. Put differently, there are no truths, no hidden selves, authentic or not, that persist without some relation to interlocutors.

***

Speaking of which, I want to finish this post with a word of thanks. It’s now two years ago that I started this blog. By now I have written 118 posts. If I include the guest posts, it adds up to 131. Besides having the pleasure of hosting great guest authors, I feel enormously privileged to write for you openly. On the one hand, this is enabled by the relatively comfortable situation that I am in. On the other hand, none of this would add up to anything if it weren’t for you, dear interlocutors.

“We don’t need no …” On linguistic inequality

Deviations from so-called standard forms of language (such as the double negative) make you stand out immediately. Try and use double negatives consistently in your university courses or at the next job interview and see how people react. Even if people won’t correct you explicitly, many will do so tacitly. Such features of language function as social markers and evoke pertinent gut reactions. Arguably, this is not only true of grammatical or lexical features, but also of broader stylistic features in writing, speech and even non-linguistic conduct. Some ways of phrasing may sound like heavy boots. Depending on our upbringing, we are familiar with quite different linguistic features. While none of this might be news, it raises crucial questions about teaching that I rarely see addressed. How do we respond to linguistic and stylistic diversity? When we say that certain students “are struggling”, we often mean that they deviate from our stylistic expectations. A common reaction is to impart techniques that help them conform to such expectations. But should we perhaps respond by trying to understand the “deviant” style?

Reading the double negative “We don’t need no …”, you might see quite different things: (1) a grammatically incorrect phrase in English; (2) a grammatically correct phrase in English; (3) part of a famous song by Pink Floyd. Assuming that many of us recognise these things, some will hasten to add that (2) contradicts (1). A seemingly obvious way to resolve this is to say that reading (1) applies to what is called the standard dialect of English (British English), while (2) applies to some dialects of English (e.g. African-American Vernacular English). This solution prioritises one standard over other “deviant” forms that are deemed incorrect or informal etc. It is obvious that this hierarchy goes hand in hand with social tensions. At German schools and universities, for instance, you can find numerous students and lecturers who hide their dialects or accents. In linguistics, the disadvantages of regional dialect speakers have long been acknowledged. Even if the prescriptive approach has long been challenged, it still drives much of the implicit culture in education.

But the distinction between standard and deviant forms of language ignores the fact that the latter often come with long-standing rules of their own. Adjusting to the style of your teacher might then require you to deviate from the language of your parents. Thus another solution is to say that there are different English languages. Accordingly, we can acknowledge reading (2) and call African-American Vernacular English (AAVE) a language. Its precise status and genealogy are a matter of linguistic controversy. However, the social and political repercussions of this solution come most clearly into view when we consider the public debate about teaching what is called “Ebonics” at school in the 90s (here is a very instructive video about this debate). If we acknowledge reading (2), it means, mutatis mutandis, that many English speakers raised with AAVE can be considered bilingual. Educators realised that teaching standard forms of English can be aided greatly by using AAVE as the language of instruction. Yet, trying to implement this as a policy at school soon resulted in a debate about a “political correctness exemplar gone out of control” and abandoning the “language of Shakespeare”. The bottom line is: Non-hierarchical acknowledgement of different standards quickly spirals into defences of the supposed status quo by the dominant social group.

Supposed standards and deviations readily extend to styles of writing and conduct in academic philosophy. We all have a rough idea of what a typical lecture looks like, how a discussion goes and how a paper should be structured. Accordingly, attempts at diversification are met with suspicion. Will they be as good as our standards? Won’t they undermine the clarity we have achieved in our styles of reasoning? A more traditional division is that between so-called analytic and continental philosophy. Given the social gut reactions to diversifying linguistic standards, it might not come as a surprise that we find similar responses among philosophers: Shortly before the University of Cambridge awarded an honorary degree to Derrida in 1992, a group of philosophers published an open letter protesting that “Derrida’s work does not meet accepted standards of clarity and rigour.” (Eric Schliesser has a succinct analysis of the letter.) Rather than acknowledging that there might be various standards emerging from different traditions, the supposedly dominant standard of clarity is often defended like an eternal Platonic idea.

While it is easy to see and criticise this, it is much more difficult to find a way of dealing with it in the messy real world. My historically minded self has had, and still has, the luxury to engage with a variety of styles without having to pass judgment, at least not explicitly. More importantly, when teaching students I have to strike a balance between acknowledging variety and preparing them for situations in which such acknowledgement won’t be welcome. In other words, I try to teach “the standard”, while trying to show its limits within an array of alternatives. My goal in teaching, then, would not be to drive out “deviant” stylistic features, but to point to various resources required in different contexts. History (of philosophy) clearly helps with that. But the real resources are provided by the students themselves. Ultimately, I would hope not to teach them how to write, but to help them find their own voices within their various backgrounds and learn to gear them towards different purposes.

But to do so, I have to learn, to some degree, the idioms of my students and try to understand the deep structure of their ways of expression. Not as superior, not as inferior, but as resourceful within contexts yet unknown to me. On the other hand, I cannot but also lay open my own reactions and those of the traditions I am part of. – Returning to the fact that language comes with social markers, perhaps one of the most important aspects of teaching is to convey a variety of means to understand and express oneself through language. Our gut reactions run very deep, and what is perceived as linguistic ‘shortcomings’ will move people, one way or another. But there is a double truth: Although we often cannot but go along with our standards, they will very soon be out of date. New standards and styles will emerge. And we, or I should say “I”, will just sound old-fashioned at best. Memento mori.

You don’t get what you deserve. Part II: diversity versus meritocracy?

“I’m all for diversity. That said, I don’t want to lower the bar.” – If you have been part of a hiring committee, you will probably have heard some version of that phrase. The first sentence expresses a commitment to diversity. The second sentence qualifies it: diversity shouldn’t get in the way of merit. Interestingly, the same phrase can be heard in opposing ways. A staunch defender of meritocracy will find the second sentence (about not lowering the bar) disingenuous. He will argue that, if you’re committed to diversity, you might be disinclined to hire the “best candidate”. By contrast, a defender of diversity will find the first sentence disingenuous. If you’re going in for meritocratic principles, you will just follow your biases and ultimately take the properties of “white” and “male” as a proxy of merit. – This kind of discussion often runs into a stalemate. As I see it, the problem is to treat diversity and meritocracy as an opposition. I will suggest that this kind of discussion can be more fruitful if we see that diversity is not a property of job candidates but of teams, and thus not to be seen in opposition to meritocratic principles.

Let’s begin with a clarification. I assume that it’s false and harmful to believe that we live in a meritocracy. But that doesn’t mean that meritocratic ideas themselves are bad. If it is simply taken as the idea that one gets a job based on their pertinent qualifications, then I am all for meritocratic principles. However, a great problem in applying such principles is that, arguably, the structure of hiring processes makes it difficult to discern qualifications. Why? Because qualifications are often taken to be indicated by other factors such as prestige etc. But prestige, in turn, might be said to correlate with race, gender, class or whatever, rather than with qualifications. At the end of the day, an adherent of diversity can accuse adherents of meritocracy of the same vices that she finds herself accused of. So when merit and diversity are taken as being in opposition, we tend to end up in the following tangle:

  • Adherents of diversity think that meritocracy is ultimately non-meritocratic, racist, sexist, classist etc.
  • Adherents of meritocracy think that diversity is non-meritocratic, racist, sexist, classist etc.*

What can we do in such a stalemate? How can the discussion be decided? Something that typically gets pointed out is homogeneity. The adherent of diversity will point to the homogeneity of people. Most departments in my profession, for instance, are populated with white men. The homogeneity points to a lack of diversity. Whether this correlates to a homogeneity of merit is certainly questionable. Therefore, the next step in the discussion is typically an epistemological one: How can we know whether the candidates are qualified? More importantly, can we discern quality independently from features such as race, gender or class? – In this situation, adherents of diversity typically refer to studies that reveal implicit biases. Identical CVs, for instance, have been shown to be treated more or less favourably depending on the features of the name on the CV. Adherents of meritocracy, by contrast, will typically insist that they can discern quality objectively or correct for biases. Again, both sides seem to have a point. We might be subject to biases, but if we don’t leave decisions to individuals but to, say, committees, then we can perhaps correct for biases. At least if these committees are sufficiently diverse, one might add. – However, I think the stalemate will just be passed on to different levels indefinitely, as long as we treat merit and diversity as an opposition. So how can we move forward?

We try to correct for biases, for instance, by making a committee diverse. While this is a helpful step, it also reveals a crucial feature of diversity that is typically ignored in such discussions. Diversity is a feature of a team or group, not of an individual. The merit or qualification of a candidate is something pertaining to that candidate. If we look for a Latinist, for instance, knowledge of Latin will be a meritorious qualification. Diversity, by contrast, is not a feature to be found in the candidate. Rather, it is a feature of the group that the candidate will be part of. Adding a woman to an all-male team will make the team more diverse, but that is not a feature of the candidate. Therefore, accusing adherents of diversity of sexism or racism is fallacious. Trying to build a more diverse team rather than favouring one category strikes me as a means to counter such phenomena.

Now if we accept that there is such a thing as qualification (or merit), it makes sense to say that in choosing a candidate for a job we will take qualifications into account as a necessary condition. But one rarely merely hires a candidate; one builds a team, and thus further considerations apply. One might end up with a number of highly qualified candidates. But then one has to consider other questions, such as the team one is trying to build. And then it seems apt to consider the composition of the team. But that does not mean that merit and diversity are opposed to one another.

Nevertheless, prioritising considerations about the team over considerations about the candidates is often met with suspicion. “She only got the job because …” Such an allegation is indeed sexist, because it construes a diversity consideration applicable to a team as the reason for hiring, as if it were the qualification of an individual. But no matter how suspicious one is, qualification and diversity are not on a par, nor can they be opposing features.

Compare: A singer might complain that the choir hired a soprano rather than him, a tenor. But the choir wasn’t merely looking for a singer but for a soprano. Now that doesn’t make the soprano a better singer than the tenor, nor does it make the tenor better than the soprano. Hiring a soprano is relevant to the quality of the group; it doesn’t reflect the quality of the individual.

____

* However, making such a claim, an adherent of meritocracy will probably rely on the assumption that there is such a thing as “inverted racism or sexism”. In the light of our historical situation, this strikes me as very difficult to argue, at least with regard to institutional structures. It seems like saying that certain doctrines and practices of the Catholic Church are not sexist, simply because there are movements aiming at reform.

Fit. A Note on Aristotle’s Presence in Academia

Since the so-called Scientific Revolution and the birth of modern science, our Western approach towards the world has become quantitative. The previously dominant qualitative Aristotelian worldview of the Scholastics was replaced by a quantitative one: everything around us was supposed to be quantifiable and quantified. This, of course, seems to affect today’s academia, too. We often hear “do this, it will be one more line in your CV!”

Many will reply “This is not true, quality matters just as much!” Yes, it (sometimes) matters in which journal one publishes; it has to be a good journal; one needs to make sure that the quality of the article is good. And how do we know whether a journal is good or not? Because of its ranking. So if you thought I would argue that this is Aristotle’s presence in Academia… you were wrong. The criterion is still quantitative. Of course, we are more inclined to trust that an article in a respectable (i.e., highly ranked) journal is a good one, but we all know this is not always the case.

Bringing the distinction between the qualitative and the quantitative into the discussion is crucial for assessing job applications and the ensuing hiring process. While it used to be easier for those in a position of power to hire whom they wanted, it has become a bit more difficult. Imagine you really want to hire someone because he (I will use this pronoun for certain reasons) is brilliant. But his brilliance is not reflected in his publications, presentations, teaching evaluations, grants (the latter because he did not get any)… You cannot even say he is a promising scholar, since that should be visible in something. At the same time, there are a lot of competing applications with an impressive record. So what can one do? Make use of the category ‘better fit’: ‘better fit’ for the position, ‘better fit’ for the department.[1] But when is someone a ‘better fit’, given that the job description did not mention anything to this effect? When their research is in line with the department? No, too much overlap! When it complements the existing areas of research? No, way too different!

And here is where Aristotle comes into the picture. It is not the research that has to fit, but the person. And we know from Aristotle and his followers that gender, race and nationality are the result of the (four elemental) qualities. Who can be more fit for a department mostly composed of men from Western Europe than another man from Western Europe? As a woman coming from Eastern Europe, I have no chance. And Eastern Europe is not even the worst place to come from in this respect. 

There is a caveat though. When more people who fit the department apply, the committee seeks refuge in positing some ‘occult qualities’ to choose the ‘right’ person. ‘Occult’ in the Scholastic sense means that the quality is not manifest in any way in the person’s profile.[2]

How much is this different from days when positions were just given away on the basis of personal preference? The difference lies in the charade.[3] The difference is that nowadays a bunch of other people, devoid of occult qualities, though with an impressive array of qualities manifest in their CVs and international recognition, spend time and energy to prepare an application, get frustrated, maybe even get sick, just so that the person with the ‘better fit’ can have the impression that he is much better than all the rest who applied.

So when are we going to give up the Aristotelian-Scholastic elementary and occult qualities and opt for a different set of more inclusive qualities?


[1] Aristotle probably put it in his Categories, but it got lost.

[2] I am rather unfair with this term, because the occult qualities were making themselves present through certain effects.

[3] The Oxford dictionary indeed defines charade as “an absurd pretence intended to create a pleasant or respectable appearance.”

On being a first-generation student

First off: the following is not to be taken as a tale of woe. I am grateful for whatever life has had on offer for me so far, and I am indebted to my teachers – from primary school to university and beyond – in many ways. But I felt that, given that Martin invited me to do so, I should probably provide some context to my comment on his recent post on meritocracy, in which I claimed that my being a first-generation student has had a “profound influence on how I conceive of academia”. So here goes.

I am a first-generation student from a lower-middle-class family. My grandparents on the maternal side owned and operated a small farm, my grandfather on the paternal side worked in a foundry, and his wife – my father’s mother – did off-the-books work as a cleaning woman in order to make ends meet.

When I got my first job as a lecturer in philosophy, my monthly income already exceeded that of my mother, who has worked a full-time job in a hospital for more than thirty years. My father, a bricklayer by training, is by now divorced from my mother and declutters homes for a living. Sometimes he calls me in order to tell me about a particularly good bargain he struck at the flea market.

My parents did not save money for my education. As an undergraduate I was lucky to receive close to the maximum amount of financial assistance afforded by the German Federal Law on Support in Education (BAföG) – still, I had to work in order to be able to fully support myself (tuition fees, which had just been introduced when I began my studies, did not help). At the worst time, I juggled three jobs on the side. I have work experience as a call center agent (bad), cleaning woman (not as bad), fitness club receptionist (strange), private tutor (okay), and teaching assistant (by far the nicest experience).

Not every middle-class family is the same, of course. Nor is every family in which both parents are non-academics. Here is one way in which the latter differ: There are those parents who encourage – or, sometimes, push – their children to do better than themselves, who emphasize the value of higher education, who make sure their children acquire certain skills that are tied to a particular habitus (like playing the piano), who provide age-appropriate books and art experiences. My parents were not like that. “Doing well”, especially for my father, meant having a secure and “down-to-earth” job, ideally for a lifetime. For a boy, this would have been a craft. Girls, ostensibly being less well-suited for handiwork, should strive for a desk job – or aim “to be provided for”. My father had strong reservations about my going to grammar school, even though I did well in primary school and despite my teacher’s unambiguous recommendation. I think it never occurred to him that I could want to attend university – academia was a world too far removed from his own to even consider that possibility.

I think that my upbringing has shaped – and shapes – my experience of academia in many ways. Some of these I consider good, others I have considered stifling at times. And some might even be loosely related to Martin’s blogpost about meritocracy. Let me mention a few points (much of what follows is not news, and has been put more eloquently by others):

  • Estrangement. An awareness of the ways in which the experiences of my childhood and youth, my interests and preferences, my habits and skills differ from what I consider a prototypical academic philosopher – and I concede that my picture of said prototype might be somewhat exaggerated – has often made me feel “not quite at home” in academia. At the same time, my “professional advancement” has been accompanied by a growing estrangement from my family. This is something that, to my knowledge, many first-generation students testify to, and which can be painful at times. My day-to-day life does not have much in common with my parents’ life, my struggles (Will this or that paper ever get published?) must seem alien, if not ridiculous, to them. They have no clear idea of what it is that I do, other than that it consists of a lot of desk-sitting, reading, and typing. And I think it is hard for them to understand why anyone would even want to do something like this. One thing I am pretty sure of is that academia is, indeed, or in one sense at least, a comparatively cozy bubble. And while I deem it admirable to think of ways of how to engage more with the public, I am often unsure about how much of what we actually do can be made intelligible to “the folk”, or justified in the face of crushing real-world problems.
  • Empathy. One reason why I am grateful for my experiences is that they help me empathize with my students, especially those who seem to be afflicted by some kind of hardship – or so I think. I believe that I am a reasonably good and well-liked teacher, and I think that part of what makes my teaching good is precisely this: empathy. Also, I believe that my experiences are responsible for a heightened sensibility to mechanisms of inclusion and exclusion, and privilege. I know that – being white, having grown up in a relatively secure small town, being blessed with resilience and a certain kind of stubbornness, and so on – I am still very well-off. And I do not want to pretend that I know what it is like to come from real poverty, or how it feels to be a victim of racism or constant harassment. But I hope that I am reasonably open to others’ stories about these kinds of things.
  • Authority. In my family of origin, the prevailing attitude towards intellectuals was a strange mixture of contempt and reverence. Both sentiments were probably due to a sense of disparity: intellectuals seemed to belong to a kind of people quite different from ourselves. This attitude has, I believe, shaped how I perceived my teachers when I was a philosophy student. I noticed that our lecturers invited us – me – to engage with them “on equal terms”, but I could not bring myself to do so. I had a clear sense of hierarchy; to me, my teachers were authorities. I did eventually manage to speak up in class, but I often felt at a loss for words outside of the classroom setting with its relatively fixed and easily discernible rules. I also struggled with finding my voice in class papers, with taking up and defending a certain position. I realize that this struggle is probably not unique to first-generation students, or to students from lower-class or lower-middle-class backgrounds, or to students whose parents are immigrants, et cetera – but I believe that the struggle is often aggravated by backgrounds like these. As teachers, I think, we should pay close attention to the different needs our students might have regarding how we engage with them. It should go without saying, but if someone seems shy or reserved, don’t try to push them into a friendly and casual conversation about the model of femininity and its relation to sexuality in the novel you recently read.
  • Merit. Now, how does all this relate to the idea of meritocracy? I think there is a lot to say about meritocracy, much more than can possibly be covered in a (somewhat rambling) blogpost. But let me try to point out at least one aspect. Martin loosely characterizes the belief in meritocracy as the belief that “if you’re good or trying hard enough, you’ll get where you want”. But what does “being good enough” or “trying hard enough” amount to in the first place? Are two students who write equally good term papers working equally hard? What if one of them has two children to care for while the other one still lives with and is supported by her parents? What if one struggles with depression while the other does not? What if one comes equipped with “cultural capital” and a sense of entitlement, while the other feels undeserving and stupid? I am not sure about how to answer these questions. But one thing that has always bothered me is talk of students being “smart” or “not so smart”. Much has been written about this already. And yet, some people still talk that way. Many of the students I teach struggle with writing scientific prose, many of them struggle with understanding the assigned readings, many of them struggle with the task of “making up their own minds” or “finding their voice”. And while I agree that those who do not struggle, or who do not struggle as much, should, of course, be encouraged and supported – I sometimes think that the struggling students might be the ones who benefit the most from our teaching philosophy, and for whom our dedication and encouragement might really make a much-needed difference. It certainly did so for me.

You don’t get what you deserve. Part I: Meritocracy in education vs employment relations

The belief that we live in a meritocracy is the idea that people get what they deserve. At school you don’t get a good grade because of your skin colour or because you have a nice smile but because you demonstrate the required skills. When I was young, the idea helped me to gain a lot of confidence. Being what is now called a first-generation student, I thought I owed my opportunity to study to a meritocratic society. I had this wonderfully confident belief that, if you’re good or trying hard enough, you’ll get where you want. Today, I think that there is so much wrong with this idea that I don’t really know where to start. Meritocratic beliefs are mostly false and harmful. In the light of our sociological knowledge, still believing that people get what they deserve strikes me as on a par with being a flat earther or a climate change denialist. At the same time, beliefs in meritocratic principles are enormously widespread and deep-rooted, even among those who should and do know better. In what follows, I attempt nothing more than a beginning in looking at this pernicious idea and why it has so much currency.

Perhaps one of the greatest problems of meritocratic ideas is that they create a normative link between possibly unrelated things: There is no intrinsic relation between displaying certain qualities, on the one hand, and getting a job, on the other hand. Of course, they might be related; in fact, displaying certain qualities might be one of the necessary conditions for getting the job. But the justification structure suggested by meritocratic beliefs clearly obscures countless other factors, such as being in the right place at the right time etc. Here are two variants of how this plays out:

  • “I’m not good enough.” – This is a common conclusion drawn by most people. That is, by those who don’t get the job or grant or promotion they have applied for. If there is one job and a hundred applicants, you can guess that a large number of people will think they were not good enough. Of course, that’s nonsense for many reasons. But if the belief is that people get what they deserve, then those not getting anything might conclude that they are undeserving. A recent piece by a lecturer leaving academia, for instance, contends that part of the problem is that one always has to show that one is “better than the rest”, insinuating that people showing just that might get the job in the end. But apart from the fact that the imbalance between available jobs and applicants pushes such demands to absurd heights, the question arises whether any employer could be sufficiently good to be able to recognise the enormously refined qualities of the applicants.
  • “My qualities are not recognised.” –  The more confident applicants among us might thus draw quite another conclusion, namely that they are good enough, but that their qualities are simply not seen. The counterfactual behind this reasoning seems to be the following: Had my prospective employer seen how good I am, she would have hired me. As I see it, both kinds of reasoning are fallacious in that they construe the relation between performance and getting the job / grant etc. too tightly. Of course, most people know that. But this knowledge does not prevent one from going along with the fallacious reasoning. Why is that? Well, my hunch is that meritocratic beliefs are deeply ingrained in our educational system and spill over to other contexts, such as employment relations. Let me explain.

Most education systems hold a simple promise: If you work hard enough, you’ll get a good grade. While this is a problematic belief in itself, it is a feasible idea in principle. The real problem begins with the transition from education to employment relations in academia. If you have a well-performing course, you can give all of your thirty students a high grade. But you can’t give thirty applicants for the same position the job you’ve advertised, even if all the applicants are equally brilliant. Now the problem in higher education is that the transition from educational rewards to employment rewards is often rather subtle. Accordingly, someone not getting a job might draw the same conclusion as someone not getting a good grade.

It is here that we are prone to fallacious reasoning and it is here that especially academic employers need to behave more responsibly: Telling people that “the best candidate” will get the job might too easily come across like telling your first-year students that the best people will get a top grade. But the job market is a zero-sum game, while studying is not. (It might be that there is more than just one best candidate or it might be impossible for the employer to determine who the best candidate is.) So a competition among students is of a completely different kind than a competition between job candidates. But this fact is often obscured. An obvious indicator of this is that for PhD candidates it is often unclear whether they are employees or students. Yet, it strikes me as a category mistake to speak about (not) “deserving” a job in the same way as about deserving a certain grade or diploma. So while, at least in an ideal world, a bad grade is a reflection of the work you’ve done, not getting a job is not a reflection of the work you’ve done. There is no intrinsic relation between the latter two things. Now that doesn’t mean that (the prospect of doing) good work is not a condition for getting a job; it just means that there is no relation of being deserving or undeserving.

Or to put the same point somewhat differently, while not every performance deserves a good grade, everyone deserves a job.

The impotence of hierarchy

Want to know a secret? There is this recurrent fear that many people in leadership positions told me about: “Now that I am in this position, I fear that people around me won’t speak their mind anymore, that they won’t dare criticising me. For all I know, they might think I am a fool but never tell me.” I think the first time it was my PhD supervisor who told me, and he even told me that this was also the worst fear of his supervisor. So there is a chain of fear passed on down the line. If I ask my students to be frank, I could also add that my supervisor … It’s a bit of a sad story, because we know how it goes. Back in the day, I wasn’t very open with my supervisor, and the times I tried, I often regretted it. – These fears are woven into the fabric of our hierarchies. Understandable as they might be, they are dangerous. They can preclude open discussion and correction. Given that I’m spending much of my time in universities, I am struck by how often I encounter this. In what follows, I’d like to look at a few instances and ask whether there are any remedies.

Before walking through some examples, let’s begin by looking at the phenomenon. Power imbalance is often portrayed as a unidirectional state. The boss or supervisor has power; the employees or students dependent on the boss fear the boss. But as I see it, the fear has a reciprocal structure: You are afraid to criticise your boss because he or she might reproach you for doing so. Knowing your fear, the boss is afraid that you will hide your criticisms. This might spiral into a number of refined and uncomfortable assumptions. “I’ll rather tell him something nice about himself.” – “She only said that because she wants to divert attention from her criticism.” – “He doesn’t take me seriously.” – “She doesn’t take me seriously.” Mutual mistrust might follow.* If this kind of setting is large enough, the mistrust might translate into full-blown conspiracy theories. But I think the problem, at root, is not the hierarchy itself. The problem is that we often treat a hierarchical position as a personal rather than an institutional feature. But your boss is not your boss because he or she is a better whatever, but because the design of our institutions requires this function.** In this sense, hierarchy is owing to ways of dividing labour. However, while some contexts might require a hierarchical division of labour, certain processes cannot function in the presence of hierarchy. Collective deliberation, for instance, is not possible if someone in the collective intervenes qua greater power. If my thoughts are taken to carry more weight because I’m a professor rather than a student, then we do not need any discussion. Or do we? Let’s look at some instances then:

  • Deliberation in science. – It’s often noted that the current corona crisis makes our shortcomings obvious. So let’s begin with policy design in the corona crisis. Given the complexity of the problems in this crisis, you would expect that decision-makers listen to a number of voices. But in the Netherlands, for instance, the opposite seems to be true: “There is no discussion … Because there is a crisis, it is not allowed to have a discussion.” These are the words of the microbiologist Alex Friedrich. Rather than following the guidelines of the government, he caused quite some stir by speaking up against the Dutch strategy and partly changed the course of action by demanding more testing in the north. His point is that scientific advice is too hierarchical and focused on too few voices. Instead, it should be run like a “jam session” where everyone speaks up. I guess you don’t have to be a jazz lover to appreciate the fact that you are more likely to hit on crucial facts when you listen to as many people and disciplines as possible. But the example shows that collective deliberation is still obstructed rather than enhanced (see also here).
  • Transitions in the university. – Borrowing a quote from a British colleague, the president of our university recently noted that implementing change in universities was like ‘reorganising a graveyard: “You don’t get much support from the inside”.’ The idea seems to be that changes like the current transitions to online teaching require an “external shock”. While shock might indeed be an occasion for change, I think that the comparison to the graveyard has clear limitations. I doubt that successful transition works without calling on the employees who actually do the implementing. So when we plan the details of this transition, I am sure our success will depend on whether we will listen carefully to the experiences and insights “the inside” has to offer. Indeed, the digital infrastructure that we now rely on increasingly provides great tools to implement change with the necessary transparency and participation of everyone involved. Sometimes this comes with comic relief: At the mercy of advanced technology, hierarchies of seniority are quickly turned upside down.
  • Hierarchy in teaching. – As I have noted earlier, my status as a professor should not enhance the status of what I have to say. And yet we all know that when I enter the lecture hall, the institutional powers put me in a special position, whether we like it or not. The fact that I am grading the very students who might criticise me seems to settle the intellectual hierarchy, too. Can this be evaded? Should it be? As I see it, the hierarchical power of professors over students is limited to the educational tasks they set within the institutional setting they share. I can give you a task and then tell you to what extent you solved it well, whether you drew on all relevant resources etc. But an educational task, to be dealt with in an exam or essay, is different from the complex problems that confront us. Once we go beyond the confinement of exercises, students are fellow philosophers and citizens, where hierarchy should no longer apply. For the reasons noted above, the hierarchy might still be effective. But it is to our detriment if we allow it to happen.

Hierarchy, taken as a personal trait, then, obstructs true deliberation, diversity and learning. In an ideal setting, archangels could openly learn from the devil’s criticism. That said, it’s hard to figure out how we can evade the traps and fears that hierarchies foster. But we should be wary whenever discussion is closed with reference to hierarchical position. It harms all sides, those with more and those with less power. But of course it’s hard to bypass something so deeply ingrained in our system. Yet, if someone politely asks you to shut up and listen, it might be best to go along and listen. In the same vein, those with more power should seek, not shun, advice from everyone. Acquiring knowledge and finding solutions, even if governed by good methods, is an accidental and collective process. You might have no idea what you’re missing. So keep asking around and encourage others. It’s always an institution, not you, that grants the power over people. The more power you exercise over them, the more likely it is that people will refrain from telling you uncomfortable truths.

____

* A perfect illustration is Paul Watzlawick’s “Story of the Hammer”.

** However, one might doubt whether hierarchies really obtain because of a functional division of labour. The economist Stephen Marglin famously argues that “the capitalist organization of work came into existence not because of superior efficiency but in consequence of the rent-seeking activities of the capitalist.” (I wish to thank Daniel Moure for pointing me to the work of Marglin, especially to the seminal paper “What do bosses do?”)

Precarity and Privilege. And why you should join a union, today

Reflecting on the personal impact of the corona crisis, a close friend remarked that things didn’t change all that much, rather they became obvious. I then began to hear variations of that idea repeatedly. If you live in a complicated relationship, that might very well show right now. If you have made difficult decisions, their consequences might be more palpable now. If you live in a precarious situation, you will feel that clearly now. On the other hand, there might be good stuff that you perhaps hardly noticed, but if it’s there, it will carry you now. On social media, I sense a lot of (positive) nostalgia. People remember things, show what mattered then and now. Things become obvious. The crisis works like a magnifying glass.

This effect also shows how well we are prepared. As an adolescent, I used to smile at my parents for storing lots of food cans in their basement. Of course, most of us also laugh at people rushing to hoard toilet paper, but how well prepared are we for what is coming? Perhaps you think that if we’re lacking things and certain habits now, this is owing to individual failures or laziness. But if we experience precariousness, hardly any of that is an individual fault. Habits need collective stabilisation and consolidation to persist. That said, I’m not going to focus on the state of your basement or hygiene measures. Rather, I’m worried about the question of how well we are politically prepared. Many people around me are facing really dire situations. And our political preparation (or lack thereof) leaves us with very few means to address them properly. So what can be done? I’ll begin with some general considerations and try to finish with some practical advice.

If we look around, we see that a lot can be done. Slowing down the economy like that without immediate chaos ensuing is a huge success. But very soon, people will start telling each other that certain things “cannot” be done, because they are “too difficult”, “too expensive” or “against the rules”. While a lot of good is happening, the bargaining and gaslighting have already begun. Being a highly competitive culture, academia has a notorious problem with collective action (role models in the UK who have been on strike for considerable stretches of time notwithstanding). But this crisis requires collective measures, both in terms of hygiene and in terms of politics.

What’s the problem? Precarious employment (not only) in academia has been a growing problem for a long time. As I see it, it jeopardizes not only political but also academic goals, because it leads to an unwelcome dissociation of teaching and research. But at the present moment, this precarity might turn into something much worse. We already see furloughs and dismissals, especially of people on fixed-term contracts, along with flimsy justifications, rolling in on a fairly large scale. At the same time, we witness what we have already seen in the medical sector: we lack transnational policies, and so people are treated very differently depending on where they happen to work and what sort of contract they have. Add to this that many ad hoc measures, such as online teaching, are now used as a pretext to introduce lasting changes that may be detrimental to both employment conditions and educational standards. So precarity and educational standards might worsen to a tipping point at which education becomes largely disposable. Indeed, mass education is of course disposable already, unless you have democratic tendencies.

What can be done? The first thing I find striking is that, while people continuously talk about online teaching and other means of continuing work, hardly anyone addresses the question of precarious employment. Given the current firings and hiring freezes, we know that the job market is going to be brutal. If you are, say, an international postdoc or teaching fellow whose contract runs out after the summer, it will be very difficult to find or even seek employment. While I see people readily exchanging advice on zooming, I’ve seen hardly anything so far on how to address this problem. The exceptions to this rule are labour unions and some employee organisations, some of which are currently collecting data and pushing for measures. (You know of more exceptions? Please spread the news widely!)* Now let me ask you: Are you a member of a union? No? You’re no exception. In the various places I worked during and after my PhD, I have never been encouraged to join a union. It’s almost as if there were no awareness that there is such a thing as the representation of employees’ interests. In fact, I guess it’s worse, and it’s something I’ve noticed not only in academia but also in much of the growing freelance and start-up culture. Going by my own experience, I’d say that people always have been and still are (more or less subtly) discouraged from joining such organisations. So when employees encounter difficulties in their employment, they will typically be portrayed as not being tough enough for the job. You are overworked? Well, if you don’t blame yourself already, you’ll likely be shamed into avoiding publicity. Being overworked is still portrayed as a personal lack of stamina, to be addressed not by collective industrial action but by courses on time management or mindfulness. This way, failing to secure (permanent) employment can still be blamed on the individual rather than on the way higher education is run.

The individualisation of such problems not only affects people’s (mental) health; it also keeps people from engaging in collective action. In turn, this means that unions and similar organisations will remain weak because they can easily be portrayed as not representing anyone. If people keep blaming themselves, the unions don’t have a case for building an argument in favour of better employment conditions. I see this as one of the main reasons why we are politically ill prepared for addressing the economic problems in this crisis. So what should we do now?

Trying to collect ideas, I have written to a number of friends and colleagues who kindly provided me with suggestions. Let me briefly summarise some crucial points:

  • Generally, permanent or tenured people should take it upon themselves to make a move. We should be aware that people on fixed-term contracts are vulnerable and should not be expected to lobby for their interests alone.
  • Try to see who is in or is likely to get into trouble and talk about the situation. Bring it up with your colleagues and managers whenever the opportunity arises. If you have department meetings or exchanges with funding agencies such as the ERC, ask what can be or is done to ameliorate the situation.
  • Join a union and encourage others to do so, too. In the Netherlands, the unions are taking it upon themselves to make a case for employees in precarious positions.
  • As I see it, it would be good for universities to invest in staff rather than reduce numbers. Wherever possible, contracts should be extended, as is in fact done by various funding bodies.
  • If there are no financial resources for staff, measures should be taken to reallocate budgets, especially travel and overhead funding, towards the extension of contracts or new hires.
  • Universities in Austria and Switzerland have created hardship funds for employees facing precarious situations. This should be done proactively, as people in vulnerable positions might feel discouraged from coming forward.

These are just some ideas. I’d be grateful to hear more. But to my mind, the most important point is that we need to pursue such steps in a collective effort. Right now, these steps should be taken because we are in an emergency. Ensuring stability is what is required for providing a safe working environment.

Ultimately, taking measures of solidarity is also about helping academia to survive beyond this crisis. Whenever recession hits, education is often considered disposable. If we were to allow for the reduction of staff without resistance, it would just signal that academia could do with even fewer people and resources. Dictatorships would certainly welcome this. The way we treat our colleagues and students will contribute to determining the political system that we’ll find ourselves in after the crisis.

____

* Of course, there have been initiatives addressing the adjunct crisis. But I haven’t noticed that precarity has been an issue of great public concern in this crisis, even less so among tenured academics, as a recent piece by Emma Pettit notes:

“While tenured professors have typically stood by silently as their nontenured colleagues advocated for themselves on the national stage, they have watched their own kind dwindle. Positions are remaining unfilled. Tenure lines are getting pruned. There’s still the same service work to do, but fewer people to do it, and those who remain shoulder the burden.

And today, as a global pandemic has devastated budgets and led college leaders to freeze hiring and furlough even tenured professors, the cause seems especially urgent.

The structural changes that preceded the pandemic helped set the stage for those austerity measures, and manufactured a growing — if uneven, slow, some would say glacial — recognition among the tenured that relying on contingent labor hurts everyone, activists and higher-education researchers say. …

How much tenured professors have cared, historically, about their contingent colleagues, is difficult to measure. Everyone knows the caricature: the older, typically white, typically male full professor whose non-tenure-track colleagues escape his vision, who still believes merit rises to the top and those who fail to land tenure-track jobs lack work ethic, intelligence, or both. …

Even if tenured professors might not pay attention to the adjuncts who walked their hallways, they couldn’t help but notice the fates of their graduate students, who were being sent into a bottlenecked academic-jobs market to compete for slimmer pickings. They started to connect the dots.”

Two kinds of philosophy? A response to the “ex philosopher”

Arguably, there are at least two different kinds of philosophy: The first kind is what one might call a spiritual practice, building on exercises or forms of artistic expression and aiming at understanding oneself and others. The second kind is what one might call a theoretical endeavour, building on concepts and arguments and aiming at explaining the world. The first kind is often associated with traditions of mysticism, meditation and therapy; the second is related to theory-building, the formation of schools (scholasticism) and disciplines in the sciences (and humanities). If you open any of the so-called classics, you’ll find representations of both forms. Descartes’ Meditations offer you meditative exercises that you can try at home alongside a battery of arguments engaging with rival theories. Wittgenstein’s Tractatus closes with the mystical and the advice to shut up about the things that matter most, after opening with an account of how language relates to the world. However, while both kinds are present in many philosophical works, only the second kind gets recognition in professional academic philosophy. In what follows, I’d like to suggest that this lopsided focus might undermine our discipline.

Although I think that these kinds of philosophy are ultimately intertwined, I’d like to begin by trying to make the difference more palpable. Let’s start with a contentious claim: I think that most people are drawn into philosophy by the first kind, that is, by the desire to understand themselves, while academic philosophy trains people in the second kind, that is, in handling respectable theories. People enter philosophy with a first-person perspective and leave or become academics through mastering the third-person perspective. By the way, this is why most first-year students embrace subjectivism of all kinds and lecturers regularly profess to be “puzzled” by this. Such situations thrive on misunderstandings: for the most part, students don’t mean to endorse subjectivism as a theory; they simply and rightly think that perspective matters.* Now, this is perhaps all very obvious. But I do think that this transition from the one kind to the other could be made more transparent. The problem I see is not the transition itself, but the dismissal of the first kind of philosophy. As I noted earlier, the two kinds of philosophy require one another. We shouldn’t rip the Tractatus apart to exclude either the mysticism or the theory. Whether you are engaging in the first or the second kind is more a matter of emphasis. However, interests in gatekeeping and unfounded convictions about what is and what isn’t philosophy often entail practices of exclusion, with pernicious effects.

Such sentiments were stirred when I read the confessions of an ex philosopher that are currently making the rounds on social media. The piece struck many chords, quite different ones. I thought it was courageous and truthful as well as heart-breaking and enraging. Some have noted that the piece is perhaps more the complacent rant of someone who was never interested in philosophy and fellow philosophers to begin with. Others saw its value in highlighting what might be called a “phenomenology of failure” (as Dirk Koppelberg put it). These takes are not mutually exclusive. It’s not clear to me whether the author had the distinction between the two kinds of philosophy in mind, but the piece surely invokes something along these lines:

“Philosophy has always been a very personal affair. Well, not always. When it stopped being a personal affair, it also stopped being enjoyable. It became a performance.

… Somewhat paradoxically, academia made me dumber, by ripening an intellectual passion I loved to engage with into a rotten performance act I had to dread, and that I hurried to wash out of my mind (impossible ambition) when clocking out. Until the clocking out became the norm. Now I honestly do not have insightful opinions about anything — not rarefied philosophical problems nor products nor popular culture nor current events.”

What the author describes is not merely the transition from one approach to another; it is transition plus denial. It’s the result of the professional academic telling off the first-year student for being over-enthusiastically committed to “subjectivism”. While we can sometimes observe this happening in the lecture hall, most of this denial happens within the same person: the supposed adult telling off themselves, that is, the playful child within. No doubt, sometimes such a transition is necessary and called for. But the denial can easily kill the initial motivation. – That said, the author also writes that he has “never enjoyed doing philosophy.” It is at this point (and other similar ones) that I am torn between different readings, but according to the reading I am now proposing, the “philosophy” he is talking about is a widespread type of academic philosophy.** What he is saying, then, is that he never had an interest in a kind of philosophy that would deny the initial enthusiasm and turn it into a mere performance.

Now you might say that this is just the course of a (professionalised) life. But I doubt that we should go along with this dismissal too readily. Let me highlight two problems, unfounded gatekeeping and impoverished practices:

  • The gatekeeping has its most recognisable expression in the petulant question “Is this philosophy?” Of course, it depends on who is asking, but the fact that most texts from the mystic tradition or many decidedly literary expressions of philosophy are just ignored bears witness to the ubiquitous exclusion of certain philosophers. It certainly hit Hildegard of Bingen, parts of Nietzsche and bits of Wittgenstein. But if an exaggerated remark is in order, soon anything that doesn’t follow the current style of paper writing will be considered more or less “weird”. In this regard, the recent attempts at “diversifying the canon” often strike me as enraging. Why do we need to make a special case for re-introducing work that is perfectly fine? In any case, the upshot of dismissing the first kind of philosophy is that a lot of philosophy gets excluded, for unconvincing reasons.
  • You might think that such dismissal only concerns certain kinds of content or style. But in addition to excluding certain traditions of philosophy, there is a subtler sort of dismissal at work: As I see it, the denial of philosophy as a (spiritual) practice or a form of life (as Pierre Hadot put it) pushes personal involvement to the fringes. Arguably, this affects all kinds of philosophy. Let me give an example: Scepticism can be seen as a kind of method that allows us to question knowledge claims and eventually advances our knowledge. But it can also be seen as a personal mental state that affects our decisions. As I see it, the methodological approach is strongly continuous with, if not rooted in, the mental state. Of course, sometimes it is important to decouple the two, but a complete dismissal of the personal involvement cuts the method off from its various motivations. Arguably, the dismissal of philosophy as a spiritual (and also political) practice creates a fiction of philosophy. This fiction might be continuous with academic rankings and pseudo-meritocratic beliefs, but it is dissociated from the involvement that motivates all kinds of philosophical exchange.

In view of these problems, I think it is vital to keep a balance between what I called two kinds but what is ultimately one encompassing practice. Otherwise we undermine what motivates people to philosophise in the first place.

____

* Liam Bright has a great post discussing the often lame counterarguments to subjectivism, making the point I want to make in a different way, namely that the view is more substantial than it is commonly given credit for: “The objection [to subjectivism] imagines a kind of God’s-eye-perspective on truth and launches their attack from there, but the kind of person who is attracted to subjectivism (or for that matter relativism) is almost certainly the kind of person who is suspicious of the idea of such a God’s eye perspective. Seen from within, these objections simply lose their force, they don’t take seriously what the subjectivist is trying to do or say as a philosopher of truth.”

Eric Schliesser provides a brief discussion of Liam’s post, hitting the nail on the following head: “Liam’s post (which echoes the loveliest parts of Carnap’s program with a surprisingly Husserlian/Levinasian sensibility) opens the door to a much more humanistic understanding of philosophy. The very point of the enterprise would be to facilitate mutual understanding. From the philosophical analyst’s perspective the point of analysis or conceptual engineering, then, is not getting the concepts right (or to design them for ameliorative and feasible political programs), but to find ways to understand, or enter into, one’s interlocutor life world.”

** Relatedly, Ian James Kidd distinguishes between philosophy and the performative craft of academic philosophy in his post on “Being good at being good at philosophy”.