Meditation in philosophy. A conversation with Andrea Sangiacomo (podcast)

This is the fourth installment of my still fairly new series Philosophical Chats. In this episode, I have a conversation with Andrea Sangiacomo, who is an associate professor of philosophy at Groningen University. In this conversation, we focus on meditation both as part of philosophical traditions and as a practice that might enrich (academic) philosophy, teaching and academic culture. While Cartesian and Buddhist ideas* form a continuous resource in the background of our discussion, here is a list of themes in case you are looking for something specific:

  • Introduction   0:00
  • Meditation and Descartes’ Meditations   2:20
  • The notion of experience – and objections against experience as a basis in philosophy   9:00
  • Meditation in teaching   21:14
  • Why aren’t we already using these insights in education?   37:00
  • How can we teach and learn effectively?   44:36
  • How can we guide and assess?   52:50
  • Where is this approach leading, also in terms of academic culture?   1:03:00

______

* The opening quotation is from Andrea’s blogpost What can we learn today from Descartes’ Meditations? Here is the passage: “Since last year, I appreciated the text of the Meditations as real meditation, namely, as a way of practicing a meditative kind of philosophy (for lack of a better term), a philosophy more concerned with what it means to experience reality in this way or that way, rather than with what a certain set of propositions means.”

He has published four more posts on this topic on the blog of the Centre for Medieval and Early Modern Thought.

ADHD, struggling with decisions, and the myth of autonomy in academia. A conversation about mental health with Jef Delvaux (podcast)

This is the third installment of my still fairly new series Philosophical Chats. In this episode, I have a conversation with Jef Delvaux, who is in the third year of his PhD programme in Philosophy at York University in Toronto. Although we had a number of themes lined up, we ended up focusing on what is called Attention Deficit Hyperactivity Disorder (ADHD), which, despite increasing attention to mental health in academia, still seems to fly somewhat under the radar. Jef addresses this issue not as a specialist, but from the perspective of someone affected by it. The aim is to provide an understanding of the condition and of how it can be addressed (and perhaps ameliorated) in academic settings. One thing we discuss in particular is the difficulty of deliberating and making decisions. It’s a long conversation, so if you feel like skipping bits or want to focus on a specific topic, here is a rough overview:

  • Introduction   0:00
  • Mental health and ADHD   2:00
  • Belittling ADHD   4:00
  • What is it like to live with ADHD?   7:20
  • Teaching students with ADHD: buddy systems* and autonomy   12:20      
  • Decision paralysis with and without ADHD: what is the difference?   22:15
  • ADHD during the pandemic   1:02:00
  • “What if I could talk to my undergraduate self?”   1:08:00

______

* Regarding study buddy systems, I (Martin) state that Groningen has them for writing theses. But it turns out that we also offer them for BA and MA students generally.

On being a first-gen student, hierarchies and harassment. A conversation about meritocratic ideology with Nora Migdad (podcast)

This is the second installment of my still fairly new series Philosophical Chats. In this episode, I have a conversation with Nora Migdad, who majors in Biology and minors in Philosophy. Like me (though in my case a long time ago), Nora is a first-generation student. While being a first-gen student is often (rightly) treated as a source of disadvantages, it also offers intriguing perspectives on the peculiarities of academic life.

Following up on a guest post about being a first-gen student, Nora eventually initiated a conversation about this topic. After some exchanges about possible questions to address, we finally found time for the virtual meeting recorded above. Among the issues we covered are:

  • being a first-gen student 0:00
  • work-pressure and hierarchies 11:17
  • hierarchies, misconduct and prestige 12:32
  • protecting harassers 15:00
  • dealing with harassment outside and inside academia 22:40
  • criticism within hierarchies in academia 31:52
  • depending on others 34:50
  • ideas for improvement 38:06
  • dealing with sexism and racism 41:55

Is criticism of mismanagement and misconduct taken as snitching? How academia maintains the status quo

Recently, I became interested (again) in the way our upbringing affects our values. Considering how groups, especially in academia, often manage to suppress criticism of misconduct, I began to wonder which values we associate with criticism more generally. First, I noticed a strange ambivalence. Just think of the ambivalent portrayal of whistleblowers like Edward Snowden! The ambivalence is captured in values like loyalty, which mostly pertain to a group and are not taken to be universal. Then it hit me. Yes, truth-telling is nice. But in-groups ostracise you as a snitch, a rat or a tattletale! Denouncing “virtue signalling” or “cancel culture” seems to be on a par with this verdict. So while criticism of mismanagement or misconduct is often invited as an opportunity for improvement, it is mostly received as a cause of reputational damage.

I have now written up a small piece for Zoon Politikon.* In this blog post, I just want to share what I take to be the main idea. (In the meantime, it has been published here.)

The ambivalence of criticism in academia seems to be rooted in an ongoing tension between academic and managerial hierarchies. While they are intertwined, they rest on very different lines of justification. If I happen to be your department chair, this authority weighs nothing in the setting of, say, an academic conference. Such hierarchies might be justifiable in principle. But while the goals of academic work, and thus academic hierarchies, are to some degree in the control of the agents involved, managerial hierarchies cannot be justified in the same way. A helpful illustration is the way qualitative and quantitative assessments of our work come apart: a single paper might take years of research and end up being a game-changer in its field of specialisation, but if it happens to be the only paper published in the course of three years, it won’t count as sufficient output. So while my senior colleague might have great respect for my work as an academic, she might find herself confronted with incentives to admonish or perhaps even fire me.

What does this mean for the status of criticism? The twofold nature of hierarchies leaves us with two entirely disparate justifications of criticism. But these disparate lines of justification are themselves a constant reason for criticism. The fact that a field-changing paper and a mediocre report both amount to one single line in a CV bears testimony to this. But here’s the thing: we seemingly delegitimise such criticism by tolerating and ultimately accepting the imperfect status quo. Of course, most academics are aware of the tension: the quantification of our work is an almost constant reason for shared grievance. But as employees we often enough find ourselves buying into it as a “necessary evil”. Now, if we accept it as a necessary evil, do we not give up our right to criticise it? Of course not, and the situation is a lot more dynamic than I can capture here. To understand how “buying into” an imperfect situation (a necessary evil) might seemingly delegitimise criticism, it is crucial to pause and briefly zoom in on the shared grievance I just mentioned.

Let me begin by summarising the main idea: The shared grievance constitutes our status quo and, in turn, provides social cohesion among academics. Criticism will turn out to be a disturbance of that social cohesion. Thus, critics of the status quo will likely be ostracised as “telling on” us.

One might portray the fact that we live with both an academic and a managerial hierarchy simply as unjust. One hierarchy is justified, the other isn’t (isn’t really, that is). Perhaps, in a perfect world, the two hierarchies would coincide. But in fact we accept that, with academia being part of the capitalist world at large, they will never coincide. This means that both hierarchies can be justified: one as rooted in academic acclaim; the other as a necessary evil of organising work. If this is correct, and if we accept that the world is never perfect, we will find ourselves in an ongoing oscillation and vacillation. We oscillate between the two hierarchies. And we vacillate between criticising and accepting the imperfection of this situation. This vacillation is, I submit, what makes criticism truly ambivalent. On the one hand, we can see our work relations from the different perspectives; on the other hand, we have no clear means to decide which side is truly justified. The result of this vacillation is thus not some sort of solution but a shared grievance: a grievance acknowledging both the injustices and the persisting imperfection. There are two crucial factors in this: the fact that we accept the imperfect situation to some degree; and the fact that this acceptance is a collective status – it is our status quo. Now, I alone could not accept ongoing injustices in that status quo if my colleagues were to continuously rebel against it. Thus, one might assume that, in sharing such an acceptance, we share a form of grievance about the remaining vacillation.

It is of course difficult to pin down such a phenomenon, as it obtains mostly tacitly. But we might notice it in our daily interactions when we mutually accept that we see a tension, for instance, between the qualitative and quantitative assessment of our work. This shared acceptance, then, gives us some social cohesion. We form a group that is tied together neither by purely academic nor by purely managerial hierarchies and relations. There might be a growing sense of complicity in dynamic structures that are and aren’t justified but continue to obtain. So what forms social cohesion among academics is not merely formal appraisal or informal friendship. Rather, a further crucial factor is the shared acceptance of the imperfection of the status quo. This acceptance is crucial in that it acknowledges the vacillation and informs what one might call the “morale” of the group.

If this is correct, academics do indeed form a kind of group through the acceptance of commonly perceived imperfections. Now if we form such a group, criticism will be seen as justified but also as threatening the shared acceptance. We know that a critic of quantitative work measures is justified. But we also feel that we gave in and accepted this imperfection a while ago. The critic seemingly breaks with this tacit consent and will be seen as someone snitching or “telling on us”. As I see it, it is this departure from an in-group consensus that makes criticism appear as snitching. And while revealing a truth about the group might count as virtuous, it makes the critic seemingly depart from the in-group. Of course, companies and universities also enjoy some legal protection. Even if you find out about something blameworthy, you might be bound by rules about confidentiality. This is why whistleblowers, too, have an ambivalent reputation. But I guess that the legal component alone does not account for the force of the in-group mentality at work in suppressing criticism.

This mode of suppressing criticism has pernicious effects. The intertwined academic and managerial hierarchies often come with inverse perceptions of criticism: your professorial colleague might be happy to learn from your objections, while your department chair might shun your criticism and even retaliate against you. Yet they might be the same person. Considering the ubiquitous histories of suppressing critics of sexism, racism and other kinds of misconduct, we do not need to look far to find evidence of ostracism or retaliation against critics. I think it is hard to explain this level of complicity with wrongdoers merely by referring to bad intentions, on the one hand, or to formal agreements such as confidentiality, on the other. Rather, I think, it is worthwhile to consider the deep-rooted in-group consensus that renders criticism as snitching. One reason is that snitching counts, at least in a good number of cultures, as a bad action. While this might be explained by concerns about social cohesion, it certainly remains a morally dubious verdict, given that snitching is truth-conducive and should thus be aligned with values such as transparency. Going by personal anecdotes, however, I have witnessed snitching being condemned even by school teachers, who often seemed to worry about social cohesion no less than about truthfulness. In other words, we don’t seem to like the truth being told when it threatens our status quo.

In sum, we see that the ambivalent status of criticism is rooted in a twofold hierarchy that, in turn, comes with disparate sets of values. The shared acceptance of these disparate sets as an unavoidable imperfection binds together an in-group that will sanction explicit criticism of this imperfection as a deviation from the consensus. The current charges against so-called “virtue signalling”, “call-out culture” or “cancel culture” on social media strike me as instances of such sanctions. If we ask what makes the inclination to sanction in-group norm violations so strong, it seems helpful to consider the deep-rooted code against snitching. While the moral status of sanctioning snitching is certainly questionable, it can shed light on the pervasive motivation behind, and strikingly ready acceptance of, such behaviour.

______

* Following a discussion of a blog post on silence in academia, Izabela Wagner kindly invited me to contribute to a special issue in Zoon Politikon. I am enormously grateful to her for the exchanges and for providing this opportunity. Moreover, I have benefitted greatly from advice by Lisa Herzog, Pietro Ingallina, Mariya Ivancheva, Christopher Quinatana, Rineke Verbrugge, and Justin Weinberg.

On self-censorship

For a few years during the 80s, Modern Talking was one of the best-known pop bands in Germany. But although their first single “You’re my heart, you’re my soul” sold over eight million copies, no one admitted to having bought it. Luckily, my dislike of their music was authentic, so I never had to suffer that particular embarrassment. Yet imagine all these people alone in their homes, listening to their favourite tune but never daring to acknowledge it openly. Enjoying kitsch of any sort brings the whole drama of self-censorship to the fore. You might be moved deeply, but the loss of face is more unbearable than remaining in hiding. What’s going on here? Depending on what precisely is at stake, people feel very differently about this phenomenon. Some will say that self-censorship just maintains an acceptable level of decency or tact; others will say that it reflects political oppression or, ahem, correctness. At some point, however, you might let go of all shame. Perhaps you’ve got tenure and start blogging or something like that … While some people think it’s a feature of the current “cancel culture”, left or right, I think it’s more important to see the different kinds of reasons behind self-censorship. In some cases, there really is oppression at work; in other cases, it’s peer pressure. Neither is fun. In any case, it’s in the nature of this phenomenon that it is hard to track in a methodologically sound way. So rather than draw a general conclusion, it might be better to go through some very different stories.

Bad thoughts. – Do you remember how you, as a child, entertained the idea that your thoughts might have horrible consequences? My memory is faint, but I still remember assuming that thinking of swear words might cause my parents to have an accident. So I felt guilty for thinking these words, and tried to break the curse by uttering them to my parents. But somehow I failed to convince them of the actual function of my utterance, and so they thought I was just calling them names. Today, I know that this is something that happens to occur in children, sometimes even in a pathologically strong form, known as “intrusive thoughts” within an “obsessive-compulsive disorder”. Whatever the psychological assessment, my experience was that of “forbidden” thoughts and, simultaneously, of the inability to explain myself properly. Luckily, it didn’t haunt me, but I can imagine it becoming problematic.

One emergence of the free speech debate. – When I was between 7 and 10 years old (thus in the 1970s), I sometimes visited a lonely elderly woman. She was an acquaintance of my mother, well into her 70s and happy to receive some help. When no one else was around, she often explained her political views to me. She was a great admirer of Franz Josef Strauß, whom she described to me as a “small Hitler – something that Germany really needs again”. She hastened to explain that, of course, the real Hitler would be too much, but a “small” one would be quite alright. She then praised how, back in the day, women could still go for walks after dark, etc. Listening to other people of that generation, I got the impression that many people in Germany shared these ideas. In 2007, the news presenter Eva Herman explicitly praised the family values of Nazi Germany and was dismissed from her position. The current rise of fascism in Germany strikes me as continuous with the sentiments I found around me early on. And if I’m not mistaken, these sentiments date back at least to the 1930s and 1940s. In my experience, Nazism was never just an abstract political view. Early on, I realised that otherwise seemingly “decent” people could be taken in by it. But this concrete personal dimension made the sweaty and simplistic attitude towards other people all the more repulsive. In any case, I personally found that people in the vicinity of that ideology are the most vocal in portraying themselves as “victims” of censorship, though they are certainly not censoring themselves. (When it comes to questions of free speech, I am always surprised that whistleblowers such as Snowden are not mentioned.)

Peer pressure and classism. – I recently hosted a guest post on being a first-generation student that really made me want to write about this issue myself. But often, when I think about this topic, I still feel uncomfortable writing about it. In some ways, it’s all quite undramatic, in that the transition to academia was made very easy by my friends. For what shouldn’t be forgotten is that it’s not only your parents and teachers who educate you. In my case at least, I tacitly picked up many of the relevant habits from my friends and glided into being a new persona. Although I hold no personal grudges, I know that “clothes make people” (or “the man”, as the title of Gottfried Keller’s story is sometimes translated). What I noticed most is that people from other backgrounds often have a different kind of confidence around academics. Whether that is an advantage across the board, I don’t know. What I do know is that I took great care to keep my own background hidden from most colleagues, at least before getting a tenured job.

Opportunism and tenure. – Personally, I believe that I wouldn’t dare to publish this very post, or indeed any of my posts, had I not obtained a tenured position. In saying this, I don’t want to impart advice. All I want to say is that getting this kind of job is what personally freed me to speak openly about certain things. But the existential weight of this fact makes me think that the greatest problem with self-censorship lies in the different socio-economic situations that people find themselves in. This is just my experience, but perhaps it’s worth sharing. So what is it about, you might wonder? There is no particular truth that I would not have told before but would tell now. It’s not a matter of any particular opinion, be it left or right. Rather, it affects just about everything I say. The fact that I feel free to talk about my tastes, about the kitsch I adore, about the music I dislike, about the artworks I find dull, alongside the political inclinations I have – talking about all of this openly, not just politics, is affected by the fact that I cannot be fired just so and that I do not have to impress anyone I don’t want to impress. It is this freedom that, I think, not only allows us to speak but also requires us to speak up when others will remain silent out of fear.

The myth of authenticity. – The fact that many of us feel they have to withhold something creates the idea that there might be a vast amount of unspoken truths under the surface. “Yes”, you might be inclined to ask, “but what do you really think?” This reminds me of the assumption that, in our hearts, we speak a private language that we cannot make intelligible to others. Or of the questions immigrants get to hear when people inquire where they really come from. It doesn’t really make sense. While it is likely that many people do not say what they would say if their situation were different, I don’t think it’s right to construe this as a situation of hidden truths or lies. (Some people construe the fact that we might conceal our opinions as lying. But I doubt that’s a pertinent description.) For better or worse, the world we live in is all we have when it comes to questions of authenticity. If you choose to remain silent, there is no hidden truth left unspoken. It just is what it is: you’re not speaking up, and you might be in agony about that. You might conceal what you think. But then it is the concealing that shapes the world and yourself, not the stuff left unspoken. Put differently, there are no truths, no hidden selves, authentic or not, that persist without some relation to interlocutors.

***

Speaking of which, I want to finish this post with a word of thanks. It is now two years since I started this blog. By now, I have written 118 posts. If I include the guest posts, it adds up to 131. Besides having the pleasure of hosting great guest authors, I feel enormously privileged to write for you openly. On the one hand, this is enabled by the relatively comfortable situation that I am in. On the other hand, none of this would add up to anything if it weren’t for you, dear interlocutors.

“We don’t need no …” On linguistic inequality

Deviations from so-called standard forms of language (such as the double negative) make you stand out immediately. Try using double negatives consistently in your university courses or at your next job interview and see how people react. Even if people won’t correct you explicitly, many will do so tacitly. Such features of language function as social markers and evoke pertinent gut reactions. Arguably, this is true not only of grammatical or lexical features, but also of broader stylistic features in writing, speech and even non-linguistic conduct. Some ways of phrasing may sound like heavy boots. Depending on our upbringing, we are familiar with quite different linguistic features. While none of this might be news, it raises crucial questions about teaching that I rarely see addressed. How do we respond to linguistic and stylistic diversity? When we say that certain students “are struggling”, we often mean that they deviate from our stylistic expectations. A common reaction is to impart techniques that help them conform to such expectations. But should we perhaps respond instead by trying to understand the “deviant” style?

Reading the double negative “We don’t need no …”, you might see quite different things: (1) a grammatically incorrect phrase in English; (2) a grammatically correct phrase in English; (3) part of a famous song by Pink Floyd. Assuming that many of us recognise all three, some will hasten to add that (2) contradicts (1). A seemingly obvious way to resolve this is to say that reading (1) applies to what is called the standard dialect of English (British English), while (2) applies to some dialects of English (e.g. African-American Vernacular English). This solution prioritises one standard over other “deviant” forms, which are deemed incorrect or informal, etc. It is obvious that this hierarchy goes hand in hand with social tensions. At German schools and universities, for instance, you can find numerous students and lecturers who hide their dialects or accents. In linguistics, the disadvantages of regional dialect speakers have long been acknowledged. And even though the prescriptive approach has long been challenged, it still drives much of the implicit culture in education.

But the distinction between standard and deviant forms of language ignores the fact that the latter often come with long-standing rules of their own. Adjusting to the style of your teacher might then require you to deviate from the language of your parents. Thus another solution is to say that there are different English languages. Accordingly, we can acknowledge reading (2) and call African-American Vernacular English (AAVE) a language. Its precise status and genealogy are a matter of linguistic controversy. However, the social and political repercussions of this solution come most clearly into view when we consider the public debate about teaching what is called “Ebonics” at school in the 90s (here is a very instructive video about this debate). If we acknowledge reading (2), it means, mutatis mutandis, that many English speakers raised with AAVE can be considered bilingual. Educators realised that teaching standard forms of English can be aided greatly by using AAVE as the language of instruction. Yet trying to implement this as a policy at school soon resulted in a debate about a “political correctness exemplar gone out of control” and about abandoning the “language of Shakespeare”. The bottom line is: non-hierarchical acknowledgement of different standards quickly spirals into defences of the supposed status quo by the dominant social group.

Supposed standards and deviations readily extend to styles of writing and conduct in academic philosophy. We all have a rough idea of what a typical lecture looks like, how a discussion goes, and how a paper should be structured. Accordingly, attempts at diversification are met with suspicion. Will they be as good as our standards? Won’t they undermine the clarity we have achieved in our styles of reasoning? A more traditional division is that between so-called analytic and continental philosophy. Given the social gut reactions to diversifying linguistic standards, it might not come as a surprise that we find equal responses among philosophers: shortly before the University of Cambridge awarded an honorary degree to Derrida in 1992, a group of philosophers published an open letter protesting that “Derrida’s work does not meet accepted standards of clarity and rigour.” (Eric Schliesser has a succinct analysis of the letter.) Rather than acknowledging that there might be various standards emerging from different traditions, the supposedly dominant standard of clarity is often defended like an eternal Platonic idea.

While it is easy to see and criticise this, it is much more difficult to find a way of dealing with it in the messy real world. My historically minded self has had, and still has, the luxury of engaging with a variety of styles without having to pass judgment, at least not explicitly. More importantly, when teaching students I have to strike a balance between acknowledging variety and preparing them for situations in which such acknowledgement won’t be welcome. In other words, I try to teach “the standard” while trying to show its limits within an array of alternatives. My goal in teaching, then, would not be to drive out “deviant” stylistic features, but to point to the various resources required in different contexts. History (of philosophy) clearly helps with that. But the real resources are provided by the students themselves. Ultimately, I would hope not to teach them how to write, but how to find their own voices within their various backgrounds and to learn to gear them towards different purposes.

But to do so, I have to learn, to some degree, the idioms of my students and try to understand the deep structure of their ways of expression. Not as superior, not as inferior, but as resourceful within contexts yet unknown to me. On the other hand, I cannot but also lay open my own reactions and those of the traditions I am part of. – Returning to the fact that language comes with social markers, perhaps one of the most important aspects of teaching is to convey a variety of means to understand and express oneself through language. Our gut reactions run very deep, and what are perceived as linguistic ‘shortcomings’ will move people, one way or another. But there is a double truth: although we often cannot but go along with our standards, they will very soon be out of date. New standards and styles will emerge. And we, or I should say “I”, will just sound old-fashioned at best. Memento mori.

You don’t get what you deserve. Part II: diversity versus meritocracy?

“I’m all for diversity. That said, I don’t want to lower the bar.” – If you have been part of a hiring committee, you will probably have heard some version of that phrase. The first sentence expresses a commitment to diversity. The second sentence qualifies it: diversity shouldn’t get in the way of merit. Interestingly, the same phrase can be heard in opposing ways. A staunch defender of meritocracy will find the second sentence (about not lowering the bar) disingenuous. He will argue that, if you’re committed to diversity, you might be disinclined to hire the “best candidate”. By contrast, a defender of diversity will find the first sentence disingenuous. If you’re going in for meritocratic principles, you will just follow your biases and ultimately take the properties of “white” and “male” as a proxy for merit. – This kind of discussion often runs into a stalemate. As I see it, the problem is to treat diversity and meritocracy as an opposition. I will suggest that the discussion can be more fruitful if we see that diversity is not a property of job candidates but of teams, and thus not something to be set against meritocratic principles.

Let’s begin with a clarification. I assume that it is false and harmful to believe that we live in a meritocracy. But that doesn’t mean that meritocratic ideas themselves are bad. If meritocracy is simply taken as the idea that people get jobs based on their pertinent qualifications, then I am all for meritocratic principles. However, a great problem in applying such principles is that, arguably, the structure of hiring processes makes it difficult to discern qualifications. Why? Because qualifications are often taken to be indicated by other factors, such as prestige. But prestige, in turn, might be said to correlate with race, gender, class or whatever, rather than with qualifications. At the end of the day, an adherent of diversity can accuse adherents of meritocracy of the same vices that she finds herself accused of. So when merit and diversity are taken as being in opposition, we tend to end up in the following tangle:

  • Adherents of diversity think that meritocracy is ultimately non-meritocratic, racist, sexist, classist etc.
  • Adherents of meritocracy think that diversity is non-meritocratic, racist, sexist, classist etc.*

What can we do in such a stalemate? How can the discussion be decided? Something that typically gets pointed out is homogeneity. The adherent of diversity will point to the homogeneity of people. Most departments in my profession, for instance, are populated by white men. This homogeneity points to a lack of diversity. Whether it corresponds to a homogeneity of merit is certainly questionable. Therefore, the next step in the discussion is typically an epistemological one: how can we know whether the candidates are qualified? More importantly, can we discern quality independently of features such as race, gender or class? – In this situation, adherents of diversity typically refer to studies that reveal implicit biases. Identical CVs, for instance, have been shown to be treated more or less favourably depending on the name on the CV. Adherents of meritocracy, by contrast, will typically insist that they can discern quality objectively or correct for biases. Again, both sides seem to have a point. We might be subject to biases, but if we leave decisions not to individuals but to, say, committees, then we can perhaps correct for biases. At least if these committees are sufficiently diverse, one might add. – However, I think the stalemate will simply be passed on indefinitely to different levels as long as we treat merit and diversity as an opposition. So how can we move forward?

We try to correct for biases, for instance, by making a committee diverse. While this is a helpful step, it also reveals a crucial feature of diversity that is typically ignored in such discussions. Diversity is a feature of a team or group, not of an individual. The merit or qualification of a candidate is something pertaining to that candidate. If we look for a Latinist, for instance, knowledge of Latin will be a meritorious qualification. Diversity, by contrast, is not a feature to be found in the candidate. Rather, it is a feature of the group that the candidate will be part of. Adding a woman to an all-male team will make the team more diverse, but that is not a feature of the candidate. Therefore, accusing adherents of diversity of sexism or racism is fallacious. Trying to build a more diverse team, rather than favouring one category, strikes me as a means to counter such phenomena.

Now if we accept that there is such a thing as qualification (or merit), it makes sense to say that in choosing a candidate for a job we will take qualifications into account as a necessary condition. But one rarely merely hires a candidate; one builds a team, and thus further considerations apply. One might end up with a number of highly qualified candidates. But then one has to consider further questions, such as what kind of team one is trying to build, and then it seems apt to consider the composition of that team. But that does not mean that merit and diversity are opposed to one another.

Nevertheless, prioritising considerations about the team over considerations about the candidates is often met with suspicion. “She only got the job because …” Such an allegation is indeed sexist, because it construes a diversity consideration applicable to a team as the reason for hiring, as if it were the qualification of an individual. But no matter how suspicious one is, qualification and diversity are not on a par, nor can they be opposing features.

Compare: A singer might complain that the choir hired a soprano rather than him, a tenor. But the choir wasn’t merely looking for a singer but for a soprano. Now that doesn’t make the soprano a better singer than the tenor, nor does it make the tenor better than the soprano. Hiring a soprano is relevant to the quality of the group; it doesn’t reflect the quality of the individual.

____

* However, in making such a claim, an adherent of meritocracy will probably rely on the assumption that there is such a thing as “inverted racism or sexism”. In the light of our historical situation, this strikes me as very difficult to argue, at least with regard to institutional structures. It seems like saying that certain doctrines and practices of the Catholic Church are not sexist simply because there are movements aiming at reform.

Fit. A Note on Aristotle’s Presence in Academia

Since the so-called Scientific Revolution and the birth of modern science, our Western approach towards the world has become quantitative. The previously dominant qualitative Aristotelian worldview of the Scholastics was replaced by a quantitative one: everything around us was supposed to be quantifiable and quantified. This, of course, seems to affect today’s academia, too. We often hear: “do this, it will be one more line in your CV!”

Many will reply: “This is not true, quality matters just as much!” Yes, it (sometimes) matters in which journal one publishes; it has to be a good journal; one needs to make sure that the quality of the article is good. And how do we know whether the journal is good? Because of its ranking. So if you thought I would argue that this is Aristotle’s presence in academia … you were wrong. The criterion is still quantitative. Of course, we trust more readily that an article in a respectable (i.e., highly ranked) journal is a good one, but we all know this is not always the case.

Bringing the distinction between the qualitative and the quantitative into the discussion is crucial for assessing job applications and the ensuing hiring process. While it used to be easier for those in a position of power to hire whom they wanted, it has become a bit more difficult. Imagine you really want to hire someone because he (I will use this pronoun for certain reasons) is brilliant. But his brilliance is not reflected in his publications, presentations, teaching evaluations, or grants (the latter because he did not get any) … You cannot even say he is a promising scholar, since that should be visible in something. At the same time, there are a lot of competing applications with an impressive record. So what can one do? Make use of the category of ‘better fit’: ‘better fit’ for the position, ‘better fit’ for the department.[1] But when is someone a ‘better fit’, given that the job description did not mention anything to this effect? When their research is in line with the department’s? No, too much overlap! When it complements the existing areas of research? No, way too different!

And here is where Aristotle comes into the picture. It is not the research that has to fit, but the person. And we know from Aristotle and his followers that gender, race and nationality are the result of the (four elemental) qualities. Who can be more fit for a department mostly composed of men from Western Europe than another man from Western Europe? As a woman coming from Eastern Europe, I have no chance. And Eastern Europe is not even the worst place to come from in this respect. 

There is a caveat, though. When several people who fit the department apply, the committee seeks refuge in positing some ‘occult qualities’ to choose the ‘right’ person. ‘Occult’ in the Scholastic sense means that the quality is not manifest in any way in the person’s profile.[2]

How different is this from the days when positions were just given away on the basis of personal preference? The difference lies in the charade.[3] The difference is that nowadays a bunch of other people, devoid of occult qualities, though with an impressive array of qualities manifest in their CVs and international recognition, spend time and energy preparing an application, get frustrated, maybe even get sick, just so that the person with the ‘better fit’ can have the impression that he is much better than all the rest who applied.

So when are we going to give up the Aristotelian-Scholastic elementary and occult qualities and opt for a different set of more inclusive qualities?


[1] Aristotle probably put it in his Categories, but it got lost.

[2] I am rather unfair with this term, because occult qualities did make themselves present through certain effects.

[3] The Oxford dictionary indeed defines charade as “an absurd pretence intended to create a pleasant or respectable appearance.”

On being a first-generation student

First off: the following is not to be taken as a tale of woe. I am grateful for whatever life has had on offer for me so far, and I am indebted to my teachers – from primary school to university and beyond – in many ways. But I felt that, given that Martin invited me to do so, I should probably provide some context to my comment on his recent post on meritocracy, in which I claimed that my being a first-generation student has had a “profound influence on how I conceive of academia”. So here goes.

I am a first-generation student from a lower-middle-class family. My grandparents on the maternal side owned and operated a small farm, my grandfather on the paternal side worked in a foundry, and his wife – my father’s mother – did off-the-books work as a cleaning woman in order to make ends meet.

When I got my first job as a lecturer in philosophy, my monthly income already exceeded that of my mother, who has worked a full-time job in a hospital for more than thirty years. My father, a bricklayer by training, is by now divorced from my mother and declutters homes for a living. Sometimes he calls me to tell me about a particularly good bargain he struck at the flea market.

My parents did not save money for my education. As an undergraduate I was lucky to receive close to the maximum amount of financial assistance afforded by the German Federal Law on Support in Education (BAföG) – still, I had to work in order to be able to fully support myself (tuition fees, which had just been introduced when I began my studies, did not help). At the worst time, I juggled three jobs on the side. I have work experience as a call center agent (bad), cleaning woman (not as bad), fitness club receptionist (strange), private tutor (okay), and teaching assistant (by far the nicest experience).

Not every middle-class family is the same, of course. Nor is every family in which both parents are non-academics. Here is one way in which the latter differ: There are those parents who encourage – or, sometimes, push – their children to do better than themselves, who emphasize the value of higher education, who make sure their children acquire certain skills that are tied to a particular habitus (like playing the piano), who provide age-appropriate books and art experiences. My parents were not like that. “Doing well”, especially for my father, meant having a secure and “down-to-earth” job, ideally for a lifetime. For a boy, this would have been a craft. Girls, ostensibly being less well-suited for handiwork, should strive for a desk job – or aim “to be provided for”. My father had strong reservations about my going to grammar school, even though I did well in primary school and despite my teacher’s unambiguous recommendation. I think it never occurred to him that I could want to attend university – academia was a world too far removed from his own to even consider that possibility.

I think that my upbringing has shaped – and shapes – my experience of academia in many ways. Some of these I consider good, others I have considered stifling at times. And some might even be loosely related to Martin’s blogpost about meritocracy. Let me mention a few points (much of what follows is not news, and has been put more eloquently by others):

  • Estrangement. An awareness of the ways in which the experiences of my childhood and youth, my interests and preferences, my habits and skills differ from what I consider a prototypical academic philosopher – and I concede that my picture of said prototype might be somewhat exaggerated – has often made me feel “not quite at home” in academia. At the same time, my “professional advancement” has been accompanied by a growing estrangement from my family. This is something that, to my knowledge, many first-generation students testify to, and which can be painful at times. My day-to-day life does not have much in common with my parents’ life, my struggles (Will this or that paper ever get published?) must seem alien, if not ridiculous, to them. They have no clear idea of what it is that I do, other than that it consists of a lot of desk-sitting, reading, and typing. And I think it is hard for them to understand why anyone would even want to do something like this. One thing I am pretty sure of is that academia is, indeed, or in one sense at least, a comparatively cozy bubble. And while I deem it admirable to think of ways of how to engage more with the public, I am often unsure about how much of what we actually do can be made intelligible to “the folk”, or justified in the face of crushing real-world problems.
  • Empathy. One reason why I am grateful for my experiences is that they help me empathize with my students, especially those who seem to be afflicted by some kind of hardship – or so I think. I believe that I am a reasonably good and well-liked teacher, and I think that part of what makes my teaching good is precisely this: empathy. Also, I believe that my experiences are responsible for a heightened sensibility to mechanisms of inclusion and exclusion, and privilege. I know that – being white, having grown up in a relatively secure small town, being blessed with resilience and a certain kind of stubbornness, and so on – I am still very well-off. And I do not want to pretend that I know what it is like to come from real poverty, or how it feels to be a victim of racism or constant harassment. But I hope that I am reasonably open to others’ stories about these kinds of things.
  • Authority. In my family of origin, the prevailing attitude towards intellectuals was a strange mixture of contempt and reverence. Both sentiments were probably due to a sense of disparity: intellectuals seemed to belong to a kind of people quite different from ourselves. This attitude has, I believe, shaped how I perceived my teachers when I was a philosophy student. I noticed that our lecturers invited us – me – to engage with them “on equal terms”, but I could not bring myself to do so. I had a clear sense of hierarchy; to me, my teachers were authorities. I did eventually manage to speak up in class, but I often felt at a loss for words outside of the classroom setting with its relatively fixed and easily discernible rules. I also struggled with finding my voice in class papers, with taking up and defending a certain position. I realize that this struggle is probably not unique to first-generation students, or to students from lower-class or lower-middle-class backgrounds, or to students whose parents are immigrants, et cetera – but I believe that the struggle is often aggravated by backgrounds like these. As teachers, I think, we should pay close attention to the different needs our students might have regarding how we engage with them. It should go without saying, but if someone seems shy or reserved, don’t try to push them into a friendly and casual conversation about the model of femininity and its relation to sexuality in the novel you recently read.
  • Merit. Now, how does all this relate to the idea of meritocracy? I think there is a lot to say about meritocracy, much more than can possibly be covered in a (somewhat rambling) blogpost. But let me try to point out at least one aspect. Martin loosely characterizes the belief in meritocracy as the belief that “if you’re good or trying hard enough, you’ll get where you want”. But what does “being good enough” or “trying hard enough” amount to in the first place? Are two students who write equally good term papers working equally hard? What if one of them has two children to care for while the other one still lives with and is supported by her parents? What if one struggles with depression while the other does not? What if one comes equipped with “cultural capital” and a sense of entitlement, while the other feels undeserving and stupid? I am not sure about how to answer these questions. But one thing that has always bothered me is talk of students being “smart” or “not so smart”. Much has been written about this already. And yet, some people still talk that way. Many of the students I teach struggle with writing scientific prose, many of them struggle with understanding the assigned readings, many of them struggle with the task of “making up their own minds” or “finding their voice”. And while I agree that those who do not struggle, or who do not struggle as much, should, of course, be encouraged and supported – I sometimes think that the struggling students might be the ones who benefit the most from our teaching of philosophy, and for whom our dedication and encouragement might really make a much-needed difference. It certainly did so for me.

You don’t get what you deserve. Part I: Meritocracy in education vs employment relations

The belief that we live in a meritocracy is the idea that people get what they deserve. At school, you don’t get a good grade because of your skin colour or because you have a nice smile, but because you demonstrate the required skills. When I was young, this idea helped me gain a lot of confidence. Being what is now called a first-generation student, I thought I owed my opportunity to study to a meritocratic society. I had the wonderfully confident belief that, if you’re good or trying hard enough, you’ll get where you want. Today, I think that there is so much wrong with this idea that I don’t really know where to start. Meritocratic beliefs are mostly false and harmful. In the light of our sociological knowledge, still believing that people get what they deserve strikes me as on a par with being a flat earther or a climate change denialist. At the same time, beliefs in meritocratic principles are enormously widespread and deep-rooted, even among those who should and do know better. In what follows, I attempt nothing more than a beginning in examining this pernicious idea and why it has so much currency.

Perhaps one of the greatest problems of meritocratic ideas is that they create a normative link between possibly unrelated things: There is no intrinsic relation between displaying certain qualities, on the one hand, and getting a job, on the other hand. Of course, they might be related; in fact, displaying certain qualities might be one of the necessary conditions for getting the job. But the justification structure suggested by meritocratic beliefs clearly obscures countless other factors, such as being in the right place at the right time etc. Here are two variants of how this plays out:

  • “I’m not good enough.” – This is a common conclusion drawn by most people – that is, by those who don’t get the job, grant or promotion they have applied for. If there is one job and a hundred applicants, you can guess that a large number of people will think they were not good enough. Of course, that’s nonsense for many reasons. But if the belief is that people get what they deserve, then those not getting anything might conclude that they are undeserving. A recent piece by a lecturer leaving academia, for instance, contends that part of the problem is that one always has to show that one is “better than the rest”, insinuating that people showing just that might get the job in the end. But apart from the fact that the imbalance between available jobs and applicants pushes such demands to absurd heights, the question arises whether any employer could be sufficiently good to recognise the enormously refined qualities of the applicants.
  • “My qualities are not recognised.” – The more confident applicants among us might thus draw quite another conclusion, namely that they are good enough, but that their qualities are simply not seen. The counterfactual behind this reasoning seems to be the following: had my prospective employer seen how good I am, she would have hired me. As I see it, both kinds of reasoning are fallacious in that they construe the relation between performance and getting the job or grant too tightly. Of course, most people know that. But this knowledge does not prevent one from going along with the fallacious reasoning. Why is that? Well, my hunch is that meritocratic beliefs are deeply ingrained in our educational system and spill over into other contexts, such as employment relations. Let me explain.

Most education systems hold out a simple promise: if you work hard enough, you’ll get a good grade. While this is a problematic belief in itself, it is a feasible idea in principle. The real problem begins with the transition from education to employment relations in academia. If you have a well-performing course, you can give all of your thirty students a high grade. But you can’t give thirty applicants for the same position the job you’ve advertised, even if all the applicants are equally brilliant. Now, the problem in higher education is that the transition from educational rewards to employment rewards is often rather subtle. Accordingly, someone not getting a job might draw the same conclusion as someone not getting a good grade.

It is here that we are prone to fallacious reasoning, and it is here that academic employers especially need to behave more responsibly: telling people that “the best candidate” will get the job might too easily come across as telling your first-year students that the best people will get a top grade. But the job market is a zero-sum game, while studying is not. (It might be that there is more than one best candidate, or it might be impossible for the employer to determine who the best candidate is.) So a competition among students is of a completely different kind than a competition between job candidates. But this fact is often obscured. An obvious indicator of this is that it is often unclear for PhD candidates whether they are employees or students. Yet it strikes me as a category mistake to speak about (not) “deserving” a job in the same way as about deserving a certain grade or diploma. So while, at least in an ideal world, a bad grade is a reflection of the work you’ve done, not getting a job is not a reflection of the work you’ve done. There is no intrinsic relation between the latter two things. Now, that doesn’t mean that (the prospect of doing) good work is not a condition for getting a job; it just means that there is no relation of being deserving or undeserving.

Or to put the same point somewhat differently, while not every performance deserves a good grade, everyone deserves a job.