Is criticism of mismanagement and misconduct taken as snitching? How academia maintains the status quo

Recently, I became interested (again) in the way our upbringing affects our values. Considering how groups, especially in academia, often manage to suppress criticism of misconduct, I began to wonder which values we associate with criticism more generally. First, I noticed a strange ambivalence. Just think about the ambivalent portrayal of whistleblowers like Edward Snowden! The ambivalence is captured in values like loyalty that mostly pertain to a group and are not taken to be universal. Then it hit me. Yes, truth-telling is nice. But in-groups ostracise you as a snitch, a rat or a tattletale! Denouncing “virtue signalling” or “cancel culture” seems to be on a par with this verdict. So while criticism of mismanagement or misconduct is often invited as an opportunity for improvement, it is mostly received as a cause of reputational damage.

I have now written up a small piece for Zoon Politikon.* In this blog post, I just want to share what I take to be the main idea.

The ambivalence of criticism in academia seems to be rooted in an on-going tension between academic and managerial hierarchies. While they are intertwined, they are founded on very different lines of justification. If I happen to be your department chair, this authority weighs nothing in the setting of, say, an academic conference. Such hierarchies might be justifiable in principle. But while the goals of academic work, and thus its hierarchies, are to some degree under the control of the actual agents involved, managerial hierarchies cannot be justified in the same way. A helpful illustration is the way qualitative and quantitative assessments of our work come apart: a single paper might take years of research and end up being a game-changer in the field of specialisation, but if it happens to be the only paper published in the course of three years, it won’t count as sufficient output. So while my senior colleague might have great respect for my work as an academic, she might find herself confronted with incentives to admonish and perhaps even fire me.

What does this mean for the status of criticism? The twofold nature of hierarchies leaves us with two entirely disparate justifications of criticism. But these disparate lines of justification are themselves a constant reason for criticism. The fact that a field-changing paper and a mediocre report both make one single line in a CV bears testimony to this. But here’s the thing: we seemingly delegitimise such criticism by tolerating and ultimately accepting the imperfect status quo. Of course, most academics are aware of a tension: The quantification of our work is an almost constant reason for shared grievance. But as employees we find ourselves often enough buying into it as a “necessary evil”. Now, if we accept it as a necessary evil, we seem to give up on our right to criticise it. Or don’t we? Of course not, and the situation is a lot more dynamic than I can capture here. To understand how “buying into” an imperfect situation (a necessary evil) might seemingly delegitimise criticism, it is crucial to pause and briefly zoom in on the shared grievance I just mentioned.

Let me begin by summarising the main idea: The shared grievance constitutes our status quo and, in turn, provides social cohesion among academics. Criticism will turn out to be a disturbance of that social cohesion. Thus, critics of the status quo will likely be ostracised as “telling on” us.

One might portray the fact that we live with an academic and a managerial hierarchy simply as unjust. One hierarchy is justified, the other isn’t (isn’t really, that is). Perhaps, in a perfect world, the two hierarchies would coincide. But in fact we accept that, with academia being part of the capitalist world at large, they will never coincide. This means that both hierarchies can be justified: one as rooted in academic acclaim; the other as a necessary evil of organising work. If this is correct and if we accept that the world is never perfect, we will find ourselves in an on-going oscillation and vacillation. We oscillate between the two hierarchies. And we vacillate between criticising and accepting the imperfection of this situation. This vacillation is, I submit, what makes criticism truly ambivalent. On the one hand, we can see our work-relations from the different perspectives; on the other hand, we have no clear means to decide which side is truly justified. The result of this vacillation is thus not some sort of solution but a shared grievance. A grievance acknowledging both the injustices and the persisting imperfection. There are two crucial factors in this: The fact that we accept the imperfect situation to some degree; and the fact that this acceptance is a collective status, it is our status quo. Now, I alone could not accept on-going injustices in that status quo, if my colleagues were to continuously rebel against it. Thus, one might assume that, in sharing such an acceptance, we share a form of grievance about the remaining vacillation.

It is of course difficult to pin down such a phenomenon, as it obtains mostly tacitly. But we might notice it in our daily interactions when we mutually accept that we see a tension, for instance, between the qualitative and quantitative assessment of our work. This shared acceptance, then, gives us some social cohesion. We form a group that is tied together neither by purely academic nor by purely managerial hierarchies and relations. There might be a growing sense of complicity in dynamic structures that are and aren’t justified but continue to obtain. So what forms social cohesion between academics is not merely formal appraisal or informal friendship. Rather, a further crucial factor is the shared acceptance of the imperfection of the status quo. The acceptance is crucial in that it acknowledges the vacillation and informs what one might call the “morale” of the group.

If this is correct, academics do indeed form a kind of group through acceptance of commonly perceived imperfections. Now if we form such a group, it means that criticism will be seen both as justified and as threatening the shared acceptance. We know that a critic of quantitative work measures is justified. But we also feel that we gave in and accepted this imperfection a while ago. The critic seemingly breaks with this tacit consent and will be seen as someone snitching or “telling on us”. As I see it, it is this departure from an in-group consensus that makes criticism appear as snitching. And while revealing a truth about the group might count as virtuous, it makes the critic seemingly depart from the in-group. Of course, companies and universities also enjoy some legal protection. Even if you find out about something blameworthy, you might be bound by rules about confidentiality. This is why whistleblowers do indeed have an ambivalent reputation, too. But I guess that the legal component alone does not account for the force of the in-group mentality at work in suppressing criticism.

This mode of suppressing criticism has pernicious effects. The intertwined academic and managerial hierarchies often come with inverse perceptions of criticism: your professorial colleague might be happy to learn from your objections, while your department chair might shun your criticism and even retaliate against you. Yet, they might be the same person. Considering the ubiquitous histories of suppressing critics of sexism, racism and other kinds of misconduct, we do not need to look far to find evidence for ostracism or retaliation against critics. I think that it’s hard to explain this level of complicity with wrongdoers merely by referring to bad intentions, on the one hand, or formal agreements such as confidentiality, on the other. Rather, I think, it is worthwhile to consider the deep-rooted in-group consensus that casts criticism as snitching. One reason is that snitching counts, at least in a good number of cultures, as a bad action. But while this might be explained with concerns about social cohesion, it certainly remains a morally dubious verdict, given that snitching is truth-conducive and should thus be aligned with values such as transparency. Going by personal anecdotes, however, I witnessed that snitching was often condemned even by schoolteachers, who often seemed to worry about social cohesion no less than about truthfulness. In other words, we don’t seem to like the truth being told when it threatens our status quo.

In sum, we see that the ambivalent status of criticism is rooted in a twofold hierarchy that, in turn, comes with disparate sets of values. Shared acceptance of these disparate sets as an unavoidable imperfection binds together an in-group that will sanction explicit criticism of this imperfection as a deviation from the consensus. The current charges against so-called “virtue signalling”, a “call out culture” or “cancel culture” on social media strike me as instances of such sanctions. If we ask what makes the inclinations to sanction in-group norm violations so strong, it seems helpful to consider the deep-rooted code against snitching. While the moral status of sanctioning snitching is certainly questionable, it can shed light on the pervasive motivation and strikingly ready acceptance of such behaviour.

______

* Following a discussion of a blog post on silence in academia, Izabela Wagner kindly invited me to contribute to a special issue in Zoon Politikon. I am enormously grateful to her for the exchanges and for providing this opportunity. Moreover, I have benefitted greatly from advice by Lisa Herzog, Pietro Ingallina, Mariya Ivancheva, Christopher Quinatana, Rineke Verbrugge, and Justin Weinberg.

Being an actor in times of corona. Kai Ivo Baulitz in conversation with Martin Lenz (video)

This is the first installment of my new video series Philosophical Chats. In this episode, I have a conversation (for about 40 minutes) with my old friend Kai Ivo Baulitz, an actor and playwright, who is currently in Prague for a film shoot, but has to quarantine most of the time.* – We talk about how the crisis changed our minds and ways, about Kai’s situation in Prague, about being under surveillance, about anger and guilt, about acting and kissing, about how doing philosophy is like having a midlife crisis, about embracing fatalism, and about how we end up feeling inconsistent much of the time. Enjoy!


______

* Fun fact: This year, Kai and I would have celebrated our 30th school leaving anniversary (Abiturfeier), but corona took care of preventing that. Perhaps this conversation makes up for that a bit.

Those who know us from our school days might find it particularly ironic that Kai is currently stuck in Prague.

Teaching online versus online teaching

Having heard many stories and tips about online teaching, I was very apprehensive about teaching my first class of this term. How could I not fail? At the same time, I’m enormously grateful to all the people and my university for sharing great advice and best practice examples (see e.g. here). Thinking through the scenarios not only got me worried but also gave me a lot of headspace to anticipate and navigate through a number of (possible) obstacles before plunging into the unknown. What can I say? I went through all sorts of worries and finally came up with a decision that, so far, has worked for me: I teach online rather than do online teaching. This means that I teach much in the same way I would have done offline, with the small exception that it’s happening online. What does this entail for students and myself?

Let me address students first. I’ve heard many people say that you, students, tend to keep your cameras and microphones off, and use the chat instead. Although I personally prefer to see people’s faces and hear their voices, I think this would be ok for at least some part of the interaction. The reason I recommend switching your gear on and making yourself heard and seen is that you should take your space. Online teaching is often presented as a challenge that deprives us of direct interaction which, in turn, has to be compensated for with all sorts of “tools”. Yes, of course, we are used to physical cues in communication, the perception of which we now often simply miss out on. But what strikes me as crucial is that we participate. I don’t think any amount of online tools or refined environments makes up for your participation in the conversation. I find reading the chat more cumbersome than listening to you. I also prefer speaking over writing. But I’ll get used to the chat and learn to change my ways happily, as long as you participate. The crucial aspect is not how it’s done, online or offline, but that you do it. Make your contributions, ask your questions etc. as before.

That said, I also think that you can participate more fully if you use all the devices available. The online space is not just a toolbox or learning environment. It’s first of all a political space with all the power imbalances and hierarchies that we have offline. It’s at least partly our choice whether we want to amplify or adjust the old space as we’re moving online. This is why I’d recommend using all the resources that enhance your presence and participation. You’re not a passive recipient, and instructors are not emulating youtube. We’re still at the university, a public and democratic space for academic exchange.

This morning, I was quite nervous about whether my “strategy” would work. It’s way too early to assess this, but what I find worth reporting is that, for me at least, teaching this course online was much the same experience as teaching it offline. I thought it would be awkward to be talking to a screen, but then people who know me also know that I often speak with my eyes closed… The silence after asking a question might be slightly longer, but we all know that the situation is special, so it’s fine. Of course, I miss cues, but I noticed I can ask for them more often, if need be.

For the time being, I have also decided not to record my teaching events. If someone misses a class, they will miss it. More importantly, I know that recording the stuff would change the stakes for the students and myself. Over and above the well-known privacy issues and unintended misuse of recorded material, watching a lecture is different even from silently participating in it, let alone giving it. (The fancy word for this is “synchronous” teaching, I guess. But that would be misleading. Even if the lectures cannot be viewed later on, students will still experience “asynchronous” teaching. That is, they will still have to do the reading, thinking and discussing before and after. Of course, there is nothing wrong with the “flipped classroom”, but to my mind this is different from actually teaching students.)

Is that so? On the face of it, one might think there is not much difference between watching a recorded lecture and silently taking part in an online lecture, especially if the audience is rather large. I beg to differ. Even if you don’t want to join in, participating in an actual lecture still gives you the opportunity to interfere. At least for me, such an opportunity changes the level and quality of attention, even if (or especially when) I decide not to raise my hand. Seen this way, recording lectures is a way of providing material on top of other material, such as readings, videos and podcasts. It goes without saying that we shouldn’t underestimate the value of such materials. After all, teaching is no replacement for learning, that is: independent self-study is still a, sometimes underestimated, component of the educational process. But the moments of interaction, the so-called contact hours, remain special: they come and go. And with them the opportunities for participation, for noticing others in relation to yourself, come and go. My hunch is that these moments thrive on the fact that they cannot be repeated. In this sense, watching a recorded class is not the same as taking part in a given class.

All that said, this is in no way meant to dismiss the fantastic ideas and tools for proper online teaching. Yet it is crucial to be reminded that, at this moment, most of us teach online (if we have this liberty, that is) because the pandemic causes an emergency. But just as new social media competence is rooted in old-fashioned reading and writing, the quality of teaching is rooted in our resources of offline thinking and interaction.

In any case, I wish everyone a happy and safe beginning of term.

Are philosophical classics too difficult for students?

Say you would like to learn something about Kant: should you start by reading one of his books or rather get a good introduction to Kant? Personally, I think it’s good to start with primary texts, get confused, ask questions, and then look at the introductions to see some of your questions discussed. Why? Well, I guess it’s better to have a genuine question before looking for answers. However, even before the latest controversy on Twitter (amongst others between Zena Hitz and Kevin Zollman) took off, I had been confronted with quite different views. Taken as an opposition between extreme views, the question is whether you want to make philosophy about ideas or about people (and their writings). It’s probably inevitable that philosophy ends up being about both, but there is still the question of what we should prioritise.

Arguably, if you expose students to the difficult original texts, you might frighten them off. Thus, Kevin Zollman writes: “If I wanted someone to learn about Kant, I would not send them to read Kant first. Kant is a terrible writer, and is impossible for a novice to understand.” Accordingly, he argues that what should be prioritised is the ideas. In response Zena Hitz raises a different educational worry: “You’re telling young people (and others) that serious reading is not for them, but only for special experts.” Accordingly, she argues for prioritising the original texts. As Jef Delvaux shows in an extensive reflection, both views touch on deeper problems relating to epistemic justice. A crucial point in his discussion is that we never come purely or unprepared to a primary text anyway. So an emphasis on the primary literature might be prone to a sort of givenism about original texts.

I think that all sides have a point, but when it comes to students wanting to learn about historical texts, there is no way around looking at the original. Let me illustrate my point with a little analogy:

Imagine you want to study music and your main instrument is guitar. It is with great excitement that you attend courses on the music of Bach whom you adore. The first part is supposed to be on his organ works, but already the first day is a disappointment. Your instructor tells you that you shouldn’t listen to Bach’s organ pieces themselves, since they might be far too difficult. Instead you’re presented with a transcription for guitar. Well, that’s actually quite nice because this is indeed more accessible even if it sounds a bit odd. (Taken as an analogy to reading philosophy, this could be a translation of an original source.) But then you look at the sheets. What is this? “Well”, the instructor goes on, “I’ve reduced the accompaniment to the three basic chords. That makes it easier to reproduce it in the exam, too. And we’ll only look at the main melodic motif. In fact, let’s focus on the little motif around the tonic chord. So, if you can reproduce the C major arpeggio, that will be good enough. And it will be a good preparation for my master class on tonic chords in the pre-classic period.” Leaving this music school, you’ll never have listened to any Bach pieces, but you have wonderful three-chord transcriptions for guitar, and after your degree you can set out on writing three-chord pieces yourself. If only there were still people interested in Punk!

Of course, this is a bit hyperbolic. But the main point is that too much focus on cutting things to ‘student size’ will create an artificial entity that has no relation to anything outside the lecture hall. But while I thus agree with Zena Hitz that shunning the texts because of their difficulties sends all sorts of odd messages, I also think that this depends on the purpose at hand. If you want to learn about Kant, you should read Kant, just like you should listen to Bach himself. But what if you’re not really interested in Kant, but in a sort of Kantianism under discussion in a current debate? In that case, the purpose is not to study Kant, but some concepts deriving from a certain tradition. You might then be more like a jazz player who is interested in building a vocabulary. Then you might be interested, for instance, in how Bach dealt with phrases over diminished chords and focus on this aspect first. Of course, philosophical education should comprise both a focus on texts and on ideas, but I’d prioritise them in accordance with different purposes.

That said, everything in philosophy is quite difficult. As I see it, a crucial point in teaching is to convey means to find out where exactly the difficulties lie and why they arise. That requires all sorts of texts, primary, secondary, tertiary etc.

Why we shouldn’t study what we love

I recognize that I could only start to write about this … once I related to it. I dislike myself for this; my scholarly pride likes to think I can write about the unrelatable, too. Eric Schliesser

Philosophy students often receive the advice that they should focus on topics that they have a passion for. So if you have fallen for Sartre, ancient scepticism or theories of justice, the general advice is to go for one of those. On the face of it, this seems quite reasonable. A strong motivation might predict good results which, in turn, might motivate you further. However, I think that you might actually learn more by exposing yourself to material, topics and questions that you initially find remote, unwieldy or even boring. In what follows, I’d like to counter the common idea that you should follow your passions and interests, and try to explain why it might help to study things that feel remote.

Let me begin by admitting that this approach is partly motivated by my own experience as a student. I loved and still love to read Nietzsche, especially his aphorisms in The Gay Science. There is something about his prose that just clicks. Yet, I was always sure that I couldn’t write anything interesting about his work. Instead, I began to study Wittgenstein’s Tractatus and works from the Vienna Circle. During my first year, most of these writings didn’t make sense to me: I didn’t see why their authors found what they said significant; most of the terminology and writing style was unfamiliar. In my second year, I made things worse by diving into medieval philosophy, especially Ockham’s Summa Logicae and Quodlibeta. Again, not because I loved these works. In fact, I found them unwieldy and sometimes outright boring. So why would I expose myself to these things? Already at the time, I felt that I was actually learning something: I began to understand concerns that were alien to me; I learned new terminology; I learned to read Latin. Moreover, I needed to use tools, secondary literature and dictionaries. And for Ockham’s technical terms, there often were no translations. So I learned to move around in the dark. There was no passion for the topics or texts. But speaking with hindsight (and ignoring a lot of frustration along the way), I think I discovered techniques and ultimately even a passion for learning, for familiarising myself with stuff that didn’t resonate with me in the least. (In a way, it seemed to turn out that it’s a lot easier to say interesting things about boring texts than to say even boring things about interesting texts.)

Looking back at these early years of study, I’d now say that I discovered a certain form of scholarly explanation. While reading works I liked was based on a largely unquestioned understanding, reading these unwieldy new texts required me to explain them to myself. This, in turn, prompted two things: To explain these texts (to myself), I needed to learn about the new terminology etc. Additionally, I began to learn something new about myself. Discovering that certain things felt unfamiliar to me, while others seemed familiar, meant that I belonged to one kind of tradition rather than another. Make no mistake: Although I read Nietzsche with an unquestioned familiarity, this doesn’t mean that I could have explained, say, his aphorisms any better than the strange lines of Wittgenstein’s Tractatus. The fact that I thought I understood Nietzsche didn’t give me any scholarly insights about his work. So on top of my newly discovered form of explanation, I also found myself in a new relation to myself or to my preferences. I began to learn that it was one thing to like Nietzsche and quite another to explain Nietzsche’s work, and still another to explain one’s own liking (perhaps as being part of a tradition).

So my point about not studying what you like is a point about learning: learning to get oneself into a certain mode of reading. Put more fancily: learning to do a certain way of (history of) philosophy. Being passionate about some work or way of thinking is something that is in need of explanation, just as much as not being passionate about something and finding it unfamiliar needs explaining. Such explanations are greatly aided by alienation. As I said in an earlier post, a crucial effect of alienation is a shift of focus. You can concentrate on things that normally escape your attention: the logical or conceptual structures for instance, ambiguities, things that seemed clear get blurred and vice versa. In this sense, logical formalisation or translation are great tools of alienation that help you to raise questions, and generally take an explanatory stance, even towards your most cherished texts.

As a student, I found that discovering this mode of scholarly explanation instilled pride, a pride that can be hurt when explanations fail or evade us. It was remembering this kind of pain, described in the motto of this post, that prompted these musings. There is a lot to be said for aloof scholarship and the pride that comes with it, but sometimes it just doesn’t add up. For there are some texts that require a more passionate or intuitive relation before we can attain a scholarly stance towards them. If the passion can’t be found, it might have to be sought. Just like our ears have to be trained before we can appreciate some forms of, say, very modern music “intuitively”.

Must we claim what we say? A quick way of revising essays

When writing papers, students and advanced philosophers alike are often expected to take a position within a debate and to argue for or against a particular claim. But what if we merely wish to explore positions and look for hidden assumptions, rather than defend a claim? Let’s say you look at a debate and then identify an unaddressed but nevertheless important issue, a commitment left implicit in the debate; let’s call it ‘X’. Writing up your findings, the paper might take the shape of a description of that debate plus an identification of the implicit X. But the typical feedback to such an exploration can be discouraging: It’s often pointed out that the thesis could have been more substantive and that a paper written this way is not publishable unless supplemented with an argument for or against X. Such comments all boil down to the same problem: You should have taken a position within the debate you were describing, but you have failed to do so.

But hang on! We’re all learning together, right? So why is it not ok to have one paper do the work of describing and analysing a debate, highlighting, for instance, some unaddressed X, so that another paper may attempt an answer to the questions about X and come up with a position? Why must we all do the same thing and, for instance, defend an answer on top of everything else? Discussing this issue, we* wondered what this dissatisfaction meant and how to react to it. Is it true? Should you always take a position in a debate when writing a paper? Or is there a way of giving more space to other approaches, such as identifying an unaddressed X?

One way of responding to these worries is to dissect and extend the paper model, for instance, by having students try other genres, such as commentaries, annotated translations, reviews, or structured questions. (A number of posts on this blog are devoted to this.) However, for the purposes of this post, we’d like to suggest and illustrate a different idea. We assume that the current paper model (defending a position) does not differ substantially from other genres of scholarly inquiry. Rather, the difference between, say, a commentary or the description of a debate, on the one hand, and the argument for a claim, on the other, is merely a stylistic one. Now our aim is not to present an elaborate defense of this idea, but to try out how this might help in practice.

To test and illustrate the idea (below), we have dug out some papers and rewritten sections of them. Before presenting one sample, let’s provide a brief manual. The idea rests on the, admittedly somewhat contentious, tenets that

  • any description or analysis can be reformulated as a claim,
  • the evidence provided in a description can be dressed up as an argument for the claim.

But how do you go about it? In describing a debate, you typically identify a number of positions. So what if you don’t want to adopt and argue for one of them? There is something to be said for just picking a side anyway, but if that feels too random, here is a different approach:

(a) One thing you can always do is defend a claim about the nature of the disagreement in the debate. Taken this way, the summary of your description or analysis becomes the claim about the nature of the disagreement, while the analysis of the individual positions functions as an argument / evidence for this claim. This is not a cheap trick; it’s just a pointed way of presenting your material.

(b) A second step consists in actually labelling steps as claims, arguments, evaluations etc. Using such words doesn’t change the content, but it signals even to a hasty reader where your crucial steps begin and end.

Let’s now look at a passage from the conclusion of a paper. Please abstract away from the content of discussion. We’re just interested in identifying pertinent steps. Here is the initial text:

“… Thus, I have dedicated this essay to underscoring the importance of this problem. I have first discussed two of the most prominent levels accounts, namely O&P’s layer-cake account, and Craver and Bechtel’s mechanistic levels, and shown that they both provide radically different levels accounts. I addressed the problems with each account, and it became clear that what is considered to be a problem by some, is considered to be a virtue by others. This led us to uncover a deeper disagreement, namely about what the function of a levels account is supposed to be and what the term “level” means.”

Here is the rewritten version (underlined sections indicate more severe changes or additions):

“… But why is this problem significant? I have first discussed two of the most prominent levels accounts, namely O&P’s layer-cake account, and Craver and Bechtel’s mechanistic levels, and shown that they both provide radically different levels accounts. I addressed the problems with each account, and it became clear that what is considered to be a problem by some, is considered to be a virtue by others. This is in keeping with my second-order thesis that the dispute is less about content but rather about defining criteria. However, this raises the question of what to make of levels on any set of criteria. Answering this question led me to defend my main (first-order) thesis: If we look at the different sets of criteria, we uncover a deeper disagreement, namely about what the function of a levels account is supposed to be and what the term “level” means. Accordingly, I claim that disparate accounts of levels indicate different functions of levels.”

We consider neither passage a piece of beauty. The point is merely to take some work in progress and see what happens if you follow the two steps suggested above: (a) articulate claims; (b) label items as such. – What can we learn from this small exercise? We think that the contrast between these two versions shows just how big an impact the manner of presentation can have, not least on the perceived strength of a text. The desired effect would be that a reader can easily identify what is at stake for the author. Content-wise, both versions say the same thing. However, the first version strikes us as a bit detached and descriptive in character, whereas the second version seems more engaged and embraces a position. What used to be a text about a debate has now become a text partaking in a debate. (Of course, your impressions might differ. So we’d be interested to hear about them!) Another thing we saw confirmed in this exercise is that you always already have a position, because you end up highlighting what matters to you. Having something to say about a debate still amounts to a position. Arguably, it’s also worth presenting it as such.

Where do we go from here? Once you have reformulated such a chunk and labelled some of your ideas (say, as first and second order claims etc.), you can rewrite the rest of your text accordingly. Identify these items in the introduction, and clarify which of those items you argue for in the individual sections of your paper, such that they lead up to these final paragraphs. That will probably allow you (and the reader) to highlight the rough argumentative structure of your paper. Once this is established, it will be much easier to polish individual sections.

____

*Co-authored by Sabine van Haaren and Martin Lenz

On self-censorship

For a few years during the 80s, Modern Talking was one of the most well-known pop bands in Germany. But although their first single “You’re my heart, you’re my soul” sold over eight million copies, no one admitted to having bought it. Luckily, my dislike of their music was authentic, so I never had to suffer that particular embarrassment. Yet, imagine all these people alone in their homes, listening to their favourite tune but never daring to acknowledge it openly. Enjoying kitsch of any sort brings the whole drama of self-censorship to the fore. You might be moved deeply, but the loss of face is more unbearable than remaining in hiding. What’s going on here? Depending on what precisely is at stake, people feel very differently about this phenomenon. Some will say that self-censorship just maintains an acceptable level of decency or tact; others will say that it reflects political oppression or, ahem, correctness. At some point, however, you might let go of all shame. Perhaps you’ve got tenure and start blogging or something like that … While some people think it’s a feature of the current “cancel culture”, left or right, I think it’s more important to see the different kinds of reasons behind self-censorship. In some cases, there really is oppression at work; in other cases, it’s peer pressure. Neither is fun. In any case, it’s in the nature of this phenomenon that it is hard to track in a methodologically sound way. So rather than draw a general conclusion, it might be better to go through some very different stories.

Bad thoughts. – Do you remember how you, as a child, entertained the idea that your thoughts might have horrible consequences? My memory is faint, but I still remember assuming that thinking of swear words might entail my parents having an accident. So I felt guilty for thinking these words, and tried to break the curse by uttering them to my parents. But somehow I failed to convince them of the actual function of my utterance, and so they thought I was just calling them names. Today, I know that this is something that happens to occur in children, sometimes even with pathological strength, and is thus known as “intrusive thoughts” within an “obsessive-compulsive disorder”. Whatever the psychological assessment, my experience was that of “forbidden” thoughts and, simultaneously, the inability to explain myself properly. Luckily, it didn’t haunt me, but I can imagine it becoming problematic.

One emergence of the free speech debate. – When I was between 7 and 10 years old (thus in the 1970s), I sometimes visited a lonely elderly woman. She was an acquaintance of my mother, well in her 70s and happy to receive some help. When no one else was around she often explained her political views to me. She was a great admirer of Franz Josef Strauß whom she described to me as a “small Hitler – something that Germany really needs again”. She hastened to explain that, of course, the real Hitler would be too much, but a “small” one would be quite alright. She then praised how, back in the day, women could still go for walks after dark etc. Listening to other people of that generation, I got the impression that many people in Germany shared these ideas. In 2007, the news presenter Eva Herman explicitly praised the family values of Nazi Germany and was dismissed from her position. The current rise of fascism in Germany strikes me as continuous with the sentiments I found around me early on. And if I’m not mistaken these sentiments date back at least to the 1930s and 1940s. In my experience, Nazism was never just an abstract political view. Early on, I realised that otherwise seemingly “decent” people could be taken by it. But this concrete personal dimension made the sweaty and simplistic attitude to other people all the more repulsive. In any case, I personally found that people in the vicinity of that ideology are the most vocal people who like to portray themselves as “victims” of censorship, though they are certainly not censoring themselves. (When it comes to questions of free speech, I am always surprised that whistleblowers such as Snowden are not mentioned.)

Peer pressure and classism. – I recently hosted a guest post on being a first-generation student that really made me want to write about this issue myself. But often when I think about this topic, I still feel uncomfortable writing about it. In some ways, it’s all quite undramatic in that the transition to academia was made very easy by my friends. For what shouldn’t be forgotten is that it’s not only your parents and teachers who educate you. In my case at least, I tacitly picked up many of the relevant habits from my friends and glided into being a new persona. Although I hold no personal grudges, I know that “clothes make people”, or rather “the man”, as Gottfried Keller’s story is sometimes translated. What I noticed most is that people from other backgrounds often have a different kind of confidence being around academics. Whether that is an advantage across the board I don’t know. What I do know is that I took great care to keep my own background hidden from most colleagues, at least before getting a tenured job.

Opportunism and tenure. – Personally, I believe that I wouldn’t dare publish this very post or indeed any of my posts, had I not obtained a tenured position. Saying this, I don’t want to impart advice. All I want to say is that getting this kind of job is what personally freed me to speak openly about certain things. But the existential weight of this fact makes me think that the greatest problem about self-censorship lies in the different socio-economic status that people find themselves in. This is just my experience, but perhaps it’s worth sharing. So what is it about, you might wonder? There is no particular truth that I would not have told before but would tell now. It’s not a matter of any particular opinion, be it left or right. Rather, it affects just about everything I say. The fact that I feel free to talk about my tastes, about the kitsch I adore, about the music I dislike, about the artworks I find dull, alongside the political inclinations I have – talking about all of this openly, not just politics, is affected by the fact that I cannot be fired just so and that I do not have to impress anyone I don’t want to impress. It is this freedom that, I think, not only allows us to speak but also requires us to speak up when others remain silent out of fear.

The myth of authenticity. – The fact that many of us feel we have to withhold something creates the idea that there might be a vast amount of unspoken truths under the surface. “Yes”, you might be inclined to ask, “but what do you really think?” This reminds me of the assumption that, in our hearts, we speak a private language that we cannot make intelligible to others. Or of the questions immigrants get to hear when people inquire where they really come from. It doesn’t really make sense. While it is likely that many people do not say what they would say if their situation were different, I don’t think it’s right to construe this as a situation of hidden truths or lies. (Some people construe the fact that we might conceal our opinions as lying. But I doubt that’s a pertinent description.) For better or worse, the world we live in is all we have when it comes to questions of authenticity. If you choose to remain silent, there is no hidden truth left unspoken. It just is what it is: you’re not speaking up and you might be in agony about that. You might conceal what you think. But then it is the concealing that shapes the world and yourself, not the stuff left unspoken. Put differently, there are no truths, no hidden selves, authentic or not, that persist without some relation to interlocutors.

***

Speaking of which, I want to finish this post with a word of thanks. It is now two years since I started this blog. By now I have written 118 posts. If I include the guest posts, it adds up to 131. Besides having the pleasure of hosting great guest authors, I feel enormously privileged to write for you openly. On the one hand, this is enabled by the relatively comfortable situation that I am in. On the other hand, none of this would add up to anything if it weren’t for you, dear interlocutors.

Why using quotation marks doesn’t cancel racism or sexism. With a brief response to Agnes Callard

Would you show an ISIS video, depicting a brutal killing of hostages, to a survivor of their murders? Or if you prefer a linguistic medium: would you read Breivik’s Manifesto to a survivor of his massacre? – Asking these questions, I’m assuming that none of you would be inclined to endorse these items. That’s not the point. The question is why you would not present such items to a survivor or perhaps indeed to anyone. My hunch is that you would not want to hurt or harm your audience. Am I right? Well, if this is even remotely correct, why do so many people insist on continuing to present racist, sexist or other dehumanising expressions, such as the n-word, to others? And why do we decry the take-down of past authors as racists and sexists? Under the label of free speech, of all things? I shall suggest that this kind of insistence relies on what I call the quotation illusion and hope to show that the underlying distinction between quotation and endorsement doesn’t really work for this purpose.

Many people assume that there is a clear distinction between use and mention. When saying that “stop” has four letters, I’m not using the expression (to stop or alert you). Rather, I am merely mentioning the word to talk about it. Similarly, embedding a video or passages from a text into a context in which I talk about these items is not a straightforward use of them. I’m not endorsing what these things supposedly intend to express or achieve. Rather, I am embedding them in a context in which I might, for instance, talk about the effects of propaganda. It is often assumed that this kind of “going meta” or mentioning is categorically different from using expressions or endorsing statements. As I noted in an earlier post, if I use an insult or sincerely threaten people by verbal means, I act and cause harm. But if I consider a counterfactual possibility or quote someone’s words, my expressions are clearly detached from action. However, the relation to possible action is what contributes to making language meaningful in the first place. Even if I merely quote an insult, you still understand that quotation in virtue of understanding real insults. In other words, understanding such embeddings or mentions rides piggy-back on understanding straightforward uses.

If this is correct, then the difference between use and mention is not a categorical one but one of degrees. Thus, the idea that quotations are completely detached from what they express strikes me as illusory. Of course, we can and should study all kinds of expressions, also expressions of violence. But their mention or embedding should never be casual or justified by mere convention or tradition. If you considered showing that ISIS video, you would probably preface your act with a warning. – No? You’re against trigger warnings? So would you explain to your audience that you were just quoting or ask them to stop shunning our history? And would you perhaps preface your admonitions with a defense of free speech? – As I see it, embedded mentions of dehumanising expressions do carry some of the demeaning attitudes. So exposing others to them merely to make a point about free speech strikes me as verbal bullying. However, this doesn’t mean that we should stop quoting or mentioning problematic texts (or videos). It just means that prefacing such quotations with pertinent warnings is an act of basic courtesy, not coddling.

The upshot is that we cannot simply rely on a clear distinction between quotation and endorsement, or mention and use. But if this is correct, then what about reading racist or sexist classics? As I have noted earlier, the point would not be to simply shun Aristotle or others for their bigotry. Rather, we should note their moral shortcomings as much as we should look into ours. For since we live in some continuity with our canon, we are to some degree complicit in their racism and sexism.

Yet instead of acknowledging our own involvement in our history, we often justify the treatment of problematic authors by claiming that we can detach ourselves from their involvement, usually by helping ourselves to the use-mention distinction. A recent and intriguing response to this challenge comes from Agnes Callard, who claims that we can treat someone like Aristotle as if he were an “alien”. We can detach ourselves, she claims, by interpreting his language “literally”, i.e. as a vehicle “purely for the contents of his belief” and as opposed to “messaging”, “situated within some kind of power struggle”. Taken this way, we can grasp his beliefs “without hostility”, and the benefits of reading come “without costs”. This isn’t exactly the use-mention distinction. Rather, it is the idea that we can entertain or consider ideas without involvement, force or attitude. In this sense, it is a variant of the quotation illusion: Even if I believe that your claims are false or unintelligible, I can quote you – without adding my own view. I can say that you said “it’s raining” without believing it. Of course I can also use an indirect quote or a paraphrase, a translation and so on. Based on this convenient feature of language, historians of philosophy (often including myself) fall prey to the illusion that they can present past ideas without imparting judgment. Does this work?

Personally, I doubt that the literal reading Callard suggests really works. Let me be clear: I don’t doubt that Callard is an enormously good scholar. Quite the contrary. But I’m not convinced that she does justice to the study that she and others are involved in when specifying it as a literal reading. Firstly, we don’t really hear Aristotle literally but mediated through various traditions, including quite modern ones, that partly even use his works to justify their bigoted views. Secondly, even if we could switch off Aristotle’s political attitudes and grasp his pure thoughts, without his hostility, I doubt that we could shun our own attitudes. Again, could you read Breivik’s Manifesto, ignoring Breivik’s actions, and merely grasp his thoughts? Of course, Aristotle is not Breivik. But if literal reading is possible for one, then why not for the other?

The upshot is: once I understand that a way of speaking is racist or sexist, I cannot unlearn this. If I know that ways of speaking hurt or harm others, I should refrain from speaking this way. If I have scholarly or other good reasons to quote such speech, I shouldn’t do so without a pertinent comment. But I agree with Callard’s conclusion: We shouldn’t simply “cancel” such speech or indeed their authors. Rather, we should engage with it, try and contextualise it properly. And also try and see the extent of our own involvement and complicity. The world is a messy place. So are language and history.

“We don’t need no …” On linguistic inequality

Deviations from so-called standard forms of language (such as the double negative) make you stand out immediately. Try and use double negatives consistently in your university courses or at the next job interview and see how people react. Even if people won’t correct you explicitly, many will do so tacitly. Such features of language function as social markers and evoke pertinent gut reactions. Arguably, this is not only true of grammatical or lexical features, but also of broader stylistic features in writing, speech and even non-linguistic conduct. Some ways of phrasing may sound like heavy boots. Depending on our upbringing, we are familiar with quite different linguistic features. While none of this might be news, it raises crucial questions about teaching that I rarely see addressed. How do we respond to linguistic and stylistic diversity? When we say that certain students “are struggling”, we often mean that they deviate from our stylistic expectations. A common reaction is to impart techniques that help them conform to such expectations. But should we perhaps respond by trying to understand the “deviant” style?

Reading the double negative “We don’t need no …”, you might see quite different things: (1) a grammatically incorrect phrase in English; (2) a grammatically correct phrase in English; (3) part of a famous song by Pink Floyd. Assuming that many of us recognise these things, some will want to hasten to add that (2) contradicts (1). A seemingly obvious way to resolve this is to say that reading (1) applies to what is called the standard dialect of English (British English), while (2) applies to some dialects of English (e.g. African-American Vernacular English). This solution prioritises one standard over other “deviant” forms that are deemed incorrect or informal etc. It is obvious that this hierarchy goes hand in hand with social tensions. At German schools and universities, for instance, you can find numerous students and lecturers who hide their dialects or accents. In linguistics, the disadvantages of regional dialect speakers have long been acknowledged. And even if the prescriptive approach has long been challenged, it still drives much of the implicit culture in education.

But the distinction between standard and deviant forms of language ignores the fact that the latter often come with long-standing rules of their own. Adjusting to the style of your teacher might then require you to deviate from the language of your parents. Thus another solution is to say that there are different English languages. Accordingly, we can acknowledge reading (2) and call African-American Vernacular English (AAVE) a language. Its precise status and genealogy are a matter of linguistic controversy. However, the social and political repercussions of this solution come most clearly into view when we consider the public debate about teaching what is called “Ebonics” at school in the 90s (here is a very instructive video about this debate). If we acknowledge reading (2), it means, mutatis mutandis, that many English speakers raised with AAVE can be considered bilingual. Educators realised that teaching standard forms of English can be aided greatly by using AAVE as the language of instruction. Yet, trying to implement this as a policy at school soon resulted in a debate about a “political correctness exemplar gone out of control” and about abandoning the “language of Shakespeare”. The bottom line is: Non-hierarchical acknowledgement of different standards quickly spirals into defences of the supposed status quo by the dominant social group.

Supposed standards and deviations readily extend to styles of writing and conduct in academic philosophy. We all have a rough idea of what a typical lecture looks like, how a discussion goes and how a paper should be structured. Accordingly, attempts at diversification are met with suspicion. Will they be as good as our standards? Won’t they undermine the clarity we have achieved in our styles of reasoning? A more traditional division is that between so-called analytic and continental philosophy. Given the social gut reactions to diversifying linguistic standards, it might not come as a surprise that we find equal responses among philosophers: Shortly before the University of Cambridge awarded an honorary degree to Derrida in 1992, a group of philosophers published an open letter protesting that “Derrida’s work does not meet accepted standards of clarity and rigour.” (Eric Schliesser has a succinct analysis of the letter.) Rather than acknowledging that there might be various standards emerging from different traditions, the supposedly dominant standard of clarity is often defended like an eternal Platonic idea.

While it is easy to see and criticise this, it is much more difficult to find a way of dealing with it in the messy real world. My historically minded self has had, and still has, the luxury of engaging with a variety of styles without having to pass judgment, at least not explicitly. More importantly, when teaching students I have to strike a balance between acknowledging variety and preparing them for situations in which such acknowledgement won’t be welcome. In other words, I try to teach “the standard”, while trying to show its limits within an array of alternatives. My goal in teaching, then, would not be to drive out “deviant” stylistic features, but to point to various resources required in different contexts. History (of philosophy) clearly helps with that. But the real resources are provided by the students themselves. Ultimately, I would hope not to teach them how to write, but how to find their own voices within their various backgrounds and to gear them towards different purposes.

But to do so, I have to learn, to some degree, the idioms of my students and try to understand the deep structure of their ways of expression. Not as superior, not as inferior, but as resourceful within contexts yet unknown to me. On the other hand, I cannot but also lay open my own reactions and those of the traditions I am part of. – Returning to the fact that language comes with social markers, perhaps one of the most important aspects of teaching is to convey a variety of means to understand and express oneself through language. Our gut reactions run very deep, and what is perceived as linguistic ‘shortcomings’ will move people, one way or another. But there is a double truth: Although we often cannot but go along with our standards, they will very soon be out of date. New standards and styles will emerge. And we, or I should say “I”, will just sound old-fashioned at best. Memento mori.

You don’t get what you deserve. Part II: diversity versus meritocracy?

“I’m all for diversity. That said, I don’t want to lower the bar.” – If you have been part of a hiring committee, you will probably have heard some version of that phrase. The first sentence expresses a commitment to diversity. The second sentence qualifies it: diversity shouldn’t get in the way of merit. Interestingly, the same phrase can be heard in opposing ways. A staunch defender of meritocracy will find the second sentence (about not lowering the bar) disingenuous. He will argue that, if you’re committed to diversity, you might be disinclined to hire the “best candidate”. By contrast, a defender of diversity will find the first sentence disingenuous. If you’re going in for meritocratic principles, you will just follow your biases and ultimately take the properties of “white” and “male” as a proxy of merit. – This kind of discussion often runs into a stalemate. As I see it, the problem is to treat diversity and meritocracy as an opposition. I will suggest that this kind of discussion can be more fruitful if we see that diversity is not a property of job candidates but of teams, and thus not to be seen in opposition to meritocratic principles.

Let’s begin with a clarification. I assume that it’s false and harmful to believe that we live in a meritocracy. But that doesn’t mean that meritocratic ideas themselves are bad. If it is simply taken as the idea that one gets a job based on their pertinent qualifications, then I am all for meritocratic principles. However, a great problem in applying such principles is that, arguably, the structure of hiring processes makes it difficult to discern qualifications. Why? Because qualifications are often taken to be indicated by other factors such as prestige etc. But prestige, in turn, might be said to correlate with race, gender, class or whatever, rather than with qualifications. At the end of the day, an adherent of diversity can accuse adherents of meritocracy of the same vices that she finds herself accused of. So when merit and diversity are taken as being in opposition, we tend to end up in the following tangle:

  • Adherents of diversity think that meritocracy is ultimately non-meritocratic, racist, sexist, classist etc.
  • Adherents of meritocracy think that diversity is non-meritocratic, racist, sexist, classist etc.*

What can we do in such a stalemate? How can the discussion be decided? Something that typically gets pointed out is homogeneity. The adherent of diversity will point to the homogeneity of people. Most departments in my profession, for instance, are populated with white men. The homogeneity points to a lack of diversity. Whether this correlates with a homogeneity of merit is certainly questionable. Therefore, the next step in the discussion is typically an epistemological one: How can we know whether the candidates are qualified? More importantly, can we discern quality independently of features such as race, gender or class? – In this situation, adherents of diversity typically refer to studies that reveal implicit biases. Identical CVs, for instance, have been shown to be treated more or less favourably depending on the name on the CV. Meritocratists, by contrast, will typically insist that they can discern quality objectively or correct for biases. Again, both sides seem to have a point. We might be subject to biases, but if we don’t leave decisions to individuals but to, say, committees, then we can perhaps correct for them. At least if these committees are sufficiently diverse, one might add. – However, I think the stalemate will just get passed on indefinitely to different levels, as long as we treat merit and diversity as an opposition. So how can we move forward?

We try to correct for biases, for instance, by making a committee diverse. While this is a helpful step, it also reveals a crucial feature of diversity that is typically ignored in such discussions. Diversity is a feature of a team or group, not of an individual. The merit or qualification of a candidate is something pertaining to that candidate. If we look for a Latinist, for instance, knowledge of Latin will be a meritorious qualification. Diversity, by contrast, is not a feature to be found in the candidate. Rather, it is a feature of the group that the candidate will be part of. Adding a woman to an all-male team will make the team more diverse, but that is not a feature of the candidate. Therefore, accusing adherents of diversity of sexism or racism is fallacious. Trying to build a more diverse team rather than favouring one category strikes me as a means to counter such phenomena.

Now if we accept that there is such a thing as qualification (or merit), it makes sense to say that in choosing a candidate for a job we will take qualifications into account as a necessary condition. But one rarely merely hires a candidate; one builds a team, and thus further considerations apply. One might end up with a number of highly qualified candidates. But then one has to consider other questions, such as the team one is trying to build. And then it seems apt to consider the composition of the team. But that does not mean that merit and diversity are opposed to one another.

Nevertheless, prioritising considerations about the team over considerations about the candidates is often met with suspicion. “She only got the job because …” Such an allegation is indeed sexist, because it construes a diversity consideration applicable to a team as the reason for hiring, as if it were the qualification of an individual. But no matter how suspicious one is, qualification and diversity are not on a par, nor can they be opposing features.

Compare: A singer might complain that the choir hired a soprano rather than him, a tenor. But the choir wasn’t merely looking for a singer but for a soprano. Now that doesn’t make the soprano a better singer than the tenor, nor does it make the tenor better than the soprano. Hiring a soprano is relevant to the quality of the group; it doesn’t reflect the quality of the individual.

____

* However, making such a claim, an adherent of meritocracy will probably rely on the assumption that there is such a thing as “inverted racism or sexism”. In the light of our historical situation, this strikes me as very difficult to argue, at least with regard to institutional structures. It seems like saying that certain doctrines and practices of the Catholic Church are not sexist, simply because there are movements aiming at reform.