Teaching online versus online teaching

Having heard many stories and tips about online teaching, I was very apprehensive about teaching my first class of this term. How could I not fail? At the same time, I’m enormously grateful to all the people and my university for sharing great advice and best practice examples (see e.g. here). Thinking through the scenarios not only got me worried but also gave me a lot of headspace to anticipate and navigate a number of (possible) obstacles before plunging into the unknown. What can I say? I went through all sorts of worries and finally came up with a decision that has, so far, worked for me: I teach online rather than do online teaching. This means that I teach much in the same way I would have done offline, with the small exception that it’s happening online. What does this entail for students and myself?

Let me address students first. I’ve heard many people say that you, students, tend to keep your cameras and microphones off, and use the chat instead. Although I personally prefer to see people’s faces and hear their voices, I think this would be ok for at least some part of the interaction. The reason I recommend switching your gear on and making yourself heard and seen is that you should take your space. Online teaching is often presented as a challenge that deprives us of direct interaction which, in turn, has to be compensated for with all sorts of “tools”. Yes, of course, we are used to physical cues in communication, the perception of which we now often simply miss out on. But what strikes me as crucial is that we participate. I don’t think any amount of online tools or refined environment makes up for your participation in the conversation. I find reading the chat more cumbersome than listening to you. I also prefer speaking over writing. But I’ll get used to the chat and learn to change my ways happily, as long as you participate. The crucial aspect is not how it’s done, online or offline, but that you do it. Make your contributions, ask your questions etc. as before.

That said, I also think that you can participate more fully if you use all the devices available. The online space is not just a toolbox or learning environment. It’s first of all a political space with all the power imbalances and hierarchies that we have offline. It’s at least partly our choice whether we want to amplify or adjust the old space as we’re moving online. This is why I’d recommend using all the resources that enhance your presence for participation. You’re not a passive receptor and instructors are not emulating youtube. We’re still at the university, a public and democratic space for academic exchange.

This morning, I was quite nervous about whether my “strategy” would work. It’s way too early to assess this, but what I find worth reporting is that, for me at least, teaching this course online was much the same experience as teaching it offline. I thought it would be awkward to be talking to a screen, but then people who know me also know that I often speak with my eyes closed… The silence after asking a question might be slightly longer, but we all know that the situation is special, so it’s fine. Of course, I miss cues, but I noticed I can ask for them more often, if need be.

For the time being, I have also decided not to record my teaching events. If someone misses a class, they will miss it. More importantly, I know that recording the stuff would change the stakes for the students and myself. Over and above the well-known privacy issues and unintended misuse of recorded material, watching a lecture is different even from silently participating in it, let alone giving it. (The fancy word for this is “synchronous” teaching, I guess. But that would be misleading. Even if the lectures cannot be viewed later on, students will still experience “asynchronous” teaching. That is, they will still have to do the reading, thinking and discussing before and after. Of course, there is nothing wrong with the “flipped classroom”, but to my mind this is different from actually teaching students.)

Is that so? On the face of it, one might think there is not much difference between watching a recorded lecture and silently taking part in an online lecture, especially if the audience is rather large. I beg to differ. Even if you don’t want to join in, participating in an actual lecture still gives you the opportunity to intervene. At least for me, such an opportunity changes the level and quality of attention, even if (or especially when) I decide not to raise my hand. Seen this way, recording lectures is a way of providing material on top of other material, such as readings, videos and podcasts. It goes without saying that we shouldn’t underestimate the value of such materials. After all, teaching is no replacement for learning, that is: independent self-study is still a, sometimes underestimated, component of the educational process. But the moments of interaction, the so-called contact hours, remain special: they come and go. And with them go the opportunities for participation, for noticing others in relation to yourself. My hunch is that these moments thrive on the fact that they cannot be repeated. In this sense, watching a recorded class is not the same as taking part in a given class.

All that said, this is in no way meant to dismiss the fantastic ideas and tools for proper online teaching. Yet it is crucial to be reminded that, at this moment, most of us teach online (if we have this liberty, that is) because the pandemic has caused an emergency. But just as new social media competence is rooted in old-fashioned reading and writing, the quality of teaching is rooted in our resources of offline thinking and interaction.

In any case, I wish everyone a happy and safe beginning of term.

Are philosophical classics too difficult for students?

Say you would like to learn something about Kant: should you start by reading one of his books or rather get a good introduction to Kant? Personally, I think it’s good to start with primary texts, get confused, ask questions, and then look at the introductions to see some of your questions discussed. Why? Well, I guess it’s better to have a genuine question before looking for answers. However, even before the latest controversy on Twitter (amongst others between Zena Hitz and Kevin Zollman) took off, I had been confronted with quite different views. Taken as an opposition between extreme views, you could ask whether you want to make philosophy about ideas or about people (and their writings). It’s probably inevitable that philosophy ends up being about both, but there is still the question of what we should prioritise.

Arguably, if you expose students to the difficult original texts, you might frighten them off. Thus, Kevin Zollman writes: “If I wanted someone to learn about Kant, I would not send them to read Kant first. Kant is a terrible writer, and is impossible for a novice to understand.” Accordingly, he argues that what should be prioritised is the ideas. In response, Zena Hitz raises a different educational worry: “You’re telling young people (and others) that serious reading is not for them, but only for special experts.” Accordingly, she argues for prioritising the original texts. As Jef Delvaux shows in an extensive reflection, both views touch on deeper problems relating to epistemic justice. A crucial point in his discussion is that we never come purely or unprepared to a primary text anyway. So an emphasis on the primary literature might be prone to a sort of givenism about original texts.

I think that all sides have a point, but when it comes to students wanting to learn about historical texts, there is no way around looking at the original. Let me illustrate my point with a little analogy:

Imagine you want to study music and your main instrument is guitar. It is with great excitement that you attend courses on the music of Bach whom you adore. The first part is supposed to be on his organ works, but already the first day is a disappointment. Your instructor tells you that you shouldn’t listen to Bach’s organ pieces themselves, since they might be far too difficult. Instead you’re presented with a transcription for guitar. Well, that’s actually quite nice because this is indeed more accessible even if it sounds a bit odd. (Taken as an analogy to reading philosophy, this could be a translation of an original source.) But then you look at the sheets. What is this? “Well”, the instructor goes on, “I’ve reduced the accompaniment to the three basic chords. That makes it easier to reproduce it in the exam, too. And we’ll only look at the main melodic motif. In fact, let’s focus on the little motif around the tonic chord. So, if you can reproduce the C major arpeggio, that will be good enough. And it will be a good preparation for my master class on tonic chords in the pre-classic period.” Leaving this music school, you’ll never have listened to any Bach pieces, but you have wonderful three-chord transcriptions for guitar, and after your degree you can set out to write three-chord pieces yourself. If only there were still people interested in Punk!

Of course, this is a bit hyperbolic. But the main point is that too much focus on cutting things down to ‘student size’ will create an artificial entity that has no relation to anything outside the lecture hall. But while I thus agree with Zena Hitz that shunning the texts because of their difficulties sends all sorts of odd messages, I also think that this depends on the purpose at hand. If you want to learn about Kant, you should read Kant, just like you should listen to Bach himself. But what if you’re not really interested in Kant, but in a sort of Kantianism under discussion in a current debate? Then the purpose is not to study Kant, but some concepts deriving from a certain tradition. In that case, you might be more like a jazz player who is interested in building a vocabulary. You might be interested, for instance, in how Bach dealt with phrases over diminished chords and focus on this aspect first. Of course, philosophical education should comprise both a focus on texts and on ideas, but I’d prioritise them in accordance with different purposes.

That said, everything in philosophy is quite difficult. As I see it, a crucial point in teaching is to convey means to find out where exactly the difficulties lie and why they arise. That requires all sorts of texts, primary, secondary, tertiary etc.

Why we shouldn’t study what we love

I recognize that I could only start to write about this … once I related to it. I dislike myself for this; my scholarly pride likes to think I can write about the unrelatable, too. Eric Schliesser

Philosophy students often receive the advice that they should focus on topics that they have a passion for. So if you have fallen for Sartre, ancient scepticism or theories of justice, the general advice is to go for one of those. On the face of it, this seems quite reasonable. A strong motivation might predict good results which, in turn, might motivate you further. However, I think that you might actually learn more by exposing yourself to material, topics and questions that you initially find remote, unwieldy or even boring. In what follows, I’d like to counter the common idea that you should follow your passions and interests, and try to explain why it might help to study things that feel remote.

Let me begin by admitting that this approach is partly motivated by my own experience as a student. I loved and still love to read Nietzsche, especially his aphorisms in The Gay Science. There is something about his prose that just clicks. Yet, I was always sure that I couldn’t write anything interesting about his work. Instead, I began to study Wittgenstein’s Tractatus and works from the Vienna Circle. During my first year, most of these writings didn’t make sense to me: I didn’t see why they found what they said significant; most of the terminology and writing style was unfamiliar. In my second year, I made things worse by diving into medieval philosophy, especially Ockham’s Summa Logicae and Quodlibeta. Again, not because I loved these works. In fact, I found them unwieldy and sometimes outright boring. So why would I expose myself to these things? Already at the time, I felt that I was actually learning something: I began to understand concerns that were alien to me; I learned new terminology; I learned to read Latin. Moreover, I needed to use tools, secondary literature and dictionaries. And for Ockham’s technical terms, there often were no translations. So I learned to move around in the dark. There was no passion for the topics or texts. But speaking with hindsight (and ignoring a lot of frustration along the way), I think I discovered techniques and ultimately even a passion for learning, for familiarising myself with stuff that didn’t resonate with me in the least. (In a way, it seemed to turn out that it’s a lot easier to say interesting things about boring texts than to say even boring things about interesting texts.)

Looking back at these early years of study, I’d now say that I discovered a certain form of scholarly explanation. While reading works I liked was based on a largely unquestioned understanding, reading these unwieldy new texts required me to explain them to myself. This, in turn, prompted two things: To explain these texts (to myself), I needed to learn about the new terminology etc. Additionally, I began to learn something new about myself. Discovering that certain things felt unfamiliar to me, while others seemed familiar, meant that I belonged to one kind of tradition rather than another. Make no mistake: Although I read Nietzsche with an unquestioned familiarity, this doesn’t mean that I could have explained, say, his aphorisms any better than the strange lines of Wittgenstein’s Tractatus. The fact that I thought I understood Nietzsche didn’t give me any scholarly insights about his work. So on top of my newly discovered form of explanation I also found myself in a new relation to myself or to my preferences. I began to learn that it was one thing to like Nietzsche and quite another to explain Nietzsche’s work, and still another to explain one’s own liking (perhaps as being part of a tradition).

So my point about not studying what you like is a point about learning, learning to get oneself into a certain mode of reading. Put more fancily: learning to do a certain way of (history of) philosophy. Being passionate about some work or way of thinking is something that is in need of explanation, just as much as not being passionate and feeling unfamiliar about something needs explaining. Such explanations are greatly aided by alienation. As I said in an earlier post, a crucial effect of alienation is a shift of focus. You can concentrate on things that normally escape your attention: the logical or conceptual structures for instance, ambiguities, things that seemed clear get blurred and vice versa. In this sense, logical formalisation or translation are great tools of alienation that help you to raise questions, and generally take an explanatory stance, even to your most cherished texts.

As a student, discovering this mode of scholarly explanation instilled pride, a pride that can be hurt when explanations fail or evade us. It was remembering this kind of pain, described in the motto of this post, that prompted these musings. There is a lot to be said for aloof scholarship and the pride that comes with it, but sometimes it just doesn’t add up, because some texts require a more passionate or intuitive relation before we can attain a scholarly stance towards them. If the passion can’t be found, it might have to be sought. Just like our ears have to be trained before we can appreciate some forms of, say, very modern music “intuitively”.

Must we claim what we say? A quick way of revising essays

When writing papers, students and advanced philosophers alike are often expected to take a position within a debate and to argue for or against a particular claim. But what if we merely wish to explore positions and look for hidden assumptions, rather than defend a claim? Let’s say you look at a debate and then identify an unaddressed but nevertheless important issue, a commitment left implicit in the debate, let’s call it ‘X’. Writing up your findings, the paper might take the shape of a description of that debate plus an identification of the implicit X. But the typical feedback to such an exploration can be discouraging: It’s often pointed out that the thesis could have been more substantive and that a paper written this way is not publishable unless supplemented with an argument for or against X. Such comments all boil down to the same problem: You should have taken a position within the debate you were describing, but you have failed to do so.

But hang on! We’re all learning together, right? So why is it not ok to have one paper do the work of describing and analysing a debate, highlighting, for instance, some unaddressed X, so that another paper may attempt an answer to the questions about X and come up with a position? Why must we all do the same thing and, for instance, defend an answer on top of everything else? Discussing this issue, we* wondered what this dissatisfaction meant and how to react to it. Is it true? Should you always take a position in a debate when writing a paper? Or is there a way of giving more space to other approaches, such as identifying an unaddressed X?

One way of responding to these worries is to dissect and extend the paper model, for instance, by having students try other genres, such as commentaries, annotated translations, reviews, or structured questions. (A number of posts on this blog are devoted to this.) However, for the purposes of this post, we’d like to suggest and illustrate a different idea. We assume that the current paper model (defending a position) does not differ substantially from other genres of scholarly inquiry. Rather, the difference between, say, a commentary or the description of a debate, on the one hand, and the argument for a claim, on the other, is merely a stylistic one. Now our aim is not to present an elaborate defense of this idea, but to try out how this might help in practice.

To test and illustrate the idea (below), we have dug out some papers and rewritten sections of them. Before presenting one sample, let’s provide a brief manual. The idea rests on the, admittedly somewhat contentious, tenets that

  • any description or analysis can be reformulated as a claim,
  • the evidence provided in a description can be dressed up as an argument for the claim.

But how do you go about it? In describing a debate, you typically identify a number of positions. So what if you don’t want to adopt and argue for one of them? There is something to be said for just picking a side anyway, but if that feels too random, here is a different approach:

(a) One thing you can always do is defend a claim about the nature of the disagreement in the debate. Taken this way, the summary of your description or analysis becomes the claim about the nature of the disagreement, while the analysis of the individual positions functions as an argument / evidence for this claim. This is not a cheap trick; it’s just a pointed way of presenting your material.

(b) A second step consists in actually labelling steps as claims, arguments, evaluations etc. Using such words doesn’t change the content, but it signals even to a hasty reader where your crucial steps begin and end.

Let’s now look at a passage from the conclusion of a paper. Please abstract away from the content of discussion. We’re just interested in identifying pertinent steps. Here is the initial text:

“… Thus, I have dedicated this essay to underscoring the importance of this problem. I have first discussed two of the most prominent levels accounts, namely O&P’s layer-cake account, and Craver and Bechtel’s mechanistic levels, and shown that they both provide radically different levels accounts. I addressed the problems with each account, and it became clear that what is considered to be a problem by some, is considered to be a virtue by others. This led us to uncover a deeper disagreement, namely about what the function of a levels account is supposed to be and what the term “level” means.”

Here is the rewritten version (underlined sections indicate more severe changes or additions):

“… But why is this problem significant? I have first discussed two of the most prominent levels accounts, namely O&P’s layer-cake account, and Craver and Bechtel’s mechanistic levels, and shown that they both provide radically different levels accounts. I addressed the problems with each account, and it became clear that what is considered to be a problem by some, is considered to be a virtue by others. This is in keeping with my second-order thesis that the dispute is less about content but rather about defining criteria. However, this raises the question of what to make of levels on any set of criteria. Answering this question led me to defend my main (first-order) thesis: If we look at the different sets of criteria, we uncover a deeper disagreement, namely about what the function of a levels account is supposed to be and what the term “level” means. Accordingly, I claim that disparate accounts of levels indicate different functions of levels.”

We consider neither passage a piece of beauty. The point is merely to take some work in progress and see what happens if you follow the two steps suggested above: (a) articulate claims; (b) label items as such. – What can we learn from this small exercise? We think that the contrast between these two versions shows just how big an impact the manner of presentation can have, not least on the perceived strength of a text. The desired effect would be that a reader can easily identify what is at stake for the author. Content-wise, both versions say the same thing. However, the first version strikes us as a bit detached and descriptive in character, whereas the second version seems more engaged, embracing a position. What used to be a text about a debate has now become a text partaking in a debate. (Of course, your impressions might differ. So we’d be interested to hear about them!) Another thing we saw confirmed in this exercise is that you always already have a position, because you end up highlighting what matters to you. Having something to say about a debate still amounts to a position. Arguably, it’s also worth presenting it as such.

Where do we go from here? Once you have reformulated such a chunk and labelled some of your ideas (say, as first and second order claims etc.), you can rewrite the rest of your text accordingly. Identify these items in the introduction, and clarify which of those items you argue for in the individual sections of your paper, such that they lead up to these final paragraphs. That will probably allow you (and the reader) to highlight the rough argumentative structure of your paper. Once this is established, it will be much easier to polish individual sections.

____

*Co-authored by Sabine van Haaren and Martin Lenz

On self-censorship

For a few years during the 80s, Modern Talking was one of the best-known pop bands in Germany. But although their first single “You’re my heart, you’re my soul” sold over eight million copies, no one admitted to having bought it. Luckily, my dislike of their music was authentic, so I never had to suffer that particular embarrassment. Yet, imagine all these people alone in their homes, listening to their favourite tune but never daring to acknowledge it openly. Enjoying kitsch of any sort brings the whole drama of self-censorship to the fore. You might be moved deeply, but the loss of face is more unbearable than remaining in hiding. What’s going on here? Depending on what precisely is at stake, people feel very differently about this phenomenon. Some will say that self-censorship just maintains an acceptable level of decency or tact; others will say that it reflects political oppression or, ahem, correctness. At some point, however, you might let go of all shame. Perhaps you’ve got tenure and start blogging or something like that … While some people think it’s a feature of the current “cancel culture”, left or right, I think it’s more important to see the different kinds of reasons behind self-censorship. In some cases, there really is oppression at work; in other cases, it’s peer pressure. Neither is fun. In any case, it’s in the nature of this phenomenon that it is hard to track in a methodologically sound way. So rather than draw a general conclusion, it might be better to go through some very different stories.

Bad thoughts. – Do you remember how you, as a child, entertained the idea that your thoughts might have horrible consequences? My memory is faint, but I still remember assuming that thinking of swear words might entail my parents having an accident. So I felt guilty for thinking these words, and tried to break the curse by uttering them to my parents. But somehow I failed to convince them of the actual function of my utterance, and so they thought I was just calling them names. Today, I know that this is something that happens to occur in children, sometimes in a pathologically strong form known as “intrusive thoughts” within an “obsessive-compulsive disorder”. Whatever the psychological assessment, my experience was that of “forbidden” thoughts and, simultaneously, the inability to explain myself properly. Luckily, it didn’t haunt me, but I can imagine it becoming problematic.

One emergence of the free speech debate. – When I was between 7 and 10 years old (thus in the 1970s), I sometimes visited a lonely elderly woman. She was an acquaintance of my mother, well in her 70s and happy to receive some help. When no one else was around she often explained her political views to me. She was a great admirer of Franz Josef Strauß whom she described to me as a “small Hitler – something that Germany really needs again”. She hastened to explain that, of course, the real Hitler would be too much, but a “small” one would be quite alright. She then praised how, back in the day, women could still go for walks after dark etc. Listening to other people of that generation, I got the impression that many people in Germany shared these ideas. In 2007, the news presenter Eva Herman explicitly praised the family values of Nazi Germany and was dismissed from her position. The current rise of fascism in Germany strikes me as continuous with the sentiments I found around me early on. And if I’m not mistaken these sentiments date back at least to the 1930s and 1940s. In my experience, Nazism was never just an abstract political view. Early on, I realised that otherwise seemingly “decent” people could be taken in by it. But this concrete personal dimension made the sweaty and simplistic attitude to other people all the more repulsive. In any case, I personally found that people in the vicinity of that ideology are the most vocal people who like to portray themselves as “victims” of censorship, though they are certainly not censoring themselves. (When it comes to questions of free speech, I am always surprised that whistleblowers such as Snowden are not mentioned.)

Peer pressure and classism. – I recently hosted a guest post on being a first-generation student that really made me want to write about this issue myself. But often when I think about this topic, I still feel uncomfortable writing about it. In some ways, it’s all quite undramatic in that the transition to academia was made very easy by my friends. For what shouldn’t be forgotten is that it’s not only your parents and teachers who educate you. In my case at least, I tacitly picked up many of the relevant habits from my friends and glided into being a new persona. Although I hold no personal grudges, I know that “clothes make people”, or “the man”, as Gottfried Keller’s story is sometimes translated. What I noticed most is that people from other backgrounds often have a different kind of confidence being around academics. Whether that is an advantage across the board I don’t know. What I do know is that I took great care to keep my own background hidden from most colleagues, at least before getting a tenured job.

Opportunism and tenure. – Personally, I believe that I wouldn’t dare to publish this very post, or indeed any of my posts, had I not obtained a tenured position. Saying this, I don’t want to impart advice. All I want to say is that getting this kind of job is what personally freed me to speak openly about certain things. But the existential weight of this fact makes me think that the greatest problem with self-censorship lies in the different socio-economic status that people find themselves in. This is just my experience, but perhaps it’s worth sharing. So what is it about, you might wonder? There is no particular truth that I would not have told before but would tell now. It’s not a matter of any particular opinion, be it left or right. Rather, it affects just about everything I say. The fact that I feel free to talk about my tastes, about the kitsch I adore, about the music I dislike, about the artworks I find dull, alongside the political inclinations I have – talking about all of this openly, not just politics, is affected by the fact that I cannot be fired just like that and that I do not have to impress anyone I don’t want to impress. It is this freedom that I think not only allows us to speak but also requires us to speak up when others will remain silent out of fear.

The myth of authenticity. – The fact that many of us feel they have to withhold something creates the idea that there might be a vast amount of unspoken truths under the surface. “Yes”, you might be inclined to ask, “but what do you really think?” This reminds me of the assumption that, in our hearts, we speak a private language that we cannot make intelligible to others. Or of the questions immigrants get to hear when people inquire where they really come from. It doesn’t really make sense. While it is likely that many people do not say what they would say if their situation were different, I don’t think it’s right to construe this as a situation of hidden truths or lies. (Some people construe the fact that we might conceal our opinions as lying. But I doubt that’s a pertinent description.) For better or worse, the world we live in is all we have when it comes to questions of authenticity. If you choose to remain silent, there is no hidden truth left unspoken. It just is what it is: you’re not speaking up and you might be in agony about that. You might conceal what you think. But then it is the concealing that shapes the world and yourself, not the stuff left unspoken. Put differently, there are no truths, no hidden selves, authentic or not, that persist without some relation to interlocutors.

***

Speaking of which, I want to finish this post with a word of thanks. It’s now two years ago that I started this blog. By now I have written 118 posts. If I include the guest posts, it adds up to 131. Besides having the pleasure of hosting great guest authors, I feel enormously privileged to write for you openly. On the one hand, this is enabled by the relatively comfortable situation that I am in. On the other hand, none of this would add up to anything if it weren’t for you, dear interlocutors.

Why using quotation marks doesn’t cancel racism or sexism. With a brief response to Agnes Callard

Would you show an ISIS video, depicting a brutal killing of hostages, to a survivor of their attacks? Or if you prefer a linguistic medium: would you read Breivik’s Manifesto to a survivor of his massacre? – Asking these questions, I’m assuming that none of you would be inclined to endorse these items. That’s not the point. The question is why you would not present such items to a survivor or perhaps indeed to anyone. My hunch is that you would not want to hurt or harm your audience. Am I right? Well, if this is even remotely correct, why do so many people insist on continuing to present racist, sexist or other dehumanising expressions, such as the n-word, to others? And why do we decry the take-down of past authors as racists and sexists? Under the label of free speech, of all things? I shall suggest that this kind of insistence relies on what I call the quotation illusion, and hope to show that the use/mention distinction doesn’t really do the work required for this purpose.

Many people assume that there is a clear distinction between use and mention. When saying that “stop” has four letters, I’m not using the expression (to stop or alert you). Rather, I am merely mentioning the word to talk about it. Similarly, embedding a video or passages from a text into a context in which I talk about these items is not a straightforward use of them. I’m not endorsing what these things supposedly intend to express or achieve. Rather, I am embedding them in a context in which I might, for instance, talk about the effects of propaganda. It is often assumed that this kind of “going meta” or mentioning is categorically different from using expressions or endorsing statements. As I noted in an earlier post, if I use an insult or sincerely threaten people by verbal means, I act and cause harm. But if I consider a counterfactual possibility or quote someone’s words, my expressions seem clearly detached from action. However, the relation to possible action is what contributes to making language meaningful in the first place. Even if I merely quote an insult, you still understand that quotation in virtue of understanding real insults. In other words, understanding such embeddings or mentions rides piggy-back on understanding straightforward uses.

If this is correct, then the difference between use and mention is not a categorical one but one of degrees. Thus, the idea that quotations are completely detached from what they express strikes me as illusory. Of course, we can and should study all kinds of expressions, including expressions of violence. But their mention or embedding should never be casual or justified by mere convention or tradition. If you considered showing that ISIS video, you would probably preface your act with a warning. – No? You’re against trigger warnings? So would you explain to your audience that you were just quoting or ask them to stop shunning our history? And would you perhaps preface your admonitions with a defense of free speech? – As I see it, embedded mentions of dehumanising expressions do carry some of the demeaning attitudes. So exposing others to them merely to make a point about free speech strikes me as verbal bullying. However, this doesn’t mean that we should stop quoting or mentioning problematic texts (or videos). It just means that prefacing such quotations with pertinent warnings is an act of basic courtesy, not coddling.

The upshot is that we cannot simply rely on a clear distinction between quotation and endorsement, or mention and use. But if this is correct, then what about reading racist or sexist classics? As I have noted earlier, the point would not be to simply shun Aristotle or others for their bigotry. Rather, we should note their moral shortcomings as much as we should look into ours. For since we live in some continuity with our canon, we are to some degree complicit in their racism and sexism.

Yet instead of acknowledging our own involvement in our history, the treatment of problematic authors is often justified by claiming that we are able to detach ourselves from their involvement, usually by helping ourselves to the use-mention distinction. A recent and intriguing response to this challenge comes from Agnes Callard, who claims that we can treat someone like Aristotle as if he were an “alien”. We can detach ourselves, she claims, by interpreting his language “literally”, i.e. as a vehicle “purely for the contents of his belief” and as opposed to “messaging”, “situated within some kind of power struggle”. Taken this way, we can grasp his beliefs “without hostility”, and the benefits of reading come “without costs”. This isn’t exactly the use-mention distinction. Rather, it is the idea that we can entertain or consider ideas without involvement, force or attitude. In this sense, it is a variant of the quotation illusion: Even if I believe that your claims are false or unintelligible, I can quote you – without adding my own view. I can say that you said “it’s raining” without believing it. Of course I can also use an indirect quote or a paraphrase, a translation and so on. Based on this convenient feature of language, historians of philosophy (often including myself) fall prey to the illusion that they can present past ideas without imparting judgment. Does this work?

Personally, I doubt that the literal reading Callard suggests really works. Let me be clear: I don’t doubt that Callard is an enormously good scholar. Quite the contrary. But I’m not convinced that she does justice to the study that she and others are involved in when specifying it as a literal reading. Firstly, we don’t really hear Aristotle literally but mediated through various traditions, including quite modern ones, that partly even use his works to justify their bigoted views. Secondly, even if we could switch off Aristotle’s political attitudes and grasp his pure thoughts, without his hostility, I doubt that we could shun our own attitudes. Again, could you read Breivik’s Manifesto, ignoring Breivik’s actions, and merely grasp his thoughts? Of course, Aristotle is not Breivik. But if literal reading is possible for one, then why not for the other?

The upshot is: once I understand that a way of speaking is racist or sexist, I cannot unlearn this. If I know that ways of speaking hurt or harm others, I should refrain from speaking this way. If I have scholarly or other good reasons to quote such speech, I shouldn’t do so without a pertinent comment. But I agree with Callard’s conclusion: We shouldn’t simply “cancel” such speech or indeed its authors. Rather, we should engage with it, try and contextualise it properly. And also try and see the extent of our own involvement and complicity. The world is a messy place. So are language and history.

“We don’t need no …” On linguistic inequality

Deviations from so-called standard forms of language (such as the double negative) make you stand out immediately. Try and use double negatives consistently in your university courses or at the next job interview and see how people react. Even if people won’t correct you explicitly, many will do so tacitly. Such features of language function as social markers and evoke pertinent gut reactions. Arguably, this is not only true of grammatical or lexical features, but also of broader stylistic features in writing, speech and even non-linguistic conduct. Some ways of phrasing may sound like heavy boots. Depending on our upbringing, we are familiar with quite different linguistic features. While none of this might be news, it raises crucial questions about teaching that I rarely see addressed. How do we respond to linguistic and stylistic diversity? When we say that certain students “are struggling”, we often mean that they deviate from our stylistic expectations. A common reaction is to impart techniques that help them conform to such expectations. But should we perhaps respond by trying to understand the “deviant” style?

Reading the double negative “We don’t need no …”, you might see quite different things: (1) a grammatically incorrect phrase in English; (2) a grammatically correct phrase in English; (3) part of a famous song by Pink Floyd. Assuming that many of us recognise these things, some will hasten to add that (2) contradicts (1). A seemingly obvious way to resolve this is to say that reading (1) applies to what is called the standard dialect of English (British English), while (2) applies to some dialects of English (e.g. African-American Vernacular English). This solution prioritises one standard over other “deviant” forms that are deemed incorrect or informal etc. It is obvious that this hierarchy goes hand in hand with social tensions. At German schools and universities, for instance, you can find numerous students and lecturers who hide their dialects or accents. In linguistics, the disadvantages of regional dialect speakers have long been acknowledged. Even if the prescriptive approach has long been challenged, it still drives much of the implicit culture in education.

But the distinction between standard and deviant forms of language ignores the fact that the latter often come with long-standing rules of their own. Adjusting to the style of your teacher might then require you to deviate from the language of your parents. Thus another solution is to say that there are different English languages. Accordingly, we can acknowledge reading (2) and call African-American Vernacular English (AAVE) a language. The precise status and genealogy are a matter of linguistic controversy. However, the social and political repercussions of this solution come most clearly into view when we consider the public debate about teaching what is called “Ebonics” at school in the 90s (here is a very instructive video about this debate). If we acknowledge reading (2), it means, mutatis mutandis, that many English speakers raised with AAVE can be considered bilingual. Educators realised that teaching standard forms of English can be aided greatly by using AAVE as the language of instruction. Yet, trying to implement this as a policy at school soon resulted in a debate about a “political correctness exemplar gone out of control” and abandoning the “language of Shakespeare”. The bottom line is: Non-hierarchical acknowledgement of different standards quickly spirals into defences of the supposed status quo by the dominant social group.

Supposed standards and deviations readily extend to styles of writing and conduct in academic philosophy. We all have a rough idea what a typical lecture looks like, how a discussion goes and how a paper should be structured. Accordingly, attempts at diversification are met with suspicion. Will they be as good as our standards? Won’t they undermine the clarity we have achieved in our styles of reasoning? A more traditional division is that between so-called analytic and continental philosophy. Given the social gut reactions to diversifying linguistic standards, it might not come as a surprise that we find similar responses among philosophers: Shortly before the University of Cambridge awarded an honorary degree to Derrida in 1992, a group of philosophers published an open letter protesting that “Derrida’s work does not meet accepted standards of clarity and rigour.” (Eric Schliesser has a succinct analysis of the letter.) Rather than acknowledging that there might be various standards emerging from different traditions, the supposedly dominant standard of clarity is often defended like an eternal Platonic idea.

While it is easy to see and criticise this, it is much more difficult to find a way of dealing with it in the messy real world. My historically minded self has had, and still has, the luxury of engaging with a variety of styles without having to pass judgment, at least not explicitly. More importantly, when teaching students I have to strike a balance between acknowledging variety and preparing them for situations in which such acknowledgement won’t be welcome. In other words, I try to teach “the standard”, while trying to show its limits within an array of alternatives. My goal in teaching, then, would not be to drive out “deviant” stylistic features, but to point to various resources required in different contexts. History (of philosophy) clearly helps with that. But the real resources are provided by the students themselves. Ultimately, I would hope not to teach them how to write, but how to find their own voices within their various backgrounds and to learn to gear them towards different purposes.

But to do so, I have to learn, to some degree, the idioms of my students and try to understand the deep structure of their ways of expression. Not as superior, not as inferior, but as resourceful within contexts yet unknown to me. On the other hand, I cannot but also lay open my own reactions and those of the traditions I am part of. – Returning to the fact that language comes with social markers, perhaps one of the most important aspects of teaching is to convey a variety of means to understand and express oneself through language. Our gut reactions run very deep, and what are perceived as linguistic ‘shortcomings’ will move people, one way or another. But there is a double truth: Although we often cannot but go along with our standards, they will very soon be out of date. New standards and styles will emerge. And we, or I should say “I”, will just sound old-fashioned at best. Memento mori.

You don’t get what you deserve. Part II: diversity versus meritocracy?

“I’m all for diversity. That said, I don’t want to lower the bar.” – If you have been part of a hiring committee, you will probably have heard some version of that phrase. The first sentence expresses a commitment to diversity. The second sentence qualifies it: diversity shouldn’t get in the way of merit. Interestingly, the same phrase can be heard in opposing ways. A staunch defender of meritocracy will find the second sentence (about not lowering the bar) disingenuous. He will argue that, if you’re committed to diversity, you might be disinclined to hire the “best candidate”. By contrast, a defender of diversity will find the first sentence disingenuous. If you’re going in for meritocratic principles, you will just follow your biases and ultimately take the properties of “white” and “male” as a proxy of merit. – This kind of discussion often runs into a stalemate. As I see it, the problem is to treat diversity and meritocracy as an opposition. I will suggest that this kind of discussion can be more fruitful if we see that diversity is not a property of job candidates but of teams, and thus not opposed to meritocratic principles.

Let’s begin with a clarification. I assume that it’s false and harmful to believe that we live in a meritocracy. But that doesn’t mean that meritocratic ideas themselves are bad. If it is simply taken as the idea that one gets a job based on their pertinent qualifications, then I am all for meritocratic principles. However, a great problem in applying such principles is that, arguably, the structure of hiring processes makes it difficult to discern qualifications. Why? Because qualifications are often taken to be indicated by other factors such as prestige etc. But prestige, in turn, might be said to correlate with race, gender, class or whatever, rather than with qualifications. At the end of the day, an adherent of diversity can accuse adherents of meritocracy of the same vices that she finds herself accused of. So when merit and diversity are taken as being in opposition, we tend to end up in the following tangle:

  • Adherents of diversity think that meritocracy is ultimately non-meritocratic, racist, sexist, classist etc.
  • Adherents of meritocracy think that diversity is non-meritocratic, racist, sexist, classist etc.*

What can we do in such a stalemate? How can the discussion be decided? Something that typically gets pointed out is homogeneity. The adherent of diversity will point to the homogeneity of people. Most departments in my profession, for instance, are populated with white men. The homogeneity points to a lack of diversity. Whether this correlates to a homogeneity of merit is certainly questionable. Therefore, the next step in the discussion is typically an epistemological one: How can we know whether the candidates are qualified? More importantly, can we discern quality independently from features such as race, gender or class? – In this situation, adherents of diversity typically refer to studies that reveal implicit biases. Identical CVs, for instance, have been shown to be treated as more or less favourable depending on the features of the name on the CV. Meritocratists, by contrast, will typically insist that they can discern quality objectively or correct for biases. Again, both sides seem to have a point. We might be subject to biases, but if we don’t leave decisions to individuals but to, say, committees, then we can perhaps correct for biases. At least if these committees are sufficiently diverse, one might add. – However, I think the stalemate will get passed on indefinitely to different levels, as long as we treat merit and diversity as an opposition. So how can we move forward?

We try to correct for biases, for instance, by making a committee diverse. While this is a helpful step, it also reveals a crucial feature about diversity that is typically ignored in such discussions. Diversity is a feature of a team or group, not of an individual. The merit or qualification of a candidate is something pertaining to that candidate. If we look for a Latinist, for instance, knowledge of Latin will be a meritorious qualification. Diversity, by contrast, is not a feature to be found in the candidate. Rather, it is a feature of the group that the candidate will be part of. Adding a woman to an all-male team will make the team more diverse, but that is not a feature of the candidate. Therefore, accusing adherents of diversity of sexism or racism is fallacious. Trying to build a more diverse team rather than favouring one category strikes me as a means to counter such phenomena.

Now if we accept that there is such a thing as qualification (or merit), it makes sense to say that in choosing a candidate for a job we will take qualifications into account as a necessary condition. But one rarely merely hires a candidate; one builds a team, and thus further considerations apply. One might end up with a number of highly qualified candidates. But then one has to consider other questions, such as the team one is trying to build. And then it seems apt to consider the composition of the team. But that does not mean that merit and diversity are opposed to one another.

Nevertheless, prioritising considerations about the team over considerations about the candidates is often met with suspicion. “She only got the job because …” Such an allegation is indeed sexist, because it construes a diversity consideration applicable to a team as the reason for hiring, as if it were the qualification of an individual. But no matter how suspicious one is, qualification and diversity are not on a par, nor can they be opposing features.

Compare: A singer might complain that the choir hired a soprano rather than him, a tenor. But the choir wasn’t merely looking for a singer but for a soprano. Now that doesn’t make the soprano a better singer than the tenor, nor does it make the tenor better than the soprano. Hiring a soprano is relevant to the quality of the group; it doesn’t reflect the quality of the individual.

____

* However, making such a claim, an adherent of meritocracy will probably rely on the assumption that there is such a thing as “inverted racism or sexism”. In the light of our historical situation, this strikes me as very difficult to argue, at least with regard to institutional structures. It seems like saying that certain doctrines and practices of the Catholic Church are not sexist, simply because there are movements aiming at reform.

Fit. A Note on Aristotle’s Presence in Academia

With the so-called Scientific Revolution and the birth of modern science, our Western approach to the world became quantitative. The previously dominant qualitative Aristotelian worldview of the Scholastics was replaced by a quantitative one: everything around us was supposed to be quantifiable and quantified. This, of course, affects today’s academia, too. We often hear “do this, it will be one more line in your CV!”

Many will reply “This is not true, quality matters just as much!” Yes, it (sometimes) matters in which journal one publishes; it has to be a good journal; one needs to make sure that the quality of the article is good. And how do we know whether the journal is good or not? Because of its ranking. So if you thought I would argue that this is Aristotle’s presence in academia… you were wrong. The criterion is still quantitative. Of course, we trust more that an article in a respectable (i.e., highly ranked) journal is a good one, but we all know this is not always the case.

Bringing the distinction between the qualitative and the quantitative into the discussion is crucial for assessing job applications and the ensuing hiring process. While it used to be easier for those in a position of power to hire whom they wanted, it has become a bit more difficult. Imagine you really want to hire someone because he (I will use this pronoun for certain reasons) is brilliant. But his brilliance is not reflected in his publications, presentations, teaching evaluations, grants (the latter because he did not get any)… You cannot even say he is a promising scholar, since that should be visible in something. At the same time, there are a lot of competing applications with an impressive record. So what can one do? Make use of the category ‘better fit’, ‘better fit’ for the position, ‘better fit’ for the department.[1] But when is someone a ‘better fit’, given that the job description did not mention anything to this effect? When their research is in line with the department? No, too much overlap! When it complements the existing areas of research? No, way too different!

And here is where Aristotle comes into the picture. It is not the research that has to fit, but the person. And we know from Aristotle and his followers that gender, race and nationality are the result of the (four elemental) qualities. Who can be more fit for a department mostly composed of men from Western Europe than another man from Western Europe? As a woman coming from Eastern Europe, I have no chance. And Eastern Europe is not even the worst place to come from in this respect. 

There is a caveat though. When several people who fit the department apply, the committee seeks refuge in positing some ‘occult qualities’ to choose the ‘right’ person. ‘Occult’ in the Scholastic sense means that the quality is not manifest in any way in the person’s profile.[2]

How much is this different from days when positions were just given away on the basis of personal preference? The difference lies in the charade.[3] The difference is that nowadays a bunch of other people, devoid of occult qualities, though with an impressive array of qualities manifest in their CVs and international recognition, spend time and energy to prepare an application, get frustrated, maybe even get sick, just so that the person with the ‘better fit’ can have the impression that he is much better than all the rest who applied.

So when are we going to give up the Aristotelian-Scholastic elementary and occult qualities and opt for a different set of more inclusive qualities?


[1] Aristotle probably put it in his Categories, but it got lost.

[2] I am rather unfair with this term, because the occult qualities did make themselves manifest through certain effects.

[3] The Oxford dictionary indeed defines charade as “an absurd pretence intended to create a pleasant or respectable appearance.”

On being a first-generation student

First off: the following is not to be taken as a tale of woe. I am grateful for whatever life has had on offer for me so far, and I am indebted to my teachers – from primary school to university and beyond – in many ways. But I felt that, given that Martin invited me to do so, I should probably provide some context to my comment on his recent post on meritocracy, in which I claimed that my being a first-generation student has had a “profound influence on how I conceive of academia”. So here goes.

I am a first-generation student from a lower-middle-class family. My grandparents on the maternal side owned and operated a small farm, my grandfather on the paternal side worked in a foundry, and his wife – my father’s mother – did off-the-books work as a cleaning woman in order to make ends meet.

When I got my first job as a lecturer in philosophy, my monthly income already exceeded that of my mother, who has worked a full-time job in a hospital for more than thirty years. My father, a bricklayer by training, is by now divorced from my mother and declutters homes for a living. Sometimes he calls me in order to tell me about a particularly good bargain he struck at the flea market.

My parents did not save money for my education. As an undergraduate I was lucky to receive close to the maximum amount of financial assistance afforded by the German Federal Law on Support in Education (BAföG) – still, I had to work in order to be able to fully support myself (tuition fees, which had just been introduced when I began my studies, did not help). At the worst time, I juggled three jobs on the side. I have work experience as a call center agent (bad), cleaning woman (not as bad), fitness club receptionist (strange), private tutor (okay), and teaching assistant (by far the nicest experience).

Not every middle-class family is the same, of course. Nor is every family in which both parents are non-academics. Here is one way in which the latter differ: There are those parents who encourage – or, sometimes, push – their children to do better than themselves, who emphasize the value of higher education, who make sure their children acquire certain skills that are tied to a particular habitus (like playing the piano), who provide age-appropriate books and art experiences. My parents were not like that. “Doing well”, especially for my father, meant having a secure and “down-to-earth” job, ideally for a lifetime. For a boy, this would have been a craft. Girls, ostensibly being less well-suited for handiwork, should strive for a desk job – or aim “to be provided for”. My father had strong reservations about my going to grammar school, even though I did well in primary school and despite my teacher’s unambiguous recommendation. I think it never occurred to him that I could want to attend university – academia was a world too far removed from his own to even consider that possibility.

I think that my upbringing has shaped – and shapes – my experience of academia in many ways. Some of these I consider good, others I have considered stifling at times. And some might even be loosely related to Martin’s blogpost about meritocracy. Let me mention a few points (much of what follows is not news, and has been put more eloquently by others):

  • Estrangement. An awareness of the ways in which the experiences of my childhood and youth, my interests and preferences, my habits and skills differ from what I consider a prototypical academic philosopher – and I concede that my picture of said prototype might be somewhat exaggerated – has often made me feel “not quite at home” in academia. At the same time, my “professional advancement” has been accompanied by a growing estrangement from my family. This is something that, to my knowledge, many first-generation students testify to, and which can be painful at times. My day-to-day life does not have much in common with my parents’ life, and my struggles (Will this or that paper ever get published?) must seem alien, if not ridiculous, to them. They have no clear idea of what it is that I do, other than that it consists of a lot of desk-sitting, reading, and typing. And I think it is hard for them to understand why anyone would even want to do something like this. One thing I am pretty sure of is that academia is indeed, in one sense at least, a comparatively cozy bubble. And while I deem it admirable to think of ways of how to engage more with the public, I am often unsure about how much of what we actually do can be made intelligible to “the folk”, or justified in the face of crushing real-world problems.
  • Empathy. One reason why I am grateful for my experiences is that they help me empathize with my students, especially those who seem to be afflicted by some kind of hardship – or so I think. I believe that I am a reasonably good and well-liked teacher, and I think that part of what makes my teaching good is precisely this: empathy. Also, I believe that my experiences are responsible for a heightened sensibility to mechanisms of inclusion and exclusion, and privilege. I know that – being white, having grown up in a relatively secure small town, being blessed with resilience and a certain kind of stubbornness, and so on – I am still very well-off. And I do not want to pretend that I know what it is like to come from real poverty, or how it feels to be a victim of racism or constant harassment. But I hope that I am reasonably open to others’ stories about these kinds of things.
  • Authority. In my family of origin, the prevailing attitude towards intellectuals was a strange mixture of contempt and reverence. Both sentiments were probably due to a sense of disparity: intellectuals seemed to belong to a kind of people quite different from ourselves. This attitude has, I believe, shaped how I perceived my teachers when I was a philosophy student. I noticed that our lecturers invited us – me – to engage with them “on equal terms”, but I could not bring myself to do so. I had a clear sense of hierarchy; to me, my teachers were authorities. I did eventually manage to speak up in class, but I often felt at a loss for words outside of the classroom setting with its relatively fixed and easily discernible rules. I also struggled with finding my voice in class papers, with taking up and defending a certain position. I realize that this struggle is probably not unique to first-generation students, or to students from lower-class or lower-middle-class backgrounds, or to students whose parents are immigrants, et cetera – but I believe that the struggle is often aggravated by backgrounds like these. As teachers, I think, we should pay close attention to the different needs our students might have regarding how we engage with them. It should go without saying, but if someone seems shy or reserved, don’t try to push them into a friendly and casual conversation about the model of femininity and its relation to sexuality in the novel you recently read.
  • Merit. Now, how does all this relate to the idea of meritocracy? I think there is a lot to say about meritocracy, much more than can possibly be covered in a (somewhat rambling) blogpost. But let me try to point out at least one aspect. Martin loosely characterizes the belief in meritocracy as the belief that “if you’re good or trying hard enough, you’ll get where you want”. But what does “being good enough” or “trying hard enough” amount to in the first place? Are two students who write equally good term papers working equally hard? What if one of them has two children to care for while the other one still lives with and is supported by her parents? What if one struggles with depression while the other does not? What if one comes equipped with “cultural capital” and a sense of entitlement, while the other feels undeserving and stupid? I am not sure about how to answer these questions. But one thing that has always bothered me is talk of students being “smart” or “not so smart”. Much has been written about this already. And yet, some people still talk that way. Many of the students I teach struggle with writing scientific prose, many of them struggle with understanding the assigned readings, many of them struggle with the task of “making up their own minds” or “finding their voice”. And while I agree that those who do not struggle, or who do not struggle as much, should, of course, be encouraged and supported – I sometimes think that the struggling students might be the ones who benefit the most from our teaching philosophy, and for whom our dedication and encouragement might really make a much-needed difference. It certainly did so for me.