The editors of Vivarium, a leading journal in the history of philosophy, recently published a notice on the retraction of several articles. It comes as no surprise that the case was much discussed on social media. Alongside the shock about the incident, it was the retraction notice itself that drew the attention of blogs and individual commenters. The gist was that the editors had done a good job in conscientiously documenting instances of alleged plagiarism and in describing the “cut throat nature of academic life”, as Eric Schliesser put it in a timely post on the issue. In what follows, I want to confine myself to the nature of the retraction notice.
What struck me in this notice is an aspect that I would like to call the moral framing of the editorial work in opposition to much of the rest of academia. Here is the passage I have in mind:
“We do not enjoy performing our duty. For marginal fields such as those served by Vivarium, we have seen from experience that the damage wreaked by plagiarism extends to institutions, bringing vulnerable positions, departments, and institutes to the attention of administrators eager to let the rationale of collective punishment direct the evisceration of budgets in Social Sciences and the Humanities. Our colleagues in adjacent fields will seize upon public cases of misconduct as an opportunity to reallocate scarce resources in their favor, thereby ensuring that those who previously lost out to plagiarists in competition for fellowships and positions lose out once again.” (C. Schabel / W. Duba, Notice, Vivarium 2020, 257; italics mine)
The contrast drawn here is between the editors’ unpleasant “duty” and the, shall we say, moral recklessness of administrators and colleagues. Of course, we are familiar with tirades about academia. But this is a formal notice about the reasons for retraction, in a top journal of the discipline. The conscientious listing of passages that follows makes for a strange contrast to the allusions (“we have seen from experience”) and unverified accusations expressed here. For a journal that rightly prides itself on standards of scholarly evidence, this is not a good look. Let me point out two aspects:
Firstly, it might indeed be the case that there are “administrators” who could be quoted as having used measures of “collective punishment” in such cases. But do we have evidence about this? And is this really evidence about the “eagerness” of administrators or are we looking at an even more structural issue? Most importantly, what is the reason to point this out in the given context? Does it serve to heighten the blameworthiness of what is being documented?
Secondly, I wonder about the reference to “our colleagues”. Since I am a specialist in the pertinent “marginal field”, the expression “our colleagues” should extend to my colleagues. The phrasing according to which they “will seize upon public cases” amounts to a prediction of their behaviour. Have my academic colleagues done such things? Are they likely to do such things? I know that people say all sorts of bad things on Twitter and I know that academia is competitive, but nothing I heard about such cases would bear testimony to the supposed behaviour. Again, would it not be apt to provide at least some evidence for this prediction?
Thus, we might say that the notice has a twofold structure: on the one hand, it outlines the passages and reasons for retraction; on the other hand, it frames this outline in a wider context of academic practices and moral standards. But while the outline fulfils good scholarly standards, the adjacent framing appeals to undocumented experience or hearsay. It is especially this latter part that strikes me as problematic, not least because it treats sociological assumptions about the current academic context as something that does not require reliable evidence.
“The ‘thorough’. – Those who are slow to know think that slowness is an aspect of knowledge.” Friedrich Nietzsche, The Gay Science
“Damn! I should have thought of that reply yesterday afternoon!” Do you know this situation? You’re confronted with an objection or perhaps a somewhat condescending witticism, and you just don’t know what to say. But the next day, you think of the perfect response. Let’s call this intellectual regret: wishing you had had that retort ready at the time, or had even anticipated the objection when stating your thesis, but you didn’t. Much of our intellectual lives is probably spent somewhere between regret and attempts at anticipating objections. What does this feeling of regret indicate? To a first approximation, we might say it indicates that we think of ourselves as imperfect. When we didn’t anticipate or even reply to an objection, something was missing. What was missing? Arguably, we were lacking what is often called smartness, often construed as the ability to quickly defend ideas against objections. But is that so?
Given our adversarial culture, we often take ourselves to be either winning or losing arguments. Thus, we tend to see oppositions to our ideas as competitive. If we say “not p” and someone else advances arguments for “p”, we have the tendency to become defensive rather than embrace “p”. Accordingly, we often structure our work as the defence of a position, say “not p”. The anticipation of the objection “p” is celebrated in a hero narrative, in which p is confronted and successfully conquered. This is why intellectual regrets might loom large in our mental lives. As I see it, however, the hero narrative is problematic. As I have argued earlier, it instils the desire to pick a side, ideally the right side. But this misses a crucial point of philosophical work: Taking sides is at best transitory; in the end it’s not more battle but peace that awaits us. In what follows, I’d like to suggest that we tend to misconstrue the nature of oppositions in philosophy. Rather than competitive positions, oppositions form an integral part of the ideas they seem to be opposed to. Accordingly, what we miss out on in situations of intellectual regret is not a smart defence of a position or dogma, but a more thorough understanding of our own ideas. But in what way does such an understanding involve oppositions?
Understanding through antonyms. – How do you explain philosophical ideas to someone? A quick way consists in contrasting a term with an opposed term or antonym. Asked what “objectivity” means, a good start is to begin by explaining how we use the term “subjective”. Wishing to explain an “ism”, it helps to look for the opposed ism. So you might explain “externalism” as an idea countering internalism. Now you might object that this shows how philosophers often proceed by carving out positions. It does indeed show as much. But it is equally true that we often need to understand the opposing position in order to understand a claim. So while the philosophical conversation often seems to unfold through positions and counterpositions, understanding them requires the alternatives. Negative theology has turned this into a thorough approach. – Now you might object that this might be a merely instrumental feature of understanding ideas, but it doesn’t show that one idea involves another idea as an integral part. After all, subjectivism is to be seen in opposition to objectivism, and if you’re holding one position, you cannot hold the other. To this I reply that, again, my point is indeed not about positions but about understanding. However, this feature is not merely instrumental. Arguably, at least certain ideas (such as subjectivism or atomism) do not make sense without their oppositions (objectivism or holism). That is, the relation to opposed ideas is part of the identity of certain ideas.
Counterfactual reasoning. – It certainly helps to think of opposed terms when making sense of an idea. But this feature extends to understanding whole situations and indeed our lives as such. Understanding the situation we are in often requires counterfactual reasoning. Appreciating the sunshine might be enhanced by imagining what would be the case if it were raining. Understanding our biographies and past involves taking into account a number of what-ifs. Planning ahead involves imagining decidedly what is not the case but should be. Arguably, states such as hope, surprise, angst, and regret would be impossible if it were not for counterfactual ideas. So again, identifying the situation we are in involves alternatives and opposites.
Dialectical reasoning. – You might argue again that this just shows a clear distinction between ideas and their opposites. I do not deny that we might decidedly walk through one situation or idea before entering or imagining another. However, at least for some ideas and situations, encountering their oppositions or alternatives does not only involve understanding their negation. Rather, it leads to a new or reflective understanding of the original idea. You will have a new sense of your physical integrity or health once you’ve been hurt. You understand “doubt” differently once you realise that you cannot doubt everything at once (when seeing that you cannot doubt that you’re doubting) or once you have thought through the antisceptical ideas of Spinoza or Wittgenstein. Your jealousy might be altered when you’ve seen others acting out of jealousy. Similarly, Hume’s vulgar view is a different one once you realise that you cannot let go of it (even after considering the philosophical view). The dynamics of dialectical reasoning don’t just produce alternatives but new and arguably enriched identities.
Triangulation and identity. – Once we recognise how understanding opposites produces a new identity for an idea, or for the understanding of our very own lives (the examined life!), we see how this pervades our thinking. Entering a town from the station is different from entering it via the airport or seeing it on a map. Here we have three different and perhaps opposing ways of approaching the same place. But who would think that one position or one way of seeing things is true or better or more advanced than another? All of these ways might be taken as different senses, guiding us to the same referent, or as different perspectives in a triangulation between two different interlocutors. When it comes to understanding what we call reality, there is of course a vast amount of senses or triangulations. But who would think one is better? Must we not say that each perspective adds yet more to the same endeavour? So while we might progress through various positions, I would doubt that they are competitive. Rather they strike me as contributing to the same understanding.
Disagreement. – If this is a viable view of things, it might strike you as a bit neoplatonist, but there is nothing wrong with that. What, though, about real disagreements? Where can we place disagreements, if all oppositions are ultimately taken to resolve into one understanding? Given how stubborn humans are, it would be odd to say that oppositions are merely transient. My hunch is that there can be real disagreement. How? Whenever we have a position or perspective, we can benefit from another perspective on the same thing or issue or idea. In keeping with what I said above, encountering such a different perspective would not be a disagreement but, ultimately, an enrichment. That said, we can doubt whether our different perspectives really concern the same thing or idea. We might assume that we now enter the same town we saw on the map earlier. But perhaps we in fact enter a different town. Assuming sameness is a presupposition of making sense or having a sensible disagreement about the same. But our attempt at tracing sameness might fail.
Returning to our intellectual regrets, what is mostly missing is not the smartness to outperform our interlocutors. Rather, we might lack a fuller understanding that only the perspectives of others can afford us. In this sense, the oppositions in philosophy are transient. If we have to wait and listen to others, that slowness is indeed an aspect of knowledge.
“… solipsism strictly carried out coincides with pure realism. The I in solipsism shrinks to an extensionless point and there remains the reality co-ordinated with it.” Wittgenstein, TLP 5.64
When was the last time you felt really and wholly understood? If this question is meaningful, then there are such moments. I’d say, it does happen, but very rarely. If things move in a good direction, there is an overlap or some contiguity or a fruitful friction in your conversation. Much of the time, though, I feel misunderstood or I feel that I have misunderstood others. – Starting from such doubts, you could take this view to its extremes and argue that only you understand yourself or, more extreme still, that there is nothing external to your own mind. But I have to admit that I find these extreme brands of solipsism, as often discussed in philosophy, rather boring. They are highly implausible and don’t capture what I think is a crucial idea in solipsism. What I find crucial is the idea that each of us is fundamentally alone. However, it’s important to understand in what sense we are alone. As I see it, I am not alone in the sense that only I know myself or only my mind exists. Rather, I am alone insofar as I am different from others. Solitude, then, is not merely a feeling but also a fact about the way we are.* In what follows, I’d like to suggest reasons for embracing this view and how its acknowledgement might actually make us more social.
Throwing the baby out with the bathwater. – In 20th-century philosophy, solipsism has often had a bad name. Solipsism was and is mostly construed as the view that subjective experience is foundational. So you might think that you can only be sure about what’s going on in your own mind. If you hold that view, people will ridicule you as running into a self-defeating position, because subjective states afford no criteria to distinguish between what merely seems right and what is right. Rejecting subjective experience as a foundation for knowledge or theories of linguistic meaning, many people seemed to think it was a bad idea altogether. This led to an expulsion of experience from many fields in philosophy. Yes, it does seem misguided to build knowledge or meaning on subjective experience. But that doesn’t stop experience from playing an important part in our (mental) lives. Let me illustrate this issue a bit more so as to show where I see the problem. Take the word “station”. For the (public) meaning of this word, it doesn’t matter what your personal associations are. You might think of steam trains or find the sound of the word a bit harsh, but arguably nothing of this matters for understanding what the word means. And indeed, it would seem a bit much if my association of steam trains were a necessary ingredient for mastering the concept or using it in communication. This is a bit like saying: If we want to use the word “station” to arrange a meeting point, it doesn’t matter whether you walk to the station through the village or take the shortcut across the field. And yes, it doesn’t matter for the meaning or success of our use of the word whether you cut across the field. But hang on! While it doesn’t matter for understanding the use of the word, it does matter for understanding my interlocutor. Thinking of steam trains is different from not thinking of them. Cutting across the field is different from walking through the village.
This is a clear way in which the experience of interlocutors matters. Why? Well, because it is different. As speakers, we have a shared understanding of the word “station”; as interlocutors we have different experiences and associations we connect with that word. As I see it, it’s fine to say that experience doesn’t figure in the (public) meaning. But it is problematic to deny that the difference in experience matters.
A typical objection to this point is that private or subjective experience cannot be constitutive for meaning. But this goes only so far. As interlocutors, we are not only interested in understanding the language that someone uses, but also the interlocutor who is using it. This is not an easy task. For understanding language is rooted in grasping sameness across different contexts, while understanding my interlocutor is rooted in acknowledging difference (in using the same words). This is not a point about emphatic privacy or the idea that our experience constitutes meaning (it doesn’t). It’s a point about how differences can play out in practical interaction. To return to the earlier example: “Let’s go to the station” can mean very different things if one of you wants to go jointly but it turns out you have different routes in mind. So understanding the interlocutor involves not only a parsing of the sentence, but an acknowledgement of the differences in association. It requires acknowledging that we relate different experiences or expectations to this speech act. So while we have a shared understanding of language, we often lack agreement in associations. It is this lack of agreement that can make me vastly different from others. Accordingly, what matters in my understanding of solipsism is not that we have no public language (we do), but that we are alone (to some degree) with our associations and experiences.
Arguably, these differences matter greatly in understanding or misunderstanding others. Let me give an example: Since I started blogging, I can see how often people pick one or two ideas and run with them. Social media allow you to test this easily. Express an opinion and try to predict whether you’ll find yourself in agreement with at least a fair amount of people. Some of my predictions failed really miserably. But even if predictions are fulfilled, most communication situations lack a certain depth of understanding. Why is this the case? A common response (especially amongst analytically inclined philosophers) is that our communication lacks clarity. If this were true, we should improve our ways of communicating. But if I am right, this doesn’t help. What would help is acknowledging the differences in experience. Accordingly, my kind of solipsism is not saying: Only I know myself. Or: Only my mind exists. Rather it says: I am different (from others).
This “differential solipsism” is clearly related to perspectivism and even standpoint theory. However, in emerging from the acknowledgement of solitude, it has a decidedly existential dimension. If a bit of speculation is in order, I would even say that the tendency to shun solipsism might be rooted in the desire to escape from solitude by denying it. It’s one thing to acknowledge solitude (rooted in difference); it’s another thing to accept the solitary aspects of our (mental) lives. Let’s look more closely at how these aspects play out.
Even if philosophers think that experience doesn’t figure in the foundations of knowledge and meaning, it figures greatly in many of our interactions.** We might both claim to like jazz, but if we go to a concert, it might be a disappointment when it turns out that we like it for very different reasons. So you might like the improvisations, while I don’t really care about this aspect, but am keen on the typical sound of a jazz combo. If the concert turns out to feature one but not the other aspect, our differences will result in disagreement. Likewise, we might disagree about our way to the station, about the ways of eating dinner etc. Now as I see it, the solitude or differences we experience in such moments don’t sting because of the differences themselves. What makes such moments painful is rather that we endure and paper over these differences without acknowledging them.
If I am right, then I don’t feel misunderstood because you don’t happen to care about the sound of the combo. I feel misunderstood, because the difference remains unacknowledged. Such a situation can typically spiral into a silly kind of argument about “what really matters”: the sound or the improvisation. But this is just silly: what matters for our mutual understanding is the difference, not one of the two perspectives. In a nutshell: True understanding does not lie in agreement, but in the detailed acknowledgement of disagreement.***
But why, you might ask, should this be right? Why would zooming in on differences in association or experience really amend the situation? The reason might be given in Wittgenstein’s claim that solipsism ultimately coincides with realism. How so? Well, acknowledging the different perspectives should hopefully end the struggle over which of the perspectives is more legitimate. Can we decide on the right way to the station? Or on the most salient aspect in a jazz concert? No. What we can do is articulate all the perspectives, acknowledging the reality that each view brings to the fore. (If you like, you can imagine all the people in the world articulating their different experiences, thereby bringing out “everything that is the case.”)
Writing this, I am reminded of a claim Evelina Miteva made in a conversation about writing literature: The more personal the description of events is, the more universal it might turn out to be. While this sounds paradoxical, the realism of differential solipsism makes palpable why this is true. The clear articulation of a unique experience does not block understanding. Quite the contrary: It allows for localising that experience in opposition to different experiences of the same phenomenon. In all these cases, we might experience solitude through difference, but we will not feel lonely for being invisible.
* Of course, the title “Solitude standing” is also a nod to the great tune by Suzanne Vega:
*** And once again, I am reminded of Eric Schliesser’s discussion of Liam Bright’s post on subjectivism, hitting the nail on the following head: “Liam’s post (which echoes the loveliest parts of Carnap’s program with a surprisingly Husserlian/Levinasian sensibility) opens the door to a much more humanistic understanding of philosophy. The very point of the enterprise would be to facilitate mutual understanding. From the philosophical analyst’s perspective the point of analysis or conceptual engineering, then, is not getting the concepts right (or to design them for ameliorative and feasible political programs), but to find ways to understand, or enter into, one’s interlocutor’s life world.”
Recently, I became interested (again) in the way our upbringing affects our values. Considering how groups, especially in academia, often manage to suppress criticism of misconduct, I began to wonder which values we associate with criticism more generally. First, I noticed a strange ambivalence. Just think about the ambivalent portrayal of whistle blowers like Edward Snowden! The ambivalence is captured in values like loyalty that mostly pertain to a group and are not taken to be universal. Then, it hit me. Yes, truth telling is nice. But in-groups ostracise you as a snitch, a rat or a tattletale! Denouncing “virtue signalling” or “cancel culture” seems to be on a par with this verdict. So while criticism of mismanagement or misconduct is often invited as an opportunity for improvement, it is mostly received as a cause of reputational damage.
Now I wrote up a small piece for Zoon Politikon.* In this blog post, I just want to share what I take to be the main idea.
The ambivalence of criticism in academia seems to be rooted in an on-going tension between academic and managerial hierarchies. While they are intertwined, they are founded on very different lines of justification. If I happen to be your department chair, this authority weighs nothing in the setting of, say, an academic conference. Such hierarchies might be justifiable in principle. But while the goals of academic work, and thus academic hierarchies, are to some degree in the control of the actual agents involved, managerial hierarchies cannot be justified in the same way. A helpful illustration is the way the qualitative and quantitative assessments of our work come apart: A single paper might take years of research and end up being a game-changer in the field of specialisation, but if it happens to be the only paper published in the course of three years, it won’t count as sufficient output. So while my senior colleague might have great respect for my work as an academic, she might find herself confronted with incentives to admonish and perhaps even fire me.
What does this mean for the status of criticism? The twofold nature of hierarchies leaves us with two entirely disparate justifications of criticism. But these disparate lines of justification are themselves a constant reason for criticism. The fact that a field-changing paper and a mediocre report both make one single line in a CV bears testimony to this. But here’s the thing: we seemingly delegitimise such criticism by tolerating and ultimately accepting the imperfect status quo. Of course, most academics are aware of a tension: The quantification of our work is an almost constant reason for shared grievance. But as employees we find ourselves often enough buying into it as a “necessary evil”. Now, if we accept it as a necessary evil, we seem to give up on our right to criticise it. Or don’t we? Of course not, and the situation is a lot more dynamic than I can capture here. To understand how “buying into” an imperfect situation (a necessary evil) might seemingly delegitimise criticism, it is crucial to pause and briefly zoom in on the shared grievance I just mentioned.
Let me begin by summarising the main idea: The shared grievance constitutes our status quo and, in turn, provides social cohesion among academics. Criticism will turn out to be a disturbance of that social cohesion. Thus, critics of the status quo will likely be ostracised as “telling on” us.
One might portray the fact that we live with an academic and a managerial hierarchy simply as unjust. One hierarchy is justified, the other isn’t (isn’t really, that is). Perhaps, in a perfect world, the two hierarchies would coincide. But in fact we accept that, with academia being part of the capitalist world at large, they will never coincide. This means that both hierarchies can be justified: one as rooted in academic acclaim; the other as a necessary evil of organising work. If this is correct and if we accept that the world is never perfect, we will find ourselves in an on-going oscillation and vacillation. We oscillate between the two hierarchies. And we vacillate between criticising and accepting the imperfection of this situation. This vacillation is, I submit, what makes criticism truly ambivalent. On the one hand, we can see our work-relations from the different perspectives; on the other hand, we have no clear means to decide which side is truly justified. The result of this vacillation is thus not some sort of solution but a shared grievance. A grievance acknowledging both the injustices and the persisting imperfection. There are two crucial factors in this: The fact that we accept the imperfect situation to some degree; and the fact that this acceptance is a collective status, it is our status quo. Now, I alone could not accept on-going injustices in that status quo, if my colleagues were to continuously rebel against it. Thus, one might assume that, in sharing such an acceptance, we share a form of grievance about the remaining vacillation.
It is of course difficult to pin down such a phenomenon, as it obtains mostly tacitly. But we might notice it in our daily interactions when we mutually accept that we see a tension, for instance, between the qualitative and quantitative assessment of our work. This shared acceptance, then, gives us some social cohesion. We form a group that is tied together neither by purely academic nor by purely managerial hierarchies and relations. There might be a growing sense of complicity in dynamic structures that are and aren’t justified but continue to obtain. So what forms social cohesion between academics is not merely formal appraisal or informal friendship. Rather, a further crucial factor is the shared acceptance of the imperfection of the status quo. The acceptance is crucial in that it acknowledges the vacillation and informs what one might call the “morale” of the group.
If this is correct, academics do indeed form a kind of group through acceptance of commonly perceived imperfections. Now if we form such a group, it means that criticism will be seen as both justified but also as threatening the shared acceptance. We know that a critic of quantitative work measures is justified. But we also feel that we gave in and accepted this imperfection a while ago. The critic seemingly breaks with this tacit consent and will be seen as someone snitching or “telling on us”. As I see it, it is this departure from an in-group consensus that makes criticism appear as snitching. And while revealing a truth about the group might count as virtuous, it makes the critic seemingly depart from the in-group. Of course, companies and universities also enjoy some legal protection. Even if you find out about something blameworthy, you might be bound by rules about confidentiality. This is why whistle blowers do indeed have an ambivalent reputation, too. But I guess that the legal component alone does not account for the force of the in-group mentality at work in suppressing criticism.
This mode of suppressing criticism has pernicious effects. The intertwined academic and managerial hierarchies often come with inverse perceptions of criticism: your professorial colleague might be happy to learn from your objections, while your department chair might shun your criticism and even retaliate against you. Yet, they might be the same person. Considering the ubiquitous histories of suppressing critics of sexism, racism and other kinds of misconduct, we do not need to look far to find evidence for ostracism or retaliation against critics. I think that it’s hard to explain this level of complicity with wrongdoers merely by referring to bad intentions, on the one hand, or formal agreements such as confidentiality, on the other. Rather, I think, it is worthwhile to consider the deep-rooted in-group consensus that renders criticism as snitching. One reason is that snitching counts, at least in a good number of cultures, as a bad action. But while this might be explained with concerns about social cohesion, it certainly remains a morally dubious verdict, given that snitching is truth-conducive and should thus be aligned with values such as transparency. Going by personal anecdotes, I have witnessed that snitching was often condemned even by school teachers, who often seemed to worry about social cohesion no less than about truthfulness. In other words, we don’t seem to like the truth being told when it threatens our status quo.
In sum, we see that the ambivalent status of criticism is rooted in a twofold hierarchy that, in turn, comes with disparate sets of values. Shared acceptance of these disparate sets as an unavoidable imperfection binds together an in-group that will sanction explicit criticism of this imperfection as a deviation from the consensus. The current charges against so-called “virtue signalling”, a “call out culture” or “cancel culture” on social media strike me as instances of such sanctions. If we ask what makes the inclinations to sanction in-group norm violations so strong, it seems helpful to consider the deep-rooted code against snitching. While the moral status of sanctioning snitching is certainly questionable, it can shed light on the pervasive motivation and strikingly ready acceptance of such behaviour.
* Following a discussion of a blog post on silence in academia, Izabela Wagner kindly invited me to contribute to a special issue in Zoon Politikon. I am enormously grateful to her for the exchanges and for providing this opportunity. Moreover, I have benefitted greatly from advice by Lisa Herzog, Pietro Ingallina, Mariya Ivancheva, Christopher Quinatana, Rineke Verbrugge, and Justin Weinberg.
This is the first installment of my new video series Philosophical Chats. In this episode, I have a conversation (for about 40 minutes) with my old friend Kai Ivo Baulitz, an actor and playwright, who is currently in Prague for a film shoot, but has to quarantine most of the time.* – We talk about how the crisis changed our minds and ways, about Kai’s situation in Prague, about being under surveillance, about anger and guilt, about acting and kissing, about how doing philosophy is like having a midlife crisis, about embracing fatalism, and about how we end up feeling inconsistent much of the time. Enjoy!
* Fun fact: This year, Kai and I would have celebrated our 30th school leaving anniversary (Abiturfeier), but corona took care of preventing that. Perhaps this conversation makes up for that a bit.
Those who know us from our school days might find it particularly ironic that Kai is currently stuck in Prague.
Having heard many stories and tips about online teaching I was very apprehensive about teaching my first class of this term. How could I not fail? At the same time, I’m enormously grateful to all the people and my university for sharing great advice and best practice examples (see e.g. here). Thinking through the scenarios not only got me worried but also gave me a lot of headspace to anticipate and navigate through a number of (possible) obstacles before plunging into the unknown. What can I say? I went through all sorts of worries and finally came up with a decision that, so far, worked for me: I teach online rather than do online teaching. This means that I teach much in the same way I would have done offline, with the small exception that it’s happening online. What does this entail for students and myself?
Let me address students first. I’ve heard many people say that you, students, tend to keep your cameras and microphones off, and use the chat instead. Although I personally prefer to see people’s faces and hear their voices, I think this would be ok for at least some part of the interaction. The reason I recommend switching your gear on and making yourself heard and seen is that you should take your space. Online teaching is often presented as a challenge that deprives us of direct interaction which, in turn, has to be compensated with all sorts of “tools”. Yes, of course, we are used to physical cues in communication, the perception of which we now often simply miss out on. But what strikes me as crucial is that we participate. I don’t think any amount of online tools or refined environment makes up for your participation in the conversation. I find reading the chat more cumbersome than listening to you. I also prefer speaking over writing. But I’ll get used to the chat and learn to change my ways happily, as long as you participate. The crucial aspect is not how it’s done, online or offline, but that you do it. Make your contributions, ask your questions etc. as before.
That said, I also think that you can participate more fully if you use all the devices available. The online space is not just a toolbox or learning environment. It’s first of all a political space with all the power imbalances and hierarchies that we have offline. It’s at least partly our choice whether we want to amplify or adjust the old space as we’re moving online. This is why I’d recommend using all the resources that enhance your presence for participation. You’re not a passive recipient and instructors are not emulating youtube. We’re still at the university, a public and democratic space for academic exchange.
This morning, I was quite nervous about whether my “strategy” would work. It’s way too early to assess this, but what I find worth reporting is that, for me at least, teaching this course online was much the same experience as teaching it offline. I thought it would be awkward to be talking to a screen, but then people who know me also know that I often speak with my eyes closed… The silence after asking a question might be slightly longer, but we all know that the situation is special, so it’s fine. Of course, I miss cues, but I noticed I can ask for them more often, if need be.
For the time being, I have also decided not to record my teaching events. If someone misses a class, they will miss it. More importantly, I know that recording the stuff would change the stakes for the students and myself. Over and above the well-known privacy issues and unintended misuse of recorded material, watching a lecture is different even from silently participating in it, let alone giving it. (The fancy word for this is “synchronous” teaching, I guess. But that would be misleading. Even if the lectures cannot be viewed later on, students will still experience “asynchronous” teaching. That is, they will still have to do the reading, thinking and discussing before and after. Of course, there is nothing wrong with the “flipped classroom”, but to my mind this is different from actually teaching students.)
Is that so? On the face of it, one might think there is not much difference between watching a recorded lecture and silently taking part in an online lecture, especially if the audience is rather large. I beg to differ. Even if you don’t want to join in, participating in an actual lecture still gives you the opportunity to interfere. At least for me, such an opportunity changes the level and quality of attention, even if (or especially when) I decide not to raise my hand. Seen this way, recording lectures is a way of providing material on top of other material, such as readings, videos and podcasts. It goes without saying that we shouldn’t underestimate the value of such materials. After all, teaching is no replacement for learning, that is: independent self-study is still a, sometimes underestimated, component of the educational process. But the moments of interaction, the so-called contact hours, remain special: they come and go. And with them the opportunities for participation, for noticing others in relation to yourself come and go. My hunch is that these moments thrive on the fact that they cannot be repeated. In this sense, watching a recorded class is not the same as taking part in a given class.
All that said, this is in no way dismissing the fantastic ideas and tools for proper online teaching. Yet it is crucial to be reminded that, at this moment, most of us teach online (if we have this liberty, that is) because the pandemic causes an emergency. But just as new social media competence is rooted in old-fashioned reading and writing, the quality of teaching is rooted in our resources of offline thinking and interaction.
In any case, I wish everyone a happy and safe beginning of term.
Say you would like to learn something about Kant, should you start by reading one of his books or rather get a good introduction to Kant? Personally, I think it’s good to start with primary texts, get confused, ask questions, and then look at the introductions to see some of your questions discussed. Why? Well, I guess it’s better to have a genuine question before looking for answers. However, even before the latest controversy on Twitter (amongst others between Zena Hitz and Kevin Zollman) took off, I had been confronted with quite different views. Taken as an opposition between extreme views, you could ask whether you want to make philosophy about ideas or people (and their writings). It’s probably inevitable that philosophy ends up being about both, but there is still the question of what we should prioritise.
Arguably, if you expose students to the difficult original texts, you might frighten them off. Thus, Kevin Zollman writes: “If I wanted someone to learn about Kant, I would not send them to read Kant first. Kant is a terrible writer, and is impossible for a novice to understand.” Accordingly, he argues that what should be prioritised is the ideas. In response Zena Hitz raises a different educational worry: “You’re telling young people (and others) that serious reading is not for them, but only for special experts.” Accordingly, she argues for prioritising the original texts. As Jef Delvaux shows in an extensive reflection, both views touch on deeper problems relating to epistemic justice. A crucial point in his discussion is that we never come purely or unprepared to a primary text anyway. So an emphasis on the primary literature might be prone to a sort of givenism about original texts.
I think that all sides have a point, but when it comes to students wanting to learn about historical texts, there is no way around looking at the original. Let me illustrate my point with a little analogy:
Imagine you want to study music and your main instrument is guitar. It is with great excitement that you attend courses on the music of Bach whom you adore. The first part is supposed to be on his organ works, but already the first day is a disappointment. Your instructor tells you that you shouldn’t listen to Bach’s organ pieces themselves, since they might be far too difficult. Instead you’re presented with a transcription for guitar. Well, that’s actually quite nice because this is indeed more accessible even if it sounds a bit odd. (Taken as an analogy to reading philosophy, this could be a translation of an original source.) But then you look at the sheets. What is this? “Well”, the instructor goes on, “I’ve reduced the accompaniment to the three basic chords. That makes it easier to reproduce it in the exam, too. And we’ll only look at the main melodic motif. In fact, let’s focus on the little motif around the tonic chord. So, if you can reproduce the C major arpeggio, that will be good enough. And it will be a good preparation for my master class on tonic chords in the pre-classic period.” Leaving this music school, you’ll never have listened to any Bach pieces, but you have wonderful three-chord transcriptions for guitar, and after your degree you can set out on writing three-chord pieces yourself. If only there were still people interested in Punk!
Of course, this is a bit hyperbolic. But the main point is that too much focus on cutting things to ‘student size’ will create an artificial entity that has no relation to anything outside the lecture hall. But while I thus agree with Zena Hitz that shunning the texts because of their difficulties sends all sorts of odd messages, I also think that this depends on the purpose at hand. If you want to learn about Kant, you should read Kant just like you should listen to Bach himself. But what if you’re not really interested in Kant, but in a sort of Kantianism under discussion in a current debate? In this case, the purpose is not to study Kant, but some concepts deriving from a certain tradition. In this case, you might be more like a jazz player who is interested in building a vocabulary. Then you might be interested, for instance, in how Bach dealt with phrases over diminished chords and focus on this aspect first. Of course, philosophical education should comprise both a focus on texts and on ideas, but I’d prioritise them in accordance with different purposes.
That said, everything in philosophy is quite difficult. As I see it, a crucial point in teaching is to convey means to find out where exactly the difficulties lie and why they arise. That requires all sorts of texts, primary, secondary, tertiary etc.
I recognize that I could only start to write about this … once I related to it. I dislike myself for this; my scholarly pride likes to think I can write about the unrelatable, too. – Eric Schliesser
Philosophy students often receive the advice that they should focus on topics that they have a passion for. So if you have fallen for Sartre, ancient scepticism or theories of justice, the general advice is to go for one of those. On the face of it, this seems quite reasonable. A strong motivation might predict good results which, in turn, might motivate you further. However, I think that you might actually learn more by exposing yourself to material, topics and questions that you initially find remote, unwieldy or even boring. In what follows, I’d like to counter the common idea that you should follow your passions and interests, and try to explain why it might help to study things that feel remote.
Let me begin by admitting that this approach is partly motivated by my own experience as a student. I loved and still love to read Nietzsche, especially his aphorisms in The Gay Science. There is something about his prose that just clicks. Yet, I was always sure that I couldn’t write anything interesting about his work. Instead, I began to study Wittgenstein’s Tractatus and works from the Vienna Circle. During my first year, most of these writings didn’t make sense to me: I didn’t see why they found what they said significant; most of the terminology and writing style was unfamiliar. In my second year, I made things worse by diving into medieval philosophy, especially Ockham’s Summa Logicae and Quodlibeta. Again, not because I loved these works. In fact, I found them unwieldy and sometimes outright boring. So why would I expose myself to these things? Already at the time, I felt that I was actually learning something: I began to understand concerns that were alien to me; I learned new terminology; I learned to read Latin. Moreover, I needed to use tools, secondary literature and dictionaries. And for Ockham’s technical terms, there often were no translations. So I learned to move around in the dark. There was no passion for the topics or texts. But speaking with hindsight (and ignoring a lot of frustration along the way), I think I discovered techniques and ultimately even a passion for learning, for familiarising myself with stuff that didn’t resonate with me in the least. (In a way, it seemed to turn out that it’s a lot easier to say interesting things about boring texts than to say even boring things about interesting texts.)
Looking back at these early years of study, I’d now say that I discovered a certain form of scholarly explanation. While reading works I liked was based on a largely unquestioned understanding, reading these unwieldy new texts required me to explain them to myself. This, in turn, prompted two things: To explain these texts (to myself), I needed to learn about the new terminology etc. Additionally, I began to learn something new about myself. Discovering that certain things felt unfamiliar to me, while others seemed familiar, meant that I belonged to one kind of tradition rather than another. Make no mistake: Although I read Nietzsche with an unquestioned familiarity, this doesn’t mean that I could have explained, say, his aphorisms any better than the strange lines of Wittgenstein’s Tractatus. The fact that I thought I understood Nietzsche didn’t give me any scholarly insights about his work. So on top of my newly discovered form of explanation I also found myself in a new relation to myself or to my preferences. I began to learn that it was one thing to like Nietzsche and quite another to explain Nietzsche’s work, and still another to explain one’s own liking (perhaps as being part of a tradition).
So my point about not studying what you like is a point about learning, learning to get oneself into a certain mode of reading. Put more fancily: learning to do a certain way of (history of) philosophy. Being passionate about some work or way of thinking is something that is in need of explanation, just as much as not being passionate and feeling unfamiliar about something needs explaining. Such explanations are greatly aided by alienation. As I said in an earlier post, a crucial effect of alienation is a shift of focus. You can concentrate on things that normally escape your attention: logical or conceptual structures, for instance, or ambiguities; things that seemed clear get blurred and vice versa. In this sense, logical formalisation or translation are great tools of alienation that help you to raise questions, and generally take an explanatory stance, even to your most cherished texts.
As a student, discovering this mode of scholarly explanation instilled pride, a pride that can be hurt when explanations fail or evade us. It was remembering this kind of pain, described in the motto of this post, that prompted these musings. There is a lot to be said for aloof scholarship and the pride that comes with it, but sometimes it just doesn’t add up. Because there are some texts that require a more passionate or intuitive relation before we can attain a scholarly stance towards them. If the passion can’t be found, it might have to be sought. Just like our ears have to be trained before we can appreciate some forms of, say, very modern music “intuitively”.
When writing papers, students and advanced philosophers alike are often expected to take a position within a debate and to argue for or against a particular claim. But what if we merely wish to explore positions and look for hidden assumptions, rather than defend a claim? Let’s say you look at a debate and then identify an unaddressed but nevertheless important issue, a commitment left implicit in the debate, let’s call it ‘X’. Writing up your findings, the paper might take the shape of a description of that debate plus an identification of the implicit X. But the typical feedback to such an exploration can be discouraging: It’s often pointed out that the thesis could have been more substantive and that a paper written this way is not publishable unless supplemented with an argument for or against X. Such comments all boil down to the same problem: You should have taken a position within the debate you were describing, but you have failed to do so.
But hang on! We’re all learning together, right? So why is it not ok to have one paper do the work of describing and analysing a debate, highlighting, for instance, some unaddressed X, so that another paper may attempt an answer to the questions about X and come up with a position? Why must we all do the same thing and, for instance, defend an answer on top of everything else? Discussing this issue, we* wondered what this dissatisfaction meant and how to react to it. Is it true? Should you always take a position in a debate when writing a paper? Or is there a way of giving more space to other approaches, such as identifying an unaddressed X?
One way of responding to these worries is to dissect and extend the paper model, for instance, by having students try other genres, such as commentaries, annotated translations, reviews, or structured questions. (A number of posts on this blog are devoted to this.) However, for the purposes of this post, we’d like to suggest and illustrate a different idea. We assume that the current paper model (defending a position) does not differ substantially from other genres of scholarly inquiry. Rather, the difference between, say, a commentary or the description of a debate, on the one hand, and the argument for a claim, on the other, is merely a stylistic one. Now our aim is not to present an elaborate defense of this idea, but to try out how this might help in practice.
To test and illustrate the idea (below), we have dug out some papers and rewritten sections of them. Before presenting one sample, let’s provide a brief manual. The idea rests on the, admittedly somewhat contentious, tenets that
any description or analysis can be reformulated as a claim,
the evidence provided in a description can be dressed up as an argument for the claim.
But how do you go about it? In describing a debate, you typically identify a number of positions. So what if you don’t want to adopt and argue for one of them? There is something to be said for just picking a side anyway, but if that feels too random, here is a different approach:
(a) One thing you can always do is defend a claim about the nature of the disagreement in the debate. Taken this way, the summary of your description or analysis becomes the claim about the nature of the disagreement, while the analysis of the individual positions functions as an argument / evidence for this claim. This is not a cheap trick; it’s just a pointed way of presenting your material.
(b) A second step consists in actually labelling steps as claims, arguments, evaluations etc. Using such words doesn’t change the content, but it signals even to a hasty reader where your crucial steps begin and end.
Let’s now look at a passage from the conclusion of a paper. Please abstract away from the content of discussion. We’re just interested in identifying pertinent steps. Here is the initial text:
“… Thus, I have dedicated this essay to underscoring the importance of this problem. I have first discussed two of the most prominent levels accounts, namely O&P’s layer-cake account, and Craver and Bechtel’s mechanistic levels, and shown that they both provide radically different levels accounts. I addressed the problems with each account, and it became clear that what is considered to be a problem by some, is considered to be a virtue by others. This led us to uncover a deeper disagreement, namely about what the function of a levels account is supposed to be and what the term “level” means.”
Here is the rewritten version (underlined sections indicate more severe changes or additions):
“… But why is this problem significant? I have first discussed two of the most prominent levels accounts, namely O&P’s layer-cake account, and Craver and Bechtel’s mechanistic levels, and shown that they both provide radically different levels accounts. I addressed the problems with each account, and it became clear that what is considered to be a problem by some, is considered to be a virtue by others. This is in keeping with my second-order thesis that the dispute is less about content than about defining criteria. However, this raises the question of what to make of levels on any set of criteria. Answering this question led me to defend my main (first-order) thesis: If we look at the different sets of criteria, we uncover a deeper disagreement, namely about what the function of a levels account is supposed to be and what the term “level” means. Accordingly, I claim that disparate accounts of levels indicate different functions of levels.“
We consider neither passage a piece of beauty. The point is merely to take some work in progress and see what happens if you follow the two steps suggested above: (a) articulate claims; (b) label items as such. – What can we learn from this small exercise? We think that the contrast between these two versions shows just how big of an impact the manner of presentation can have, not least on the perceived strength of a text. The desired effect would be that a reader can easily identify what is at stake for the author. Content-wise, both versions say the same thing. However, the first version strikes us as a bit detached and descriptive in character, whereas the second version seems more engaged and embracing a position. What used to be a text about a debate has now become a text partaking in a debate. (Of course, your impressions might differ. So we’d be interested to hear about them!) Another thing we saw confirmed in this exercise is that you always already have a position, because you end up highlighting what matters to you. Having something to say about a debate still amounts to a position. Arguably, it’s also worth presenting as such.
Where do we go from here? Once you have reformulated such a chunk and labelled some of your ideas (say, as first and second order claims etc.), you can rewrite the rest of your text accordingly. Identify these items in the introduction, and clarify which of those items you argue for in the individual sections of your paper, such that they lead up to these final paragraphs. That will probably allow you (and the reader) to highlight the rough argumentative structure of your paper. Once this is established, it will be much easier to polish individual sections.
For a few years during the 80s, Modern Talking was one of the best-known pop bands in Germany. But although their first single “You’re my heart, you’re my soul” sold over eight million copies, no one admitted to having bought it. Luckily, my dislike of their music was authentic, so I never had to suffer that particular embarrassment. Yet, imagine all these people alone in their homes, listening to their favourite tune but never daring to acknowledge it openly. Enjoying kitsch of any sort brings the whole drama of self-censorship to the fore. You might be moved deeply, but the loss of face is more unbearable than remaining in hiding. What’s going on here? Depending on what precisely is at stake, people feel very differently about this phenomenon. Some will say that self-censorship just maintains an acceptable level of decency or tact; others will say that it reflects political oppression or, ahem, correctness. At some point, however, you might let go of all shame. Perhaps you’ve got tenure and start blogging or something like that … While some people think it’s a feature of the current “cancel culture”, left or right, I think it’s more important to see the different kinds of reasons behind self-censorship. In some cases, there really is oppression at work; in other cases, it’s peer pressure. Neither is fun. In any case, it’s in the nature of this phenomenon that it is hard to track in a methodologically sound way. So rather than draw a general conclusion, it might be better to go through some very different stories.
Bad thoughts. – Do you remember how you, as a child, entertained the idea that your thoughts might have horrible consequences? My memory is faint, but I still remember assuming that thinking of swear words might entail my parents having an accident. So I felt guilty for thinking these words, and tried to break the curse by uttering them to my parents. But somehow I failed to convince them of the actual function of my utterance, and so they thought I was just calling them names. Today, I know that this is something that happens to occur in children, sometimes in a pathologically strong form known as “intrusive thoughts” within an “obsessive-compulsive disorder”. Whatever the psychological assessment, my experience was that of “forbidden” thoughts and, simultaneously, the inability to explain myself properly. Luckily, it didn’t haunt me, but I can imagine it becoming problematic.
One emergence of the free speech debate. – When I was between 7 and 10 years old (thus in the 1970s), I sometimes visited a lonely elderly woman. She was an acquaintance of my mother, well in her 70s and happy to receive some help. When no one else was around she often explained her political views to me. She was a great admirer of Franz Josef Strauß whom she described to me as a “small Hitler – something that Germany really needs again”. She hastened to explain that, of course, the real Hitler would be too much, but a “small” one would be quite alright. She then praised how, back in the day, women could still go for walks after dark etc. Listening to other people of that generation, I got the impression that many people in Germany shared these ideas. In 2007, the news presenter Eva Herman explicitly praised the family values of Nazi Germany and was dismissed from her position. The current rise of fascism in Germany strikes me as continuous with the sentiments I found around me early on. And if I’m not mistaken these sentiments date back at least to the 1930s and 1940s. In my experience, Nazism was never just an abstract political view. Early on, I realised that otherwise seemingly “decent” people could be taken by it. But this concrete personal dimension made the sweaty and simplistic attitude to other people all the more repulsive. In any case, I personally found that people in the vicinity of that ideology are the most vocal people who like to portray themselves as “victims” of censorship, though they are certainly not censoring themselves. (When it comes to questions of free speech, I am always surprised that whistleblowers such as Snowden are not mentioned.)
Peer pressure and classism. – I recently hosted a guest post on being a first generation student that really made me want to write about this issue myself. But often when I think about this topic, I still feel uncomfortable writing about it. In some ways, it’s all quite undramatic in that the transition to academia was made very easy by my friends. For what shouldn’t be forgotten is that it’s not only your parents and teachers who educate you. In my case at least, I tacitly picked up many of the relevant habits from my friends and glided into being a new persona. Although I hold no personal grudges, I know that “clothes make people” or “the man” as Gottfried Keller’s story is sometimes translated. What I noticed most is that people from other backgrounds often have a different kind of confidence being around academics. Whether that is an advantage across the board I don’t know. What I do know is that I took great care to keep my own background hidden from most colleagues, at least before getting a tenured job.
Opportunism and tenure. – Personally, I believe that I wouldn’t dare publish this very post or indeed any of my posts, had I not obtained a tenured position. Saying this, I don’t want to impart advice. All I want to say is that getting this kind of job is what personally freed me to speak openly about certain things. But the existential weight of this fact makes me think that the greatest problem about self-censorship lies in the different socio-economic status that people find themselves in. This is just my experience, but perhaps it’s worth sharing. So what is it about, you might wonder? There is no particular truth that I would not have told before but would tell now. It’s not a matter of any particular opinion, be it left or right. Rather, it affects just about everything I say. The fact that I feel free to talk about my tastes, about the kitsch I adore, about the music I dislike, about the artworks I find dull, alongside the political inclinations I have – talking about all of this openly, not just politics, is affected by the fact that I cannot be fired just like that and that I do not have to impress anyone I don’t want to impress. It is this freedom that, I think, not only allows us to speak but also requires us to speak up when others will remain silent out of fear.
The myth of authenticity. – The fact that many of us feel they have to withhold something creates the idea that there might be a vast amount of unspoken truths under the surface. “Yes”, you might be inclined to ask, “but what do you really think?” This reminds me of the assumption that, in our hearts, we speak a private language that we cannot make intelligible to others. Or of the questions immigrants get to hear when people inquire where they really come from. It doesn’t really make sense. While it is likely that many people do not say what they would say if their situation were different, I don’t think it’s right to construe this as a situation of hidden truths or lies. (Some people construe the fact that we might conceal our opinions as lies. But I doubt that’s a pertinent description.) For better or worse, the world we live in is all we have when it comes to questions of authenticity. If you choose to remain silent, there is no hidden truth left unspoken. It just is what it is: you’re not speaking up and you might be in agony about that. You might conceal what you think. But then it is the concealing that shapes the world and yourself, not the stuff left unspoken. Put differently, there are no truths, no hidden selves, authentic or not, that persist without some relation to interlocutors.
Speaking of which, I want to finish this post with a word of thanks. It’s now two years ago that I started this blog. By now I have written 118 posts. If I include the guest posts, it adds up to 131. Besides having the pleasure of hosting great guest authors, I feel enormously privileged to write for you openly. On the one hand, this is enabled by the relatively comfortable situation that I am in. On the other hand, none of this would add up to anything if it weren’t for you, dear interlocutors.