This is the second installment of my still fairly new series Philosophical Chats. In this episode, I have a conversation with Nora Migdad who majors in Biology and minors in Philosophy. Like me (but a long time ago), Nora is a first-generation student. While being a first-gen student is often (rightly) seen as a source of disadvantage, it also offers intriguing perspectives on the peculiarities of academic life.
Following up on a guest post about being a first-gen student, Nora eventually initiated a conversation about this topic. After some exchanges about possible questions to be addressed we finally found time for the virtual meeting recorded above. Among the issues we covered are:
being a first-gen student 0:00
work-pressure and hierarchies 11:17
hierarchies, misconduct and prestige 12:32
protecting harassers 15:00
dealing with harassment outside and inside academia 22:40
Words fail me. And I’m still torn between solemnly staring into the middle distance and making silly jokes. For historians of philosophy like me, Susanne Bobzien’s paper “Frege plagiarized the Stoics” is a sort of landing on the moon, nothing short of a sensation. But reading comments on the matter here and there, I also begin to worry that the implications of her findings might tempt people to dismiss them out of hand. Why? Because they shatter a much cherished historiography. Frege is famously considered the “founder of modern analytic philosophy”. If Frege copied crucial parts of his later works from (Carl Prantl’s presentation of) Stoic logic, then these parts of the foundation are not Fregean but Stoic. This shifts a number of things, in our understanding of our history, of crucial tenets that held various generations of analytic philosophers captive, but also assumptions of authorship or originality. In this post, I simply want to highlight some implications that I think need elaboration in years to come.
Let me begin with why this moves me personally. After finishing my PhD on Ockham’s account of mental language in 2001, I was mainly driven by one question: What is it that makes us assume that sentences are complete units? Working on a project proposal on “Sentences, Senses, and States of Affairs: Conceptions of Semantic Identity from the Middle Ages onwards”, I studied ancient, medieval and modern texts by philosophers and grammarians. Although I started out from Abelard’s theory of the dictum and began to look for paths to 14th-century authors such as Adam Wodeham and Gregory of Rimini, it was entirely natural to read some Stoic material as well as Frege. All these authors attempt to spell out an account of what complete sentences say in opposition to words or other smaller linguistic units. In my project on this longue durée of sentence theories, I tried to pursue three different questions: (1) What are conceptual similarities between these accounts? (2) Are there lines of historical influence between these accounts? (3) Why did the issues tackled in these accounts seemingly disappear (if they did) in the 13th, and after the 14th century, until coming up again in the 19th century?
Now, when I presented my research to audiences of historians, they often warned me that my approach was prone to anachronism: “What does Frege have to do with medieval or ancient accounts?” The similarities in the theories were often shrugged off by pointing out similarities between the questions asked. Conversely, when I presented in front of philosophers, they mostly weren’t moved by the historical accounts: “Of course, Frege is still interesting. But these earlier accounts are of mere historical interest.” So, without clear sources that allowed for connecting the dots, my question (2) about historical lines was often seen as either anachronistic or trivial. An idea shared by most historians and philosophers, then, was that, despite some striking similarities, Frege’s account of sentences was to be seen as entirely different from the endeavours in the ancient and medieval contexts. So what did I think? Although I was hopeful of finding some direct historical lines, I wouldn’t have dreamed of Frege as having copied Stoic material. Susanne Bobzien’s paper has shattered my entire picture of the matter. What have I been looking at when reading Frege? Have I, in fact, (at least partly) been reading the Stoic account in the wake of which we understand Abelard and others? Have I been prevented from seeing this by the silly but pervasively linear timeline of history at the back of my mind? What are we really talking about when we invoke the “Fregean account” of sentences? What do these names refer to?
Why is Bobzien’s discovery groundbreaking? – Looking at some first reactions to Bobzien’s paper, it’s disheartening to see how some people try to debunk these findings. Two main lines of defence seemed to emerge very quickly: (a) One line is that “we have known this for a long time”. Pointing to earlier research, some people emphasise that certain conceptual similarities have already been studied very well. (b) The other line is that “Frege still deserves credit for having invented … [add a list of venerable items manifesting the status of the genius father].” What these defences miss is the historical claim of the paper: Bobzien makes a compelling case that Frege took the Stoic accounts from Carl Prantl’s Geschichte der Logik im Abendlande. This answers a large part of question (2) of my former project. It is not merely an account of striking similarities; it is historical evidence for a direct influence. For any historian of philosophy, that’s the best you can get. Given that most accounts deny such a direct influence and given that most protagonists in 20th-century analytic philosophy take Frege’s work as their point of departure, much of our history needs to be rewritten.
People wishing to defend Frege’s status as a founder of analytic philosophy seem to misconstrue Bobzien’s findings in a different way. They don’t emphasise that the similarities were known, but that they don’t mean much, in the sense that Frege is still vastly different. But Bobzien does not claim that Frege is deprived of this status. She acknowledges clearly that Frege thought through carefully what he took over. But we would deprive ourselves of our understanding of Frege’s foundational work if we ignored that it is in fact of Stoic origin. Frege’s work needs to be rethought, too. And reading Frege might hold more for researchers on pre-modern philosophers than the staunch hunters of anachronisms care to admit. – In this sense, Bobzien’s paper does not end but rather opens conversations about the history of philosophy.
Finally, we need to see how we wish to tackle the issue of plagiarism. Bobzien herself opts for a “benign” understanding, involving acts of “appropriation” when taking over the ideas as “being freely available to anyone to help themselves to”. Of course, certain jokes at the expense of the Fregean idea of “grasping thoughts in the third realm” or at the expense of analytic philosophers considering the history of philosophy as a resource for “mining it for ideas” suggest themselves. However, there remains the more serious issue of how we want to conceptualise the fact, yes, it is a fact, that we tend to appropriate the ideas of others. Given that most of us do professional work on texts, I am struck by an often rather simplistic understanding of what constitutes authorship or originality. In my piece on philosophy’s adversarial culture I suggested a more fluid attitude towards authorship: “If you discuss an idea among friends, tossing out illustrations, laughing away criticism and speculating about remote applications, whose idea is it at the end of the night? Everyone might have contributed to an initial formulation, of which hardly anything might be left. In this sense, ideas very often have multiple authors. In such friendly settings, a common reaction to a clarifying criticism is not defence, but something along the lines of: ‘Right, that’s what I actually meant to say!’ ” What is lacking in cases where we detect copying, appropriation or plagiarism is often not a misconstrued form of originality, but rather an acknowledgement of the role of our interlocutors and of the fact that thinking is not a lonely grasping of abstract thoughts but a social process.
Everything we take to be history is, in fact, present right now. Otherwise we wouldn’t think about it.
When I was little, I often perceived the world as an outcome of historical progress. I didn’t exactly use the word “historical progress” when talking to myself, but I thought I was lucky to grow up in the 20th century rather than, say, the Middle Ages. Why? Well, the most obvious examples were advances in technology. We have electricity; they didn’t. That doesn’t change everything, but still a lot. Thinking about supposedly distant times, then, my childhood mind conjured up an image of someone dragging themselves through a puddle of medieval mud, preferably while I was placed on the sofa in a cozy living-room with the light switched on and the fridge humming in the adjacent kitchen. It took a while for me to realise that this cozy contrast between now and then is not really an appreciation of the present, but a prejudice about history, more precisely about what separates us from the past. For what my living room fantasy obscures is that this medieval mud is what a lot of people are dragging themselves through today. It would have taken a mere stroll through town to see how many homeless or other people do not live in the same world that I identified as my present world. Indeed, most things that we call “medieval” are present in our current world. Listening to certain people today, I realise that talk of the Enlightenment, the Light of Reason and Rationality is portrayed in much the same way as my living-room fantasy. But as with the fruits of technology, I think the praise of Enlightenment is not an appreciation of the present, but a prejudice about what separates us from the past. One reaction to this prejudice would be to chide the prejudiced minds (and my former self); another reaction is to try and look more closely at our encounters with these prejudices when doing history. 
That means to try and see them as encounters with ourselves, with the ideologies often tacitly drummed into us, and to understand how these prejudices form our expectations when reading old texts. Approaching texts in this latter way means reading them both as historical philosophical documents and as encounters with ourselves. It is this latter approach I want to suggest as a way of reading and teaching what could be called outdated philosophy. According to at least some of my students’ verdicts about last term, this might be worth pursuing.
Let’s begin with the way that especially medieval philosophy is often introduced. While it’s often called “difficult” and “mainly about religion”, it’s also said to require so much linguistic and other erudition that anyone will wonder why on earth they should devote much time to it. One of the main take-away messages this suggests is an enormous gap between being served some catchy chunks of, you know, Aquinas, on the one hand, and the independent or professional study of medieval texts, on the other hand. Quite unlike in ethics or social philosophy, hardly any student will see themselves as moving from the intro course to doing some real research on a given topic in this field. While many medievalists and other historians work on developing new syllabi and approaches, we might not spend enough time on articulating what the point or pay-off of historical research might be. – I don’t profess to know what the point of it all is. But why would anyone buy into spending years on learning Latin or Arabic, palaeography or advanced logic, accepting the dearth of the academic job market, a philosophical community dismissing much of their history? For the sake of, yes, what exactly? Running the next edition of Aquinas or growing old over trying to get your paper on Hildegard of Bingen published in a top journal? I’m not saying that there is no fun involved in studying these texts and doing the work it takes; I’m wondering whether we make sufficiently explicit why this might be fun. Given the public image of history (of philosophy), we are studying what the world was like before there was electricity and how they then almost invented it but didn’t.
Trying to understand what always fascinated me about historical studies, I realised it was the fact that one learns as much about oneself as about the past. Studying seemingly outdated texts helped me understand how this little boy in the living room was raised into ideologies that made him (yes, me) cherish his world with the fridge in the adjacent kitchen, and think of history as a linear progress towards the present. In this sense, that is in correcting such assumptions, studying history is about me and you. But, you ask, even if this is true, how can we make it palpable in teaching? – My general advice is: Try to connect to your student-self, don’t focus on the supposed object of study, but on what it revealed about you. Often this isn’t obvious, because there is no obvious connection. Rather, there is disparity and alienation. It is an alienation that might be similar to moving to a different town or country. So, try to capture explicitly what’s going on in the subject of study, too, in terms of experience, resources and methods available. With such thoughts in mind, I designed a course on the Condemnation of 1277 and announced it as follows:
Condemned Philosophy? Reason and faith in medieval and contemporary thought
Why are certain statements condemned? Why are certain topics shunned? According to a widespread understanding of medieval cultures, especially medieval philosophy was driven and constrained by theological and religious concerns. Based on a close reading of the famous condemnation of 1277, we will explore the relation between faith and reason in the medieval context. In a second step we will look at contemporary constraints on philosophy and the role of religion in assessing such constraints. Here, our knowledge of the medieval context might help questioning current standards and prejudices. In a third step we will attempt to reconsider the role of faith and belief in medieval and contemporary contexts.
The course was aimed at BA students in their 3rd year. What I had tried to convey in the description is that the course should explore not only medieval ideas but also the prejudices through which they are approached. During the round of introductions many students admitted that they were particularly interested in this twofold focus on the object and the subject of study. I then explained to them that most things I talk about can be read about somewhere else. What can’t be done somewhere else is to have them come alive by talking them through. I added that “most of the texts we discuss are a thousand years old. Despite that fact, these texts have never been exposed to you. That confrontation is what makes things interesting.” In my view, the most important tool to bring out this confrontation lies in having students prepare and discuss structured questions about something that is hard to understand in the text. (See here for an extensive discussion.) The reason is that questions, while targeting something in the text, reveal the expectations of the person asking. Why does the question arise? Because there is something lacking that I would expect to be present in the text. Most struggles with texts are struggles with our own expectations that the text doesn’t meet. Of course, there might be a term we don’t know or a piece of information lacking, but this is easily settled with an internet search these days. The more pervasive struggles often reveal that we encounter something unfamiliar in the sense that it runs counter to what we expect the text to say. This, then, is where a meeting of the current students and historical figures takes place, making explicit our and their assumptions.
During the seminar discussions, I noticed that students, unlike in other courses, dared to target really tricky propositions that they couldn’t account for on the fly. Instead of trying to appear as being on top of the material, they delineated problems to be addressed and raised genealogical questions of how concepts might have developed between 1277 and 2020. Interestingly, the assumption was often not that we were more advanced. Rather, they were interested in giving reasons why someone would find a given idea worth defending. So my first impression after this course was that the twofold focus on the object and subject of study made the students’ approach more historical, in that they didn’t take their own assumptions as a yardstick for assessing ideas. Another outcome was that students criticised seeing our text as a mere “object of study”. In fact, I recall one student saying that “texts are hardly ever mere objects”. Rather, we should ultimately see ourselves as engaging in dialogue with other subjects, revealing their prejudices as much as our own.
The children in the living room were not chided. They were recognised in what they had taken over from their elders. Now they could be seen as continuing to learn – making, shunning and studying history.
“I’m an avid reader of Locke.” “I love listening to Bach.” – Utterances like this often expose the canon – be it in philosophy, literature, music or other arts – as a status symbol. The specifics of our cultural capital might differ, but basically we might say that one man’s Mercedes Benz is another’s readership of Goethe. What is often overlooked is that challenging the canon can look equally status-driven: “Oh, that’s another dead white man.” “I’m so excited about Caroline Shaw’s work.” – Spoken in the pertinent in-group, utterances like this are just as much of an indication of status symbolism. Challenging the canon, then, can become as much of a worn trope as defending adherence to the traditional canon. Let me explain.
Functions of history. – For better or worse, the aims of our discipline are often portrayed in epistemic terms. We study history, we say, to understand or explain (the development of) ideas and events. And in doing that, we want to “get it right.” Arguably, the aim of getting it right obscures a whole set of quite different aims of history. I think more often than not, history is done to (politically) justify or even legitimise one’s position. Just as talk about ancestors justifies inheritance, talk about philosophical predecessors is often invoked to legitimise why it’s worth thinking about something along certain lines. A question merely asked on the fly carries little weight, but continuing the tradition of inquiring about the criteria of knowledge not only justifies historical research; it also legitimises our current approaches. Seen this way, a historical canon legitimises one’s own interests. Likewise, the attack on a canonical figure can be seen as shaking such legitimacy, be it with regard to representative figures, topics or questions. Conversely, I might aim to adjust the canon to find and highlight the ancestry that legitimises a new field of study. This endeavour is not one of “getting it right” though. Of course, we cannot change the past, but we can attempt to change the canon or what we admit to the canon so as to include ancestors in line with new ways of thinking. As I see it, these are well-founded motivations to study and/or alter the study of canonical figures. – However, while such motivations might well drive our choices in doing history, they can also deteriorate into something like mere status symbolism. Let’s look at a concrete example.
Three kinds of debates. – I recently read a piece about Locke on slavery, making the point that Locke’s involvement in the American context is far more problematic than recent research portrayed it to be.* The piece struck me as an interesting contribution to (1) the debate on Locke’s political ideas, but the title was jazzed up with the recommendation to leave “Locke in the dustbin of history”. Since the word “dustbin” doesn’t return in the text, I’m not sure whether the title reflects the author’s choice. Be that as it may, in contrast to the piece itself (which is part of a series of texts on Locke’s political position), the title firmly places it in (2) a larger public debate about the moral status of canonical philosophers such as Hume, Berkeley or Aristotle. I think both the more scholarly and the more public debates are important and intertwined in various ways. We can be interested in both how Locke thought about slavery and how we want to judge his involvement. Given what I said about the justifying function of history, it’s clear that we look at authors not only as ancestors. We also ask whether they do or do not support a line of thought we want to endorse. And if it turns out that Locke’s thought is compatible with advocating slavery, then we want to think again how we relate to Locke, in addition to studying again the pertinent documents. However, in addition to these two debates, there is (3) yet another debate about the question whether we should be having these debates at all. This is the debate about the so-called “cancel culture”. While some say we shouldn’t cancel philosophers like Locke, others challenge the omnipresence of the notorious old or dead white men. As I see it, this latter debate about cancellation is highly problematic insofar as its proponents often question the legitimacy of the former (scholarly) debates.
As I see it, debates (1) and (2) are scholarly debates about Locke’s position on slavery. (1) makes an internal case regarding Locke’s writings. (2) also zooms in on the contrast to current views on slavery. (3) however is a different debate altogether. Here, the question is mainly whether it is legitimate to invoke Locke as an ancestor or as part of a canon we want to identify with. The main problem I see, though, is that the title “Leave John Locke in the historical dustbin” makes the whole piece ambiguous between (2) and (3). Given the piece, I’d think this works on level (2), but given how people responded to it and can use it, it becomes a hit piece on level (3) whose only aim seems to be to write Locke out of the (legitimate) canon. But this ambiguity or continuity between the two kinds of debate is disastrous for the discipline. While on levels (1) and (2) the question of how Locke relates to slavery is an open question, dependent on interpretations of empirical evidence, Locke’s moral failure is already taken for granted on level (3). Here, the use of the canonical figure Locke stops being historical. It reduces to political partisanship. Why? Because history is then taken to be something already known, rather than something to be studied.
The irony is that each group, the defenders as well as the challengers of the canonical figure, questions the moral legitimacy of what they suppose the other group does by making a similar move, that is by appealing to a status symbol that enjoys recognition in the pertinent in-group. One group shouts “Locke and Enlightenment”; the other group shouts “Locke and Racism”. Neither approach to history strikes me as historical. It deteriorates into a mere use of historical items as status symbols, providing shortcuts for political fights. All of this is perhaps not very surprising. The problem is that such status symbolism undermines scholarly debates and threatens to reduce historical approaches to political partisanship. My point, then, is not that all political or moral discussion of history reduces to status symbolism. But there is the danger that historical scholarship can appear to be continuous with mere status symbolism.
* I’d like to thank Nick Denyer, Natalia Milopolsky, Naomi Osorio, Tzuchien Tho, Anna Tropia, and Markus Wild for insightful remarks or exchanges on this matter.
Are all human beings equal? – Of course, that’s why we call them human. – But how do we know? – Well, it’s not a matter of empirical discovery, it’s our premise. – I see. And so everything else follows?
The opposition between empiricism and rationalism is often introduced as an epistemological dispute, concerning primarily the ways knowledge is acquired, warranted and limited. This is what I learned as a student and what is still taught today. If you’ve studied philosophy for a bit, you will also have heard that this opposition is problematic and coarse-grained when taken as a historical category. But in my view the problem is not that this opposition is too coarse-grained (all categories of that kind are). Rather, the problem lies with introducing it as a mere epistemological dispute. As I see it,* the opposition casts a much wider conceptual net and is rooted in metaphysical and even political ideas. Thus, the opposition is to be seen in relation to a set of disagreements in both theoretical and practical philosophy. In what follows, I don’t want to present a historical or conceptual account, but merely suggest means of recognising this wide-ranging set of ideas and show how the distinction helps us see the metaphysical implications and political choices related to our epistemological leanings.
Let me begin with a simple question: Do you think there is, ultimately, only one true description of the world? If your answer is ‘yes’, I’d be inclined to think that you are likely to have rationalist commitments. Why? Well, because an empiricist would likely reject that assumption for the reason that we might not be able to assess whether we lack important knowledge. Thus, we might miss out on crucial insights required to answer that question in the first place. This epistemological caution bears on metaphysical questions: Might the world be a highly contingent place, subject to sudden or constant change? If this is affirmed, it might not make sense to say that there is one true description of the world. How does this play out in political or moral terms? Rephrasing the disagreement a bit, we might say that rationalists are committed to the idea that the world is ordered in a certain way, while empiricists will remain open as to whether such an order is available to us at all. Once we see explanatory order in relation to world order, it becomes clear that certain commitments might follow for what we are and, thus, for what is good for us. If you believe that we can attain the one true description of the world, you might also entertain the idea that this standard should inform our sciences and our conduct at large. – Of course, this is quite a caricature of what I have in mind. All I want to suggest is that it might be rewarding to see whether certain epistemological leanings go hand in hand with metaphysical and practical commitments. So let’s zoom in on the different levels in a bit more detail.
(1) Epistemology: As I have already noted, the opposition is commonly introduced as concerning the origin, justification and limits of knowledge. Are certain ideas or principles innate or acquired through the senses? Where do we have to look in order to justify our assumptions? Can we know everything there is to be known, at least in principle, or are there realms that we cannot even sensibly hope to enter? – If we focus on the question of origin, we can already see how the opposition between empiricism and rationalism affects the pervasive nature-nurture debates: Are certain concepts and the related abilities owing to learning within a certain (social) environment or are the crucial elements given to us from the get-go? Now, let’s assume you’re a rationalist and think that our conceptual activity is mostly determined from the outset. Doesn’t it follow from this that you also assume that we are equal in our conceptual capacities? And doesn’t it also follow that rules of reasoning and standards of rationality are the same for all (rather than owing, say, to cultural contexts)? – While the answers are never straightforward, I would assume at least certain leanings in one direction or another. But while such leanings might already inform political choices, it is equally important to see how they relate to other areas of philosophy.
(2) Metaphysics: If you are an empiricist and assume that the main sources of our knowledge are our (limited) senses, this often goes hand in hand with epistemic humility and the idea that we cannot explain everything. Pressed why you think so, you might find yourself inclined to say that the limits of our knowledge have a metaphysical footing. After all, if we cannot say whether an event is fully explicable, might this not be due to the fact that the world is contingent? Couldn’t everything have been otherwise, for instance because God interferes in events here and there? In other words, if you don’t assume there to be a sufficient reason for everything, this might be because you accept brute facts. Accordingly, the world is a chancy place and what our sciences track might be good enough to get by, but never provide the certainty that is promised by our understanding of natural laws. Depending on the historical period, the contrary assumptions often go hand in hand with more or less explicit forms of essentialism. The lawful necessities in nature might be taken to relate to the way things are. Now essences are not only taken to determine what things are, but also how they ought to be. – Once you enter the territory of essentialism, then, it is only a small step to leanings regarding norms of being (together), of rationality, and of goodness.
(3) Theology / Sources of Normativity: If you allow for an essentialist determination of how things are and ought to be, this immediately raises the question of the sources of such essences and norms. Traditionally, we often find this question addressed in the opposition between theological intellectualism (or rationalism) and voluntarism: Intellectualists assume that norms of being and acting are prior to what God wills. So even God is bound by an order prior to his will. God acts out of reasons that are at least partly determined by the way natural things and processes are set up. By contrast, voluntarists assume that something is rational or right because God wills it, not vice versa. It is clear how this opposition rhymes with that of rationalism and empiricism: The rationalist assumes one order that even binds God. The empiricist remains epistemically humble, because she believes that rationality is fallible. Perhaps she believes this because she assumes that the world is a chancy place, which in turn might be owing to the idea that the omnipotent God can intervene anytime. It is equally clear how this opposition might translate into (lacking) justifications of moral norms or political power. – Contrary to what is often assumed in the wake of Blumenberg and others, this doesn’t mean that voluntarism or empiricism straightforwardly translates into political absolutism. It is hardly ever a particular political idea that is endorsed as a result of empiricist or rationalist leanings. Nevertheless, we will likely find elements that play out in the justification of different systems.**
Summing up, we can see that certain ideas in epistemology go hand in hand with certain metaphysical as well as moral and political assumptions. The point is not to argue for systematically interwoven sets of doctrines, but to show that the opposition of empiricism and rationalism is so much more than just a disagreement about whether our minds are “blank slates”. Our piecemeal approach to philosophical domains might have its upsides, but it blurs our vision when it comes to the tight connections between theoretical and practical questions which clearly were more obvious to our historical predecessors. Seen this way, you might try and see whether you’ll find pertinently coherent assumptions in historical or current authors or in yourself. I’m not saying you’re inconsistent if you diverge from a certain set of assumptions. But it might be worth asking if and why you conform or diverge.
* A number of ideas alluded to here would never have seen the light of day without numerous conversations with Laura Georgescu.
The editors of Vivarium, a leading journal in the history of philosophy, recently published a notice on the retraction of several articles. It comes as no surprise that there was much discussion of the case on social media. Alongside the shock about the incident, it was the retraction notice itself that drew the attention of blogs and individual commenters. The gist was that they had done a good job in conscientiously documenting instances of alleged plagiarism and describing the “cut throat nature of academic life”, as Eric Schliesser put it in a timely post on the issue. In what follows, I want to confine myself to the nature of the retraction notice.
What struck me in this notice is an aspect that I would like to call the moral framing of the editorial work in opposition to much of the rest of academia. Here is the passage I have in mind:
“We do not enjoy performing our duty. For marginal fields such as those served by Vivarium, we have seen from experience that the damage wreaked by plagiarism extends to institutions, bringing vulnerable positions, departments, and institutes to the attention of administrators eager to let the rationale of collective punishment direct the evisceration of budgets in Social Sciences and the Humanities. Our colleagues in adjacent fields will seize upon public cases of misconduct as an opportunity to reallocate scarce resources in their favor, thereby ensuring that those who previously lost out to plagiarists in competition for fellowships and positions lose out once again.” (C. Schabel / W. Duba, Notice, Vivarium 2020, 257; italics mine)
What is contrasted here is the unpleasant “duty” of the editors with the, shall we say, moral recklessness of administrators and colleagues. Of course, we are familiar with tirades about academia. But this is a formal notice about the reasons for retraction, in a top journal of the discipline. The conscientious listing of passages that follows makes for a strange contrast to the allusions (“we have seen from experience”) and unverified accusations expressed here. For a journal that rightly prides itself on standards of scholarly evidence, this is not a good look. Let me point out two aspects:
Firstly, it might indeed be the case that there are “administrators” who could be quoted as having used measures of “collective punishment” in such cases. But do we have evidence about this? And is this really evidence about the “eagerness” of administrators or are we looking at an even more structural issue? Most importantly, what is the reason to point this out in the given context? Does it serve to heighten the blameworthiness of what is being documented?
Secondly, I wonder about the reference to “our colleagues”. Since I am a specialist in the pertinent “marginal field”, the expression “our colleagues” should extend to my colleagues. The phrasing according to which they “will seize upon public cases” amounts to a prediction of their behaviour. Have my academic colleagues done such things? Are they likely to do such things? I know that people say all sorts of bad things on Twitter and I know that academia is competitive, but nothing I heard about such cases would bear testimony to the supposed behaviour. Again, would it not be apt to provide at least some evidence for this prediction?
Thus, we might say that the notice has a twofold structure: on the one hand, it outlines the passages and reasons for retraction; on the other hand, it frames this outline in a wider context of academic practices and moral standards. But while the outline fulfils good scholarly standards, the adjacent framing appeals to undocumented experience or hearsay. It is especially this latter part that strikes me as problematic, not least because it treats sociological assumptions about the current academic context as something that does not require reliable evidence.
“The ‘thorough’. – Those who are slow to know think that slowness is an aspect of knowledge.” Friedrich Nietzsche, The Gay Science
“Damn! I should have thought of that reply yesterday afternoon!” Do you know this situation? You’re confronted with an objection or perhaps a somewhat condescending witticism, and you just don’t know what to say. But the next day, you think of the perfect response. Let’s call this intellectual regret: wishing you had had that retort ready at the time, or had even anticipated the objection when stating your thesis, but you didn’t. Much of our intellectual lives is probably spent somewhere between regret and attempts at anticipating objections. What does this feeling of regret indicate? To a first approximation, we might say it indicates that we think of ourselves as imperfect. When we didn’t anticipate or even reply to an objection, something was missing. What was missing? Arguably, we were lacking what is often called smartness, often construed as the ability to quickly defend ideas against objections. But is that so?
Given our adversarial culture, we often take ourselves as either winning or losing arguments. Thus, we tend to see oppositions to our ideas as competitive. If we say “not p” and someone else advances arguments for “p”, we have the tendency to become defensive rather than embrace “p”. Accordingly, we often structure our work as the defence of a position, say “not p”. The anticipation of the objection “p” is celebrated as a hero narrative, in which p is anticipated and successfully conquered. This is why intellectual regrets might loom large in our mental lives. As I see it, however, the hero narrative is problematic. As I have argued earlier, it instils the desire to pick a side, ideally the right side. But this misses a crucial point of philosophical work: Taking sides is at best transitory; in the end it’s not more battle but peace that awaits us. In what follows, I’d like to suggest that we tend to misconstrue the nature of oppositions in philosophy. Rather than competitive positions, oppositions form an integral part of the ideas they seem to be opposed to. Accordingly, what we miss out on in situations of intellectual regret is not a smart defence of a position or dogma, but a more thorough understanding of our own ideas. But in what way does such an understanding involve oppositions?
Understanding through antonyms. – How do you explain philosophical ideas to someone? A quick way consists in contrasting a term with an opposed term or antonym. Asked what “objectivity” means, a good start is to begin by explaining how we use the term “subjective”. Wishing to explain an “ism”, it helps to look for the opposed ism. So you might explain “externalism” as an idea countering internalism. Now you might object that this shows how philosophers often proceed by carving out positions. It does indeed show as much. But it is equally true that we often need to understand the opposing position in order to understand a claim. So while the philosophical conversation often seems to unfold through positions and counterpositions, understanding them requires the alternatives. Negative theology has turned this into a thorough approach. – Now you might object that this might be a merely instrumental feature of understanding ideas, but it doesn’t show that one idea involves another idea as an integral part. After all, subjectivism is to be seen in opposition to objectivism, and if you’re holding one position, you cannot hold the other. To this I reply that, again, my point is indeed not about positions but about understanding. However, this feature is not merely instrumental. Arguably, at least certain ideas (such as subjectivism or atomism) do not make sense without their oppositions (objectivism or holism). That is, the relation to opposed ideas is part of the identity of certain ideas.
Counterfactual reasoning. – It certainly helps to think of opposed terms when making sense of an idea. But this feature extends to understanding whole situations and indeed our lives as such. Understanding the situation we are in often requires counterfactual reasoning. Appreciating the sunshine might be enhanced by imagining what would be the case if it were raining. Understanding our biographies and past involves taking into account a number of what-ifs. Planning ahead involves imagining decidedly what is not the case but should be. Arguably, states such as hope, surprise, angst, and regret would be impossible if it were not for counterfactual ideas. So again, identifying the situation we are in involves alternatives and opposites.
Dialectical reasoning. – You might argue again that this just shows a clear distinction between ideas and their opposites. I do not deny that we might decidedly walk through one situation or idea before entering or imagining another. However, at least for some ideas and situations, encountering their oppositions or alternatives does not only involve understanding their negation. Rather, it leads to a new or reflective understanding of the original idea. You will have a new sense of your physical integrity or health once you’ve been hurt. You understand “doubt” differently once you realise that you cannot doubt everything at once (when seeing that you cannot doubt that you’re doubting) or once you thought through the antisceptical ideas of Spinoza or Wittgenstein. Your jealousy might be altered when you’ve seen others acting out of jealousy. Just like Hume’s vulgar view is a different one once you realise that you cannot let go of it (even after considering the philosophical view). The dynamics of dialectical reasoning don’t just produce alternatives but new, arguably, enriched identities.
Triangulation and identity. – Once we recognise how understanding opposites produces a new identity for the idea or the understanding of our very own lives (the examined life!), we see how this pervades our thinking. Entering a town from the station is different from entering it via the airport or seeing it on a map. Here we have three different and perhaps opposing ways of approaching the same place. But who would think that one position or one way of seeing things is true or better or more advanced than another? All of these ways might be taken as different senses, guiding us to the same referent, or as different perspectives in a triangulation between two different interlocutors. When it comes to understanding what we call reality, there is of course a vast amount of senses or triangulations. But who would think one is better? Must we not say that each perspective adds yet more to the same endeavour? So while we might progress through various positions, I would doubt that they are competitive. Rather they strike me as contributing to the same understanding.
Disagreement. – If this is a viable view of things, it might strike you as a bit neoplatonist. There is nothing wrong with that. But what about real disagreements? Where can we place disagreements, if all oppositions are ultimately taken to resolve into one understanding? Given how stubborn humans are, it would be odd to say that oppositions are merely transient. My hunch is that there can be real disagreement. How? Whenever we have a position or perspective we can benefit from another perspective on the same thing or issue or idea. In keeping with what I said above, encountering such a different perspective would not be a disagreement but, ultimately, an enrichment. That said, we can doubt whether our different perspectives really concern the same thing or idea. We might assume that we now enter the same town we saw on the map earlier. But perhaps we in fact enter a different town. Assuming sameness is a presupposition of making sense or having a sensible disagreement about the same. But our attempt at tracing sameness might fail.
Returning to our intellectual regrets, what is mostly missing is not the smartness to outperform our interlocutors. Rather, we might lack a fuller understanding that only the perspectives of others can afford us. In this sense, the oppositions in philosophy are transient. If we have to wait and listen to others, that slowness is indeed an aspect of knowledge.
“… solipsism strictly carried out coincides with pure realism. The I in solipsism shrinks to an extensionless point and there remains the reality co-ordinated with it.” Wittgenstein, TLP 5.64
When was the last time you felt really and wholly understood? If this question is meaningful, then there are such moments. I’d say, it does happen, but very rarely. If things move in a good direction, there is an overlap or some contiguity or a fruitful friction in your conversation. Much of the time, though, I feel misunderstood or I feel that I have misunderstood others. – Starting from such doubts, you could take this view to its extremes and argue that only you understand yourself or, more extreme still, that there is nothing external to your own mind. But I have to admit that I find these extreme brands of solipsism, as often discussed in philosophy, rather boring. They are highly implausible and don’t capture what I think is a crucial idea in solipsism. What I find crucial is the idea that each of us is fundamentally alone. However, it’s important to understand in what sense we are alone. As I see it, I am not alone in the sense that only I know myself or only my mind exists. Rather, I am alone insofar as I am different from others. Solitude, then, is not merely a feeling but also a fact about the way we are.* In what follows, I’d like to suggest reasons for embracing this view and how its acknowledgement might actually make us more social.
Throwing the baby out with the bathwater. – In 20th-century philosophy, solipsism has often had a bad name. Solipsism was and is mostly construed as the view that subjective experience is foundational. So you might think that you can only be sure about what’s going on in your own mind. If you hold that view, people will ridicule you as running into a self-defeating position, because subjective states afford no criteria to distinguish between what merely seems right and what is right. Rejecting subjective experience as a foundation for knowledge or theories of linguistic meaning, many people seemed to think it was a bad idea altogether. This led to an expulsion of experience from many fields in philosophy. Yes, it does seem misguided to build knowledge or meaning on subjective experience. But that doesn’t stop experience from playing an important part in our (mental) lives. Let me illustrate this issue a bit more so as to show where I see the problem. Take the word “station”. For the (public) meaning of this word, it doesn’t matter what your personal associations are. You might think of steam trains or find the sound of the word a bit harsh, but arguably nothing of this matters for understanding what the word means. And indeed, it would seem a bit much if my association of steam trains were a necessary ingredient for mastering the concept or using it in communication. This is a bit like saying: If we want to use the word “station” to arrange a meeting point, it doesn’t matter whether you walk to the station through the village or take the shortcut across the field. And yes, it doesn’t matter for the meaning or success of our use of the word whether you cut across the field. But hang on! While it doesn’t matter for understanding the use of the word, it does matter for understanding my interlocutor. Thinking of steam trains is different from not thinking of them. Cutting across the field is different from walking through the village.
This is a clear way in which the experience of interlocutors matters. Why? Well, because it is different. As speakers, we have a shared understanding of the word “station”; as interlocutors we have different experiences and associations we connect with that word. As I see it, it’s fine to say that experience doesn’t figure in the (public) meaning. But it is problematic to deny that the difference in experience matters.
A typical objection to this point is that private or subjective experience cannot be constitutive for meaning. But this goes only so far. As interlocutors, we are not only interested in understanding the language that someone uses, but also the interlocutor who is using it. This is not an easy task. For understanding language is rooted in grasping sameness across different contexts, while understanding my interlocutor is rooted in acknowledging difference (in using the same words). This is not a point about emphatic privacy or the idea that our experience constitutes meaning (it doesn’t). It’s a point about how differences can play out in practical interaction. To return to the earlier example: “Let’s go to the station” can mean very different things if one of you wants to go jointly but it turns out you have different routes in mind. So understanding the interlocutor involves not only a parsing of the sentence, but an acknowledgement of the differences in association. It requires acknowledging that we relate different experiences or expectations to this speech act. So while we have a shared understanding of language, we often lack agreement in associations. It is this lack of agreement that can make me vastly different from others. Accordingly, what matters in my understanding of solipsism is not that we have no public language (we do), but that we are alone (to some degree) with our associations and experiences.
Arguably, these differences matter greatly in understanding or misunderstanding others. Let me give an example: Since I started blogging, I can see how often people pick one or two ideas and run. Social media allow you to test this easily. Express an opinion and try to predict whether you’ll find yourself in agreement with at least a fair number of people. Some of my predictions failed really miserably. But even if predictions are fulfilled, most communication situations lack a certain depth of understanding. Why is this the case? A common response (especially amongst analytically inclined philosophers) is that our communication lacks clarity. If this were true, we should improve our ways of communicating. But if I am right, this doesn’t help. What would help is acknowledging the differences in experience. Accordingly, my kind of solipsism is not saying: Only I know myself. Or: Only my mind exists. Rather it says: I am different (from others).
This “differential solipsism” is clearly related to perspectivism and even standpoint theory. However, in emerging from the acknowledgement of solitude, it has a decidedly existential dimension. If a bit of speculation is in order, I would even say that the tendency to shun solipsism might be rooted in the desire to escape from solitude by denying it. It’s one thing to acknowledge solitude (rooted in difference); it’s another thing to accept the solitary aspects of our (mental) lives. Let’s look more closely at how these aspects play out.
Even if philosophers think that experience doesn’t figure in the foundations of knowledge and meaning, it figures greatly in many of our interactions.** We might both claim to like jazz, but if we go to a concert, it might be a disappointment when it turns out that we like it for very different reasons. So you might like the improvisations, while I don’t really care about this aspect, but am keen on the typical sound of a jazz combo. If the concert turns out to feature one but not the other aspect, our differences will result in disagreement. Likewise, we might disagree about our way to the station, about the ways of eating dinner etc. Now as I see it, the solitude or differences we experience in such moments doesn’t sting because of the differences themselves. What makes such moments painful is rather when we endure and paste over these differences without acknowledging them.
If I am right, then I don’t feel misunderstood because you don’t happen to care about the sound of the combo. I feel misunderstood, because the difference remains unacknowledged. Such a situation can typically spiral into a silly kind of argument about “what really matters”: the sound or the improvisation. But this is just silly: what matters for our mutual understanding is the difference, not one of the two perspectives. In a nutshell: True understanding does not lie in agreement, but in the detailed acknowledgement of disagreement.***
But why, you might ask, should this be right? Why would zooming in on differences in association or experience really amend the situation? The reason might be given in Wittgenstein’s claim that solipsism ultimately coincides with realism. How so? Well, acknowledging the different perspectives should hopefully end the struggle over the question which of the perspectives is more legitimate. Can we decide on the right way to the station? Or on the most salient aspect in a jazz concert? No. What we can do is articulate all the perspectives, acknowledging the reality that each view brings to the fore. (If you like, you can imagine all the people in the world articulating their different experiences, thereby bringing out “everything that is the case.”)
Writing this, I am reminded of a claim Evelina Miteva made in a conversation about writing literature: The more personal the description of events is, the more universal it might turn out to be. While this sounds paradoxical, the realism of differential solipsism makes palpable why this is true. The clear articulation of a unique experience does not block understanding. Quite the contrary: It allows for localising it in opposition to different experiences of the same phenomenon. In all these cases, we might experience solitude through difference, but we will not feel lonely for being invisible.
* Of course, the title “Solitude standing” is also a nod to the great tune by Suzanne Vega:
*** And once again, I am reminded of Eric Schliesser’s discussion of Liam Bright’s post on subjectivism, hitting the nail on the following head: “Liam’s post (which echoes the loveliest parts of Carnap’s program with a surprisingly Husserlian/Levinasian sensibility) opens the door to a much more humanistic understanding of philosophy. The very point of the enterprise would be to facilitate mutual understanding. From the philosophical analyst’s perspective the point of analysis or conceptual engineering, then, is not getting the concepts right (or to design them for ameliorative and feasible political programs), but to find ways to understand, or enter into, one’s interlocutor life world.”
Recently, I became interested (again) in the way our upbringing affects our values. Considering how groups, especially in academia, often manage to suppress criticism of misconduct, I began to wonder which values we associate with criticism more generally. First, I noticed a strange ambivalence. Just think about the ambivalent portrayal of whistle blowers like Edward Snowden! The ambivalence is captured in values like loyalty that mostly pertain to a group and are not taken to be universal. Then, it hit me. Yes, truth telling is nice. But in-groups ostracise you as a snitch, a rat or a tattletale! Denouncing “virtue signalling” or “cancel culture” seems to be on a par with this verdict. So while criticism of mismanagement or misconduct is often invited as an opportunity for improvement, it is mostly received as a cause of reputational damage.
Now I wrote up a small piece for Zoon Politikon.* In this blog post, I just want to share what I take to be the main idea.
The ambivalence of criticism in academia seems to be rooted in an on-going tension between academic and managerial hierarchies. While they are intertwined, they are founded on very different lines of justification. If I happen to be your department chair, this authority weighs nothing in the setting of, say, an academic conference. Such hierarchies might be justifiable in principle. But while the goals of academic work and thus hierarchies are to some degree in the control of the actual agents involved, managerial hierarchies cannot be justified in the same way. A helpful illustration is the way qualitative and quantitative assessment of our work come apart: A single paper might take years of research and end up being a game-changer in the field of specialisation, but if it happens to be the only paper published in the course of three years, it won’t count as sufficient output. So while my senior colleague might have great respect for my work as an academic, she might find herself confronted with incentives to admonish and perhaps even fire me.
What does this mean for the status of criticism? The twofold nature of hierarchies leaves us with two entirely disparate justifications of criticism. But these disparate lines of justification are themselves a constant reason for criticism. The fact that a field-changing paper and a mediocre report both make one single line in a CV bears testimony to this. But here’s the thing: we seemingly delegitimise such criticism by tolerating and ultimately accepting the imperfect status quo. Of course, most academics are aware of a tension: The quantification of our work is an almost constant reason for shared grievance. But as employees we find ourselves often enough buying into it as a “necessary evil”. Now, if we accept it as a necessary evil, we seem to give up on our right to criticise it. Or don’t we? Of course not, and the situation is a lot more dynamic than I can capture here. To understand how “buying into” an imperfect situation (a necessary evil) might seemingly delegitimise criticism, it is crucial to pause and briefly zoom in on the shared grievance I just mentioned.
Let me begin by summarising the main idea: The shared grievance constitutes our status quo and, in turn, provides social cohesion among academics. Criticism will turn out to be a disturbance of that social cohesion. Thus, critics of the status quo will likely be ostracised as “telling on” us.
One might portray the fact that we live with an academic and a managerial hierarchy simply as unjust. One hierarchy is justified, the other isn’t (isn’t really, that is). Perhaps, in a perfect world, the two hierarchies would coincide. But in fact we accept that, with academia being part of the capitalist world at large, they will never coincide. This means that both hierarchies can be justified: one as rooted in academic acclaim; the other as a necessary evil of organising work. If this is correct and if we accept that the world is never perfect, we will find ourselves in an on-going oscillation and vacillation. We oscillate between the two hierarchies. And we vacillate between criticising and accepting the imperfection of this situation. This vacillation is, I submit, what makes criticism truly ambivalent. On the one hand, we can see our work-relations from the different perspectives; on the other hand, we have no clear means to decide which side is truly justified. The result of this vacillation is thus not some sort of solution but a shared grievance. A grievance acknowledging both the injustices and the persisting imperfection. There are two crucial factors in this: The fact that we accept the imperfect situation to some degree; and the fact that this acceptance is a collective status, it is our status quo. Now, I alone could not accept on-going injustices in that status quo, if my colleagues were to continuously rebel against it. Thus, one might assume that, in sharing such an acceptance, we share a form of grievance about the remaining vacillation.
It is of course difficult to pin down such a phenomenon, as it obtains mostly tacitly. But we might notice it in our daily interactions when we mutually accept that we see a tension, for instance, between the qualitative and quantitative assessment of our work. This shared acceptance, then, gives us some social cohesion. We form a group that is tied together neither by purely academic nor by purely managerial hierarchies and relations. There might be a growing sense of complicity in dynamic structures that are and aren’t justified but continue to obtain. So what forms social cohesion between academics are not merely factors of formal appraisal or informal friendship. Rather, a further crucial factor is the shared acceptance of the imperfection of the status quo. The acceptance is crucial in that it acknowledges the vacillation and informs what one might call the “morale” of the group.
If this is correct, academics do indeed form a kind of group through acceptance of commonly perceived imperfections. Now if we form such a group, it means that criticism will be seen both as justified and as threatening the shared acceptance. We know that a critic of quantitative work measures is justified. But we also feel that we gave in and accepted this imperfection a while ago. The critic seemingly breaks with this tacit consent and will be seen as someone snitching or “telling on us”. As I see it, it is this departure from an in-group consensus that makes criticism appear as snitching. And while revealing a truth about the group might count as virtuous, it makes the critic seemingly depart from the in-group. Of course, companies and universities also enjoy some legal protection. Even if you find out about something blameworthy, you might be bound by rules about confidentiality. This is why whistle blowers do indeed have an ambivalent reputation, too. But I guess that the legal component alone does not account for the force of the in-group mentality at work in suppressing criticism.
This mode of suppressing criticism has pernicious effects. The intertwined academic and managerial hierarchies often come with inverse perceptions of criticism: your professorial colleague might be happy to learn from your objections, while your department chair might shun your criticism and even retaliate against you. Yet, they might be the same person. Considering the ubiquitous histories of suppressing critics of sexism, racism and other kinds of misconduct, we do not need to look far to find evidence for ostracism or retaliation against critics. I think that it’s hard to explain this level of complicity with wrongdoers merely by referring to bad intentions, on the one hand, or formal agreements such as confidentiality, on the other. Rather, I think, it is worthwhile to consider the deep-rooted in-group consensus that renders criticism as snitching. One reason is that snitching counts, at least in a good number of cultures, as a bad action. But while this might be explained with concerns about social cohesion, it certainly remains a morally dubious verdict, given that snitching is truth-conducive and should thus be aligned with values such as transparency. Going by personal anecdotes, however, I witnessed that snitching was often condemned even by school teachers, who often seemed to worry about social cohesion no less than about truthfulness. In other words, we don’t seem to like that the truth be told when it threatens our status quo.
In sum, we see that the ambivalent status of criticism is rooted in a twofold hierarchy that, in turn, comes with disparate sets of values. Shared acceptance of these disparate sets as an unavoidable imperfection binds together an in-group that will sanction explicit criticism of this imperfection as a deviation from the consensus. The current charges against so-called “virtue signalling”, a “call out culture” or “cancel culture” on social media strike me as instances of such sanctions. If we ask what makes the inclinations to sanction in-group norm violations so strong, it seems helpful to consider the deep-rooted code against snitching. While the moral status of sanctioning snitching is certainly questionable, it can shed light on the pervasive motivation and strikingly ready acceptance of such behaviour.
* Following a discussion of a blog post on silence in academia, Izabela Wagner kindly invited me to contribute to a special issue in Zoon Politikon. I am enormously grateful to her for the exchanges and for providing this opportunity. Moreover, I have benefitted greatly from advice by Lisa Herzog, Pietro Ingallina, Mariya Ivancheva, Christopher Quinatana, Rineke Verbrugge, and Justin Weinberg.
This is the first installment of my new video series Philosophical Chats. In this episode, I have a conversation (for about 40 minutes) with my old friend Kai Ivo Baulitz, an actor and playwright, who is currently in Prague for a film shooting, but has to quarantine most of the time.* – We talk about how the crisis changed our minds and ways, about Kai’s situation in Prague, about being under surveillance, about anger and guilt, about acting and kissing, about how doing philosophy is like having a midlife crisis, about embracing fatalism, and about how we end up feeling inconsistent much of the time. Enjoy!
* Fun fact: This year, Kai and I would have celebrated our 30th school leaving anniversary (Abiturfeier), but corona took care of preventing that. Perhaps this conversation makes up for that a bit.
Those who know us from our school days might find it particularly ironic that Kai is currently stuck in Prague.