“Heidegger was a Nazi.” What now?

“B was a bigot” is a phrase that raises various questions. We can say it of many figures, both dead and alive. But this kind of phrase is used for different purposes. In what follows, I’d like to consider some implications of this phrase and its cognates. – Let me begin with what might seem a bit of a detour. Growing up in Germany, I learned that we are still carrying responsibility for the atrocities committed under the Nazi regime. Although some prominent figures declared otherwise even in the Eighties, I think this is true. Of course, one might think that one cannot have done things before one was born, but that does not mean that one is cut off from one’s past. Thinking historically means, amongst other things, thinking of yourself as determined by continuities that run right through you from the past into the options that make up your future horizon. The upshot is: we don’t start from scratch. It is with such thoughts that I look at the debates revolving around Heidegger and other bigots. Is their thought tainted by their views? Should we study and teach them? These are important questions that will continue to be asked and answered. Adding to numerous discussions, I’d like to offer three and a half considerations.*

(1) The question whether someone’s philosophical thought is tainted or even pervaded by their political views should be treated as an open question. There is no a priori consideration in favour of one answer. That said, “someone’s thought” is ambiguous. If we ask whether Heidegger’s or Frege’s (yes, Frege’s!) thought was pervaded by their anti-semitism, the notion is ambiguous between “thought” taken as an item in psychological relations and “thought” taken as an item in logical relations. The psychological aspects that explain why I reason the way I do often do not show up in the way a thought is presented or received. – Someone’s bigotry might motivate their thinking and yet remain hidden. But the fact that something remains hidden does not mean that it carries no systematic weight. There is an old idea, pervasive in the analytic tradition, that logical and political questions are distinct. But the idea that logic and politics are distinct realms is itself a political idea. All such issues have to be studied philosophically and historically for each individual thinker. How, for instance, can Spinoza say what he says about humans and then say what he says about women? This seems glaringly inconsistent and deserves study rather than dismissal. However, careful study should also attend to historically crucial ties beyond the question of someone’s thought. There are social, political and institutional continuities (and discontinuities) that stabilise certain views while disqualifying others.

(2) Should we study bigots? If the foregoing is acceptable, then it follows that we shouldn’t discourage the study of bigots. Quite the contrary! This doesn’t mean that I recommend the study of bigots in particular; there are enough understudied figures that you might turn to instead. It just means that their bigotry doesn’t disqualify them as topics of study and that if you’re wondering whether you should, that might in itself be a good reason to get started. This point is of course somewhat delicate, since history of philosophy is not only studied by disinterested antiquarians, but also in order to justify why we endorse certain views or because we hope to find good or true accounts of phenomena. – Do we endorse someone’s political views by showing continuities between their thoughts and ours? Again, that depends and should be treated as an open question. But I don’t think that shunning the past is a helpful strategy. After all, the past provides the premises we work from, whether we like it or not. Rather we should look carefully at possible implications. But the fact that we appropriate certain ideas does not entail that we are committed to such implications. As I said in my last post, we can adopt thoughts while changing and improving them. The fact that Heidegger was a Nazi does not turn his students or later exegetes into Nazis. However, once we know about the bigotry we should acknowledge as much in research and teaching.

(3) What about ourselves? Part of the reason for making the second remark was that I sometimes hear people say: “A was a bigot; so we shouldn’t teach A. Let’s rather teach B.” While I agree that there are huge numbers of understudied figures that might be taught instead of the same old classics, I don’t think that this line of argument helps. As I see it, it often comes out of the problematic idea that, ideally, we should study and teach only those figures we consider morally pure. This is a doubtful demand not only because we might end up with very little material. It is also problematic because it suggests that we can change our past at will. Therefore, attempts at diversifying our teaching should not be supported by arguments from supposedly different moral status; rather we should see that globalisation requires us to eventually acknowledge the impact of various histories and their entanglements. – We don’t teach Heidegger because we chose to ignore his moral status. We teach his and other works because our own thought is related to these works. This has an important consequence for our own moral status. Given the histories we have, our own moral status is tainted. In keeping with my introductory musings, I’d like to say that we are responsible for our past. The historical continuities that we like and wish to embrace are as much our responsibilities as those that we wish to disown. Structurally oppressive features of the past are not disrupted just because we change our teaching schedule.

I guess the general idea behind these considerations is this: The assumption that one can cut oneself off from one’s (philosophical) past is an illusion. As philosophers in institutional contexts we cannot deny that we might be both beneficiaries of a dubious heritage and bearers of burdens passed down. In other words, some of the bigotry will carry over. Again, this doesn’t mean that we are helpless continuants of past determinants, but it means that it is better to study our past and our involvements with it carefully rather than deny them and pretend to be starting from scratch.

___

* See especially the pieces by Peter Adamson and Eric Schliesser.

I don’t know what I think. A plea for unclarity and prophecy

Would you begin a research project if there were just one more day left to work on it? I guess I wouldn’t. Why? Well, my assumption is that the point of a research project is that we improve our understanding of a phenomenon. Improvement seems to be inherently future-directed, meaning that we understand x a bit better tomorrow than today. Therefore, I am inclined to think that we would not begin to do research, had we not the hope that it might lead to more knowledge of x in the future. I think this is not only true of research but of much thinking and writing in general. We wouldn’t think, talk or write certain things, had we not the hope that this leads to an improved understanding in the future. You might find this point trivial. But a while ago it began to dawn on me that the inherent future-directedness of (some) thinking and writing has a number of important consequences. One of them is that we are not the (sole) authors of our thoughts. If this is correct, it is time to rethink our ways of evaluating thoughts and their modes of expression. Let me explain.

So why am I not the (sole) author of my thoughts? Well, I hope you all know variations of the following situation: You try to express an idea. Your interlocutor frowns and points out that she doesn’t really understand what you’re saying. You try again. The frowning continues, but this time she offers a different formulation. “Exactly”, you shout, “this is exactly what I meant to say!” Now, who is the author of that thought? I guess it depends. Did she give a good paraphrase or did she also bring out an implication or a consequence? Did she use an illustration that highlights a new aspect? Did she perhaps even rephrase it in such a way that it circumvents a possible objection? And what about you? Did you mean just that? Or do you understand the idea even better than before? Perhaps you are now aware of an important implication. So whose idea is it now? Hers or yours? Perhaps you both should be seen as authors. In any case, the boundaries are not clear.

In this sense, many of my thoughts are not (solely) authored by me. We often try to acknowledge as much in forewords and footnotes. But some consequences of this fact might be more serious. Let me name three: (1) There is an obvious problem for the charge of anachronism in history of philosophy (see my inaugural lecture). If future explications of thoughts can be seen as improvements of these very thoughts, then anachronistic interpretations should perhaps not merely be tolerated but encouraged. Are Descartes’ Meditations complete without the Objections and Replies? Can Aristotle be understood without the commentary traditions? Think about it! (2) Another issue concerns the identity of thoughts. If you are a semantic holist of sorts, you might assume that a thought is individuated by numerous inferential relations. Is your thought that p really what it is without it entailing q? Is your thought that p really intelligible without seeing that it entails q? You might think so, but the referees of your latest paper might think that p doesn’t merit publication without considering q. (3) This leads to the issue of acceptability. Whose inferences or paraphrases count? You might say that p, but perhaps p is not accepted in your own formulation, while the expression of p in your supervisor’s form of words is greeted with great enthusiasm. In a similar spirit, Tim Crane has recently called for a reconsideration of peer review. Even if some of these points are controversial, they should at least suggest that authorship has rather unclear boundaries.

Now the fact that thoughts are often future-directed and have multiple authors has, in turn, a number of further consequences. I’d like to highlight two of them by way of calling for some reconsiderations: a due reconsideration of unclarity and what Eric Schliesser calls “philosophic prophecy”.*

  • A plea for reconsidering unclarity. Philosophers in the analytic tradition pride themselves on clarity. But apart from the fact that the recognition of clarity is necessarily context-dependent, clarity ought to be seen as the result of a process rather than a feature of the thought or its present expression. Most texts that are considered original or important, not least in the analytic tradition, are hopelessly unclear when read without guidance. Try Russell’s “On Denoting” or Frege’s “On Sense and Reference” and you’ll know what I mean. Or try some other classics like Aristotle’s “De anima” or Hume’s “Treatise”. Oh, your own papers are exempt from this problem? Of course! Anyway, we all know this: we begin with a glimpse of an idea. And it’s the frowning of others that either makes us commit it to oblivion or try an improvement. But if this is remotely true, there is no principled reason to see unclarity as a downside. Rather, it should be seen as a typical, if early, stage of an idea that wants to grow.
  • A plea for coining concepts or philosophic prophecy. Simplifying an idea by Eric Schliesser, we should see both philosophy and history of philosophy as involved in the business of coining concepts that “disclose the near or distant past and create a shared horizon for our philosophical future.”* As is well known, some authors (such as Leibniz, Kant or Nietzsche) have at times written decidedly for future audiences rather than for present ones, trying to pave conceptual paths for future dialogues between religions, metaphysicians or Übermenschen. For historians of philosophy in particular this means that history is never just an antiquarian enterprise. By offering ‘translations’ and explanations we can introduce philosophical traditions to the future or silence them. In this sense, I’d like to stress, for example, that Ryle’s famous critique of Descartes, however flawed historically, should be seen as part of Descartes’ thought. In the same vein, Albert the Great or Hilary Putnam might be said to bring out certain versions of Aristotle. This doesn’t mean that they didn’t have any thoughts of their own. But their particular thoughts might not have been possible without Aristotle, who in turn might not be intelligible (to us) without the later developments. In this sense, much if not all philosophy is a prophetic enterprise.

If my thoughts are future-directed and multi-authored in such ways, this also means that I often couldn’t know at all what I actually think, if it were not for your improvements or refinements. This is of course one of the lessons learned from Wittgenstein’s so-called private language argument. But it does not only concern the possibility of understanding and knowing a private language. A fortiori it also concerns understanding our own public language and thought. As I said earlier, I take it to be a rationality constraint that I must agree to some degree with others in order to understand myself. This means that I need others to see the point I am trying to make. If this generalises, you cannot know thyself without listening to others.

___

* See Eric Schliesser, “Philosophic Prophecy”, in Philosophy and Its History, 209.


Why do we share the vulgar view? Hume on the medical norms of belief*

We tend to think that beliefs are opinions that we form in the light of certain evidence. But perhaps most beliefs are not like that. Perhaps most beliefs are like contagious diseases that we catch. – When philosophers talk like that, it’s easy to think that they are speaking metaphorically. Looking at debates around Hume and other philosophers, I’ve begun to doubt that. There is good reason to see references to physiology and medical models as a genuine mode of philosophical explanation. As I hope to suggest now, Hume’s account of beliefs arising from sympathy is a case in point.

Seeing the table in front of me, I believe that there is a table. Discerning the table’s colour, I believe that the table is brown. It is my philosophical education that made me wonder whether what I actually perceive might not be the table and a colour but mental representations of such things. Taking things to be as they appear to us, without wondering about cognitive intermediaries, that is what is often called the vulgar view or naïve realism. Now you might be inclined to think that this view is more or less self-evident or natural, but if you look more carefully, you’ll quickly see that it does need explaining.

As far as I know there is no historical study of the vulgar view, but I found various labels for this view or its adherents: Ockham, for instance, speaks of the “layperson” (laicus), Bacon, Berkeley and Hume of the “vulgar view” or “system”, Reid and Moore of “common sense”. When it is highlighted, it is often spelled out in opposition to a “philosophical view” such as representationalism, the “way of ideas” or idealism. Today, I’d like to briefly sketch what I take to be Hume’s account of this view. Not only because I like Hume, but because I think his account is both interesting and largely unknown. As I see it, Hume thinks that we adhere to the vulgar view because others around us hold it. But why, you might ask, would other people’s views affect our attitudes so strongly? If I am right, Hume holds that deviating from this view – for instance by taking a sceptical stance – will be seen as not normal and make us outsiders. Intriguingly, this normality is mediated by our physiological dispositions. Deviation from the vulgar view means deviation from the common balance of humours and, for instance, suffering from melancholy.** In this sense, the vulgar view we share is governed by medical norms, or so I argue.

The vulgar view is often explicitly discussed because it raises problems. If we want to explain false beliefs or hallucinations, it seems that we need to have recourse to representations: seeing a bent stick in water can’t be a matter of seeing a real stick, but only some sort of representation or idea. Why? Because reference to the (straight) stick cannot explain why we see it as bent. Since the vulgar view doesn’t posit cognitive representations, it cannot account for erroneous perceptions. What is less often addressed, however, is that the vulgar view or realism is not at all plain or empirical in nature. The vulgar view is not a view that is confirmed empirically; rather it is a view about the nature of empirical experience. It’s not that we experience that objects are as they appear. So the source of the vulgar view cannot be given in experience or any empirical beliefs. Now if this is correct, we have to ask what it is that makes us hold this view. There is nothing natural or evident about it. But if this view is not self-evident, why do we hold it and why is it so widespread?

Enter Hume: According to Hume, most of the beliefs, sentiments and emotions we have are owing to our social environment. Hume explains this by referring to the mechanism of sympathy: “So close and intimate is the correspondence of human souls, that no sooner any person approaches me, than he diffuses on me all his opinions, and draws along my judgment in a greater or lesser degree.” (Treatise 3.3.2.1) Many of the beliefs we hold, then, are not (merely) owing to exposure to similar experiences, but to the exposure to others. Being with others affords a shared mentality. In his Essay on National Character, Hume writes: “If we run over the globe, or revolve the annals of history, we shall discover every where signs of a sympathy or contagion of manners, none of the influence of air or climate.” What is at stake here? Arguing that sympathy and contagion explain the sociological and historical facts better, Hume dismisses the traditional climate theory in favour of his account of sympathy. Our mentalities are not owing to the conditions of the place we live in but to the people that surround us.***

Now how exactly is the “contagion” of manners and opinions explained? Of course, a large part of our education is governed by linguistic and behavioural conventions. But at bottom, there is a physiological kind of explanation that Hume could appeal to. Corresponding to our mental states are physiological dispositions, the temperature of the blood etc., the effects of which are mediated through the air via vapours which, in turn, affect the imagination of the recipient. Just as the material properties of things affect our sense organs, the states of other bodies can affect our organs and yield pertinent effects. When Hume speaks of the “contagion” of opinion, it is not unlikely that he has something like Malebranche’s account in mind. According to this account, opinions and emotions can be contagious and spread just like diseases.

In the Search after Truth, Malebranche writes: “To understand what this contagion is, and how it is transmitted from one person to another, it is necessary to know that men need one another, and that they were created that they might form several bodies, all of whose parts have a mutual correspondence. … These natural ties we share with beasts consist in a certain disposition of the brain all men have to imitate those with whom they converse, to form the same judgments they make, and to share the same passions by which they are moved.” (SAT 161) The physiological model of sympathetic contagion, then, allows for the transmission of mental states alluded to above. This is why Hume can claim that a crucial effect of sympathy lies in the “uniformity of humours and turn of thinking”. In this sense, a certain temperament and set of beliefs might count as pertaining to a view shared by a group.

Of course, this mostly goes unnoticed. It only becomes an issue if we begin to deviate from a common view, be it out of madness or a sceptical attitude:  “We may observe the same effect of poetry in a lesser degree; and this is common both to poetry and madness, that the vivacity they bestow on the ideas is not derive’d from the particular situations or connexions of the objects of these ideas, but from the present temper and disposition of the person.” (T 1.3.10.10)

The point is that the source of a certain view might not be the object perceived but the physiological dispositions which, in turn, are substantially affected by our social environment. If this is correct, Hume’s account of sympathy is ultimately rooted in a medical model. The fact that we share the vulgar view and other attitudes can be explained by appealing to physiological interactions between humans.

As I see it, this yields a medical understanding of the normality we attribute to a view. Accordingly, Hume’s ultimate cure for scepticism is not afforded by argument but by joining the crowd and playing a game of backgammon. The supposed normality of common sense, then, is not owing to the content of the view but to the fact that it is widespread.

____

* This is a brief sketch of my Hume interpretation defended in my book on Socialising Minds: Intersubjectivity in Early Modern Philosophy, the manuscript of which I’m currently finalising. – Together with Evelina Miteva, I also co-organise a conference on “Medicine and Philosophy”. The CFP is still open (till December 15, 2018): please apply if you’re interested.

** Donald Ainslie makes a nice case for this in his Hume’s True Scepticism, but claims that Hume’s appeal to humoral theory might have to be seen as metaphorical. — I realise that proper acknowledgements to Humeans would take more than one blog post in itself:) Stefanie Rocknak’s work has been particularly important for getting to grips with Hume’s understanding of the vulgar view. – Here, I’m mainly concerned with the medical model in the background. Marina Frasca-Spada’s work has helped with that greatly. But what we’d need to understand better still is the medical part in relation to the notion of imagination, as spelled out in Malebranche, for instance. Doina Rusu and Koen Vermeir have done some great work on transmission via vapours, but the picture we end up with is still somewhat coarse-grained, to put it mildly.

*** I am grateful to Evelina Miteva for sharing a preliminary version of her paper on Climata et temperamenta, which provides a succinct account of the medieval discussion.  Hume should thus be seen as taking sides in an ongoing debate about traits and mentalities arising from climate vs. arising from sympathy.

In facts we trust? More on the war against education

Last week we learned that the Central European University is being forced out of Hungary. While a lot of other bad things have happened since then, this event confused me more than others. Why do people let this happen? And why are acts such as the abolishment of Gender Studies met with so little resistance by the scientific community? As far as I’m concerned, universities are global institutions. Expelling a university or abolishing a discipline should worry every democratic citizen in the world. But before I get onto my moral high ground again, I want to pause and understand what it actually is that I find so shocking. Perhaps you all find this trivial, but it’s only beginning to dawn on me that it is trust we’re lacking. So I think the reason that we (or too many of us) let these things happen is that we have lost trust in institutions like universities. Let me explain.

In my last piece, I alluded to how this war against education exploits misguided beliefs about the fact-value distinction and postmodernism. I still think so, but I couldn’t really pin down what held my ideas together. I’m not sure I do now, but in focussing on the notion of trust I have more of a suspicion than last week. Loss of trust is widespread: we distrust banks, insurance companies, many politicians, and perhaps even our neighbours. That seems at least sometimes reasonable, because it often turns out that such institutions or their representatives do not act in our interest. And if they say they do, they might be lying. This distrust has probably grown so much that nobody even assumes that certain institutions would act in anyone’s interest but their own.

In this respect, I guess universities and the disciplines taught there are perceived as a mixed bag. On the one hand, the commitments in the sciences and humanities seem to foster a certain trust and respectability. On the other hand, not really or not anymore. I don’t know whether this is owing to the fact that universities are increasingly run like businesses, but the effect is that it’s kind of ok to distrust academics. They might be lobbying like anyone else. They might have an agenda over and above the noble pursuit of truth.

Once you embrace this idea, you immediately see why the bashing of “experts” worked so well during the Brexit campaign. You also understand why conspiracy theories begin to be seen as on a par with scientifically established results. If everyone might just be lobbying, then the very idea of academic expertise is undermined. Just today, I read a comment on FB claiming that “we don’t need any higher education, now that we’ve got free access to information via the internet”. You might be shaking your head like me, but once the trust is gone, why should anyone be more respectable than anyone else?

As if all of this weren’t disheartening enough, it gets worse. I guess most of us have of course noticed something to this effect. When we write grant applications or even papers we actually do engage in lobbying. We say: look, my evidence is better, my thought more thorough, my method more reliable! Once we are used to such tropes, it’s a small step to saying that STEM fields are better because they deal with facts rather than ideologies, unlike, say, Gender Studies. This simplistic opposition between facts and ideologies or values is the straw that some scientists clutch at in order to promote their own trustworthiness. But while some might be thinking that they are acting in the name of Truth, they will ultimately be perceived as just more people doing business and praising their goods.

You might want to object that it has always been (a bit) like this. But, I reply, back in the day universities were not perceived as businesses, nor academics as entrepreneurs. The upshot is that the competition for resources in academia makes us claim not only that we are advancing a project. It’s a short step from there to saying that we are engaged in a different, better and ultimately more trustworthy kind of project than the colleague next door. In doing so, we cross the line from advancement to difference and thereby undermine the common ground on which we stand. We insinuate that we are more trustworthy than our colleagues from other disciplines, and so we undermine the trust that all academics collectively need in order for a university to remain a trustworthy institution.

Our competition and lobbying threaten to undermine trust in institutions of higher education. It is not surprising, then, that politicians who claim just that find so much approval. If academics don’t treat one another as trustworthy, why should universities be trustworthy institutions? Why should politicians even pretend to trust experts, and why should people outside academia trust them? Yes, I know there are answers. But if we want to counter anti-democratic anti-intellectualism, we need to maintain or rather restore trust in universities. This requires two things: we must stop undermining each other’s reputation; and we must counter the belief that a university is a business just like any other.

The war against education

Today is one of the darkest days in the history of Europe. The press reports that the Central European University in Budapest is being forced to move out of Hungary despite fulfilling all the demands of the Hungarian government. We have seen this coming for quite some time. That this has not been prevented is a disaster. Or so I think. Having had the privilege of living and studying in Budapest for one and a half years during the nineties, I am personally moved: I am sad and angry. While I know that there are many people – among them many friends and colleagues – who think and feel as much, I am not sure that everyone is on the same page about the meaning of this event. I’m not well versed in the details of politics. But I am convinced that this is not a ‘Hungarian thing’ alone. I see it rather as part of a war on education and, as such, a war against democracy. While I’m not in the most clearheaded mood, I still want to offer some considerations I have been mulling over recently.

The war on education is a war against democracy. You don’t need to be a convinced Habermasian to see why (higher) education is a necessary element of democracies. Informed discourse and participation cannot exist without education. Threatening teachers, lecturers and students is something we’re witnessing everywhere in Europe and elsewhere. While there is much to be said about the current Hungarian government, this move is not unique. Turkey, the US, Romania, you name it: similar moves are being made there too. As a German, I am acutely aware that the far-right party (AfD) is currently attempting to foster a climate of denunciation, too. That’s peanuts perhaps compared to what we witnessed today, but it strikes me as part of a concerted strategy. This strategy comprises many aspects: intimidation is perhaps the obvious one; casualisation of the (academic) workforce, while pretending universities are businesses, might seem a subtler way of destroying education. If we want to fight this strategy, we have to undermine it everywhere.

As I see it, this war exploits problematic beliefs in the fact-value distinction. This second concern is more involved and difficult (for me) to articulate. But here goes: after the last US elections, there was a lot of talk about what is called “postmodernism”. Suddenly, postmodernism and relativism were held responsible for the “loss” of truth and many other things. Moreover, their proponents were held responsible for the rise of far-right opinions and politics. Perhaps trying to be defensive, some people even embraced the slogan that “science has no agenda”, insinuating that science deals with facts and leaves values to politics. I think I’ve never read so much crap in my life. Now, there is a lot to be said about why the fact-value distinction does not amount to a dichotomy, but this is for another day.*

Anyway, I guess we’re not doing ourselves a favour if we sacrifice discourse on such matters to fast-food slogans like “science has no agenda”. What I have in mind today is the fact that many people who should know better applauded when the Hungarian government targeted Gender Studies. Since this discipline is currently one of the favoured examples of the supposed effects of postmodernism, some people seemed not to notice that its abolition was just another attack on (academic) freedom. While I don’t think that we have to agree about the fact-value distinction, I sincerely hope we can agree that this is a matter of free discourse. No one should applaud when a government abolishes an academic discipline.

Finally, it is a great thing that Vienna will host the CEU in the future. But universities are not virtual places that can be moved around without loss. They are part of what makes a place, a city, a country, a continent, the world and its people what they are. Let’s not forget about the people who cannot move along. Being placed in Europe, we are in this together.

____

* Here is a bit more on this issue.


Do ideas matter to philosophy? How obsession with recognition blocks diversity

When suffering from writer’s block, I spent much of my time in the library browsing through books that were shelved beside the ones I was originally looking for. Often these were books that didn’t have any traces of use: no one, it seemed, had read them, and no one cited them. The names of the authors were often unfamiliar, and a search confirmed that they sometimes were no longer in academia. Funnily enough, these books often contained the most refreshing and original ideas. Their approach to topics or texts was often unfamiliar to me, but the effort of figuring out what they were arguing was time well spent. Nevertheless, my attempts to bring these books up in discussions weren’t picked up on. People continued to cite the more familiar names. Why are we letting this happen?

Most of you probably know the following phenomenon: during a discussion someone proposes an idea; the discussion moves on. Then an established person offers almost a repetition of the proposed idea and everyone goes: “oh, interesting.” Put as a rule of thumb: prestige gets you attention; interesting ideas as such, not so much. There is a gendered version of this phenomenon, too: if you want an interesting idea authored by a woman to be listened to, better have a man repeat it. Now, an important aspect of this phenomenon is that it seems to incentivise relating our philosophical work to that of prestigious figures. In other words, we will make sure that what we say picks up on what established figures say. As Kieran Healy has shown, citation patterns confirm this. Cite David Lewis and you might join the winning in-group. We hope to get recognition by citing established people. Now you might just shrug this off as an all-too-human trait. But what I’d like to argue is that this behaviour crucially affects how we evaluate ideas.

I think Healy’s citation patterns show that we are inclined to value ideas that are either closely related (in content) to those of established figures or presented in a similar manner or with a similar method. Put simply: you’re more likely to get recognition if you imitate some “big shot” in content or method. Conversely, if you don’t imitate “big shots”, your work won’t be valued. Why is this important? My hunch is that this practice minimises diversity of content and method. Philosophers often like to present themselves as competitors for the best ideas. But if we track value through recognition, there is no competition between ideas.

Now if this is the case, why don’t we see it? My answer is that we don’t recognise it because there are competing big shots. And the competition between big shots makes us believe that there is diversity. Admittedly, my own evidence is anecdotal. But how could it not be? When I started out as a medievalist, the thing to do to get recognition was to prepare a critical edition of an obscure text. So I learned a number of strange names and techniques in this field. However, outside of my small world this counted for, say, not much. And when the German Research Foundation (DFG) stopped funding such projects, a lot of people were out of a job. Moving on to other departments, I quickly learned that there was a different mainstream, and that mainstream didn’t favour editions or work on obscure texts. Instead, you could make a move by writing on a canonical figure whose works were already edited. Just join some debate. Still further outside of that context you might realise that people don’t value history of philosophy anyway. But rather than seeing such different approaches as complementary, we are incentivised to compete to get through with one of them.

However, while competition might nourish the illusion of diversity, the competition for financial resources ultimately blocks diversity because it will lead to one winner. And the works and books that don’t follow patterns established in such competitions seem to fall through the cracks. There is more evidence, of course, once we begin to take an international perspective: there are people who write whole PhD dissertations that will never be recognised outside of their home countries. So they have to move to richer countries and write a second PhD to have any chance on the international market. In theory, we should expect such people to be the best-trained philosophers around: they often have to familiarise themselves with different approaches and conventions, often speak different languages, and are used to different institutional cultures. But how will we evaluate their ideas? Will they have to write a bit like David Lewis or at least cite him sufficiently in order to get recognition?

Now you might want to object that I’m conflating cause and effect. While I say that we assign value because of prestige, you might argue that things are the other way round: we assign prestige because of value. – If this were the case, I would want to see some effort to at least assess the ideas of those who don’t align their work with prestigious figures. But where do we find such ideas? For reasons stated above, my guess is that we don’t find them in the prestigious departments and journals. So where should we look for them?

My hunch is that we’ll find true and worthwhile diversity in the lesser-known departments and journals. So please begin: listen to the students who don’t speak up confidently, read the journals and books from publishers whose names you don’t recognise. Listen to people whose native language isn’t English. And stop looking for ideas that sound familiar.

Against leaving academia

For quite some years, newspapers and the academic blogosphere have been packed with advice for those considering leaving academia. There are practical tips on how to enter the non-academic world or pleas against the stigma that one might see in “giving up”, etc. Many pieces of such advice are very helpful indeed and imparted out of the best intentions. However, I am troubled to see that there is also an ever-growing number of pieces that advise leaving academia or at least imply that it is the best thing one can do. The set of reasons is always the same: academia is bad, bad, bad. It is toxic, full of competition, a threat to one’s health, and exploitative. On a famous philosophy blog I even read that it is “unethical” to encourage students to stay in academia. In what follows, I’d like to take issue with such claims and present three reasons against leaving academia.

Given my own academic biography, I’d be the last person to underestimate the downsides of academia. Surviving, let alone “making it”, is down to sheer luck. All your merits go nowhere unless you’re in the right place at the right time. However, that does not mean (1) that we don’t need academics, (2) that academia is worse than any other place or (3) that work in academia can’t be fun. Let’s look at these points in turn.

(1) We need academics. – Believe it or not, even though politicians of certain brands, taxpayers and even one’s parents might ceaselessly claim that most academic work and the humanities in particular are useless, the contrary is true. Discourse and reflection are an integral part of democracies. Academia is designed to enable just that: research and higher education are not just some niches; they are the beating heart of democratic cultures across the globe. Of course, our actual daily practice might often look somewhat different. But there is more than one response to the fact that the nobler ends of our work are often under threat, from inside and outside. The alternative to leaving is attempting to improve academia. That might be quite difficult. But if masses of good people keep leaving academia, it will lead to increasing corrosion and undermine our democracies. To be sure, ultimately anyone’s personal reasons are good enough, but I find the general advice in favour of leaving slightly (if often unintentionally) anti-democratic.

(2) Academia is part of the rest of the world. – Academia is often called bad names. We are living in an ivory tower, and some philosophers never even leave their armchairs. I often talk to students who have been advised to pursue their “plan b” before they have really got started with their studies. They unanimously seem to be told that “the world outside” or the “normal world” is better. It seems that academics have a lot of special problems that don’t exist outside, or at least not to the same degree. Again, I do not wish to downplay our problems, far from it. I truly believe that there are a number of issues that need urgent attention. But then again I wonder why leaving should help with that. Many problems in academia are owing to (bad) working conditions and policies. But why would anyone think that these very same problems do not exist in the rest of the world? Plan b won’t lead to some sort of paradise. The same conditions apply to the workforce inside and outside the ivory tower. In fact, I know quite a number of people who have non-academic jobs. By and large, their conditions don’t strike me as much different. Competition, (mental) health issues, exploitation, harassment, misogyny, bullying, you name it – all of these things abound elsewhere, too. So if you want to leave, look around first: you might find the same old same old in disguise.

(3) Academic work can be fun. – We’re often told that our kind of work causes a lot of suffering (not solely in our recipients). Again, I don’t want to downplay the fact that a lot of things we are asked to do might feel quite torturous. But when I listen to myself and other people describing what it actually is that makes the work so troublesome, it is often not the actual work itself. Writing might be hard, for instance, but the unpleasant feelings are not owing to the writing, but to the idea of it being uncharitably received. Similarly, interacting with fellow students or fielding questions in the q & a after a talk might be stressful, but as I see it, the stress is often created by (the fear of) aggressive standards of interaction. Talking through the same material with an attentive friend would not, I guess, trigger the same responses. Again, my advice would not be leaving but working towards improving the standards of interaction.

You might still say that all of these considerations are cold comfort in view of the real suffering going on. I won’t deny that this is a possibility. In fact, academia can be full of hidden or overt cruelties, and people might have very good reasons indeed to leave it. I don’t see doing so as a failure or as wrong. What I find problematic is the current trend of giving such advice on a general basis. But of course, for some, this advice might still help them embrace a good decision or an inevitable step. What ultimately encouraged me to write this post today are my students, two of whom came to me this week to tell me that, contrary to their previous expectations, they found their fellow students ever so supportive, charitable and encouraging. Where they were warned to fear competition, they were actually met with the friendliest cooperation. I don’t hear this all too often, but who would want to let this hopeful generation down?