Against history of philosophy: shunning vs ignoring history in the analytic traditions

Does history matter to philosophy? Some time ago I claimed that, since certain facts about concepts are historical, all philosophy involves history to some degree (see here and here). But this kind of view has been and continues to be attacked by many. The relation to history is a kind of philosophical Gretchenfrage: a question whose answer reveals where you stand. If you think that philosophy is a historical endeavour, you’ll be counted among the so-called continental philosophers. If you think that philosophy can be done independently of (its) history, you’ll be counted among the analytic philosophers. Today, I’ll focus on the latter, that is, on analytic philosophy. What is rarely noted is that the reasons given against history are rather different and to some extent even contradictory. Roughly put, some think that history is irrelevant, while others think that it is so influential that it should be shunned. In keeping with this distinction, I would like to argue that the former group tends to ignore history, while the latter group tends to shun it. I believe that ignoring history is a relatively recent trend, while shunning history is foundational for what we call analytic philosophy. But how do these trends relate? Let’s begin with the current ignorance.

A few years ago, Mogens Laerke told me that he once encountered a philosopher who claimed that it wasn’t really worth going back any further in history than “to the early Ted Sider”. Indeed, it is quite common among current analytic philosophers to claim that the history of philosophy is wholly irrelevant for doing philosophy. Some educational exposure might count as good for preventing us from reinventing the wheel or for finding the odd interesting argument, but on the whole the real philosophical action takes place today. Various reasons are given for this attitude. Some claim that philosophy aims at finding the truth and that truth is non-historical. Others claim that you don’t need any historical understanding to do, say, biology or mathematics, and that, since philosophy is a similar endeavour, it’s equally exempt from its history. I’ll look at these arguments some other day. But note that they have to rely on the separability of historical factors from what is called philosophy. Accordingly, this position denies any substantial impact of history on philosophy. Whatever the merit of this denial, it has enormous political consequences. While the reasons given are often dressed up as apolitical, they have serious repercussions for the shape of philosophy in academic institutions. In Germany, for instance, you’ll hardly find a department that has a unit or chair devoted to the history of philosophy. Given the success of analytic practitioners through journal capture etc., history is a marginalised and merely instrumental part of philosophy.

Yet, despite the supposed irrelevance of history, many analytic philosophers do see themselves as continuous with a tradition that is taken to begin with Frege or Russell. To portray contemporary philosophical work as relevant, it is apparently not enough to trust in the truth-conduciveness of the current philosophical tools on display. Justifying current endeavours has to rely on some bits and bobs of history. For some colleagues, grant agencies and students it’s not sufficient to point to the early Ted Sider to highlight the relevance of a project. While pointing to early analytic philosophy is certainly not sufficient by itself, at least some continuity in terminology, arguments and claims is required. But do early analytic philosophers share the current understanding of history? As I said in the beginning, I think that many early figures in that tradition endorse a rather different view. As late as 1947, Ryle writes in a review of Popper in Mind, the top journal of analytic philosophy:

“Nor is it news to philosophers that Nazi, Fascist and Communist doctrines are descendants of the Hegelian gospel. … Dr Popper is clearly right in saying that even if philosophers are at long last immunized, historians, sociologists, political propagandists and voters are still unconscious victims of this virus …”*

Let me single out two claims from this passage: (1) Hegelian philosophy shaped pervasive political ideologies. (2) Philosophy has become immune to such ideologies. The first claim endorses the idea that historical positions of the past are not only influential for adherent philosophers but also shape political ideologies. This is quite different from the assumption that history is irrelevant. But what about the second claim? The immunity claim seems to deny the influence of history. So on the face of it, the second claim seems similar to the idea that history is irrelevant. This would render the statements incongruous. But there is another reading: only a certain kind of philosophy is immune to the philosophical past and the related ideologies. And this is non-Hegelian philosophy. The idea is, then, not that history is irrelevant but, on the contrary, that it is so relevant that certain portions of the past should be shunned. Analytic philosophy is construed as the safe haven, exempt from historical influences that still haunt other disciplines.

Ryle is not entirely clear about the factors that would allow for such immunity. But if claim (2) is to be coherent with claim (1), this might mean that we are to focus on certain aspects of philosophy and that we should see ourselves in the tradition of past philosophers working on these aspects. If this is correct, Ryle is not claiming that philosophy is separate from history and politics, but that it can be exempt from certain kinds of history and politics. As Akehurst argues**, this tradition was adamant about shunning German and British idealism as well as many figures who seemed to run counter to certain ideas. Whatever these precise ideas are, the assumption that (early) analytic philosophy is simply a-historical or a-political is a myth.

Whatever one thinks of Ryle’s claims, they are certainly expressive of a core belief in the tradition. At its heart we see a process of shunning with the goal of reshaping the canon. The idea of being selective about what one considers the canon is of course not the prerogative of analytic philosophy. However, what seems to stand out is the assumption of immunity. While the attempt to immunise oneself or to counter one’s biases includes the idea that one might be in the grip of ideologies, the idea that one is already immune seems to be an ideology itself.

Now how does this shunning relate to what I called today’s ignorance? For better or worse, I doubt that these stances are easily compatible. At the same time, it seems likely that the professed ignorance is an unreflective outcome of the shunning in earlier times. If this is correct, then the idea of non-historicity has been canonised. In any case, it is time to reconsider the relation between analytic philosophy and the history of philosophy.***

____

* Thanks to Richard Creek, Nick Denyer, Stefan Hessbrüggen, Michael Kremer, and Eric Schliesser for some amusing online discussion of this passage.

** See T. Akehurst, The Cultural Politics of Analytic Philosophy: Britishness and the Spectre of Europe, London: Continuum 2010, esp. 58-60. I am grateful to Catarina Dutilh-Novaes for bringing this book to my attention. See also his brief blog post focussing on Russell.

*** Currently, Laura Georgescu and I are preparing a special issue on the Uses and Abuses of History in Analytic Philosophy for JHAP. Please contact us if you are interested in contributing!

Networks and friendships in academia

Recently, I came across an unwelcome reminder of my time as a graduate student and my early-career days. It had the shape of a conference announcement that carries all the signs of a performative contradiction: it invites you by exclusion. What can we learn from such contradictions?

The announcement invites early-career people to attend seminars that run alongside a conference whose line-up is already fixed and seems to consist mainly of a circle of quite established philosophers who have been collaborating closely for a long time. Since the invitation is not presented as a “call”, it’s hard to feel invited in the first place. Worse still, you’re not asked to present at the actual conference but to attend “seminars” that are designed “to motivate students and young scholars from all over the world to do research in the field of medieval philosophy and to help them learn new scientific methodology and develop communication skills.” If you’re still interested in attending, you’ll look in vain for time slots dedicated to such seminars. Instead, there is a round table on the last day, scheduled for the same time the organising body holds their annual meeting, thus probably without the established scholars.* You might say there is a sufficient number of events, so just go somewhere else. But something like the work on the “Dionysian Traditions” is rarely presented. In fact, medieval philosophy is often treated as a niche unto itself, so the choice is not as vast as for, say, analytic metaphysics.

If you think this is problematic, I’ll have to disappoint you. There is no scandal lurking here. Alongside all the great efforts within an increasingly inclusive infrastructure of early career support, things like that happen all the time, and since my time as a professor I have myself been accused of organising events that at least sound “clubby”. Of course, I’m not saying that the actual event announced is clubby like that; it’s just that part of the description triggers old memories. When I was a graduate student, in the years before 2000, at least the academic culture in Germany seemed to be structured in a clubby fashion. By “structured” I mean that academic philosophy often seemed to function as a simple two-class system, established and not-established, and the not-established people had the status of onlookers. They were, it seemed, invited to watch the bigger figures and to learn by exposure to greatness. But make no mistake; this culture did not (or not immediately) come across as exclusionary. The onlookers could feel privileged for being around. Firstly, even if this didn’t feel like proper participation, it still felt like the result of a meritocratic selection. Secondly, the onlookers could feel elated, for there was an invisible third class, i.e. the class of all those who either were not selected or didn’t care to watch. The upshot is that part of the attraction of academia worked by exclusion. As an early career person, you felt like you might belong, but you were not yet ready to participate properly.

Although this might come across as a bit negative, it is not meant that way. Academia never was a utopian place outside the structures that apply in the rest of the world. More to the point, the whole idea of what is now called “research-led teaching” grew out of the assumption that certain skills cannot be taught explicitly but have to be picked up by watching others, preferably advanced professionals, at work. Now my point is not to call out traditions of instructing scholars. Rather, this memory triggers a question that keeps coming back to me when advising graduate students. I doubt that research-led teaching requires the old class system. These days, we have a rich infrastructure that, at least on the surface, seems to counter exclusion. But have we overcome this two-class system, and if not, what lesson could it teach us?

Early career people are constantly advised to advance their networking skills and their network. On the whole, I think this is good advice. However, I also fear that one can spend a quarter of a lifetime on proper networking without realising that a network as such does not help. Networks are part of a professionalised academic environment. But while they might help in exchanging ideas and even offer frameworks for collaborative projects, they are not functional in themselves. They need some sort of glue to hold them together. Some people believe that networks are selective by being meritocratic. But while merit or at least prestige might often belong to the necessary conditions of getting in, it’s probably not sufficient. My hunch is that this glue comes in the shape of friendship. By that I don’t necessarily mean deeply personal friendships but “academic friendships”: people like and trust each other to some degree, and build on that professionally. If true, this might be an unwelcome fact because it runs counter to our policies of inclusion and diversity. But people need to trust each other and thus also need something stronger than policies.

Therefore, the lesson is twofold: On the one hand, people need to see that sustainable networks require trust. On the other hand, we need functional institutional structures both to sustain such networks and to counterbalance the threat of nepotism that might come with friendship. We have or should have such structures in the shape of laws, universities, academic societies and a growing practice of mentoring. To be sure, saying that networks are not meritocratic does not mean that there is no such thing as merit. Thus, such institutions need to ensure that processes of reviewing are transparent and in keeping with commitments to democratic values as well as to the support of those still underrepresented, whether this concerns written work, conferences or hiring. But the idea that networks as such are meritocratic makes their reliance on friendships invisible.

Now while friendships cannot be forced, they can be cultivated. If we wish to counter the pernicious class system and stabilise institutional remedies against it, we should advise people to extend (academic) friendships rather than competition. Competition fosters the false idea that getting into a network depends on merit. The idea of extending and cultivating academic friendship rests on the idea that merit in philosophy is a collective effort to begin with and that it needs all the people interested to keep weaving the web of knowledge. If meritocratic practices can be promoted at all, it is in this way, not by exclusion. You might object that we are operating with limited resources, but if demand is on the rise, we have to demand more resources rather than competing for less and less. That said, cultivating academic friendships needs to be counterbalanced by transparency. Yet even where we continue to fail, friendships are not only the glue of networks, but might be what keeps you sane when academia seems to fall apart.

Postscriptum I: So what about the conference referred to above? The event is a follow-up to a conference in 1999, and quite a few of the former participants are present again. If it was, as it seems, based on academic friendships, isn’t that a reason to praise it? As I said and wish to emphasise again, academic friendships without institutional control do not foster the kinds of inclusive environments we should want. For neither can there be meritocratic procedures without the inclusion of underrepresented groups, nor can a two-class separation of established and not-established scholars lead to the desired extension of academic friendships. In addition to the memories triggered, one might note other issues. Given that there are comparatively many women working in this field, it is surprising that only three women are among the invited speakers. That said, the gendered conference campaign has of course identified understandable reasons for such imbalances. A further point is the fact that early career people wishing to attend have roughly two weeks after the announcement to register and apply. There is no reimbursement of costs, but one can apply for financial support after committing oneself to participate. – In view of these critical remarks, it should be noted again that this conference represents the status quo rather than an exception. The idea is not to criticise that academic friendships lead to such events, but rather to stress the need for rethinking how these can be joined with institutional mechanisms that counterbalance the downsides of tightening our networks.

___

* Postscriptum II (14 March 2019): Yes. Before writing this post, I sent a mail to the S.I.E.P.M. inquiring about the nature of the seminars for early career people. I asked:

(1) Are there any time slots reserved for this or are the seminars held parallel to the colloquium?
(2) What is the “new scientific methodology” referred to in the call?
(3) And is there any sort of application procedure?

The mail was forwarded to the local organisers and prompted the following reply:

“Thank you for interest in the colloquium on the Dionysian Traditions!

The time for the seminars is Friday morning. The papers should not be longer than 20 minutes. You should send us a list with titles, and preferably – with abstracts too. We have a strict time limit and not everyone may have the opportunity to present. Travel and accommodation costs are to be covered by the participants.

The new scientific methodology is the methodology you deem commensurate with the current knowledge about the Corpus.”

Apart from the fact that the event runs from a Monday to a Wednesday, the main question about the integration and audience of these seminars remains unanswered. Assuming that “Friday” is Wednesday, the seminars coincide with the announced round table, to be held at the same time at which the bureau of S.I.E.P.M. holds their meeting. (This was confirmed by a further exchange of mails.) But unlike the announcement itself, the mail now speaks of “papers” that the attendees may present.

Fake news, faith, and the know-it-all

When working on Ockham’s discussion of the distinction between faith and reason, I encountered an interesting kind of sentence, the so-called “neutral proposition” (propositio neutra). A common example of such a sentence is “the number of stars is even.” It is neutral in that we have no grounds for assenting or withholding assent. We grasp what it means, but we are neither compelled to believe it nor to disbelieve it. (Please note: “neutral” doesn’t necessarily mean that the proposition is neither true nor false; it just means that we currently have no way of figuring out whether it’s true or false.)* In fact, many important things we believe seem to have that status, at least at the time of learning about them. We believe that we have been born in a certain year, that the earth is round and so on. Most of us learn such things through the testimony of others without ever checking them. Although the context of the discussion in Ockham is theological, his ideas generalise: there are many things we do and need to take on faith. I think that this fact is crucial but underrated in the discussion of fake news.** A very widespread response to the phenomenon of fake news is to recommend fact checking. I think this is one-sided and thus problematic. When we have the suspicion that some news item is fake news, we often are in a position where we cannot (immediately) assess the information. In other words, much of the time the news is a collection of neutral propositions for us. In what follows, I’d like to suggest that we need to consider the role of faith or trust, as well as the related role of (intellectual) humility, if we want to tackle this issue.

We don’t only learn things through others; we also learn early on that it is vital to trust others and trust what they say. Trust is the glue that holds our societies and our lives together. It’s not surprising, then, that we have a tendency to believe everything we perceive and read. Yes, every now and then we might step back and look again, but our default mode is to believe.*** So even if, strictly speaking, a neutral proposition comes our way, we will embrace it. Read the following sentence: “The majority of people now living in Prenzlauer Berg (in Berlin) have migrated there from Southern Germany.” Do you believe it? Of course, the current context of discussion might make you doubtful, but you’d probably read on without hesitation if this were a newspaper article on urban life in Berlin. Your response would not be to fact-check but to believe, unless something triggers a doubt.

This psychological fact, the “bias to believe”, has a number of consequences. We are inclined to believe things. If this is the glue of our lives, then any dysfunction of that glue will hurt us. We will be hurt if our trust is exploited. More importantly perhaps, our pride will be hurt if we are found out to have assented to a lie or even passed on a piece of false information. We will be called naïve, and people will reduce their trust in us. Do you like to be called naïve? – I don’t. So what do I do? That depends on my emotional and other resources. Was it a one-off? Were you just told that Father Christmas doesn’t have a beard? That’s fine. But what if your whole belief system is branded as a result of naivety? You certainly will feel excluded, to put it mildly.

Let’s shift the focus for a second: how will you feel if you are a religious person who is told, again and again, that there is no God, that atheism is the way to go and that religion is anti-science? It is often said that matters of religion are a private issue. Psychologically speaking, this cannot be right. If trusting and believing are the glue of society, then attacks on our beliefs will hurt and upset individuals and, by extension, our society. Of course, many people have come to live with that. For many, it’s part of the package I guess. We can be pluralists. But the direct confrontation might still hurt. And if we can choose our company, we might be inclined to stick with those who respect our beliefs and perhaps host a quiet resentment towards those who feel justified in attacking us.

The point I want to return to now is that criticism of our beliefs often not only concerns individual convictions but also targets the trust we have in others, the trust on which we were inclined to embrace certain beliefs. Religion is just one of many possible examples. Most of our beliefs are deeply entrenched in our daily actions and partly shared conventions: be they religious, political, aesthetic etc. But the example of religion is a helpful one, since there is hardly any field in which people seem to feel so justified in self-righteously criticising others, and this despite the fact that most beliefs in this area are not attacked because they could be shown to be false. Most beliefs in this realm are a matter of faith. They are what I introduced as neutral propositions, to which we are neither compelled to assent nor to dissent. There is a huge difference between the agnostic claim that we do not know about these matters and the more invested claim that certain beliefs are false. In some cases, such a stance might be justified; in other cases, we might just act like a know-it-all. My hunch is that the latter stance is fairly widespread and causes much more controversy than is justified by the evidence the participants in disagreements can invoke.

If we are criticised for holding certain beliefs, this might of course be justified. There is nothing wrong with that. What I am concerned with is beliefs that are based on neutral propositions. Of course you might argue that one should only believe what one has evidence for. Good luck with that! – If we are dealing with information that we can’t assess, we have three options: we can embrace it (which is what we are inclined to do); we can (try to) reject it; or we can acknowledge – hold your breath, drumroll: we can acknowledge that we do not know whether it’s true or not. The virtue I am referring to is known as (intellectual) humility. Of course, we can do what we like if we are by ourselves, scrolling through the web or listening quietly. But if we are in a discussion, our choice matters. Do we want to criticise? By all means, if it is justified. But more often than not our own means are limited: we have stored whole systems of beliefs, without ever checking whether they are true. If we are not sure, it might be advisable to just acknowledge that. Criticising others in their beliefs is probably going to hurt them, more or less. The point is not to stop being critical; the point is to figure out what we are critical towards. Instead of saying, “you are mistaken”, we can also say, “I don’t know whether that’s right or not.” You can then establish whether and how that can be checked.

Now of course this does not mean that we should try and check all the beliefs we hold. Luckily, we have a division of labour. My parents know my birthday; so I don’t need to work it out by going to archives. There are a number of authorities we rely on. “Relying on authorities” sounds naïve perhaps, but that’s what we do when we trust others. If we have disagreements with others about politics or religion, this is often owing to the fact that we rely on different authorities or that we prioritise different authorities. Authorities come in various shapes. Often we don’t even notice them, because they have the form of deeply entrenched ideologies, promoting misogyny, racism and other forms of dehumanisation. Equally often they might concern ideas about how the world works, about what is valuable, what is useful etc. Beliefs about all such matters can be spread by everyone, with quite different epistemic status. In some matters, we trust our friends more than others, even if they might lack epistemic credentials. Criticising others often involves criticising their authorities. Again, that’s fine and often vital, but it’s equally important to be aware that we are doing it, because it concerns the glue of trust that potentially holds us together or keeps us apart if we disagree.

Calling out “fake news” is a way of criticising such authorities. Now what should we do in cases of disagreement? Criticism is of course important. But it is equally important to see that we are interacting with others whose beliefs are at stake. Even if we suspect that the politicians or the news venues in question are merely bullshitting, the believers are inclined to take them at their word; they trust their authorities. Now the point is not to be nice to people who believe bullshit; the point is to acknowledge that they have reasons to believe that bullshit. Calling believers (of bullshit or whatever) stupid will only deepen the rupture of trust. What’s crucial to see is that they will see our criticism in the same way as we see their beliefs. What we should establish in such disagreements, then, is whether we might perhaps be dealing with a neutral proposition. That might actually reveal a commonality between us and our interlocutor. We might both be in a position in which we don’t know for sure what’s going on. If we can establish that, we might gain more ground by scratching our heads than by insisting we’re on the right side.

___

* This wasn’t really clear in the original post. Thanks to CJ Sheu for the fruitful discussion.

** Part of my reflections have been triggered by an excellent new book by Romy Jaster and David Lanius. Get it, if you have some German.

*** See for instance Eric Mandelbaum, Thinking is Believing.

The competition fallacy

“We asked for workers. We got people instead.” Max Frisch


Imagine that you want to buy an album by the composer and singer Caroline Shaw, but they sell you one by Luciano Pavarotti instead, arguing that Pavarotti is clearly the more successful and better singer. Well, philosophers often make similar moves. They will say things like “Lewis was a better philosopher than Arendt” and even run polls to see how the majority sees the matter. Perhaps you agree with me in thinking that something has gone severely wrong in such cases. But what exactly is it? In the following I’d like to suggest that competitive rankings are not applicable when we compare individuals in certain respects. This should have serious repercussions for how we think about the job market in academia.

Ranking two individual philosophers who work in fairly different fields and contexts strikes me as pointless. Of course, you can compare them, see differences and agreements, ask about their respective popularity and so forth. But what would Lewis have said about the banality of evil? Or Arendt about modal realism? – While you might have preferences for one kind of philosophy over another, you would have a hard time explaining who the “better” or more “important” philosopher is (irrespective of said preferences). There are at least three reasons for this: Firstly, Arendt and Lewis have very few points of contact, i.e. hardly any straightforward common ground on which to plot a comparison of their philosophies. Secondly, even if they had more commonalities or overlaps, the respective understandings of what philosophy is and what good philosophy should accomplish can be fairly different. Thirdly and perhaps most importantly, philosophies are always individual and unique accomplishments. Unique creations are not something one can have a competition about. If we assume that there is a philosophical theory T1, T1 is not the kind of thing that you can compete over being better at. Of course, you can refine T1, but then you’ve created a refined theory T2. Now you might want to claim that T2 can be called better than T1. But what would T2 be, were it not for T1? Relatedly, philosophers are unique. The assumption that what one philosopher does can be done better or equally well by another philosopher is an illusion fostered by professionalised environments. People are always unique individuals, and their ideas cannot be exchanged salva veritate.*

Now since there are open job searches (sometimes even without a specification of the area of specialisation) you could imagine a philosophy department in 2019 having to decide whether they hire Lewis or Arendt. I can picture the discussions among the committee members quite vividly. But in doing such a search they are doing the same thing as the shop assistant who ends up arguing for Pavarotti over Shaw. Then words like “quality”, “output”, “grant potential”, “teaching evaluations”, “fit” … oh, and “diversity” will be uttered. “Arendt will pull more students!” – “Yeah, but what about her publication record? I don’t see any top journals!” – “Well, she is a woman.” In a good world both of them would be hired, but we live in a world where many departments might rather hire two David Lewises. So what’s going on?

It’s important to note that the competition is not about their philosophies: Despite the word “quality”, for the three reasons given above, the committee members cannot have them compete as philosophers. Rather, the department has certain “needs” that the competition is about.** The competition is about functions in the department, not about philosophy. As I see it, this point generalises: competitions are never about philosophy but always about work and functions in a department.*** Now, the pernicious thing is that departments and search committees and even candidates often pretend that the search is about the quality of their philosophy. But in the majority of cases that cannot be true, simply because the precise shape, task and ends of philosophy are a matter of dispute. What carries weight is functions, not philosophy.

Arguably, there can be no competition between philosophers qua philosophers. Neither between Arendt and Lewis, nor between Arendt and Butler, nor between Lewis and Kripke. Philosophers can discuss and disagree but they cannot compete. What should they compete about? If they compete for jobs, it’s the functions in departments that are at stake. (That is also the reason why we allow for prestige as a quality indicator.) If they take themselves to be competing over who is the better philosopher, they mistake what they are doing. Of course, one philosopher might be preferred over another, but this is subject to change and chance, and owing to the notion of philosophy held by the dominant committee member. The idea that there can be genuinely philosophical competition is a fallacy.

Does it follow, then, that there is no such thing as good or better philosophy? Although this might seem to follow, it doesn’t. In a given context and group, things will count as good or better philosophy. But here is another confusion lurking. “Good” philosophy is not the property of an individual person. Rather, it is a feature of a discussion or of interacting texts. Philosophy is good if the discussion “works well”. It takes good interlocutors on all sides. If I stammer out aphorisms or treatises, they are neither good nor bad. What turns them into something worthwhile is those who listen, understand and respond. To my mind, good quality is the resourcefulness of conversations. The more notions and styles of philosophy a conversation can integrate, the more resources it has to tackle what is at stake. In philosophy, there is no competition, just conversation.

Therefore, departments and candidates should stop assuming that the competition is about the quality of philosophy. Moreover, we should stop claiming that competitiveness is an indicator of being a good philosopher.**** Have you ever convinced an interlocutor by shouting that you’re better or more excellent than them?

___

* At the end of the day, philosophies are either disparate or they are in dialogue. In the former case, rivalry would be pointless; in the latter case, the rivalry is not competitive but a form of (disagreeing or agreeing) refinement. If philosophers take themselves to be competing about something like the better argument, they are actually not competing but discussing and thus depend on one another.

** This does not mean that these needs or their potential fulfillment ultimately decide the outcome of the competition. Often there is disagreement or ignorance about what these needs are or how they are to be prioritised. With regard to committees, I find this article quite interesting.

*** In a recent blog post, Ian James Kidd distinguishes between being good at philosophy and being good at academic philosophy. It’s a great post. (My only disagreement would be that being good at philosophy is ultimately a feature of groups and discussions, not individuals.) Eric Schliesser made similar points in an older, more gloomy post.

**** On FB, Evelina Miteva suggests that we need “fair trade philosophy, like fair trade coffee. Fair trade coffee is not necessarily of better taste or quality, it ‘only’ makes sure that the producers will get something out of their work.” – I think this is exactly right: on some levels, this already seems to be happening, for instance, in the open access movement. Something similar could be applied to recruiting and employment conditions in academia. In fact, something like this seems to be happening, in that some universities are awarded for being family-friendly or forthcoming in other ways (good hiring practices, for example). – My idea is that we could amend many problems (the so-called mental health crisis etc.) if we were to stop incentivising competitiveness on the wrong levels and promote measures of solidarity instead. – The message should be that the community no longer tolerates certain forms of wrong competition and exploitation.

Relatedly, this also makes for an argument in favour of affirmative action against the discrimination of underrepresented groups: people who believe in meritocracy often say that affirmative action threatens quality. But affirmative action is not about replacing “good” with “underrepresented” philosophers. Why? Because the quality of philosophy is not an issue in competitive hiring in the first place.

Diversity and Vulnerability (Part II)

My parents were both refugees. In 1945, when my mother was five and my father thirteen years old, they both left their homes in what were villages near Kaliningrad and Wachlin, and started moving west. Eventually they both ended up in a small town near Düsseldorf, where they met in the sixties. I’m told it wasn’t nearly as bad as it is for refugees now, but they never felt very welcome. Thinking about their lives, it dawned on me rather late that one of the crucial driving factors in their conduct was the constant attempt to avoid attracting attention at all costs. “What will the neighbours say?” was a repeated phrase. While the phrase is rather common, I guess the intensity of the shame behind it varies. For better or worse, it didn’t stick with me too much. But the issue of shame and hiding oneself is another one that keeps coming back when I think about diversity and what blocks it. – In what follows, I don’t want to speak out against diversity. Quite the contrary. But I want to reflect on what needs to change if we really want at least some of it.

So what has shame to do with diversity? – One of the assets of a fairly diverse team is that, at least after a while, people take fewer things for granted, ask more questions, and get to see things they hadn’t expected to see. In a word: multiple perspectives. But even if the members of a team are diverse (in whatever ways), this is hard to achieve. The reason is that, for this to occur, people have to be open. People have to make themselves visible. Being open, not as in calling a spade a spade, but as in showing your perspectives with all their possible shortcomings – that kind of being open is hard even among friends. Showing yourself like that creates great potential but leaves you vulnerable.*

Obviously, the vulnerability of people from “diverse backgrounds” or “underrepresented groups” (of whatever kind) is infinitely greater than that of those who conform to perceived majorities. That shouldn’t be surprising, because the very fact that someone stands out with ‘diversity markers’ puts them on the spot. Now, in professional contexts we are trained to conform as much as possible. Despite all the talk about fresh ideas, people will call you “weird” (rather than “inspiring”) before you finish your sentence. Of course, it’s fine to be weird, but only if you’ve got tenure. So you’d rather do and talk as everyone else does. And if you try otherwise, see what happens. Perhaps you get to be the token weirdo and people put you in a nice bowl and keep you on an extra shelf in the department, but it’s more likely that you just created more space for the next conformist. The reason is quite simple: academia is a competitive environment; the expectation is that you excel in common features, not in something no one else does. Accordingly, hiring committees look for “fit”, not for fun or something they need to look at or think about twice.

Now what’s going on here? I think that employers should stop pretending to be looking for diversity. What employers actually do when they pretend to look for candidates from “underrepresented groups” is attempt to circumvent common discriminatory behaviour. That is not a particularly noble end but a political necessity. In such cases, people are not sought out because one wants to diversify teams, but because discrimination is unjust and unlawful (in some places at least). Of course, discrimination is still a thing, and it should end. But ending discrimination is not the same as implementing diversity.

So how do we implement diversity? While discrimination can be countered by amending formal procedures such as hiring processes, diversity is not, or not necessarily, owing to the fact that someone is from an underrepresented group. Of course, the two can go together, but they don’t need to. A diversity of ideas, approaches or methods is something that anyone might have for whatever reason. The crucial step to enable such diversity is a climate in which people trust each other. Trust each other sufficiently to be open and make themselves visible in their vulnerability. That would be a climate in which even strangers would feel welcome to share their views. That is something we all need to work on together. For the shame and the pressure to conform affect all of us. The more we let go of them, the better for everyone involved, not least for those from “underrepresented groups”. It’s a matter of solidarity, not a policy.

____

* On vulnerability, I learned a lot through Brené Brown’s talks (here’s a ted talk) as well as the work by my former colleague Christine Straehle (here’s a collected volume).

Some observations concerning diversity (Part I)

(1) This morning, I took my daughter Hannah to a nice playground in Berlin. She is just over two years old and particularly enjoys climbing. As we approached the climbing frame, one of the boys on top shouted that they wouldn’t allow girls on the climbing frame. The girls around him smiled …

(2) Two years ago, I joined an initiative designed to keep a far-right party (AFD) out of parliament. The initiative was founded by a group of social entrepreneurs, and the first workshop was attended mainly by social entrepreneurs, most of them in their late twenties or early thirties. At the beginning, we were asked whether we actually knew anyone who would vote for the AFD. Only one man, somewhat older than most, affirmed this. He told us that he knew a few people in the village he came from. Later in the day, we had to form smaller groups. Guess who had difficulty finding a group …

(3) Quite a number of years ago, I was on a hiring committee while still a postdoc. For the first round of going through the applications, we were advised to focus on CVs. Discussing the candidates, I listened to a number of meritocratic arguments: they mostly, but not solely, concerned numbers of “good” publications (“good” meaning in prestigious venues, in case you were wondering). Then someone pointed out that we should perhaps take more time for reading. Someone else pointed out that we should scan the cover letters for reasons for interruptions in the CVs, such as time spent on child care. Guess what didn’t carry any weight when we drew up the list …

I could go on. These are minor observations; anyone could have made them. Nevertheless, in all their banality, these stories of exclusion form the daily fabric of our failures. But where exactly can we locate the failures? The people involved in these situations were not bad people, not at all: lively children on a playground, ambitious young entrepreneurs fighting for a good cause, a hiring committee focussed on selecting the best candidate. Nor did the events result in catastrophes: Hannah climbed up anyway, the initiative went nowhere, someone certainly deserving got a job. – The point is that these situations are so ordinary that hardly anyone would consider causing, or be willing to cause, much of a fuss. Or so it seems. Yet the harm might be done: the next little girl might be discouraged from going to the playground; the next lonely man from the village will perhaps find better company among the far right; a number of candidates, who would have been equally deserving even by the committee’s standards, were not even considered. Even if no one tells them, the stories of exclusion go on. But what can we do?

The first thing to note is that in all these situations there was a possibility of making a difference. Contrary to a widespread belief, we are not powerless minions. The fact that we cannot save the whole world doesn’t mean that we cannot change our ways here and there, or listen for a change. But there are two things that make me doubtful:

  • Despite all the fashionable talk of diversity, such stories suggest that many of us don’t really want diversity. What these situations reveal is that the exclusive attitudes seem to be very deeply ingrained.
  • Even if such attitudes can be overcome, I sense a sort of paranoia amongst many people in positions of public leadership. If you look at more and less well-known people in politics, such as presidents, prime ministers, chancellors etc., many of them display a fear of everything that is different. But while it’s easy to call them names for it, we shouldn’t forget that they get elected by majorities.

It’s complicated. The exclusions we see are not ad hoc, but fostered by long-standing structures. There’s much of this that I dislike or even hate, while I continue to be a beneficiary in some way or another. That is probably true of many of us. Thus, the charge of hypocrisy always seems justified. But if that’s true, the charge of hypocrisy shouldn’t cause too much worry. The worry of being called a hypocrite might be paralysing. What shouldn’t be forgotten is that hypocrisy is structural, too. It can’t be amended by targeting individuals. Everyone who is trying to interfere and make a difference can be called a hypocrite.

Why is the Wall still dividing us? The persistence of the intellectual cold war

Discussing some ideas relating to my piece on the divide between Eastern and Western Europe, I was struck by the following question: “How is it possible that the Wall separating East and West is still so successful?” It’s true! While I thought that 1989 marked a historical turning point that initiated a process of integration, this turns out to be a cherished illusion of mine. As my interlocutor pointed out, the Wall is still successful. It continues to keep people apart. OK, I wanted to object, there is still an economic divide, and that takes time, of course. “No”, she retorted, “it’s not only economic, it’s also intellectual.” So there the point was being made again: the cold war attitude continues, intellectually too. What does that mean?

To keep things simple, I’ll focus on Germany. Let’s begin with my illusion. Having grown up in a small town near Düsseldorf, I felt the Wall was far away. Apart from a very few visits from relatives, I had no contacts and not much of an idea of the former GDR. Although I was quite moved by the fall of the Wall, it didn’t seem to affect my life directly. That changed when I moved to Budapest for one and a half years in 1994. I realised that my education had failed me. I knew hardly anything about the history of Hungary or Eastern Europe, not even about East Germany. Nevertheless, I was convinced that a process of integration would now be under way. Europe would grow together. Slowly but surely; it had to. Did it?

East Germany is still a thing. That is strange in itself. It shouldn’t be. After all, the Wall has now been down slightly longer than it was up. It was built in 1961 and came down in 1989. Since then, 30 years have passed. But it must be liberating to think that neo-nazism, for instance, is not a problem in Germany but in East Germany. The East is a perfect scapegoat in several domains. The narrative is dominated by the economic imbalance and the idea that people from the (former) East lack democratic education. The upshot of the narrative seems to be this: East Germans have a different mentality. The former generations were communists or dominated by communists. Their offspring are neo-nazis or indoctrinated by neo-nazis. You get the idea…

Of course, many biographies have been upset one way or another. Of course, the situation is complex, but after looking at some figures, the crucial problems don’t strike me as owing primarily to a difference in mentality. This doesn’t mean that there is no problem of neo-nazism. But how did things start moving in the East? The former GDR seems to have been “colonised”, as some put it as early as 1997. What were the effects in academia? Today, I read an article with the headline that “men lead three quarters of German universities”. Yes, I can imagine that. What the headline didn’t state, but what I found among the figures, is that “no rector or president” originates from East Germany. OK, not every academic biography culminates in leading a university. But still: reading this, I was reminded of stories about “liquidation”. So I looked up some more figures: “By 1994, over 13,000 positions at universities in east Germany had been liquidated, and 20,000 more people (including 5,000 professors) had lost their jobs. Many teachers and academics were fired ‘despite the fact that they had proven professional qualifications.’” It’s hard to believe that these measures did anything to facilitate integration, let alone intellectual integration.

At this point, it doesn’t look like the divisive East-West narratives will lose their grip anytime soon. But why not? I guess that the colonisation carried out as part of the unification process will continue to be justified. It needs to be justified to cover up the crimes committed. But how can such things be justified? How can you justify having sacked a huge number of people and taken over quite a few of their positions? Again, it will continue to be blamed on the mentality: if they complain about the past, you can blame it on their mentality. The lack of democratic education left them unfit for such responsibilities, and now they are being nostalgic.* If they complain about their situation now, you can still blame it on the mentality. Which mentality? Well, they are still lacking democratic education and cannot understand that the state doesn’t have the duty to coddle them in their unemployment. Whichever way you see it, they just turn out to be incapable. So taking over responsibilities was and is justified.

It goes without saying that the foregoing verges on a black-and-white picture. The idea is not to explain such complex political developments in a blog post. The idea is to gesture towards an answer to the question of why the Wall continues to be so effective, even though it came down 30 years ago. What is the ideology that is so persistent? I truly don’t know. Perhaps the ideology of those who erected the Wall is still, or again, at work. Or perhaps the very idea of a Wall, of a divide, is more convenient than we want to believe. For it can serve to justify differences and injustices even after the Wall has been torn down. In any case, the reference to “mentality” strikes me as a highly problematic form of moral exclusion.

So perhaps the Wall continues to serve as a justificational loop. How does it work? The people blaming each other will most likely not assume that they are just pointing to a supposed mentality in order to justify an injustice. They will take their beliefs (about communism, nazism, capitalism etc.) to be true of those accused. But they can likely do so only as long as the justificational mechanism of the Wall keeps them apart.

____

* Thinking about (n)ostalgia, there is also the more sinister explanation that the cherished memories do not simply reach back to the time after 1961, but further back to the times after 1933. In other words, if you want to explain the far-right movement by reference to mentality, you might look to the mentality that spans across East and West Germany, nourished well before the Wall was erected.