The competition fallacy

“We asked for workers. We got people instead.” Max Frisch

 

Imagine that you want to buy an album by the composer and singer Caroline Shaw, but the shop sells you one by Luciano Pavarotti instead, arguing that Pavarotti is clearly the more successful and better singer. Well, philosophers often make similar moves. They will say things like “Lewis was a better philosopher than Arendt” and even run polls to see how the majority sees the matter. Perhaps you agree with me in thinking that something has gone severely wrong in such cases. But what exactly is it? In the following, I’d like to suggest that competitive rankings are not applicable when we compare individuals in certain respects. This should have serious repercussions for how we think about the job market in academia.

Ranking two individual philosophers who work in fairly different fields and contexts strikes me as pointless. Of course, you can compare them, see differences and agreements, ask about their respective popularity and so forth. But what would Lewis have said about the banality of evil? Or Arendt about modal realism? – While you might have preferences for one kind of philosophy over another, you would have a hard time explaining who the “better” or more “important” philosopher is (irrespective of said preferences). There are at least three reasons for this: Firstly, Arendt and Lewis have very few points of contact, i.e. little straightforward common ground on which to plot a comparison of their philosophies. Secondly, even if they had more commonalities or overlaps, their respective understandings of what philosophy is and what good philosophy should accomplish can be fairly different. Thirdly and perhaps most importantly, philosophies are always individual and unique accomplishments. Unique creations are not something one can have a competition about. If we assume that there is a philosophical theory T1, T1 is not the kind of thing that you can compete over being better at. Of course, you can refine T1, but then you’ve created a refined theory T2. Now you might want to claim that T2 can be called better than T1. But what would T2 be, were it not for T1? Relatedly, philosophers are unique. The assumption that what one philosopher does can be done better or equally well by another philosopher is an illusion fostered by professionalised environments. People are always unique individuals and their ideas cannot be exchanged salva veritate.*

Now since there are open job searches (sometimes even without a specification of the area of specialisation) you could imagine a philosophy department in 2019 having to decide whether they hire Lewis or Arendt. I can picture the discussions among the committee members quite vividly. But in doing such a search they are doing the same thing as the shop assistant who ends up arguing for Pavarotti over Shaw. Then words like “quality”, “output”, “grant potential”, “teaching evaluations”, “fit” … oh, and “diversity” will be uttered. “Arendt will pull more students!” – “Yeah, but what about her publication record? I don’t see any top journals!” – “Well, she is a woman.” In a good world both of them would be hired, but we live in a world where many departments might rather hire two David Lewises. So what’s going on?

It’s important to note that the competition is not about their philosophies: Despite the word “quality”, for the three reasons given above, the committee members cannot have them compete as philosophers. Rather, the department has certain “needs” that the competition is about.** The competition is about functions in the department, not about philosophy. As I see it, this point generalises: competitions are never about philosophy but always about work and functions in a department.*** Now, the pernicious thing is that departments and search committees and even candidates often pretend that the search is about the quality of their philosophy. But in the majority of cases that cannot be true, simply because the precise shape, task and ends of philosophy are a matter of dispute. What carries weight are functions, not philosophy.

Arguably, there can be no competition between philosophers qua philosophers. Neither between Arendt and Lewis, nor between Arendt and Butler, nor between Lewis and Kripke. Philosophers can discuss and disagree but they cannot compete. What should they compete about? If they compete about jobs, it’s the functions in departments that are at stake. (That is also the reason why we allow for prestige as a quality indicator.) If they take themselves to be competing over who is the better philosopher, they mistake what they are doing. Of course, one philosopher might be preferred over another, but this is subject to change and chance, and owing to the dominant committee member’s notion of philosophy. The idea that there can be genuinely philosophical competition is a fallacy.

Does it follow, then, that there is no such thing as good or better philosophy? Although this might seem to follow, it doesn’t. In a given context and group, things will count as good or better philosophy. But here is another confusion lurking. “Good” philosophy is not the property of an individual person. Rather, it is a feature of a discussion or of interacting texts. Philosophy is good if the discussion “works well”. It takes good interlocutors on all sides. If I stammer out aphorisms or treatises, they are neither good nor bad. What turns them into something worthwhile is owing to those listening, understanding and responding. To my mind, good quality consists in the resourcefulness of conversations. The more notions and styles of philosophy a conversation can integrate, the more resources it has to tackle what is at stake. In philosophy, there is no competition, just conversation.

Therefore, departments and candidates should stop assuming that the competition is about the quality of philosophy. Moreover, we should stop claiming that competitiveness is an indicator of being a good philosopher.**** Have you ever convinced an interlocutor by shouting that you’re better or more excellent than them?

___

* At the end of the day, philosophies are either disparate or they are in dialogue. In the former case, rivalry would be pointless; in the latter case, the rivalry is not competitive but a form of (disagreeing or agreeing) refinement. If philosophers take themselves to be competing about something like the better argument, they are actually not competing but discussing and thus depend on one another.

** This does not mean that these needs or their potential fulfilment ultimately decide the outcome of the competition. Often there is disagreement or ignorance about what these needs are or how they are to be prioritised. With regard to committees, I find this article quite interesting.

*** In a recent blog post, Ian James Kidd distinguishes between being good at philosophy vs being good at academic philosophy. It’s a great post. (My only disagreement would be that being good at philosophy is ultimately a feature of groups and discussions, not individuals.) Eric Schliesser made similar points in an older, more gloomy post.

**** On FB, Evelina Miteva suggests that “we need fair trade philosophy, like the fair trade coffee. Fair trade coffee is not necessarily of a better taste or quality, it ‘only’ makes sure that the producers will get something out of their work.” – I think this is exactly right: On some levels, this already seems to be happening, for instance, in the open access movement. Something similar could be applied to recruiting and employment conditions in academia. In fact, something like this seems to be happening, in that some universities are awarded for being family-friendly or forthcoming in other ways (good hiring practice, for example). – My idea is that we could remedy many problems (the so-called mental health crisis etc.), if we were to stop incentivising competitiveness on the wrong levels and promote measures of solidarity instead. – The message should be that the community no longer tolerates certain forms of wrong competition and exploitation.

Relatedly, this also makes for an argument in favour of affirmative action against the discrimination of underrepresented groups: People who believe in meritocracy often say that affirmative action threatens quality. But affirmative action is not about replacing “good” with “underrepresented” philosophers. Why? Because the quality of philosophy is not an issue in competitive hiring in the first place.

History of contemporary thought and the silence in Europe. A response to Eric Schliesser

What should go into a history of contemporary ideas or philosophy? Of course, this is a question that is tricky to answer for all sorts of reasons. What makes it difficult is that we tend to think mostly of canonical figures and begin to wonder which of them will be remembered in a hundred years. I think we can put an interesting spin on that question if we approach it in a more historical way. How did our current thoughts evolve? Who are the people who really influenced us? These will not only be people whose work we happen to read, but also those who interact or have interacted with us directly. Our teachers, fellow students, friends and opponents. You might not think of them as geniuses, but we should drop that category anyway. These are likely the people who really made a difference to the way you think. So let’s scratch our heads a bit and wonder who gave us ideas directly. In any case, they should figure in the history of our thought.

You might object that these figures would not necessarily be recognised as influential at large. However, I doubt that this is a good criterion: our history is not chiefly determined by whom we take to be generally influential, but more often than not by the people we actually speak to. If it were otherwise, why would historians bother to identify real interlocutors in letters and the like? This means that encounters between a few people might make quite a difference. You might also object that a history of contemporary philosophy is not about you. But why not? Why should it not include you at least? What I like about this approach is that it also serves as a helpful corrective to outworn assumptions about who is canonical. Because even if certain figures are canonical, our interpretations of canonical figures are strongly shaped by our direct interlocutors.

Thinking about my own ideas in this way is a humbling experience. There are quite a number of people inside and outside my department to whom I owe many of my ideas. But this approach also reveals some of the conditions, political and other, that allow for such influence. One such condition I am painfully reminded of when observing the current political changes in Europe. No, I do not mean Brexit! Although I find these developments very sad and threatening indeed, most of the work done by friends and colleagues in Britain will reach me independently of those developments.

But Central and Eastern Europe is a different case. As it happens, the work that affected my own research most in the recent years is on the history of natural philosophy. It’s more than a guess when I say that I am not alone in this. Amongst other things, it made me rethink our current and historical ideas of the self. Given that quite a number of researchers who work on this happen to come from Central and Eastern Europe, much of this work probably wouldn’t have reached me, had it not been for the revolutions in 1989. This means that my thinking (and most likely that of others, too) would have been entirely different in many respects, had we not seen the Wall come down and communist regimes overthrown.

Why do I bring this up now? A brief exchange following up on an interesting post by Eric Schliesser* made it obvious that many Western Europeans, by and large, seem to assume that the revolutions of 1989 have had no influence on their thought. As he puts it, “the intellectual class kind of was conceptually unaffected” by them. And indeed, if we look at the way we cite and acknowledge the work of others, we regularly forget to credit many, if not most, of our interlocutors from less prestigious places. In this sense, people in what we call the Western world might be inclined to think that 1989 was not of significance in the history of thought. I think this is a mistake. A mistake arising from the canonical way of thinking about the work that influences us. Instead of acknowledging the work of individuals who actually influence us, we continue citing the next big shot whom we take to be influential in general. By excluding the direct impact of our actual interlocutors, we make real impact largely invisible. Intellectually, the West behaves as if it were still living in Cold War times. But the fact that we continue to ignore or shun the larger patterns of real impact since 1989 does not entail that they are not there. Any claim to the contrary would, without further evidence at least, amount to an argument from ignorance.

The point I want to make is simple: we depend on other people for our thought. We need to acknowledge this if we want to understand how we come to think what we think. The fact that universities are currently set up like businesses might make us believe that the work people do can (almost) equally be done by other people. But this is simply not true. People influencing our thought are always particular people; they cannot be exchanged salva veritate. If we care about what we think, we should naturally care about the origin of our thought. We owe it to particular people, even if we sometimes forget the particular conversations in which our ideas were triggered, encouraged or refuted.

Now if this is correct, then it’s all the more surprising that we let go of the conditions enabling much of this exchange in Europe so easily. How is it possible, for instance, that most European academics remain quiet in the face of recent developments in Hungary? We witnessed how the CEU was being forced to move to Vienna in an unprecedented manner, and now the Hungarian Academy of Sciences is targeted.**

While the international press reports every single remark (no matter how silly) that is made in relation to Brexit, and while I see many academics comment on this or that aspect (often for very good reasons), the silence after the recent events in Hungary is almost deafening. Of course, Hungary is not alone in this. Academic freedom is now targeted in many places inside and outside Europe. If we continue to let it happen, the academic community in Europe and elsewhere will disintegrate very soon. But of course we can continue to praise our entrepreneurial spirit in the business park of academia and believe that people’s work is interchangeable salva veritate; we can continue talking to ourselves, listening diligently to our echoes, and making soliloquies a great genre again.

____

* See also this earlier and very pertinent post by Eric Schliesser.

** See also this article. And this call for support.

Should contemporary philosophers read Ockham? Or: what did history ever do for us?

If you are a historian of philosophy, you’ve probably encountered the question whether the stuff you’re working on is of any interest today. It’s the kind of question that awakens all the different souls in your breast at once. Your more enthusiastic self might think, “yes, totally”, while your methodological soul might shout, “anachronism ahead!” And your humbler part might think, “I don’t even understand it myself.” When exposed to this question, I often want to say many things at once, and out comes something garbled. But now I’d like to suggest that there is only one true reply to the main question in the title: “No, that’s the wrong kind of question to ask!” – But of course that’s not all there is to it. So please hear me out.

I’m currently revisiting an intriguing medieval debate between William of Ockham, Gregory of Rimini and Pierre d’Ailly on the question of how thought is structured. While I find the issue quite exciting in itself, it’s particularly interesting to see how they treat their different sources, Aristotle and Augustine. While they clearly all know the texts invoked, they emphasise different aspects. Thinking about thought, William harkens back to Aristotle and clearly thinks that it’s structure that matters. By contrast, Gregory goes along with Augustine and emphasises thought as a mental action. – For these authors it was clear that their sources were highly relevant, both as inspiration and as authorities. At the same time, they had no qualms about appropriating them for their own uses. – In some respects we make similar moves today, when we call ourselves Humeans or Aristotelians. But since we also have professional historians of philosophy and look back at traditions of critical scholarship, both historians and philosophers are more cautious when it comes to the question of whether some particular author would be “relevant today”.

In view of this question, historians are trained to exhibit all kinds of (often dismissive) gut reactions, while philosophers working on contemporary themes don’t really have time for our long-winded answers. And so we started talking past each other, happily ever after. That’s not a good thing. So here is why I think the question of whether any particular author could inform or improve current debates is wrongheaded.

Of course everyone is free to read Ockham. But I wouldn’t recommend doing it, if you’re hoping to enrich the current debates. Yes, Ockham says a lot of interesting things. But you’d need a long time to translate them into contemporary terminology and still more time to find an argument that will look like an outright improvement of a current argument.* – My point is not that Ockham is not an interesting philosopher. My point is that Ockham (and many other past philosophers) doesn’t straightforwardly speak to any current concerns.

However … Yes, of course there was going to be a “however”! However, while we don’t need to ask whether any particular author is relevant today, we should be asking a different question. That Ockham doesn’t speak to current concerns doesn’t mean that historians of philosophy (studying Ockham or others) have nothing to say about current concerns. So it’s not that Ockham should be twisted to speak to current concerns; rather historians and philosophers should be talking to each other! So the right question to ask is: how can historians speak to current issues?

The point is that historians study, amongst other things, debates on philosophical issues. “You say tomahto, I say tomato”, that sort of thing. Debates happen now as they happened then. What I find crucial is that studying debates reveals features that can be informative for understanding current debates. There are certain conditions that have to be met for a debate to arise. We’re not just moving through the space of reasons. Debates occur or decline because of various factors. What we find conceptually salient can be driven by available texts, literary preferences, other things we hold dear, theological concerns, technological inventions (just think of various computer models), arising political pressures (you know what I mean), linguistic factors (what would happen if most philosophers were to write in Dutch?), and a lot of other factors, be they environmental, sociological, or what have you. Although we like to think that the pursuit of truth is central, it’s by far not the only reason why debates arise and certain concepts are coined and stick around, while others are forgotten. Although contingent, such factors are recurrent. And this is something that affects our understanding of current as well as past debates. The historian can approach current debates as a historian in the same way that she can approach past debates. And this is, I submit, where historians can truly speak to current concerns.

Coming back to the debate I mentioned earlier, there is another issue (besides the treatment of sources) that I find striking. In their emphasis on Augustine, Gregory and Peter show a transition from a representational to an action model of thought. Why does this transition occur? Why do they find it important to emphasise action over representation against William? – Many reasons are possible. I won’t go into them now. But this might be an interesting point of comparison to the current debates over enactivism versus certain sorts of representationalism. Why do we have that debate now? Is it owing to influences like Ryle and Gibson? Certainly, they are points of (sometimes implicit) reference. Are there other factors? Again, while I think that these are intriguing philosophical developments, our understanding of such transitions and debates remains impoverished if we don’t look for other factors. Studying past transitions can reveal recurrent factors in contemporary debates. One factor might be that construing thoughts as acts rather than mere representations discloses their normative dimensions. Acts are something for which we might be held responsible. There is a lot more to be said. For now suffice it to say that it is in comparing debates and uncovering their conditions that historians of philosophy qua historians can really contribute.

At the same time, historians might also benefit from paying more attention to current concerns. Not only to stay up to date, but also to sharpen their understanding of historical debates.** As we all know, historical facts don’t just pop up. They have to be seen. But this seeing is of course a kind of seeing as. Thus, if we don’t just want to repeat the historiographical paradigms of, say, the eighties, it certainly doesn’t hurt if our seeing is trained in conversation with current philosophers.

____

* That said, this is of course an open question. So I’m happy to be shown a counterexample.

** More on the exchange between philosophers and historians of philosophy can be found in my inaugural lecture.

Do rejections of our claims presuppose that we are abnormal?

Discussions about meaning and truth are often taken as merely theoretical issues in semantics. But as soon as you consider them in relation to interactions between interlocutors, it’s clear that they are closely related to our psychology. In what follows, I’d like to suggest that people questioning our claims might in fact be questioning whether we are normal people. Sounds odd? Please hear me out. Let’s begin with a well-known issue in semantics:

Imagine you’re a linguist, studying a foreign language of a completely unknown people. You’re with one of the speakers of that language when a white rabbit runs past. The speaker says “gavagai”. Now what does “gavagai” mean?

According to Quine, who introduced the gavagai example, the expression could mean anything. It might mean: “Look, there’s a rabbit” or “Lovely lunch” or “That’s very white” or “Rabbithood instantiated”. The problem is that you cannot determine what “gavagai” means. Our ontology is relative to the target language we’re translating into. And you cannot be sure that the source language carves up the world in the same way ours does. Now it is crucial to see that this is not just an issue of translation. The problem of indeterminacy starts at home: meaning is indeterminate. And this means that the problems of translation also figure in the interaction between speakers and hearers of the same language.

Now Davidson famously turns the issue upside down: we don’t begin with meaning but with truth. We don’t start out by asking what “gavagai” means. If we assume that the speaker is sincere, we’ll just translate the sentence in such a way that it matches what we take to be the truth. So we start by thinking: “Gavagai” means something like “Look, there’s a rabbit”, because that’s the belief we form in the presence of the rabbit. So we start out by ascribing the same belief to the speaker of the foreign language and translate accordingly. That we start out this way is not optional. We’d never get anywhere, if we were to start out by wondering what “gavagai” might or might not mean. Rather we cannot but start out from what we take to be true.

Although Davidson makes an intriguing point, I don’t think he makes a compelling case against relativism. When he claims that we translate the utterances of others into what we take to be true, I think he is stating a psychological fact. If we take someone else to be a fellow human being and think that she or he is sincere, then translating her or his utterances in a way that makes them come out true is what we count as normal behaviour. Conversely, starting from the assumption that our interlocutor is wrong and translating the other’s utterances as something alien or blatantly false would amount to abnormal behaviour on our part (unless we have reason to think that our interlocutor is seriously impaired). The point I want to make is that sincerity and confirmation of what we take to be true will correlate with normality.

If this last point is correct, it has a rather problematic consequence: If you tell me that I’m wrong after I have sincerely said what I take to be the truth, this will render either me or you abnormal. Unless we think that something is wrong with ourselves, we will be inclined to think that people who listen to us but reject our claims are abnormal. This is obvious when you imagine someone stating that there is no rabbit while you clearly take yourself to be seeing a rabbit. When the “evidence” for a claim is more abstract, in philosophical debates for instance, we are of course more charitable, at least so long as we can’t be sure that we both have considered the same evidence. Alternatively, we might think the disagreement is only verbal. But what if we think that we both have considered the relevant evidence and still disagree? Would a rejection not amount to a rejection of the normality of our interlocutor?

What’s behind the veil of perception? A response to Han Thomas Adriaenssen

Imagine you’re wearing glasses: would you think that your grasp of reality is somehow indirect? I guess not. We assume that glasses aid our vision rather than distort or hinder it. The fact that our vision is mediated by glasses does not make it less direct than the fact that our vision is mediated through our eyes. Now imagine your perception is mediated by what early modern philosophers call “ideas”. Does it follow that our grasp of reality is indirect? Many philosophers think it does. By contrast, I would like to suggest that this is misleading. Ideas make our perceptions no less direct than glasses.

Both early modern and contemporary critics often take the “way of ideas” as a route to scepticism. The assumption seems to be that the mediation of perception through ideas makes our thoughts not about reality but about the ideas. Han Thomas Adriaenssen’s recent book is original in that it tells the story of this and related debates from Aquinas to the early modern period. In celebration of receiving the JHP book prize, Han Thomas gave a brief interview that aptly summarises the common line of criticism against ideas or the assumption of indirect perception related to them:

“Okay. So you explore the philosophical problem of ‘perception and representation’ from Aquinas to Descartes; what exactly is the problem?

HTA: ‘Right. So it goes like this: what is going on in your mind when you see something in your environment? Take this chair, for instance. When you see this chair, what’s happening in your mind? One answer is that you form a kind of pictorial representation of the chair. You start drawing a mental image for yourself of the thing in front of you, and you label it: ‘chair’. … But then there is a worry: if this is how it works – if this is how we access the environment cognitively – then that means there is a sort of interface between us and reality. A veil of perceptions, if you will. So what we’re thinking about is not the chair itself, but our picture of the chair. – But that can’t be right!’”

Besides summarising the historical criticisms, Han Thomas seems to go along with their doubts. He suggests that metaphors trick us into such problematic beliefs: the “mental image metaphor” comes “naturally”, but brings about “major problems”.

While I have nothing but admiration for the historical analysis presented, I would like to respond to this criticism on behalf of those assuming ideas or other kinds of representational media. Let’s look at the chided metaphor again. Yes, the talk of the mental image suggests that what is depicted is remote and behind the image. But what about the act of drawing the image? Something, presumably our sense organs, is exposed to the things and does some ‘drawing’. So the drawing is not done behind a veil. Rather, the act of drawing serves as a transformation of what is drawn into something that is accessible to other parts of our mind.* Thus, we should imagine a series of transformations until our minds end up with ideas. But if you think of it in those terms, the transformation is not akin to putting something behind a veil. Rather, it is a way of making sensory input available. The same goes for glasses or indeed our eyes. They do not put something behind a veil but make it available in an accessible form. My point is that the metaphor needs to be unpacked more thoroughly. We don’t only have the image; we have the drawing, too.

Following Ruth Millikan’s account of perception,** I would like to argue that the whole opposition of indirect vs direct perception is unhelpful. It has steered both early modern and 20th-century debates in epistemology in fruitless directions. Sense perception is direct (as long as it does not involve inferences through which we explicitly reason that the presence of an idea means the presence of a represented thing). At the same time, sense perception is indirect in that it requires means of transformation that make things available to different kinds of receptors. Thus, the kind of indirectness involved in certain cognitive vehicles does not lead to scepticism any more than the fact that we use eyes to see.

What early modern philosophers call ideas are just cognitive vehicles, resulting from transformations that make things available to us. If an analogy is called for, I’d suggest relating them not to veils but to glasses. If we unpack the metaphor more thoroughly, what is behind the veil is not only the world, but our very own sense organs making the world available by processing it through media accessible to our mind. If that evokes sceptical doubts, such doubts might be equally raised whenever you put your glasses on or indeed open your eyes to see.

___

* As Han Thomas himself notes (in the book, not the interview), many medieval authors do not think that representationalism leads to scepticism, and endorse an “epistemic optimism”. I guess these authors could be reconstructed as agreeing with my reply. After all, some stress that species (which could be seen as functionally equivalent to ideas) ought to be seen as a medium quo rather than that which is ultimately cognised.

** Ruth Millikan even claims that language is a means of direct perception: “The picture I want to leave you with, then, is that coming to believe, say, that Johnny has come in by seeing that he has come in, by hearing by his voice that he has come in, and by hearing someone say “Johnny has come in,” are normally equivalent in directness of psychological processing. There is no reason to suppose that any of these ways of gaining the information that Johnny has come in requires that one perform inferences. On the other hand, in all these cases it is likely that at least some prior dedicated representations must be formed. Translations from more primitive representations and combinations of these will be involved. If one insists on treating all translation as a form of inference, then all these require inference equally. In either event, there is no significant difference in directness among them. ”

Who’s afraid of relativism?

In recent years, relativism has had a particularly bad press. Often chided along with what some call postmodernism, relativism is held responsible for certain politicians’ complacent ignorance or bullshitting. While I’m not alone in thinking that this scapegoating is due to a severe misunderstanding of relativism, even those who should know better join the choir of condemnation:

“The advance of relativism – the notion that truth is relative to each individual’s standpoint – reached what might be seen as a new low with the recent claim by Donald Trump’s senior adviser Kellyanne Conway that there are such things as “alternative facts”. (She went so far as to cite a non-existent “Bowling Green massacre” to justify Trump’s refugee travel ban, something she later described as a “misspeak”.)” Joe Humphreys paraphrasing Timothy Williamson in the Irish Times, 5.7.2017

If this is what Williamson thinks, he confuses relativism with extreme subjectivism. But I don’t want to dismiss this view too easily. The worry behind this accusation is real. If people do think that truth is relative to each individual’s standpoint, then “anything goes”. You can claim anything and there are no grounds for me to correct you. If this is truth, there is no truth. The word is a meaningless appeal. However, I don’t think that the politicians in question believe in anything as sophisticated as relativism. Following up on some intriguing discussions about the notion of “alternative facts”, I believe that the strategy is (1) to lie by (2) appealing to an (invented) set of states of affairs that has supposedly been ignored. Conway did not assume that she was in possession of her own subjective truth; quite the contrary. Everyone would have seen what she claimed to be the truth, had they cared to look at the right time in the right way. If I am right, her strategy depends on a shared notion of truth. In other words, I guess that Williamson and Conway roughly start out from the same understanding of truth. To bring in relativism or postmodernism is not helpful when trying to understand the strategy of politicians.

By introducing the term “alternative facts”, Conway reminds us of the fact (!) that we pick out truths relative to our interests. I think we are right to be afraid of certain politicians. But why are we afraid of relativism? We have to accept that truth, knowledge or morality are relative to a standard. Relativism is the view that there is more than one such standard.* This makes perfect sense. That 2 plus 2 equals 4 is not true absolutely. Arguably, this truth requires agreement on a certain arithmetic system. I think that arithmetic and other standards evolve relative to certain interests. Of course, we might disagree about the details of how to spell out such an understanding of relativism. But it is hard to see what makes us so afraid of it.
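To give this a minimal worked illustration (my own example, not one from the original post): the very same expression can come out differently depending on which arithmetic system we have agreed on, say ordinary integer arithmetic versus arithmetic modulo 3.

$$
2 + 2 = 4 \quad \text{(in ordinary integer arithmetic)}, \qquad 2 + 2 \equiv 1 \pmod{3} \quad \text{(in arithmetic modulo 3)}.
$$

Neither statement is simply wrong; each holds relative to its own system, which is all the relativist in the sense sketched here needs.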

Perhaps an answer can be given by looking at how relativism evolved historically. If you look at early modern or medieval discussions of truth, knowledge and morality, there is often a distinction between divine and human concepts. Divine knowledge is perfect; human knowledge is partial and fallible. Divine knowledge sets an absolute standard against which human failure is measured. If you look at discussions in and around Locke, for instance, especially his agnosticism about real essences and divine natural law, divine knowledge is still assumed but it loses the status of a standard for us. What we’re left with is human knowledge, in all its mediocrity and fallibility. Hume goes further and no longer even appeals to the divine as a remote standard. Our claims to knowledge are seen as rooted in custom. Now if the divine no longer serves as an absolute measure, human claims to knowledge, truth and morality are merely one possible standard. There is no absolute standard available. Nominal essences or customs are relative to the human condition: our biological make-up and our interests. The focus on human capacities, irrespective of the divine, is a growing issue, going hand in hand with an idea of relativism. The “loss” of the absolute is thus owing to a different understanding of theological claims about divine standards. Human knowledge is relative in that it is no longer measured against divine knowledge. If this is correct, relativism emerged (also) as a result of a dissociation of divine and human standards. Why would we be afraid of that?

____

* I’m following Martin Kusch’s definition in his proposal for the ERC project on the Emergence of Relativism: “It is not easy to give a neutral definition of “relativism”: defenders and critics disagree over the question of what the relativist is committed to. Roughly put, the relativist regarding a given domain (e.g. epistemology) insists that judgments or beliefs in this domain are true or false, justified or unjustified, only relative to systems of standards. For the relativist there is more than one such system, and there is no neutral way of adjudicating between them. Some relativists go further and claim that all such systems are equally valid.”

How we unlearn to read

Having been busy with grading again, I noticed a strange double standard in our reading practice and posted the following remark on Facebook and Twitter:

A question for scholars. – How can we spend a lifetime on a chapter in Aristotle and think we’re done with a student essay in two hours? Both can be equally enigmatic.

Although it was initially meant as a joke of sorts, it got others and me thinking about various issues. Some people rightly pointed out that we mainly set essay tasks for the limited purpose of training people to write; others noted that they are expected to take even less than two hours (some take as little as 10 minutes per paper). Why do we go along with such expectations? Although our goals in assigning essays might be limited, the contrast to our critical and historical engagement with past or current texts of philosophers should give us pause. Let me list two reasons.

Firstly, we might overlook great ideas in contributions by students. I am often amazed how some students manage to come up with all crucial objections and replies to certain claims within 20 minutes, while these considerations took perhaps 20 years to evolve in the historical setting. Have them read, say, Putnam’s twin earth thought experiment and listen to all the major objections passing by in less than an hour. If they can do that, it’s equally likely that their work contains real contributions. But we’ll only notice those if we take our time and dissect sometimes clumsy formulations to uncover the ideas behind them. I’m proud to have witnessed quite a number of graduate students who have developed highly original interpretations and advanced discussions in ways that I didn’t dream of.

Secondly, by taking comparatively little time we send a certain message both to our students and ourselves. On the one hand, such a practice might suggest that their work doesn’t really matter. If that message is conveyed, then the efforts on the part of the students might be equally low. Some students have to write so many essays that they don’t have time to read. And let’s face it, grading essays without proper feedback is equally a waste of time. If we don’t pay attention to detail, we are ultimately undermining the purpose of philosophical education. Students write more and more papers, while we have less and less time to read them properly. Like a machine running blindly, mimicking educational activity. On the other hand, this way of interacting with and about texts will affect our overall reading practice. Instead of trying to appreciate ideas and think them through, we just look for cues of familiarity or failure. Peer review is overburdening many of us in similar ways. Hence, we need our writing to be appropriately formulaic. If we don’t stick to certain patterns, we risk that our peers miss the cues and think badly of our work. We increasingly write for people who have no time to read, undermining engagement with ideas. David Labaree even claims that it’s often enough to produce work that “looks and feels” like a proper dissertation or paper.

The extreme result is an increasing mechanisation of mindless writing and reading. It’s not surprising that hoaxes involving automated or merely clichéd writing get through peer review. Of course this is not true across the board. People still write well and read diligently. But the current trend threatens to undermine educational and philosophical purposes. An obvious remedy would be to improve the student-teacher ratio by employing more staff. In any case, students and staff should write less, leaving more time to read carefully.

___

Speaking of reading, I’d like to thank all of you who continue reading or even writing for this blog. I hope you enjoy the upcoming holidays, wish you a very happy new year, and look forward to conversing with you again soon.

Embracing mistakes in music and speech

Part of what I love about improvised music is the special relation to mistakes. If you listen to someone playing a well-known composition, a deviation from the familiar melody, harmony or perhaps even from the rhythm might appear to be a mistake. But what if the “mistake” is played with confidence and perhaps even repeated? Compare: “An apple a day keeps the creeps away.” Knowing the proverb, you will instantly recognise that something is off. But did I make a downright mistake or did I play around with the proverb? That depends, I guess. But what does it depend on? On the proverb itself? On my intentions? Or does it depend on your charity as a listener? It’s hard to tell. The example is silly and simple, but the phenomenon is rather complex if you think about mistakes in music and speech. What I would like to explore in the following is what constitutes the fine line between mistake and innovation. My hunch is that there is no such thing as a mistake (or an innovation). Yes, I know what you’re thinking, but you’re mistaken. Please hear me out.

Like much else, the appreciation of music is based on conventions that guide our expectations. Even if your musical knowledge is largely implicit (in that you might have had no exposure to theory), you’ll recognise variations or oddities – and that even if you don’t know the piece in question. The same goes for speech. Even if you don’t know the text in question and wouldn’t recognise it if the speaker messed up a quotation, you will recognise mispronunciations, oddities in rhythm and syntax and such like. We often think of such deviations from conventions as mistakes. But while you might still be assuming that the speaker sounds somewhat odd, they might in fact be North Americans intoning statements as if they were questions, performing funny greeting rituals or even singing rap songs. Some things might strike people as odd while others catch on, so much so that they end up turning into conventions. – But why do we classify one thing as a variation and the other as a mistake?

Let’s begin with mistakes in music. You might assume that a mistake is, for instance, a note that shouldn’t be played. We speak of a “wrong note” or a “bum note”. Play an F# with much sustain over a C Major triad and you get the idea. Even in the wildest jazz context that could sound off. But what if you hold that F# for half a bar and then add a Bb to the C Major triad? All else being equal, the F# will sound just fine (because the C Major can be heard as a C7 and the F# as the root note of the tritone substitution F#7) and our ear might expect the resolution to an F Major triad.* Long story short: Whether something counts as a mistake does not depend on the note in question, but on what is played afterwards.**

Let this thought sink in and try to think through situations in which something sounding off was resolved. If you’re not into music, you might begin with a weird noise that makes you nervous until you notice that it’s just rain hitting the roof top. Of course, there are a number of factors that matter, but the upshot is that a seemingly wrong note will count as fine or even as an impressive variation if it’s carried on in an acceptable way. This may be through a resolution (that allows for a reinterpretation of the note) or through repetition (allowing for interpreting it as an intended or new element in its own right) or another measure. Repetition, for example, might turn a strange sequence into an acceptable form, even if the notes in question would not count as acceptable if played only once. It’s hard to say what exactly will win us over (and in fact some listeners might never be convinced). But the point is not that the notes themselves are altered, but that repetition is a form of creating a meaningful structure, while a one-off does not afford anything recognisable. That is, repetition is a means to turn mistakes into something acceptable, a pattern. If this is correct, then it seems sensible to say that the process of going through (apparent) mistakes is not only something that can lead to an amended take on the music, but also something that leads to originality. After all, it’s turning apparent mistakes into something acceptable that makes us see them as legitimate variations.

I guess the same is true of speech. Something might start out striking you as unintelligible, but will be reinterpreted as a meaningful pattern if it is resolved into something acceptable. But how far does this go? You might think that the phenomenon is merely of an aesthetic nature, pertaining to the way we hear and recontextualise sounds in the light of what comes later. We might initially hear a string of sounds that we identify as language once we recognise a pattern in the light of what is uttered later. But isn’t this also true of the way we understand thoughts in general? If so, then making (apparent) mistakes is the way forward – even in philosophy.

Now you might object that the fact that something can be identified as an item in a language (or in music) does not mean that the content of what is said makes sense or is true. If I make a mistake in thinking, it will remain a mistake, even if the linguistic expression can be amended. – Although it might seem this way, I’d like to claim that the contrary is true: The same that goes for music and basic speech comprehension also goes for thought. Thoughts that would seem wrong at the time of utterance can be adjusted in the light of what comes later. Listening to someone, we will do everything to try and make their thoughts come out true. Trying to understand a thought that might sound unintelligible and wrong in the beginning might lead us to new insights, once we find ways in which it rhymes with things we find acceptable. “Ah, that is what you mean!” As Donald Davidson put it, charity is not optional.*** And yes, bringing Davidson into the picture should make it clear that my idea is not new. Thoughts that strike us as odd might turn out fine or even original once we identify a set of beliefs that makes them coherent. — Only among professional philosophers, it seems, are we all too often inclined to make the thoughts of our interlocutors come out false. But seen in analogy to musical improvisation, the talk of mistakes is perhaps just conservatism. Branding an idea as mistaken might merely reveal our clinging to familiar patterns.

___

* Nicer still is this resolution: You hold that F# for half a bar and then add an F# in the bass. All else being equal, the F# will sound just fine (because the C Major can be heard as a D7 add9/11 without the root note) and our ear might expect the resolution to a G Major triad.

** See also Daniel Martin Feige’s Philosophie des Jazz, p. 77, where I found some inspiration for my idea: “Das, was der Improvisierende tut, erhält seinen spezifischen Sinn erst im Lichte dessen, was er später getan haben wird.” (Roughly: “What the improviser does acquires its specific sense only in the light of what he will have done later.”)

*** The basic idea is illustrated by the example at the beginning of an older post on the nature of error.

Why would we want to call people “great thinkers” and cite harassers? A response to Julian Baggini

If you have ever been at a rock or pop concert, you might recognise the following phenomenon: The band on the stage begins playing an intro. Pulsing synths and roaring drums build up to a yet unrecognisable tune. Then the band breaks into the well-known chorus of their greatest hit and the audience applauds frenetically. People become enthusiastic if they recognise something. Thus, part of the “greatness” is owing to the act of recognising it. There is nothing wrong with that. It’s just that people celebrate their own recognition at least as much as the tune performed. I think much the same is true of our talk of “great thinkers”. We applaud recognised patterns. But only applauding the right kinds of patterns and thinkers secures our belonging to the ingroup. Since academic applause signals and regulates who belongs to a group, such applause has a moral dimension, especially in educational institutions. Yes, you guessed right: I want to argue that we need to rethink whom and what we call great.

When we admire someone’s smartness or argument, an enormous part of our admiration is owing to our recognition of preferred patterns. This is why calling someone a “great thinker” is to a large extent self-congratulatory. It signals and reinforces canonical status. What’s important is that this works in three directions: it affirms the status of the figure, it affirms it for me, and it signals this affirmation to others. Thus, it signals where I (want to) belong and demonstrates which nuances of style and content are of the right sort. The more power I have, the more I might be able to reinforce such status. People speaking with the backing of an educational institution can help build canonical continuity. Now the word “great” is conveniently vague. But should we applaud bigots?

“Admiring the great thinkers of the past has become morally hazardous.” Thus opens Julian Baggini’s piece on “Why sexist and racist philosophers might still be admirable”. Baggini’s essay is quite thoughtful and I advise you to read it. That said, I fear it contains a rather problematic inconsistency. Arguing in favour of excusing Hume for his racism, Baggini makes an important point: “Our thinking is shaped by our environment in profound ways that we often aren’t even aware of. Those who refuse to accept that they are as much limited by these forces as anyone else have delusions of intellectual grandeur.” – I agree that our thinking is indeed very much shaped by our (social) surroundings. But while Baggini makes this point to exculpate Hume,* he clearly forgets all about it when he returns to calling Hume one of the “greatest minds”. If Hume’s racism can be excused by his embeddedness in a racist social environment, then surely much of his philosophical “genius” cannot be exempt from being explained through this embeddedness either. In other words, if Hume is not (wholly) responsible for his racism, then he cannot be (wholly) responsible for his philosophy either. So why call only him the “great mind”?

Now Baggini has a second argument for leaving Hume’s grandeur untouched. Moral outrage is wasted on the dead because, unlike the living, they can neither “face justice” nor “show remorse”. While it’s true that the dead cannot face justice, it doesn’t automatically follow that we should not “blame individuals for things they did in less enlightened times using the standards of today”. I guess we do the latter all the time. Even some court systems punish past crimes. Perpetrators of Nazi crimes are still put on trial, even if the system under which those crimes were committed had different standards and is a thing of the past (or so we hope). Moreover, even if the dead cannot face justice themselves, it does make a difference how we remember and relate to the dead. Let me make two observations that I find crucial in this respect:

(1) Sometimes we uncover “unduly neglected” figures. Thomas Hobbes, for instance, has been pushed to the side as an atheist for a long time. Margaret Cavendish is another case of a thinker whose work has been unduly neglected. When we start reading such figures again and begin to affirm their status, we declare that we see them as part of our ingroup and ancestry. Accordingly, we try and amend an intellectual injustice. Someone has been wronged by not having been recognised. And although we cannot literally change the past, in reclaiming such figures we change our intellectual past, insofar as we change the patterns that our ingroup is willing to recognise. Now if we can decide to help changing our past in that way, moral concerns apply. It seems we have a duty to recognise figures that have been shunned, unduly by our standards.**

(2) Conversely, if we do not acknowledge what we find wrong in past thinkers, we are in danger of becoming complicit in endorsing and amplifying the impact of certain wrongs or ideologies. But we have the choice of changing our past in these cases, too. This becomes even more pressing in cases where there is an institutional continuity between us and the bigots of the past. As Markus Wild points out in his post, Heidegger’s influence continues to haunt us if those exposing his Nazism are attacked. Leaving this unacknowledged in the context of university teaching might mean becoming complicit in amplifying the pertinent ideology. That said, the fact that we do research on such figures or discuss their doctrines does not automatically mean that we endorse their views. As Charlotte Knowles makes clear, it is important how we relate to or appropriate the doctrines of others. It’s one thing to appropriate someone’s ideas; it’s another thing to call that person “great” or a “genius”.

Now, how do these considerations fare with regard to current authors? Should we adjust, for instance, our citation practices in the light of cases of harassment or crimes? – I find this question rather difficult and think we should be open to all sorts of considerations.*** However, I want to make two points:

Firstly, if someone’s work has shaped a certain field, it would be both scholarly and morally wrong to lie about this fact. But the crucial question, in this case, is not whether we should shun someone’s work. The question we have to ask is rather why our community recurrently endorses people who abuse their power. If Baggini has a point, then the moral wrongs that are committed in our academic culture are most likely not just the wrongs of individual scapegoats who happen to be found out. So if we want to change that, it’s not sufficient to change our citation practice. I guess the place to start is to stop endowing individuals with the status of “great thinkers” and begin to acknowledge that thinking is embedded in social practices and requires many kinds of recognition.

Secondly, trying to take the perspective of a victim, I would feel betrayed if representatives of educational institutions were simply to continue endorsing such voices and thus enlarging the impact of perpetrators who have harmed others in that institution. And victimhood doesn’t just mean “victim of overt harassment”. As I said earlier, there are intellectual victims of trends or systems that shun voices for various reasons – voices that are only slowly recovered by later generations who wish to amend the canon and change their past accordingly.

So the question to ask is not only whether we should change our citation practices. Rather we should wonder how many thinkers have not yet been heard because our ingroup keeps applauding one and the same “great mind”.

___

* Please note, however, that Hume’s racism was already criticised by Adam Smith and James Beattie, as Eric Schliesser notes in his intriguing discussion of Baggini’s historicism (from 26 November 2018).

** Barnaby Hutchins provides a more elaborate discussion of this issue: “The point is that a neutral approach to doing history of philosophy doesn’t seem to be a possibility, at least not if we care about, e.g., historical accuracy or innovation. Our approaches need to be responsive to the structural biases that pervade our practices; they need to be responsive to the constant threat of falling into this chauvinism. So it’s risky, at best, to take an indiscriminately positive approach towards canonical and non-canonical alike. We have an ethical duty (broadly construed) to apply a corrective generosity to the interpretation of non-canonical figures. And we also have an ethical duty to apply a corrective scepticism to the canon. Precisely because the structures of philosophy are always implicitly pulling us in favour of canonical philosophers, we need to be, at least to some extent, deliberately antagonistic towards them.”

In the light of these considerations, I now doubt my earlier conclusion that “attempts at diversifying our teaching should not be supported by arguments from supposedly different moral status”.

*** See Peter Furlong’s post for some recent discussion.

Heidegger: Uses and Abuse(s)

Following his post ‘‘Heidegger was a Nazi’ What now?’, Martin Lenz invited me to join the discussion.

There has been a lot written about whether we can separate out Heidegger’s philosophical work from his politics, in particular whether Being and Time – which is often seen as his most significant contribution – can be ‘saved’. There is a lot of excellent scholarship in this area (see for example the work of Mahon O’Brien), but this is not my particular field of expertise. Nevertheless, while I do not feel I can speak directly to the historical question, I would say that, personally, when I first encountered Being and Time as an undergraduate, I didn’t read it and think ‘this guy is definitely a Nazi’. However, once you have this knowledge it obviously makes you reflect on the writing, and there are certain points in the text (the issue of destiny etc), which can be read as problematic in light of his Nazism. Although I do wonder to what extent these things are read into the text in light of knowledge of his politics. I would also add that these more problematic aspects are, to my mind, not the key contributions of Being and Time and that what I take to be the more important concepts and ideas can be employed in other contexts without being ‘infected’ by his politics. In this vein, one must also note the influence of Heideggerean ideas, not only on the French tradition, but also for example on Arendt’s work. If Heidegger’s oeuvre is infected by his politics, does this mean that any work, or any thinker, that draws on his ideas is similarly infected? I think not.

Knowledge of Heideggerean ideas can help to enhance our understanding of other key thinkers, as I argue in my paper Beauvoir and Women’s Complicity in their own Unfreedom. Reading the notion of complicity in The Second Sex in light of the notions of falling and fleeing in Being and Time helps to bring about new ways of thinking about complicity that are not available if we just understand the notion of complicity with regard to the Sartrean idea of bad faith, or in light of the Republican tradition.

With regard to the broader debate about philosophers with, to put it mildly, ‘dodgy politics’, I think it is very striking that Frege, for example (whom Martin does note in his original blog post), is so often not mentioned in this context, and that these debates appear to be had almost exclusively in relation to Heidegger and not other thinkers who would also serve to make the same point. I would not in any way want to defend Heidegger’s politics, but I do think appeal to his politics is often used as a way to dismiss his work because people have other reasons for not wanting to engage with it, and this is an easy way to dismiss him. I’ve had people dismiss questions I’ve asked at conferences because (after a couple of follow-up questions) it’s become apparent that I might be using Heideggerean ideas as a touchstone. In the formal discussion they’ve said ‘oh I don’t know anything about him’ and then shut down the discussion, even though knowledge of Heidegger wasn’t necessary to engage with the point. I don’t think that, if the same point was made using, for example, Kantian ideas or something inspired by Descartes, anyone would dream of dismissing it in the same way. I’ve also had senior people tell me ‘you shouldn’t work on Heidegger, you’ll never get a job’. I think this attitude is unhelpful. Yes, his political views are abhorrent, but given his influence on other key thinkers and traditions I don’t think we can just dismiss his work.

I also think there seems to be an underlying assumption that anyone who works on Heidegger just uncritically accepts his ideas and worships him as a god, which is perhaps true of some (bad) Heidegger scholarship. But my own work, which draws on Heideggerean resources to make points in feminist philosophy, does not treat him in this way. In a lot of people who are critical of Heidegger scholarship, one seems to encounter the attitude that anyone who works on him has been inducted into a kind of cult and completely lacks agency, that they can’t separate out the potentially fruitful ideas from those that may be politically compromised. Or that, if a particular concept or idea does have some problematic elements, the scholar in question just wouldn’t be able to see it or critique it.

Aristotle, Hegel, Nietzsche all say some pretty problematic things about women, but this hasn’t stopped feminist philosophers from using their ideas and it doesn’t make the feminist scholarship that arises from this work somehow compromised, tainted, or anti-women. I think the point should be about how we engage with these thinkers and what we can do with them, rather than just dismissing them out of hand (often by people without a sufficient understanding of their work).

Charlotte Knowles, University of Groningen.