Spotting mistakes and getting it right

“Know thyself” is probably a fairly well-known maxim among philosophers. But the maxim we actually live by seems to be one along the lines of “know the mistakes of others”. In calling this out I am of course no better. What prompts me to write about this now is a recent observation, not new but clearly refreshed with the beginning of the academic year: the obvious desire of students to “get it right”, right from the start. But what could be wrong with desiring to be right?

Philosophers these days don’t love wisdom but truth. Now spotting the mistakes of others is often presented as truth-conducive. If we refute and exclude the falsehoods of others, it seems, we’re making progress on our way to finding out the truth. This seems to be the reason why most papers in philosophy build their cases on refuting opposing claims and why most talks are met with unwavering criticism of the view presented. Killing off all the wrongs must leave you with the truth, no? I think this exclusion principle has all sorts of effects, but I doubt that it helps us in making the desired progress. Here is why.

A first set of reasons relates to the pragmatic aspects of academic exchange: I believe that the binary distinction between getting it right or wrong is misleading. More often than not the views offered to us are neither right nor wrong. This is because we have to present views successively, by putting forward a claim and explaining and arguing for it. What such a process exposes is normally not the truth or falsity of the view, but a need for further elaboration: unpacking concepts and consequences, ruling out undesired implications, clarifying assumptions, and so on.

Now you might object that calling a view false is designed to prompt exactly that: clarification and exploration. But I doubt that this is the case. After all, much of academic exchange is driven by perceived reputation: More often than not criticism makes the speaker revert to defensive moves, if it doesn’t paralyse them: Rather than exploring the criticised view, speakers will be tempted to use strategies of immunising their paper against further criticism. If speakers don’t retract, they might at least reduce the scope of their claims and align themselves with more accepted tenets. This, I believe, blocks further exploration and creates an incentive for damage control and conformism. If you doubt this, just go and tell a student (or colleague) that they got it wrong and see what happens.

Still, you might object, such initial responses can be overcome. It might take time, but eventually the criticised speaker will think again and learn to argue for their view more thoroughly. – I wish I could share this optimism. (And I sometimes do.) But I guess the reason that this won’t happen, or not very often, is simply this: What counts in scholarly exchange is the publicly observable moment. Someone criticised by an opponent will see themselves as challenged not only as a representative of a view but as a member of the academic community. Maintaining or restoring our reputation will thus seem vital in contexts in which we consider ourselves as judged and questioned: If we’re not actually graded, under review or in a job talk, we will still anticipate or compare such situations. What counts in these moments is not the truth of our accounts, but whether we convince others of the account and, in the process, of our competence. If you go home defeated, your account will be seen as defeated too, no matter whether you just didn’t muster the courage or concentration to make a more convincing move.

A second set of reasons is owing to the conviction that spotting falsehoods is just that: spotting falsehoods. As such, it’s not truth-conducive. Refuting claims does not (or at least not necessarily) lead to any truth. Why? Spotting a falsehood or problem does not automatically make any opposing claim true. Let me give an example: It is fairly common to call the so-called picture theory of meaning, as presented in Wittgenstein’s Tractatus, a failure. The perhaps intuitive plausibility that sentences function as pictures of states of affairs seems quickly refuted when asking how such pictures can be said to be true or false of a supposed chunk of reality. What do you do? Step out of the picture and compare it with the proper chunk? Haha! – Refuting the picture theory, then, seems to bring us one step closer to an appropriate theory of meaning. But such a dismissal makes us overlook that the picture theory has enormous merits. Once you see it as a theory of representation and stop demanding that it also accounts for the truth and falsity of representations, you begin to realise that it can work very well when combined with a theory of use or a teleosemantic theory. (See e.g. Ruth Millikan’s recontextualisation.) The upshot is that our dismissals often result from overlooking crucial further assumptions that would reinstate the dismissed account.

Now you might object that an incomplete account is still a bad account. Pointing this out is not per se wrong but will eventually prompt a recontextualisation that works. In this sense, you might say, the criticism becomes part of the recontextualised account. – To this I agree. I also think that such dialogues can prompt more satisfying results. But bearing the pragmatic aspects of academic exchange in mind, I think that such results are more likely if we present our criticism for what it is: not as barking at falsehoods but as attempts to clarify, complete or complement ideas.

Now you might object that the difference between barking at falsehoods and attempts to clarify can be seen as amounting just to a matter of style. – But why would you think that this is an objection? Style matters. Much more than is commonly acknowledged.

Experiencing humility: hope for public debate?

When I was young (yes, stop snickering), when I was young I was often amazed at people’s knowledge. Most people had opinions about everything. The government issued a statement about a new policy and my father or one of my uncles already knew that the policy wouldn’t work. This admiration didn’t stop during my adolescence: I remember listening in awe when friends saw through all the motives and consequences of political decisions. How did they figure it all out? – Well, they probably didn’t. Or not much of it. I don’t want to sound condescending but most of us probably don’t understand the implications of political decisions all that well. Yet, judging by the readiness and vehemence of our contributions to public debate, most of us do at least give the impression of relative expertise. If this is correct, there is a disproportion between actual understanding and confidence in our opinions. In what follows, I’d like to suggest that amending this disproportion might hold the key to improving public debate.

According to Kees van den Bos and other social scientists,* this disproportion is one of the crucial factors leading to polarisation in public debate. However, the inverse also seems to be true: if people are asked to explain how certain policies work and experience that they understand these policies less well than they thought, they are likely to exhibit more moderation in their views of these policies. Fernbach et al. (2013) write:

“Across three studies, we found that people have unjustified confidence in their understanding of policies. Attempting to generate a mechanistic explanation undermines this illusion of understanding and leads people to endorse more moderate positions. Mechanistic explanation generation also influences political behavior, making people less likely to donate to relevant advocacy groups. These moderation effects on judgment and decision making do not occur when people are asked to enumerate reasons for their position. We propose that generating mechanistic explanations leads people to endorse more moderate positions by forcing them to confront their ignorance. In contrast, reasons can draw on values, hearsay, and general principles that do not require much knowledge.”

So while I might become increasingly stubborn if you ask me to give reasons for p, I might become more moderate if you ask me to explain how p works. According to the researchers, this is the case because in the latter scenario I am humbled by experiencing the limits of my knowledge. I guess it won’t be too much to ask you to imagine examples. Asking how certain policies of, say, traffic regulation or migration work in practice might even lead politicians themselves to moderation.

What precisely is it that leads to moderation? My hunch is that the effect is produced by experiencing humility. This means that it is vital that the subject in question experiences their lack of knowledge. It is probably no good if I am told that I lack knowledge. (In fact, I believe that this might instil resentment.) The point is that I realise my lack in my own attempt at an explanation. So what I would like to emphasise is that the moderating effect is probably owing to experiencing this lack rather than merely knowing about this lack. Of course, I know that I don’t know how precisely certain policies work. But it’s still quite another thing to experience this ignorance in attempting to explain such policies. In other words, the Socratic attitude alone doesn’t help.

If this effect persists, this finding might indeed help ameliorate conversations and debates. Instead of telling people that they are wrong or asking for reasons, we might simply ask how the proposed idea works. This of course requires humility on the part of all interlocutors. A good start might be debates in philosophy.

____

* I am grateful to Hendrik Siebe, Diego Castro and Leopold Hess for conversations about this work online and offline.

“How would you arrange the deportation of my father?” On responsible (free) speech. A response to Silvia Mazzini

Could you tell me, face to face or in writing, how you would go about having my father deported? – Why, you ask? – Well, maybe you think he is a burden for society. After all, he is quite old by now. So how do you get it arranged? Should some people be sent to fetch him? Perhaps at night? Go on, then! –

You, gentle reader, probably don’t have such desires. But if I follow the political discussions in the Netherlands and other countries, many people want that. Only they don’t tell me personally; they talk about certain groups, not to me.

Ah, it’s not old people, you say, just Muslims? So they don’t want to come for my father? Well, lucky me then… Should it make a difference whether people want to deport my or someone else’s father? Well, it makes a difference, but does it matter? Not much. –– The point I would like to suggest is that we can imagine that certain opinions concern us directly, even if they don’t. In a controversial discussion between two opponents, such acts of imagination can help both interlocutors to make the conversation more personal, concrete, emotional and thus responsible. Following up on my last post, I would like to develop some ideas, then, about how we can turn free speech into responsible speech.

In my last post, I tried to show that our disagreements about the limits of free speech are owing to two different ways of understanding how language works. Ultimately, I suggested that the crucial limit of free speech should be determined by the responsibilities we have as speakers. But I didn’t say much about these responsibilities themselves. Commenting on the post, my colleague Silvia Mazzini suggested that responsibility could be seen as offering the other the ability to respond:

Maybe we could then interpret [responsible freedom of speech] like Levinas did: responsibility is the “ability to respond”. In this sense, freedom of speech would mean that all the people involved in a dialogue are able to respond – that they have the intention to consider the different positions of the others.

This strikes me as the way to go. What I like about this idea in particular is that it doesn’t require us to provide a complicated catalogue of virtues or rules. Rather, the responsibility is imposed through the very fact that the opinion is not voiced as a statement about others but to others.

What’s the big deal, you might ask, does it really make such a difference whether I offer my opinion about a policy regarding a group of people to someone in particular? David Livingstone Smith’s work on dehumanisation made me see one point in particular recurring again and again: Although it might be simple to imagine doing harm to a certain group in the abstract, it is really hard to do something harmful to someone directly in front of you. (That is why dehumanising tactics are employed: it is easier to harm someone if you think of them as not really human.) Arguably, this carries over to speech acts. My hunch is that it is much harder to direct hate speech at someone in particular (rather than speak abstractly about members of a group).

My idea is, then, that it is easier to act as a responsible speaker if you are addressing someone in particular directly. There are a number of reasons for this. Interacting with a concrete person, we are more likely to respond with adequate emotions and empathy, and we have to face the response. Although a face-to-face encounter will be best, I think this will even work in online communication. It makes a difference for me as a writer whether I imagine you, whoever you are, as a concrete person who might frown or agree, or whether I simply toss out statements about abstract ideas, however much they might affect you. The point is, thus, that we shouldn’t always try to amend online debates by being as rational as possible or by cancelling out emotions. Rather, the task would be to facilitate the adequate social emotions necessary for responsible interaction. Addressing others directly should have two consequences: (a) it should be more difficult to objectify and thus to harm the interlocutor; (b) it should invite the other to respond and make me anticipate some response. Thus, if we get people who utter opinions to address people directly in this way, they will speak more responsibly, rendering free speech not a battleground but a possibility for genuine and considerate exchange.

So far, so good. Of course you might have objections, but my worry at this point is not how to justify my idea. Rather, I see the main challenge in implementing it. I think we should give it a try and then see how well it works. So how can we change our conventions? How can we get from talking about people to talking to them? This is an open question, but at this moment I can think of four steps in the relevant contexts:

  • Change speech acts from third-person to second-person sentences: Saying that you should leave this country is much harder than saying that blog readers should leave this country. I’d think twice about what’s going to happen if I did so.
  • People can stand in for targeted people: If you hear someone going on about a religious group, you can respond as if you were targeted. The point is not to lie, but to offer yourself as a possible interlocutor (which might be more effective than just saying that the speaker is a bad person).
  • As a possible interlocutor you can demand that the other (empathetically) imagine your situation: It might make a difference to ask your opponent how she thinks the deportation of your father should be arranged, rather than discussing the rights and wrongs in the abstract.
  • Dehumanising language must be rejected. Of course, there are limits. It is vital to state that, if your interlocutor crosses a red line.

Now you might think that all of this is too difficult. I doubt that. In the face of what we often call political correctness, we have acquired a lot of vocabulary and changed some of our speaking habits. Now we can adjust our imagination and syntax a bit. Of course, this will take time. But I really hope that you and I as well as (other) people in education, in companies, in the press, moderators in the media, citizens in online or analogue discussions gradually train and learn to adjust our language and address people directly. Yes, it will be harder to offer your opinion, but it will also be more fruitful. – At this point, I’m suggesting this and hope for more ideas about means and ways of implementing it. Ideally, we’ll find that this or something like it turns out to be a viable way of amending political discourse.

By the way, this should cut across the entire political spectrum. It has, for instance, become fashionable to engage in what is sometimes called leftist populism or to target the group of “old white males”. Whatever your contention might be, if you want to tell someone like my father that inverse racism isn’t a thing, you won’t get him to respond sensibly if you target him as a member of that group. We act through language. And the way we act in our words is palpable, it affects individuals, and individual people are likely to respond in kind. Verbal attacks affect us, irrespective of the side we think we are on. Thus, whenever we want to make a point that affects others, we should try and address them directly. Conversely, if we encounter problematic opinions, we don’t need to shut them down. To respond on behalf of a targeted addressee, as if you were addressed directly, might be more fruitful in maintaining adequate standards and emotions.

Finally, it goes without saying that I am worst at following my own advice. So please don’t call me out too harshly.

Words as weapons? Free speech requires responsible speech

Whenever I’m asked what sparked my interest in the philosophy of language, I immediately remember two texts that I read almost thirty years ago: one is a short story by Ingeborg Bachmann called “Everything”; the other one is an essay by Václav Havel called “A Word about Words”. Both texts can be read as rather powerful reflections on the social and political dimensions of language. For me, they show a crucial feature of language: language is not merely a medium of describing reality; rather it is interwoven with our actions. Bachmann’s story dispels the illusion that we can freely teach and learn language, irrespective of the social and historical baggage that our verbal categories come with. And when Havel writes that a word can “turn into a baton” that is used to “beat” one’s fellow citizens, this is not entirely metaphorical. A word might carry and pass on the very force that makes someone lift or drop a baton. Thus, a word can hit and harm you. In recent years, these ideas began to haunt me again. If words have such force, then the current appeals to free speech require us to speak responsibly, or so I’d like to suggest in what follows.

Two camps. – Yesterday, the feminist Mona Eltahawy explained that she no longer wants to speak at De Balie, a famous forum of public debate in the Netherlands. The reason was that, after accepting the invitation, she had learned that this forum had formerly hosted a group of speakers who ended up openly discussing the “deportation of Muslims”. Among these speakers were Paul Cliteur, a Dutch academic and active politician in a right-wing party, and Wim van Rooy, a Flemish author with similar political leanings. Already back in the day, the event sparked strong reactions. Some thought that such speech is outright harmful and suggested that this debate triggered associations of the Wannsee Conference. Others thought that the right to free speech entitles us to say anything, or anything as long as it isn’t evidently unlawful. If you’re following the news, you might by now think that this follows a familiar pattern: on the one hand, there are those who deem certain speech acts as harmful and protest against them; on the other hand, there are those who declare that such protests infringe free speech. (See here for an earlier post on misconstruing free speech.) At first glance, then, it looks like we’re dealing with two camps: those who want to regulate speech and those who reject the regulation of (free) speech.

Two camps? – Many people and especially journalists seem to have bought into the idea that we are indeed dealing with two camps, with those who want to restrict and those who don’t want to restrict free speech. But I think that this is a misleading way of plotting the disagreement. It’s not easy to pin down what’s wrong with it, but here’s a try. I think we’re basically dealing with two different ideas of language: let’s call them the action view and the entertainment view.

  1. According to the action view, speech is interwoven with other actions and thus, depending on the kinds of actions, harmful or good. Thinking and speaking are actions, and are to be treated accordingly. If I call someone an asshole, for instance, I act in a certain way and can harm others, even in a way that is recognised by the law that sanctions insults.
  2. According to the entertainment view, speech is a medium in which we entertain certain thoughts: we exchange arguments and hypotheses that are detached from action. Thinking and speaking are decidedly distinct from actions. Ideally, we think and speak before we act or instead of acting.

If we take these perspectives as opposites, we can immediately see why they spur so much disagreement. If I take the entertainment view, free speech and open debate will not be a means of harming others but rather a way of preventing bad action. We can argue instead of hurting or harming each other physically. We can anticipate bad consequences, and stop them from happening. – By contrast, if I endorse the action view, then speech is already a way of possibly harming others. Arguing or insulting can be the beginning or incitement of a chain of related and escalating actions. If I start insulting you, I might subtly begin to legitimise stronger harms, possibly ending up with forms of dehumanisation. In fact, ongoing insults might damage your (mental) health already. The upshot is: the disagreement is not about (free) speech, but about how language actually works.

Differences in degrees. – Now, if you ask which of these views of language is right, I have to say: both, in a way. The relation between language and action is not one of different categories but one of degree. Some language use is clearly action-related or even a form of action; other language use is detached from action or even a replacement of certain acts. So if I insult or sincerely threaten people by verbal means, I act and cause harm. But if I consider a counterfactual possibility or quote someone’s words, the language is clearly detached from action. However, arguably the relation to possible action is what contributes to making language meaningful in the first place. Even if I merely quote an insult, you still understand that quotation in virtue of understanding real insults.

Now what does that mean for the debate between the two camps? The good news is that they both have a point: language allows for action as well as for replacing action, even if these views are degrees on a continuum. This should allow for some progress in the debate between the two camps. But the crucial question is how we can deal with situations where the different emphases of these linguistic features lead to conflicts. As far as the general pattern of the opposition is concerned, I’d try to treat such conflicts in the same way we treat complaints more generally. How do you react if someone says that they feel insulted or intimidated? If you don’t understand what the complaint is targeting, you will probably ask what it is that constitutes the insult or threat, rather than continue with the abuse. Insisting that your right to free speech entitles you to say whatever you like is comparable to hitting someone and then, if they complain, saying that they might as well hit you, too.

Appealing to free speech is just a way of pointing out that language opens the possibility of entertaining certain thoughts, but it is no adequate response to someone complaining about an insult or threat. What many people forget is that freedom comes with responsibility. So, whatever we think we are entitled to through that freedom requires us to exercise that right responsibly. Thus, free speech requires responsible speech.*

That said, the right to free speech is important and should not be infringed easily. So what about the concerns regarding this freedom? The first thing to note is that such freedom is not infringed by protest or criticism. In fact, if Mona Eltahawy protests and cancels her attendance for the reasons given, she exercises her right to free speech. The second thing is that speech acts and other acts happen to have consequences. You can secretly think whatever you like, but if you publicly discuss the deportation of people belonging to a certain religion or race or whatever, you should face the consequences of being publicly called out and sanctioned accordingly. Again, appealing to free speech is not an adequate reaction to such complaints.

But if free speech is of no concern in such situations, when does it pose a concern? First of all, we should ask ourselves this: Who can actually infringe our right to free speech? Generally, it’s people who hold some power over us. So ask yourself whether the notorious student protests or other events that fill the media are really a threat to free speech. As long as people don’t have any power over you, it’s unlikely that they pose a threat to your right to free speech. It’s more likely that they exercise this very right. That said, exercising this right can be done in a sincere or in an insincere manner. And it can be done in a hurtful or threatening way. Spotting the difference between sincere concern and tactics is probably a lifelong exercise.

____

* Here is a follow-up post on how free speech can be turned into responsible speech.

Should contemporary philosophers read Ockham? Or: what did history ever do for us?

If you are a historian of philosophy, you’ve probably encountered the question whether the stuff you’re working on is of any interest today. It’s the kind of question that awakens all the different souls in your breast at once. Your more enthusiastic self might think, “yes, totally”, while your methodological soul might shout, “anachronism ahead!” And your humbler part might think, “I don’t even understand it myself.” When exposed to this question, I often want to say many things at once, and out comes something garbled. But now I’d like to suggest that there is only one true reply to the main question in the title: “No, that’s the wrong kind of question to ask!” – But of course that’s not all there is to it. So please hear me out.

I’m currently revisiting an intriguing medieval debate between William of Ockham, Gregory of Rimini and Pierre D’Ailly on the question of how thought is structured. While I find the issue quite exciting in itself, it’s particularly interesting to see how they treat their different sources, Aristotle and Augustine. While they clearly all know the texts invoked, they emphasise different aspects. Thinking about thought, William harkens back to Aristotle and clearly thinks that it’s structure that matters. By contrast, Gregory goes along with Augustine and emphasises thought as a mental action. – For these authors it was clear that their sources were highly relevant, both as inspiration and authorities. At the same time, they had no qualms about appropriating them for their own uses. – In some respects we make similar moves today, when we call ourselves Humeans or Aristotelians. But since we also have professional historians of philosophy and look back at traditions of critical scholarship, both historians and philosophers are more cautious when it comes to the question of whether some particular author would be “relevant today”.

In view of this question, historians are trained to exhibit all kinds of (often dismissive) gut reactions, while philosophers working on contemporary themes don’t really have time for our long-winded answers. And so we started talking past each other, happily ever after. That’s not a good thing. So here is why I think the question of whether any particular author could inform or improve current debates is wrongheaded.

Of course everyone is free to read Ockham. But I wouldn’t recommend doing it, if you’re hoping to enrich the current debates. Yes, Ockham says a lot of interesting things. But you’d need a long time to translate them into contemporary terminology and still more time to find an argument that will look like an outright improvement of a current argument.* – My point is not that Ockham is not an interesting philosopher. My point is that Ockham (and many other past philosophers) doesn’t straightforwardly speak to any current concerns.

However … Yes, of course there was going to be a “however”! However, while we don’t need to ask whether any particular author is relevant today, we should be asking a different question. That Ockham doesn’t speak to current concerns doesn’t mean that historians of philosophy (studying Ockham or others) have nothing to say about current concerns. So it’s not that Ockham should be twisted to speak to current concerns; rather historians and philosophers should be talking to each other! So the right question to ask is: how can historians speak to current issues?

The point is that historians study, amongst other things, debates on philosophical issues. “You say tomahto, I say tomato”, that sort of thing. Debates happen now as they happened then. What I find crucial is that studying debates reveals features that can be informative for understanding current debates. There are certain conditions that have to be met for a debate to arise. We’re not just moving through the space of reasons. Debates occur or decline because of various factors. What we find conceptually salient can be driven by available texts, literary preferences, other things we hold dear, theological concerns, technological inventions (just think of various computer models), arising political pressures (you know what I mean), linguistic factors (what would happen if most philosophers were to write in Dutch?), and a lot of other factors, be they environmental, sociological, or what have you. Although we like to think that the pursuit of truth is central, it’s by far not the only reason why debates arise and certain concepts are coined and stick around, while others are forgotten. Although contingent, such factors are recurrent. And this is something that affects our understanding of current as well as past debates. The historian can approach current debates as a historian in the same way that she can approach past debates. And this is, I submit, where historians can truly speak to current concerns.

Coming back to the debate I mentioned earlier, there is another issue (besides the treatment of sources) that I find striking. In their emphasis on Augustine, Gregory and Peter show a transition from a representational to an action model of thought. Why does this transition occur? Why do they find it important to emphasise action over representation against William? – Many reasons are possible. I won’t go into them now. But this might be an interesting point of comparison to the current debates over enactivism versus certain sorts of representationalism. Why do we have that debate now? Is it owing to influences like Ryle and Gibson? Certainly, they are points of (sometimes implicit) reference. Are there other factors? Again, while I think that these are intriguing philosophical developments, our understanding of such transitions and debates remains impoverished if we don’t look for other factors. Studying past transitions can reveal recurrent factors in contemporary debates. One factor might be that construing thoughts as acts rather than mere representations discloses their normative dimensions. Acts are something for which we might be held responsible. There is a lot more to be said. For now suffice it to say that it is in comparing debates and uncovering their conditions that historians of philosophy qua historians can really contribute.

At the same time, historians might also benefit from paying more attention to current concerns. Not only to stay up to date, but also to sharpen their understanding of historical debates.** As we all know, historical facts don’t just pop up. They have to be seen. But this seeing is of course a kind of seeing as. Thus, if we don’t just want to repeat the historiographical paradigms of, say, the eighties, it certainly doesn’t hurt if our seeing is trained in conversation with current philosophers.

____

* That said, this is of course an open question. So I’m happy to be shown a counterexample.

** More on the exchange between philosophers and historians of philosophy can be found in my inaugural lecture.

I don’t know what I think. A plea for unclarity and prophecy

Would you begin a research project if there were just one more day left to work on it? I guess I wouldn’t. Why? Well, my assumption is that the point of a research project is that we improve our understanding of a phenomenon. Improvement seems to be inherently future-directed, meaning that we understand x a bit better tomorrow than today. Therefore, I am inclined to think that we would not begin to do research, had we not the hope that it might lead to more knowledge of x in the future. I think this is not only true of research but of much thinking and writing in general. We wouldn’t think, talk or write certain things, had we not the hope that this leads to an improved understanding in the future. You might find this point trivial. But a while ago it began to dawn on me that the inherent future-directedness of (some) thinking and writing has a number of important consequences. One of them is that we are not the (sole) authors of our thoughts. If this is correct, it is time to rethink our ways of evaluating thoughts and their modes of expression. Let me explain.

So why am I not the (sole) author of my thoughts? Well, I hope you all know variations of the following situation: You try to express an idea. Your interlocutor frowns and points out that she doesn’t really understand what you’re saying. You try again. The frowning continues, but this time she offers a different formulation. “Exactly”, you shout, “this is exactly what I meant to say!” Now, who is the author of that thought? I guess it depends. Did she give a good paraphrase or did she also bring out an implication or a consequence? Did she use an illustration that highlights a new aspect? Did she perhaps even rephrase it in such a way that it circumvents a possible objection? And what about you? Did you mean just that? Or do you understand the idea even better than before? Perhaps you are now aware of an important implication. So whose idea is it now? Hers or yours? Perhaps you both should be seen as authors. In any case, the boundaries are not clear.

In this sense, many of my thoughts are not (solely) authored by me. We often try to acknowledge as much in forewords and footnotes. But some consequences of this fact might be more serious. Let me name three: (1) There is an obvious problem for the charge of anachronism in history of philosophy (see my inaugural lecture). If future explications of thoughts can be seen as improvements of these very thoughts, then anachronistic interpretations should perhaps not merely be tolerated but encouraged. Are Descartes’ Meditations complete without the Objections and Replies? Can Aristotle be understood without the commentary traditions? Think about it! (2) Another issue concerns the identity of thoughts. If you are a semantic holist of sorts you might assume that a thought is individuated by numerous inferential relations. Is your thought that p really what it is without it entailing q? Is your thought that p really intelligible without seeing that it entails q? You might think so, but the referees of your latest paper might think that p doesn’t merit publication without considering q. (3) This leads to the issue of acceptability. Whose inferences or paraphrases count? You might say that p, but perhaps p is not accepted in your own formulation, while the expression of p in your supervisor’s form of words is greeted with great enthusiasm. In a similar spirit, Tim Crane has recently called for a reconsideration of peer review. Even if some of these points are controversial, they should at least suggest that authorship has rather unclear boundaries.

Now the fact that thoughts are often future-directed and have multiple authors has, in turn, a number of further consequences. I’d like to highlight two of them by way of calling for some reconsiderations: a due reconsideration of unclarity and what Eric Schliesser calls “philosophic prophecy”.*

  • A plea for reconsidering unclarity. Philosophers in the analytic tradition pride themselves on clarity. But apart from the fact that the recognition of clarity is necessarily context-dependent, clarity ought to be seen as the result of a process rather than a feature of the thought or its present expression. Most texts that are considered original or important, not least in the analytic tradition, are hopelessly unclear when read without guidance. Try Russell’s “On Denoting” or Frege’s “On Sense and Reference” and you know what I mean. Or try some other classics like Aristotle’s “De anima” or Hume’s “Treatise”. Oh, your own papers are exempt from this problem? Of course! Anyway, we all know this: we begin with a glimpse of an idea. And it’s the frowning of others that either makes us commit it to oblivion or try an improvement. But if this is remotely true, there is no principled reason to see unclarity as a downside. Rather it should be seen as a typical if perhaps early stage of an idea that wants to grow.
  • A plea for coining concepts or philosophic prophecy. Simplifying an idea by Eric Schliesser, we should see both philosophy and history of philosophy as involved in the business of coining concepts that “disclose the near or distant past and create a shared horizon for our philosophical future.”* As is well known, some authors (such as Leibniz, Kant or Nietzsche) have sometimes deliberately written for future audiences rather than present ones, trying to pave conceptual paths for future dialogues between religions, metaphysicians or Übermenschen. For historians of philosophy in particular this means that history is never just an antiquarian enterprise. By offering ‘translations’ and explanations we can introduce philosophical traditions to the future or silence them. In this sense, I’d like to stress, for example, that Ryle’s famous critique of Descartes, however flawed historically, should be seen as part of Descartes’ thought. In the same vein, Albert the Great or Hilary Putnam might be said to bring out certain versions of Aristotle. This doesn’t mean that they didn’t have any thoughts of their own. But their particular thoughts might not have been possible without Aristotle, who in turn might not be intelligible (to us) without the later developments. In this sense, much if not all philosophy is a prophetic enterprise.

If my thoughts are future-directed and multi-authored in such ways, this also means that I often couldn’t know at all what I actually think, if it were not for your improvements or refinements. This is of course one of the lessons learned from Wittgenstein’s so-called private language argument. But it concerns not only the possibility of understanding and knowing; a fortiori, it also concerns understanding our own public language and thought. As I said earlier, I take it to be a rationality constraint that I must agree to some degree with others in order to understand myself. This means that I need others to see the point I am trying to make. If this generalises, you cannot know thyself without listening to others.

___

* See Eric Schliesser, “Philosophic Prophecy”, in Philosophy and Its History, 209.


Abstract cruelty. On dismissive attitudes

Do you know the story about the PhD student whose supervisor overslept and refused to come to the defence, saying he had no interest in such nonsense? – No? I don’t know it either, by which I mean: I don’t know exactly what happened. However, some recurrent rumours have it that on the day of the PhD student’s defence, the supervisor didn’t turn up and was called by the secretary. After admitting that he overslept, he must indeed have said that he didn’t want to come because he wasn’t convinced that the thesis was any good. Someone else took over the supervisor’s role in the defence, and the PhD was ultimately conferred. I don’t know the details of the story but I have a vivid imagination. There are many aspects to this story that deserve attention, but in the following I want to concentrate on the dismissive attitude of the supervisor.

Let’s face it, we all might oversleep. But what on earth brings someone to say that they are not coming to the event because the thesis isn’t any good? The case is certainly outrageous. And I keep wondering why an institution like a university lets a professor get away with such behaviour. As far as I know the supervisor was never reprimanded, while the candidate increasingly went to bars rather than the library. I guess many people can tell similar stories, and we all know about the notorious discussions around powerful people in philosophy. Many of those discussions focus on institutional and personal failures or power imbalances. But while such points are doubtlessly worth addressing, I would like to focus on something else: What is it that enables such dismissive attitudes?

Although such and other kinds of unprofessional behaviour are certainly sanctioned too rarely, we have measures against them in principle. Oversleeping and refusing to fulfil one’s duties can be reprimanded effectively, but what can we do about the most damning part of it: the dismissive attitude according to which the thesis was just no good? Of course, using it as a reason to circumvent duties can be called out, but the problem is the attitude itself. I guess that all of us think every now and then that something is so bad that, at least in principle, it isn’t worth getting up for. What is more, there is in principle nothing wrong with finding something bad. Quite the contrary, we have every reason to be sincere interlocutors and call a spade a spade, and sometimes this involves severe criticism.

However, some cases do not merely constitute criticism but acts of cruelty. But how can we distinguish between the two? I have to admit that I am not entirely sure about this, but genuine criticism strikes me as an invitation to respond, while in the case under discussion the remark about the quality of the thesis was given as a reason to end the conversation.* Ending a conversation or dismissing a view like that is cruel. It leaves the recipient of the critique with no means to answer or account for their position. Of course, sometimes we might have good reasons for ending a conversation like that. I can imagine political contexts in which I see no other way than turning my back on people. But apart from the fact that a doctoral defence shouldn’t be such an occasion, I find it suspicious if philosophers end conversations like that. What is at stake here?

First of all, we should note that this kind of cruelty is much more common than meets the eye. Sure, we rarely witness that a supervisor refuses to turn up for a defence. But anyone sitting in on seminars, faculty talks or lectures will have occasion to see that sometimes criticism is offered not as an invitation for response, but as a dismissal that is only thinly disguised as an objection. How can we recognise such a dismissal? The difference is that an opinion is not merely criticised but declared a waste of time. This and similar slogans effectively end a conversation. Rather than addressing what one might find wanting, the opponent’s view will be belittled and portrayed as not worth taking seriously. As I see it, such speech acts are acts of cruelty because they are always (even if tacitly) ad hominem. The conjunction of critical remarks and of ending a conversation shows that it is not merely the opinion that is rejected but that there is no expectation that the argument could be improved by continuing the conversation. In this sense, ending a conversation is owing to a severe lack of charity, ultimately dismissing the opponent as incapable or even irrational.

You would think that such behaviour gets called out quickly, at least among philosophers. But the problem is that this kind of intellectual bullying is actually rather widespread: Whenever we say that an opinion isn’t worth listening to, when we say, for instance, that analytical or continental philosophy is just completely wrongheaded or something of the kind, we are at least in danger of engaging in it.** Often this goes unnoticed because we move within circles that legitimise such statements. Within such circles we enjoy privilege and status; outside them, our positions are belittled as a waste of time. And the transition from calling something bad to calling it a waste of time is rather smooth, if no one challenges such a speech act.

Having said as much, you might think I am rather pessimistic about the profession. But I am not. In fact I think there is a straightforward remedy. Decouple criticisms from ending conversations! But now you might respond that sometimes a conversation cannot continue because we really do not share standards of scholarship or argument. And we certainly shouldn’t give up our standards easily. – I totally agree, but I think that rather than being dismissive we might admit that we have a clash of intuitions. Generally speaking, we might distinguish between two kinds of critical opposition: disagreements and clashes of intuition. While disagreements are opposing views that can be plotted on a common ground, clashes of intuition mark the lack of relevant common ground. In other words, we might distinguish between internal and external criticism, the latter rejecting the entire way of framing an issue. I think that it is entirely legitimate to utter external criticism and signal such a clash. It is another way of saying that one doesn’t share sufficient philosophical ground. But it also signals that the opposing view might still deserve to be taken seriously, provided one accepts different premises or priorities.*** Rather than bluntly dismissing a view because one feels safeguarded by the standards of one’s own community, diagnosing a clash respects that the opponent might have good reasons and ultimately engages in the same kind of enterprise.

The behaviour of the supervisor who overslept is certainly beyond good and evil. Why do I find this anecdote so striking? Because it’s so easy to call out the obvious failure on the part of the supervisor. It’s much harder to see how we or certain groups are complicit in legitimising the dismissive attitude behind it. While we might be quick to call out such brutality, the damning dismissive attitude is more widespread than meets the eye. Yet, it could be remedied by admitting to a clash of intuitions, but that requires some careful consideration of the nature of the clash and perhaps the decency of getting out of bed on time.

_____

* This post by Regina Rini must have been at the back of my mind when I thought about conversation-enders; not entirely the same issue, but a great read anyway.

**A related instance can be to call a contemporary or a historical view “weird”. See my post on relevance and othering.

*** Examples of rather respectable clashes are dualism vs. monism or representationalism vs. inferentialism. The point is that the debates run into a stalemate, and picking a side is a matter of decision rather than argument.