Should you be ashamed of flying? Moral shortcuts in the call-out culture

Do you still travel by plane? Have you recently suggested going to a steak house? Are you perhaps an old white man? – Then you’ve probably found yourself being called out at some point. Doing these things or having certain traits means that your actions are addressed as moral failures. If you are involved in some sort of ecological or social activism, you might think that you’re off the hook, or that you compensate a bit at least. But then you can still be called out as a hypocrite. Shame on you! – If you think I’m trying to ridicule calling out moral failures, I’ll have to disappoint you. On the whole, I think the fact that we publicly deliberate about moral problems is a good thing. Naming problems and calling out people for committing problematic actions is part of that process. That this process is fallible does not in itself discredit it. However, there is an element in that process that I have begun to worry about: what I’d like to call moral shortcuts. Using a moral shortcut means taking an action, the expression of a view or even a trait as an indicator of a morally relevant intention or attitude. What makes my acts morally dubious is not the act itself but certain intentions or their lack. It’s not my suggestion of going to a steak house as such, but my not caring about the well-being of animals or the climate crisis that you want to call out. You might assume that one indicates the other, but this indication relation is tenuous. After all, I might have suggested going there merely because it was raining, not in order to consume meat. In the following, I’d like to suggest that, while calling out moral failures is an important practice, ascribing moral failures on tenuous grounds is morally dubious in itself.

Let’s begin by looking at moral shortcuts again. Does someone’s flying indicate a morally relevant intention? Of course, we are prone to suppose a close connection between action and intention. Arguably, a behaviour or process only is an action in virtue of an intention. What makes my taking a flight that kind of action is that I have some pertinent intention, say, of going to a place, getting on the plane etc. Conversely, if a refugee is forced onto a plane to be returned to their country of origin, you don’t want to say that they “took a flight to Albania”. Accordingly, you won’t call out refugees for not caring about the climate crisis. Moreover, the intention of taking a flight is not necessarily an indication of a general attitude about the climate or even flying. So even if my action can be correctly called indicative of a pertinent intention, this might not be morally significant, be it because I lack alternatives or for some other reason. After all, the reason for calling out such acts is not to shame or sanction a singular intention. What we’re after is a general attitude, allowing, for instance, for the prediction of certain future acts. That someone gets onto a flight is as such not morally significant. It’s the general attitude of not caring that we might find blameworthy. But while it might be correct to assume that certain actions can be indicative of intentions that, in turn, can be indicative of general attitudes, such inferences are fallible. Now the fallibility as such is not a problem. But there are two problematic issues I want to highlight. The first is about the nature of inferential shortcuts; the second is about the moral status of relying on such shortcuts:

  • As pointed out in my last post, we’re not only making tenuous judgments. Rather, we often use actions or expressions of views as proxies of moral failures: instead of calling out the attitude, we call out the acts or traits as such. Short of further evidence, the acts of flying or of suggesting eating meat are themselves treated as moral failures. As Justin E. H. Smith pointed out, this amounts to following associative patterns of prediction. Making moral judgments is like shopping with Amazon: “People who like to eat meat also fail to care about the climate crisis.” In addition to their fallibility, the focus on actions also deprives us of room for deliberation. Unlike intentions, actions are often exclusive, inviting strong friend-enemy distinctions and thus polarisation: if I do A, I can’t do B, can I? – But it is simply wrong to identify an action with a general attitude, for an action can be expressive of several and even disparate attitudes. Yet, especially in online communication, we are prone to make such shortcuts and thus have our exchanges spiral into heated black-and-white accusations.
  • However, despite their fallibility, we often have to rely on quick inferences. Moral wrongdoings can put us in severe danger. So it is understandable that certain actions raise suspicions. Especially when we are in immediate danger, inferential shortcuts might seem close to hardwired: Someone is aggressively running after you? You probably won’t wait for further cues to estimate their intentions. But it’s one thing to seek protection from harm; it’s quite another thing to call out and shame a person as a moral suspect or perpetrator when there is no immediate danger to avert. If you have no more evidence than the moral shortcut, then the act of shaming someone is itself a moral transgression. Calling someone bad names based on individual acts, beliefs or traits such as their skin colour is rightly seen as morally blameworthy. This is, amongst other things, why we oppose racism, sexism and other transgressions based on shortcuts. My point is that such quick and purely associative inferences are also at work when we shame others without further evidence.

Given our globalised online culture, we often don’t have much more to go on than our shortcuts. While it is important to discuss actions as possible outcomes of structural problems, as sources of harm and danger, or as indicative of morally significant attitudes, it is equally important not to glide from such deliberation into unwarranted shaming. In the face of public deliberation, we can monitor, question and adjust our behaviour if need be. In the face of public shaming, however, we will be more inclined to run into arguments about hypocrisy.

On the other hand, there is the equally problematic tendency to mistake public deliberation about the moral status of certain actions for being blamed. But if someone expresses the idea that flying is morally blameworthy, they are not automatically blaming individuals for such actions. The assumption that you are personally blamed because someone calls out bad attitudes as indicated by a certain kind of behaviour is unfounded and based on an inverse shortcut. Likewise, whatever is called out by the ‘old white men’ or boomer meme does not automatically translate into shaming individuals. Such memes are indicative of structural problems. Put in a nutshell, public deliberation is not public shaming. However, the tricky thing is that such deliberation can glide into shaming if people help themselves to moral shortcuts.

That said, we will continue to rely on shortcuts. My point is not to rid ourselves of them, but to restrict their scope. At the same time, this reliance on shortcuts increases the significance of what is often, pejoratively, called symbol politics, tokenism and virtue signalling. We might think that such symbol politics is merely a form of appeasement or whitewashing, pretence or covering up. I doubt it. In times of increasing reliance on moral shortcuts, we often have nothing but symbols, tokens or signals to go on. We need them, but we equally need to be aware that they come with fallible tacit inferences.

Love, crime, sincerity and normality. Or: sameness claims in history

How do the things mentioned in the title hang together? – Read on, then! Think about this well known illusion: You see a stick in the water; the stick seems to be bent. What can you do to check whether it is really bent? – Knowing that water influences visual perception, you can change the conditions: You take it out of the water and realise that it is straight. Taking it out also allows for confirmation through a different sense modality: Touching the stick, you can compare the visual impression with the tactile one. Checking sense modalities and/or conditions against one another establishes an agreement in judgment and thus objectivity. If you only had the visual impression of the stick in the water, you could not form an objective judgment. For all you knew, the stick would be bent.

Now, objectivity is nice to have. But it requires a crucial presupposition that we have not considered so far: that the different perceptions are perceptions of the same thing. Identity assumptions about perceptual objects come easily. But, in principle, they could be challenged: How do you know that what you touch really is the same thing as the one you see? Normally, you don’t ask that question. You presuppose that it’s the same thing. Of course, you might theorise about a wicked friend exchanging the sticks when you aren’t looking, but this is not the issue now. We need that presupposition; otherwise our world would fall apart. Cutting a long story short, to ‘have’ our world we need at least two things, then: (1) agreement in our tacit judgments (about perceptions) and agreement with the judgments of others: so when someone says it’s raining, that judgment should agree with our perceptual judgments: “it’s raining” must agree with the noise we hear of the drops hitting the rooftop and the drops we see hitting the window; (2) we must presuppose that all these judgments concern the same thing: the rain.

Now all hell breaks loose when such judgments are consistently challenged. What is it I hear, if not the rain? What do you mean when you say “it’s raining”, if not that it’s raining? Are you talking figuratively? Are you not sincere? – One might begin to distrust the speaker or even one’s senses (or the speaker’s senses). It might turn out that the sameness was but a presupposition. (Oh, and what guided the comparison between touch and vision in the first place? How do I know what it feels like to touch a thing looking like ‘that’? Best wishes from Mr Molyneux …)

Presuppositions about sameness and challenging them: this provides great plots for stories about love, crime, sincerity and normality. I leave it to your imagination to fill in the gaps now. Assumptions about sameness figure in judgments about sincerity, about objects, persons, about perceptions, just about everything. (Could it turn out that the Morning Star is not the Evening Star, after all?) It’s clear that we need such assumptions if we don’t want to go loopy, and it’s palpable what might happen if they are not confirmed. Disagreement in judgment can hurt and upset us greatly.

No surprise then that we read philosophical texts with similar assumptions. If your colleague writes a text entitled “on consciousness” or “on justice” you make assumptions about these ideas. Are these assumptions confirmed when you pick up a translation: “De conscientia” or “Über Bewusstsein”? Hmmm, does the Latin match? Let’s see! What you look for, at least when your suspicion is raised, is confirmation about the topic: Does it match what you take consciousness to be? But hang on! Perhaps you should check your linguistic assumptions first? Is it a good translation?

What you try to track is sameness, by tracking agreement in judgments about different kinds of facts. Linguistic facts have to match, but so do assumptions about the topic. Now a new problem emerges: It might be that the translation is a match, but that you genuinely disagree with your colleague about what consciousness is. Or it might be that you agree about consciousness, but that the translation is incorrect. – How are you going to find out which disagreement actually obtains? – You can ask your colleague: What do you mean by “conscientia”? She then tells you that she means that conscientia is given if p and q obtain. You might now disagree: I think consciousness obtains when p and r obtain. Now you have a disagreement about the criteria for consciousness. – Really? Perhaps you now have a disagreement about what “consciousness” means, or a disagreement about what “conscientia” means. How do you figure that out? Oh, look into a canonical book on consciousness! – Let’s assume it even notes certain disagreements: What are the disagreements about?

I guess the situation is not all that different when we read historical texts. Perhaps a bit worse, actually. We just invoke some more ways of establishing sameness: the so-called context. What is context? Let’s say we invoke a bunch of other texts. So we look at “conscientia” in Descartes. Should we look at Augustine? Some contemporaries? At Dennett? At some scholastic authors? Paulus? The Bible? How do we determine which context is the right one for establishing sameness? And is consciousness even a thing? A natural kind about which sameness claims can be well established? – Oh, and was Descartes sincere when he introduced God in the Meditations?

Sometimes disagreements among historians and philosophers remind me of the question which interpretation of a piece of music is the proper one. There is a right answer: it’s whichever interpretation you’ve listened to first. Everything else will sound more or less off, different in any case. That’s where all your initial presuppositions were rooted. Is it the same piece as the later interpretations? Is it better? How? Why do I like it? How do I recognise it as the same or similar? And I need a second coffee now!

I reach for my cup and find the coffee in there lukewarm – is it really my coffee, or indeed coffee?

____

Whilst I’m at it: Many thanks to all the students in my course on methodology in the history of philosophy, conveniently called “Core Issues: Philosophy and Its Past”. The recent discussions were very intriguing again. And over the years, the participants in this course inspired a lot of ideas going into this blog.

Ugly ducklings and progress in philosophy

Agnes Callard recently gave an entertaining interview at 3:16 AM. Besides her lovely list of views that should count as much less controversial than they do, she made an intriguing remark about her book:

“I had this talk on weakness of will that people kept refuting, and I was torn between recognizing the correctness of their counter-arguments (especially one by Kate Manne, then a grad student at MIT), and the feeling my theory was right. I realized: it was a bad theory of weakness of will, but a good theory of another thing. That other thing was aspiration. So the topic came last in the order of discovery.”

Changing the framing or framework of an idea might resolve seemingly persisting problems and make it shine in a new and favourable light. Reminded of Andersen’s fairy tale in which a duckling is considered ugly until it turns out that the poor animal is actually a swan, I’d like to call this the ugly duckling effect. In what follows, I’d like to suggest that this might be a good, if underrated, form of making progress in philosophy.

Callard’s description stirred a number of memories. You write and refine a piece, but something feels decidedly off. Then you change the title or topic or tweak the context ever so slightly and, at last, everything falls into place. It might happen in a conversation or during a run, but you’re lucky if it happens at all. I know all too well that I abandoned many ideas before I eventually and accidentally stumbled on a change of framework that restored (the reputation of) the idea. As I argued in my last post, all too often criticism in professional settings provides incentives to tone down or give up on the idea. Perhaps unsurprisingly, many criticisms focus on the idea or argument itself, rather than on the framework in which the idea is to function. My hunch is that we should pay more attention to such frameworks. After all, people might stop complaining about the quality of your hammer if you tell them that it’s actually a screwdriver.

I doubt that there is a precise recipe for doing this. I guess what helps most are activities that help you tweak the context, topic or terminology. This might be achieved by playful conversations or even by diverting your attention to something else. Perhaps a good start is to think of precedents in which this happened. So let’s just look at some ugly duckling effects in history:

  • In my last post I already pointed to Wittgenstein’s picture theory of meaning. Recontextualising this as a theory of representation and connecting it to a use theory or a teleosemantic account restored the picture theory as a component that makes perfect sense.
  • Another precedent might be seen in the reinterpretations of Cartesian substance dualism. If you’re unhappy with the interaction problem, you might see the light when, following Spinoza, you reinterpret the dualism as a difference of aspects or perspectives rather than of substances. All of a sudden you can move from a dualist framework to monism but retain an intuitively plausible distinction.
  • A less well known case is the series of reinterpretations of Ockham’s theory of mental language, which has been seen as a theory of ideal language, a theory of logical deep structure, a theory of angelic speech etc.

I’m sure the list is endless and I’d be curious to hear more examples. What’s perhaps important to note is that we can also reverse this effect and turn swans into ugly ducklings. This means that we use the strategy of recontextualisation also when we want to debunk an idea or expose it as problematic:

  • An obvious example is Wilfrid Sellars’ myth of the given: Arguing that reference to sense data or other supposedly immediate elements of perception cannot serve as a foundation or justification of knowledge, Sellars dismissed a whole strand of epistemology.
  • Similarly, Quine’s myth of the museum serves to dismiss theories of meaning invoking the idea that words serve as labels for (mental) objects.
  • Another interesting move can be seen in Nicholas of Cusa’s coincidentia oppositorum, restricting the principle of non-contradiction to the domain of rationality and allowing for the claim that the intellect transcends this domain.

If we want to assess such dismissals in a balanced manner, it might help to look twice at the contexts in which the dismissed accounts used to make sense. I’m not saying that the possibility of recontextualisation restores or relativises all our ideas. Rather I think of this option as a tool for thinking about theories in a playful and constructive manner.

Nevertheless, it is crucial to see that the ugly duckling effect works both ways, to dismiss and to restore ideas. In any case, we should try to consider a framework in which the ideas in question make sense. And sometimes dismissal is the way to go.

At the end of the day, it could be helpful to see that the ugly duckling effect might not be owing to the duckling actually being a swan. Rather, we might be confronted with a duck-swan or a duck-rabbit.

Spotting mistakes and getting it right

“Know thyself” is probably a fairly well known maxim among philosophers. But the maxim we live by rather seems to be one along the lines of “know the mistakes of others”. In calling this out I am of course no better. What prompts me to write about this now is a recent observation, not new but clearly refreshed with the beginning of the academic year: it is the obvious desire of students to “get it right”, right from the start. But what could be wrong with desiring to be right?

Philosophers these days don’t love wisdom but truth. Now spotting the mistakes of others is often presented as truth-conducive. If we refute and exclude the falsehoods of others, it seems, we’re making progress on our way to finding out the truth. This seems to be the reason why most papers in philosophy build their cases on refuting opposing claims and why most talks are met with unwavering criticism of the view presented. Killing off all the wrongs must leave you with the truth, no? I think this exclusion principle has all sorts of effects, but I doubt that it helps us in making the desired progress. Here is why.

A first set of reasons relates to the pragmatic aspects of academic exchange: I believe that the binary distinction between getting it right or wrong is misleading. More often than not the views offered to us are neither right nor wrong. This is owing to the fact that we have to present views successively, by putting forward a claim and explaining and arguing for it. What such a process exposes is normally not the truth or falsity of the view, but a need for further elaboration: by unpacking concepts and consequences, ruling out undesired implications, clarifying assumptions etc.

Now you might object that calling a view false is designed to prompt exactly that: clarification and exploration. But I doubt that this is the case. After all, much of academic exchange is driven by perceived reputation: More often than not criticism makes the speaker revert to defensive moves, if it doesn’t paralyse them: Rather than exploring the criticised view, speakers will be tempted to use strategies of immunising their paper against further criticism. If speakers don’t retract, they might at least reduce the scope of their claims and align themselves with more accepted tenets. This, I believe, blocks further exploration and sets an incentive for damage control and conformism. If you doubt this, just go and tell a student (or colleague) that they got it wrong and see what happens.

Still, you might object, such initial responses can be overcome. It might take time, but eventually the criticised speaker will think again and learn to argue for their view more thoroughly. – I wish I could share this optimism. (And I sometimes do.) But I guess the reason that this won’t happen, or not very often, is simply this: What counts in scholarly exchange is the publicly observable moment. Someone criticised by an opponent will see themselves as challenged not only as a representative of a view but as a member of the academic community. Maintaining or restoring our reputation will thus seem vital in contexts in which we consider ourselves judged and questioned: If we’re not actually graded, under review or in a job talk, we will still anticipate or imagine such situations. What counts in these moments is not the truth of our accounts, but whether we convince others of the account and, in the process, of our competence. If you go home defeated, your account will be seen as defeated too, no matter whether you just didn’t muster the courage or concentration to make a more convincing move.

A second set of reasons is owing to the conviction that spotting falsehoods is just that: spotting falsehoods. As such, it’s not truth-conducive. Refuting claims does not (or at least not necessarily) lead to any truth. Why? Spotting a falsehood or problem does not automatically make any opposing claim true. Let me give an example: It is fairly common to call the so-called picture theory of meaning, as presented in Wittgenstein’s Tractatus, a failure. The perhaps intuitive plausibility that sentences function as pictures of states of affairs seems quickly refuted when asking how such pictures can be said to be true or false of a supposed chunk of reality. What do you do? Step out of the picture and compare it with the proper chunk? Haha! – Refuting the picture theory, then, seems to bring us one step closer to an appropriate theory of meaning. But such a dismissal makes us overlook that the picture theory has enormous merits. Once you see it as a theory of representation and stop demanding that it also accounts for the truth and falsity of representations, you begin to realise that it can work very well when combined with a theory of use or a teleosemantic theory. (See e.g. Ruth Millikan’s recontextualisation.) The upshot is that our dismissals often result from overlooking crucial further assumptions that would reinstate the dismissed account.

Now you might object that an incomplete account is still a bad account. Pointing this out is not per se wrong and will eventually prompt a recontextualisation that works. In this sense, you might say, the criticism becomes part of the recontextualised account. – With this I agree. I also think that such dialogues can prompt more satisfying results. But bearing the pragmatic aspects of academic exchange in mind, I think that such results are more likely if we present our criticism for what it is: not as barking at falsehoods but as attempts to clarify, complete or complement ideas.

Now you might object that the difference between barking at falsehoods and attempts to clarify can be seen as amounting just to a matter of style. – But why would you think that this is an objection? Style matters. Much more than is commonly acknowledged.

Do rejections of our claims presuppose that we are abnormal?

Discussions about meaning and truth are often taken as merely theoretical issues in semantics. But as soon as you consider them in relation to interactions between interlocutors, it’s clear that they are closely related to our psychology. In what follows, I’d like to suggest that people questioning our claims might in fact be questioning whether we are normal people. Sounds odd? Please hear me out. Let’s begin with a well known issue in semantics:

Imagine you’re a linguist, studying a foreign language of a completely unknown people. You’re with one of the speakers of that language when a white rabbit runs past. The speaker says “gavagai”. Now what does “gavagai” mean?

According to Quine, who introduced the gavagai example, the expression could mean anything. It might mean: “Look, there’s a rabbit” or “Lovely lunch” or “That’s very white” or “Rabbithood instantiated”. The problem is that you cannot determine what “gavagai” means. Our ontology is relative to the target language we’re translating into. And you cannot be sure that the source language carves up the world in the same way ours does. Now it is crucial to see that this is not just an issue of translation. The problem of indeterminacy starts at home: meaning is indeterminate. And this means that the problems of translations also figure in the interaction between speakers and hearers of the same language.

Now Davidson famously turns the issue upside down: we don’t begin with meaning but with truth. We don’t start out by asking what “gavagai” means. If we assume that the speaker is sincere, we’ll just translate the sentence in such a way that it matches what we take to be the truth. So we start by thinking: “Gavagai” means something like “Look, there’s a rabbit”, because that’s the belief we form in the presence of the rabbit. So we start out by ascribing the same belief to the speaker of the foreign language and translate accordingly. That we start out this way is not optional. We’d never get anywhere, if we were to start out by wondering what “gavagai” might or might not mean. Rather we cannot but start out from what we take to be true.

Although Davidson makes an intriguing point, I don’t think he makes a compelling case against relativism. When he claims that we translate the utterances of others into what we take to be true, I think he is stating a psychological fact. If we take someone else to be a fellow human being and think that she or he is sincere, then translating her or his utterances in a way that makes them come out true is what we count as normal behaviour. Conversely, to start from the assumption that our interlocutor is wrong and to translate the other’s utterances as something alien or blatantly false would amount to abnormal behaviour on our part (unless we have reason to think that our interlocutor is seriously impaired). The point I want to make is that sincerity and confirmation of what we take to be true will correlate with normality.

If this last point is correct, it has a rather problematic consequence: If you tell me that I’m wrong after I have sincerely said what I take to be the truth, this will render either me or you abnormal. Unless we think that something is wrong with ourselves, we will be inclined to think that people who listen to us but reject our claims are abnormal. This is obvious when you imagine someone stating that there is no rabbit while you clearly take yourself to be seeing a rabbit. When the “evidence” for a claim is more abstract, in philosophical debates for instance, we are of course more charitable, at least so long as we can’t be sure that we both have considered the same evidence. Alternatively, we might think the disagreement is only verbal. But what if we think that we both have considered the relevant evidence and still disagree? Would a rejection not amount to a rejection of the normality of our interlocutor?

Embracing mistakes in music and speech

Part of what I love about improvised music is the special relation to mistakes. If you listen to someone playing a well known composition, a deviation from the familiar melody, harmony or perhaps even from the rhythm might appear to be a mistake. But what if the “mistake” is played with confidence and perhaps even repeated? Compare: “An apple a day keeps the creeps away.” Knowing the proverb, you will instantly recognise that something is off. But did I make a downright mistake or did I play around with the proverb? That depends, I guess. But what does it depend on? On the proverb itself? On my intentions? Or does it depend on your charity as a listener? It’s hard to tell. The example is silly and simple, but the phenomenon is rather complex if you think about mistakes in music and speech. What I would like to explore in the following is what constitutes the fine line between mistake and innovation. My hunch is that there is no such thing as a mistake (or an innovation). Yes, I know what you’re thinking, but you’re mistaken. Please hear me out.

Like much else, the appreciation of music is based on conventions that guide our expectations. Even if your musical knowledge is largely implicit (in that you might have had no exposure to theory), you’ll recognise variations or oddities – even if you don’t know the piece in question. The same goes for speech. Even if you don’t know the text in question and wouldn’t recognise it if the speaker messed up a quotation, you will recognise mispronunciations, oddities in rhythm and syntax and the like. We often think of such deviations from conventions as mistakes. But while you might be assuming that the speaker sounds somewhat odd, they might in fact be North Americans intoning statements as if they were questions, performing funny greeting rituals or even singing rap songs. Some things might strike people as odd while others catch on, so much so that they end up turning into conventions. – But why do we classify one thing as a variation and the other as a mistake?

Let’s begin with mistakes in music. You might assume that a mistake is, for instance, a note that shouldn’t be played. We speak of a “wrong note” or a “bum note”. Play an F# with much sustain over a C Major triad and you get the idea. Even in the wildest jazz context that could sound off. But what if you hold that F# for half a bar and then add a Bb to the C Major triad? All else being equal, the F# will sound just fine (because the C Major can be heard as a C7 and the F# as the root note of the tritone substitution F#7), and our ear might expect the resolution to an F Major triad.* Long story short: Whether something counts as a mistake does not depend on the note in question, but on what is played afterwards.**
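For readers who prefer arithmetic to ears, the harmonic claim above can be checked with simple pitch-class sums. This is just an illustrative sketch (not part of the original post, and the note names are the standard ones, nothing more): the F# lies a tritone, six semitones, above C, and the tritone substitution works because C7 and F#7 contain the very same tritone between their thirds and sevenths.

```python
# A minimal sketch of the harmony claims using pitch-class arithmetic
# (12 semitones per octave, C = 0). Names follow standard conventions.

NOTES = {"C": 0, "C#": 1, "Db": 1, "D": 2, "Eb": 3, "E": 4,
         "F": 5, "F#": 6, "Gb": 6, "G": 7, "Ab": 8, "A": 9,
         "A#": 10, "Bb": 10, "B": 11}

def interval(a, b):
    """Semitone distance from note a up to note b (mod 12)."""
    return (NOTES[b] - NOTES[a]) % 12

# The sustained F# sits a tritone (6 semitones) above the C Major root --
# maximally distant from it, hence the "bum note" impression.
assert interval("C", "F#") == 6

# Adding a Bb to the C Major triad yields C7 (C E G Bb). A dominant
# seventh chord is characterised by the tritone between its third and
# seventh -- here E and Bb.
c7_tritone = frozenset({NOTES["E"], NOTES["Bb"]})

# F#7 (F# A# C# E) contains the same tritone (A# = Bb enharmonically),
# which is exactly why F#7 can substitute for C7: the tritone substitution.
fsharp7_tritone = frozenset({NOTES["A#"], NOTES["E"]})
assert c7_tritone == fsharp7_tritone

# Both chords share the pull towards a resolution to F Major.
```

The point of the sketch is only that the reinterpretation is cheap to verify: nothing about the F# itself changes; what changes is the chord context supplied after the fact.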

Let this thought sink in and try to think through situations in which something sounding off was resolved. If you’re not into music, you might begin with a weird noise that makes you nervous until you notice that it’s just rain hitting the rooftop. Of course, there are a number of factors that matter, but the upshot is that a seemingly wrong note will count as fine or even as an impressive variation if it’s carried on in an acceptable way. This may be through a resolution (that allows for a reinterpretation of the note) or through repetition (allowing for interpreting it as an intended or new element in its own right) or some other measure. Repetition, for example, might turn a strange sequence into an acceptable form, even if the notes in question would not count as acceptable if played only once. It’s hard to say what exactly will win us over (and in fact some listeners might never be convinced). But the point is not that the notes themselves are altered; it is that repetition is a form of creating a meaningful structure, while a one-off does not afford anything recognisable. That is, repetition is a means to turn mistakes into something acceptable, a pattern. If this is correct, then it seems sensible to say that the process of going through (apparent) mistakes is not only something that can lead to an amended take on the music, but also something that leads to originality. After all, it’s turning apparent mistakes into something acceptable that makes us see them as legitimate variations.

I guess the same is true of speech. Something might start out striking you as unintelligible, but will be reinterpreted as a meaningful pattern if it is resolved into something acceptable. But how far does this go? You might think that the phenomenon is merely of an aesthetic nature, pertaining to the way we hear and recontextualise sounds in the light of what comes later. We might initially hear a string of sounds that we identify as language once we recognise a pattern in the light of what is uttered later. But isn’t this also true of the way we understand thoughts in general? If so, then making (apparent) mistakes is the way forward – even in philosophy.

Now you might object that the fact that something can be identified as an item in a language (or in music) does not mean that the content of what is said makes sense or is true. If I make a mistake in thinking, it will remain a mistake, even if the linguistic expression can be amended. – Although it might seem this way, I’d like to claim that the contrary is true: what goes for music and basic speech comprehension also goes for thought. Thoughts that would seem wrong at the time of utterance can be adjusted in the light of what comes later. Listening to someone, we will do everything to try and make their thoughts come out true. Trying to understand a thought that might sound unintelligible and wrong in the beginning might lead us to new insights, once we find ways in which it rhymes with things we find acceptable. “Ah, that is what you mean!” As Donald Davidson put it, charity is not optional.*** And yes, bringing Davidson into the picture should make it clear that my idea is not new. Thoughts that strike us as odd might turn out fine or even original once we identify a set of beliefs that makes them coherent. — It is only among professional philosophers, it seems, that we are all too often inclined to make the thoughts of our interlocutors come out false. But seen in analogy to musical improvisation, the talk of mistakes is perhaps just conservatism. Branding an idea as mistaken might merely reveal our clinging to familiar patterns.

___

* Nicer still is this resolution: You hold that F# for half a bar and then add an F# in the bass. All else being equal, the F# will sound just fine (because the C Major can be heard as a D7 add9/11 without the root note) and our ear might expect the resolution to a G Major triad.

** See also Daniel Martin Feige’s Philosophie des Jazz, p. 77, where I found some inspiration for my idea: “Das, was der Improvisierende tut, erhält seinen spezifischen Sinn erst im Lichte dessen, was er später getan haben wird.” (“What the improviser does acquires its specific sense only in the light of what he will later have done.”)

*** The basic idea is illustrated by the example at the beginning of an older post on the nature of error.

What Is an Error? Wittgenstein’s Voluntarism*

Imagine that you welcome your old friend Fred in your study. Pointing at the door, he asks you whether he should shut the window. You’re confused. Did Fred just call the door a window? He’s getting old, but surely not that old. You assume that Fred has made a simple mistake. But what kind of mistake was it? Did he make a linguistic mistake by mixing up the words? Or did he make a cognitive mistake by misrepresenting the facts and taking the door to be a window? “Fred, you meant to say ‘door’, didn’t you?” If he nods agreement, everything is fine. If he doesn’t, you will probably begin to worry about Fred’s cognitive system or conceptual scheme. You might wonder whether his vision is impaired or something worse has happened, unless it turns out that you, in turn, misread Fred’s gesture, while he did indeed mean the window opposite the door.

This example can be considered in various ways.** We usually take such mistakes to lie in an erroneous use of words rather than in a misrepresentation on the part of the cognitive system, such as a hallucination. The latter case seems way more drastic. But are the cases of linguistic and cognitive mistakes related? Is one prior to the other? In what follows, I’d like to consider them through the lens of Wittgenstein’s later philosophy of mind and suggest that his account has roots in theological voluntarism.

Let’s begin by looking at the accounts of error that suggest themselves. What kind of distinction is at work here? It seems that there are at least two possible ways of locating error:

  • linguistic errors occur on the level of behavioural interaction between language users: in this case an error is a deviation from a social practice;
  • cognitive errors occur on the level of (mental) representation: in this case an error is a mismatch between a representation and a represented object.

The distinction between interaction and representation intimates two ways of thinking about minds. Representational models construe correctness and error in terms of the relation between (mental) sign and object. Interactionist or social models construe correctness and error in terms of the relation between (epistemic) agents. On the face of it, the representational model is the more traditional one, going back at least to Aristotle and the scholastics, before being famously reintroduced and radicalised by Descartes. By contrast, the interactionist model is taken to be relatively young, inspired by the later Wittgenstein, who attacked his own earlier representationalism and the whole tradition along with it. This historical picture is of course a bit of a caricature. But rather than adding necessary refinements, I think we should reject it entirely. Besides misconstruing much of the history of thinking about minds, it obscures commonalities that might actually help us understand Wittgenstein’s move towards the interactionist model.

What, then, might have inspired Wittgenstein’s later model? I think that Wittgenstein’s later philosophy of mind is driven, amongst other things, by two ideas, namely (a) that all kinds of mental activities (such as thinking and erring) are part of a shared practice and (b) that the rules constituting this practice have no further explanation or foundation. For illustration, think again of the linguistic error. Ad (a): Of course, calling a door a window is a case of mislabelling. But what turns this into an error is not any representational mismatch. What is amiss is not a failed match between utterance and object but Fred’s violation of your expectation (that he would ask, if anything, to close the door but not the window). Ad (b): This expectation is not grounded in anything other than the experienced practice itself. If you learn that people call a door a door, people should call a door a door. You begin to wonder if they don’t. There is no further explanation as to why that should be so. Taken together, these two ideas give priority to interaction over representation. Accordingly, Wittgensteinians will see error and correctness in reference to linguistic practice, not as grounded in representation.

But where does this idea come from? Although Wittgenstein’s later thought is sometimes likened to that of authors in early modern or medieval times, I haven’t seen his ideas placed in a larger tradition. Perhaps, then, straightforward philosophies of language and mind are not the best place to look. But what should we turn to? If we look for historical cues to the two ideas sketched above, we should watch out for theories that construe mental events on the model of action rather than representation. But if you think that such theorising begins only with what is commonly called ‘pragmatism’, you miss out on a lot. Let’s focus on (a) first. We should begin by giving up on the assumption that the representational model of the mind is the traditional one. Of course, representation looms large, but it is not always the crucial explanans of correct vs. erroneous thinking or speaking. Good places to start are discussions that tie error to acts of will. Why not try Descartes’ famous explanation of human error? In the Fourth Meditation, Descartes claims that error does not arise from misrepresentation as such. Rather, I can err because my will reaches farther than my intellect. So my will might extend to the unknown, deviating from the true and good. And thus I am said to err and sin. Bringing together error and sin, Descartes appeals to a longstanding tradition that places error on the level of voluntary judgment and action. Accordingly, there is no sharp distinction between moral and epistemic errors. I can fail to act in the right way or I can fail to think in the right way. The source of my error is, then, not that I misrepresent objects but rather that I deviate from the way that God ordained. This is the way in which even perfect cognitive agents such as fallen angels and demons can err.

What is significant for the question at hand is that God is taken as presenting us with a standard that we can conform to or deviate from when representing objects. Thus, error is explained through deviation from the divine standard, not through a representational model. Of course, you might object that divine standards are a far cry from social standards and linguistic rules.*** But what might have served as a crucial inspiration are the following three points: putting mental acts on a par with action, explaining error and correctness through a non-representational standard, and having a non-individualistic standard, for it is the relation of humans to God that enforces the standard on us. In this sense, error cannot be ascribed to a single individual that misrepresents an object; it must be ascribed to a mind that is related to the standards set by God.

If we accept this historical comparison at least as a suggestion, we might say that divine standards play a theoretical role that is similar to the social practice in Wittgenstein. However, divine standards come in different guises. Not all philosophers who discuss error in relation to deviant acts of will are automatically committed to the thesis that the divine standards have no further foundation. Theological rationalists assume that divine standards can be justified, such that God wills the Good because it is good. By contrast, voluntarists assume that something is good because God wills it. Thus, rationalistic conceptions could allow for an explanation of error that is not ultimately explained by reference to the divine standard. In this sense, rationalism would clash with Wittgenstein’s anti-foundationalism, called (b) above, according to which rules have no further foundation over and above the practice. As Wittgenstein puts it in Philosophical Investigations, § 206: “Following a rule is analogous to obeying an order.”

How, then, does Wittgenstein see the traditional theological distinction? Given his numerous discussions of the will even in his early writings, it is clear that his work is informed by such considerations. Most striking is his remark on voluntarism reported in Waismann’s “Notes on Talks with Wittgenstein” (Philosophical Review 74 [1965]): “I think that the first conception is the deeper one: Good is what God orders. For this cuts off the path to any and every explanation ‘why’ it is good …” Here, Wittgenstein clearly sides with the voluntarists.**** Indeed, the idea of rule-following as obedience can be seen as perfectly in line with the assumption that erring consists in violating a shared practice, just as the voluntarist tradition that Descartes belongs to deems erring a deviation from divine standards.

If these suggestions are pointing in a fruitful direction, they could open a path to relocating Wittgenstein’s thought in the context of the long tradition of voluntarism. They might downplay his claims to originality, but at the same time they might render both his work and the tradition more accessible.

___

* Originally posted on the blog of the Groningen Centre for Medieval and Early Modern Thought

** This is my variant of Davidson’s ketch-yawl example in his “On the Very Idea of a Conceptual Scheme”. I’d like to thank Laura Georgescu, Lodi Nauta and Tamer Nawar, who kindly heard me out when I introduced them to the ideas suggested here.

*** Thanks to Martin Kusch, who raised this objection in an earlier discussion on Facebook.

**** See David Bloor, Wittgenstein, Rules and Institutions, Routledge 2002, 126-133, who also discusses Wittgenstein’s voluntarism.