Ugly ducklings and progress in philosophy

Agnes Callard recently gave an entertaining interview at 3:16 AM. Besides her lovely list of views that should count as much less controversial than they do, she made an intriguing remark about her book:

“I had this talk on weakness of will that people kept refuting, and I was torn between recognizing the correctness of their counter-arguments (especially one by Kate Manne, then a grad student at MIT), and the feeling my theory was right. I realized: it was a bad theory of weakness of will, but a good theory of another thing. That other thing was aspiration. So the topic came last in the order of discovery.”

Changing the framing or framework of an idea might resolve seemingly persistent problems and make it shine in a new and favourable light. Reminded of Andersen’s fairy tale, in which a duckling is considered ugly until it turns out that the poor animal is actually a swan, I’d like to call this the ugly duckling effect. In what follows, I’d like to suggest that this might be a good, if underrated, way of making progress in philosophy.

Callard’s description stirred a number of memories. You write and refine a piece, but something feels decidedly off. Then you change the title or topic or tweak the context ever so slightly and, at last, everything falls into place. It might happen in a conversation or during a run, but you’re lucky if it happens at all. I know all too well that I abandoned many ideas before eventually and accidentally stumbling on a change of framework that restored (the reputation of) the idea. As I argued in my last post, all too often criticism in professional settings provides incentives to tone down or give up on an idea. Perhaps unsurprisingly, many criticisms focus on the idea or argument itself rather than on the framework in which the idea is meant to function. My hunch is that we should pay more attention to such frameworks. After all, people might stop complaining about the quality of your hammer if you tell them that it’s actually a screwdriver.

I doubt that there is a precise recipe for doing this. I guess what helps most are activities that let you tweak the context, topic or terminology. This might be achieved through playful conversations or even by diverting your attention to something else. Perhaps a good start is to think of precedents in which this happened. So let’s just look at some ugly duckling effects in history:

  • In my last post I already pointed to Wittgenstein’s picture theory of meaning. Recontextualising this as a theory of representation and connecting it to a use theory or a teleosemantic account restored the picture theory as a component that makes perfect sense.
  • Another precedent might be seen in the reinterpretations of Cartesian substance dualism. If you’re unhappy with the interaction problem, you might see the light when, following Spinoza, you reinterpret the dualism as a difference of aspects or perspectives rather than of substances. All of a sudden you can move from a dualist framework to monism but retain an intuitively plausible distinction.
  • A less well-known case is the series of reinterpretations of Ockham’s theory of mental language, which has been read as a theory of ideal language, a theory of logical deep structure, a theory of angelic speech, etc.

I’m sure the list is endless, and I’d be curious to hear more examples. What’s perhaps important to note is that we can also reverse this effect and turn swans into ugly ducklings. That is, we can also use the strategy of recontextualisation when we want to debunk an idea or expose it as problematic:

  • An obvious example is Wilfrid Sellars’ myth of the given: arguing that reference to sense data or other supposedly immediate elements of perception cannot serve as a foundation or justification of knowledge, Sellars dismissed a whole strand of epistemology.
  • Similarly, Quine’s myth of the museum serves to dismiss theories of meaning invoking the idea that words serve as labels for (mental) objects.
  • Another interesting move can be seen in Nicholas of Cusa’s coincidentia oppositorum, which restricts the principle of non-contradiction to the domain of rationality and allows for the claim that the intellect transcends this domain.

If we want to assess such dismissals in a balanced manner, it might help to look twice at the contexts in which the dismissed accounts used to make sense. I’m not saying that the possibility of recontextualisation restores or relativises all our ideas. Rather, I think of this option as a tool for thinking about theories in a playful and constructive manner.

Nevertheless, it is crucial to see that the ugly duckling effect works both ways, to dismiss as well as to restore ideas. In any case, we should try to consider a framework in which the ideas in question make sense. And sometimes dismissal is the way to go.

At the end of the day, it could be helpful to see that the ugly duckling effect might not be owing to the duck actually being a swan. Rather, we might be confronted with a duck-swan or a duck-rabbit.

Spotting mistakes and getting it right

“Know thyself” is probably a fairly well-known maxim among philosophers. But the maxim we live by seems rather to be one along the lines of “know the mistakes of others”. In calling this out, I am of course no better. What prompts me to write about this now is a recent observation, not new but clearly refreshed with the beginning of the academic year: the obvious desire of students to “get it right”, right from the start. But what could be wrong with desiring to be right?

Philosophers these days don’t love wisdom but truth. Now, spotting the mistakes of others is often presented as truth-conducive. If we refute and exclude the falsehoods of others, it seems, we’re making progress on our way to finding out the truth. This seems to be the reason why most papers in philosophy build their cases on refuting opposing claims and why most talks are met with unwavering criticism of the view presented. Killing off all the wrongs must leave you with the truth, no? I think this exclusion principle has all sorts of effects, but I doubt that it helps us make the desired progress. Here is why.

A first set of reasons relates to the pragmatic aspects of academic exchange: I believe that the binary distinction between getting it right or wrong is misleading. More often than not, the views offered to us are neither right nor wrong. This is owing to the fact that we have to present views successively, by putting forward a claim and then explaining and arguing for it. What such a process exposes is normally not the truth or falsity of the view but a need for further elaboration: unpacking concepts and consequences, ruling out undesired implications, clarifying assumptions, etc.

Now you might object that calling a view false is designed to prompt exactly that: clarification and exploration. But I doubt that this is the case. After all, much of academic exchange is driven by perceived reputation. More often than not, criticism makes the speaker revert to defensive moves, if it doesn’t paralyse them altogether. Rather than exploring the criticised view, speakers will be tempted to immunise their paper against further criticism. If they don’t retract, they might at least reduce the scope of their claims and align themselves with more accepted tenets. This, I believe, blocks further exploration and creates an incentive for damage control and conformism. If you doubt this, just go and tell a student (or colleague) that they got it wrong and see what happens.

Still, you might object, such initial responses can be overcome. It might take time, but eventually the criticised speaker will think again and learn to argue for their view more thoroughly. – I wish I could share this optimism. (And I sometimes do.) But I guess the reason that this won’t happen, or at least not very often, is simply this: what counts in scholarly exchange is the publicly observable moment. Someone criticised by an opponent will see themselves as challenged not only as a representative of a view but as a member of the academic community. Maintaining or restoring our reputation will thus seem vital in contexts in which we consider ourselves judged and questioned: even if we’re not actually being graded, under review or in a job talk, we will still anticipate such situations or compare our current one to them. What counts in these moments is not the truth of our accounts but whether we convince others of the account and, in the process, of our competence. If you go home defeated, your account will be seen as defeated too, no matter whether you simply couldn’t muster the courage or concentration to make a more convincing move.

A second set of reasons is owing to the conviction that spotting falsehoods is just that: spotting falsehoods. As such, it’s not truth-conducive. Refuting claims does not (or at least not necessarily) lead to any truth. Why? Spotting a falsehood or problem does not automatically make any opposing claim true. Let me give an example: it is fairly common to call the so-called picture theory of meaning, as presented in Wittgenstein’s Tractatus, a failure. The perhaps intuitive plausibility that sentences function as pictures of states of affairs seems quickly refuted when we ask how such pictures can be said to be true or false of a supposed chunk of reality. What do you do? Step out of the picture and compare it with the proper chunk? Haha! – Refuting the picture theory, then, seems to bring us one step closer to an appropriate theory of meaning. But such a dismissal makes us overlook that the picture theory has enormous merits. Once you see it as a theory of representation and stop demanding that it also account for the truth and falsity of representations, you begin to realise that it can work very well when combined with a theory of use or a teleosemantic theory. (See e.g. Ruth Millikan’s recontextualisation.) The upshot is that our dismissals often result from overlooking crucial further assumptions that would reinstate the dismissed account.

Now you might object that an incomplete account is still a bad account. Pointing this out is not per se wrong and will eventually prompt a recontextualisation that works. In this sense, you might say, the criticism becomes part of the recontextualised account. – To this I agree. I also think that such dialogues can prompt more satisfying results. But bearing the pragmatic aspects of academic exchange in mind, I think that such results are more likely if we present our criticism for what it is: not as barking at falsehoods but as attempts to clarify, complete or complement ideas.

Now you might object that the difference between barking at falsehoods and attempting to clarify is just a matter of style. – But why would you think that this is an objection? Style matters. Much more than is commonly acknowledged.

Do rejections of our claims presuppose that we are abnormal?

Discussions about meaning and truth are often taken as merely theoretical issues in semantics. But as soon as you consider them in relation to interactions between interlocutors, it’s clear that they are closely related to our psychology. In what follows, I’d like to suggest that people questioning our claims might in fact be questioning whether we are normal people. Sounds odd? Please hear me out. Let’s begin with a well-known issue in semantics:

Imagine you’re a linguist, studying a foreign language of a completely unknown people. You’re with one of the speakers of that language when a white rabbit runs past. The speaker says “gavagai”. Now what does “gavagai” mean?

According to Quine, who introduced the gavagai example, the expression could mean anything. It might mean: “Look, there’s a rabbit” or “Lovely lunch” or “That’s very white” or “Rabbithood instantiated”. The problem is that you cannot determine what “gavagai” means. Our ontology is relative to the target language we’re translating into. And you cannot be sure that the source language carves up the world in the same way ours does. Now, it is crucial to see that this is not just an issue of translation. The problem of indeterminacy starts at home: meaning is indeterminate. And this means that the problems of translation also figure in the interaction between speakers and hearers of the same language.

Now Davidson famously turns the issue upside down: we don’t begin with meaning but with truth. We don’t start out by asking what “gavagai” means. If we assume that the speaker is sincere, we’ll just translate the sentence in such a way that it matches what we take to be the truth. So we start by thinking: “Gavagai” means something like “Look, there’s a rabbit”, because that’s the belief we form in the presence of the rabbit. So we start out by ascribing the same belief to the speaker of the foreign language and translate accordingly. That we start out this way is not optional. We’d never get anywhere if we were to start out by wondering what “gavagai” might or might not mean. Rather, we cannot but start out from what we take to be true.

Although Davidson makes an intriguing point, I don’t think he makes a compelling case against relativism. When he claims that we translate the utterances of others into what we take to be true, I think he is stating a psychological fact. If we take someone else to be a fellow human being and think that she or he is sincere, then translating her or his utterances in a way that makes them come out true is what counts as normal behaviour. Conversely, to start from the assumption that our interlocutor is wrong and to translate the other’s utterances as something alien or blatantly false would amount to abnormal behaviour on our part (unless we have reason to think that our interlocutor is seriously impaired). The point I want to make is that sincerity and confirmation of what we take to be true will correlate with normality.

If this last point is correct, it has a rather problematic consequence: if you tell me that I’m wrong after I have sincerely said what I take to be the truth, this will render either me or you abnormal. Unless we think that something is wrong with ourselves, we will be inclined to think that people who listen to us but reject our claims are abnormal. This is obvious when you imagine someone stating that there is no rabbit while you clearly take yourself to be seeing one. When the “evidence” for a claim is more abstract, in philosophical debates for instance, we are of course more charitable, at least so long as we can’t be sure that we both have considered the same evidence. Alternatively, we might think the disagreement is only verbal. But what if we think that we both have considered the relevant evidence and still disagree? Would a rejection not amount to a rejection of the normality of our interlocutor?

Embracing mistakes in music and speech

Part of what I love about improvised music is its special relation to mistakes. If you listen to someone playing a well-known composition, a deviation from the familiar melody, harmony or perhaps even the rhythm might appear to be a mistake. But what if the “mistake” is played with confidence and perhaps even repeated? Compare: “An apple a day keeps the creeps away.” Knowing the proverb, you will instantly recognise that something is off. But did I make a downright mistake or did I play around with the proverb? That depends, I guess. But what does it depend on? On the proverb itself? On my intentions? Or does it depend on your charity as a listener? It’s hard to tell. The example is silly and simple, but the phenomenon is rather complex if you think about mistakes in music and speech. What I would like to explore in the following is what constitutes the fine line between mistake and innovation. My hunch is that there is no such thing as a mistake (or an innovation). Yes, I know what you’re thinking, but you’re mistaken. Please hear me out.

Like much else, the appreciation of music is based on conventions that guide our expectations. Even if your musical knowledge is largely implicit (in that you might have had no exposure to theory), you’ll recognise variations or oddities – and this even if you don’t know the piece in question. The same goes for speech. Even if you don’t know the text in question and wouldn’t recognise it if the speaker messed up a quotation, you will recognise mispronunciations, oddities in rhythm and syntax and such like. We often think of such deviations from conventions as mistakes. But while you might assume that the speaker sounds somewhat odd, they might in fact be North Americans intoning statements as if they were questions, performing funny greeting rituals or even singing rap songs. Some things might strike people as odd while others catch on, so much so that they end up turning into conventions. – But why do we classify one thing as a variation and the other as a mistake?

Let’s begin with mistakes in music. You might assume that a mistake is, for instance, a note that shouldn’t be played. We speak of a “wrong note” or a “bum note”. Play an F# with much sustain over a C Major triad and you get the idea. Even in the wildest jazz context that could sound off. But what if you hold that F# for half a bar and then add a Bb to the C Major triad? All else being equal, the F# will sound just fine (because the C Major can now be heard as a C7 and the F# as the root note of the tritone substitution F#7), and our ear might expect the resolution to an F Major triad.* Long story short: whether something counts as a mistake does not depend on the note in question but on what is played afterwards.**
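For readers who like to see the harmony spelled out, here is a minimal sketch of the pitch-class arithmetic behind the tritone substitution just mentioned – the note spellings and the dom7 helper are my own illustrative choices, not anything from the original post. The point it demonstrates: C7 and its substitute F#7 share the very tritone (E and Bb, enharmonically A#) that gives dominant chords their pull, which is why the sustained F# can be reheard as the root of a substitute dominant.

    # Pitch classes: C = 0, C# = 1, ... B = 11 (enharmonic spellings simplified).
    NOTES = ["C", "C#", "D", "Eb", "E", "F", "F#", "G", "Ab", "A", "Bb", "B"]

    def dom7(root):
        # Pitch classes of a dominant seventh chord: root, major third,
        # perfect fifth and minor seventh (0, 4, 7 and 10 semitones up).
        return {(root + step) % 12 for step in (0, 4, 7, 10)}

    c7 = dom7(NOTES.index("C"))    # C, E, G, Bb
    fs7 = dom7(NOTES.index("F#"))  # F#, A#/Bb, C#, E

    # Both dominants contain the same guide-tone tritone, E and Bb:
    print([NOTES[p] for p in sorted(c7 & fs7)])  # -> ['E', 'Bb']

Nothing hangs on the code, of course; it merely shows in twelve-tone arithmetic why the seemingly wrong F# sits comfortably in the reinterpreted chord.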

Let this thought sink in and try to think through situations in which something that sounded off was resolved. If you’re not into music, you might begin with a weird noise that makes you nervous until you notice that it’s just rain hitting the rooftop. Of course, there are a number of factors that matter, but the upshot is that a seemingly wrong note will count as fine or even as an impressive variation if it’s carried on in an acceptable way. This may be through a resolution (that allows for a reinterpretation of the note) or through repetition (allowing for interpreting it as an intended or new element in its own right) or some other measure. Repetition, for example, might turn a strange sequence into an acceptable form, even if the notes in question would not count as acceptable if played only once. It’s hard to say what exactly will win us over (and in fact some listeners might never be convinced). But the point is not that the notes themselves are altered; it’s that repetition is a form of creating a meaningful structure, while a one-off does not afford anything recognisable. That is, repetition is a means of turning mistakes into something acceptable, a pattern. If this is correct, then it seems sensible to say that the process of going through (apparent) mistakes is not only something that can lead to an amended take on the music but also something that leads to originality. After all, it’s turning apparent mistakes into something acceptable that makes us see them as legitimate variations.

I guess the same is true of speech. Something might start out striking you as unintelligible, but will be reinterpreted as a meaningful pattern if it is resolved into something acceptable. But how far does this go? You might think that the phenomenon is merely of an aesthetic nature, pertaining to the way we hear and recontextualise sounds in the light of what comes later. We might initially hear a string of sounds that we identify as language once we recognise a pattern in the light of what is uttered later. But isn’t this also true of the way we understand thoughts in general? If so, then making (apparent) mistakes is the way forward – even in philosophy.

Now you might object that the fact that something can be identified as an item in a language (or in music) does not mean that the content of what is said makes sense or is true. If I make a mistake in thinking, it will remain a mistake, even if the linguistic expression can be amended. – Although it might seem this way, I’d like to claim that the contrary is true: what goes for music and basic speech comprehension also goes for thought. Thoughts that seem wrong at the time of utterance can be adjusted in the light of what comes later. Listening to someone, we will do everything to try and make their thoughts come out true. Trying to understand a thought that might sound unintelligible and wrong in the beginning might lead us to new insights, once we find ways in which it rhymes with things we find acceptable. “Ah, that is what you mean!” As Donald Davidson put it, charity is not optional.*** And yes, bringing Davidson into the picture should make it clear that my idea is not new. Thoughts that strike us as odd might turn out fine or even original once we identify a set of beliefs that makes them coherent. – Only among professional philosophers, it seems, are we all too often inclined to make the thoughts of our interlocutors come out false. But seen in analogy to musical improvisation, the talk of mistakes is perhaps just conservatism. Branding an idea as mistaken might merely reveal our clinging to familiar patterns.

___

* Nicer still is this resolution: you hold that F# for half a bar and then add an F# in the bass. All else being equal, the F# will sound just fine (because the C Major can now be heard as a D7 add9/11 without the root note), and our ear might expect the resolution to a G Major triad.

** See also Daniel Martin Feige’s Philosophie des Jazz, p. 77, where I found some inspiration for my idea: “Das, was der Improvisierende tut, erhält seinen spezifischen Sinn erst im Lichte dessen, was er später getan haben wird.” (“What the improviser does acquires its specific sense only in the light of what he will later have done.”)

*** The basic idea is illustrated by the example at the beginning of an older post on the nature of error.

What Is an Error? Wittgenstein’s Voluntarism*

Imagine that you welcome your old friend Fred in your study. Pointing at the door, he asks you whether he should shut the window. You’re confused. Did Fred just call the door a window? He’s getting old, but surely not that old. You assume that Fred has made a simple mistake. But what kind of mistake was it? Did he make a linguistic mistake by mixing up the words? Or did he make a cognitive mistake by misrepresenting the facts and taking the door to be a window? “Fred, you meant to say ‘door’, didn’t you?” If he nods agreement, everything is fine. If he doesn’t, you will probably begin to worry about Fred’s cognitive system or conceptual scheme. You might wonder whether his vision is impaired or something worse has happened, unless it turns out that you, in turn, misread Fred’s gesture, while he did indeed mean the window opposite the door.

This example can be considered in various ways.** We usually take such mistakes to lie in an erroneous use of words rather than in a misrepresentation on the part of the cognitive system, such as a hallucination. The latter case seems far more drastic. But are the cases of linguistic and cognitive mistakes related? Is one prior to the other? In what follows, I’d like to consider them through the lens of Wittgenstein’s later philosophy of mind and suggest that his account has roots in theological voluntarism.

Let’s begin by looking at the accounts of error that suggest themselves. What kind of distinction is at work here? It seems that there are at least two possible ways of locating error:

  • linguistic errors occur on the level of behavioural interaction between language users: in this case an error is a deviation from a social practice;
  • cognitive errors occur on the level of (mental) representation: in this case an error is a mismatch between a representation and a represented object.

The distinction between interaction and representation intimates two ways of thinking about minds. Representational models locate correctness and error in the relation between (mental) sign and object. Interactionist or social models locate correctness and error in the relation between (epistemic) agents. On the face of it, the representational model is the more traditional one, going back at least to Aristotle and the scholastics, before being famously reintroduced and radicalised by Descartes. By contrast, the interactionist model is taken to be relatively young, inspired by the later Wittgenstein, who attacked his own earlier representationalism and the whole tradition along with it. This historical picture is of course a bit of a caricature. But rather than adding the necessary refinements, I think we should reject it entirely. Besides misconstruing much of the history of thinking about minds, it obscures commonalities that might actually help us understand Wittgenstein’s move towards the interactionist model.

What, then, might have inspired Wittgenstein’s later model? I think that Wittgenstein’s later philosophy of mind is driven, amongst other things, by two ideas, namely (a) that all kinds of mental activities (such as thinking and erring) are part of a shared practice and (b) that the rules constituting this practice have no further explanation or foundation. For illustration, think again of the linguistic error. Ad (a): of course, calling a door a window is a case of mislabelling. But what turns this into an error is not any representational mismatch. What is amiss is not a mismatch between utterance and object but Fred’s violation of your expectation (that he would ask, if anything, to close the door but not the window). Ad (b): this expectation is not grounded in anything beyond the experienced practice itself. If you learn that people call a door a door, people should call a door a door. You begin to wonder if they don’t. There is no further explanation as to why that should be so. Taken together, these two ideas give priority to interaction over representation. Accordingly, Wittgensteinians will locate error and correctness in linguistic practice, not in representation.

But where does this idea come from? Although Wittgenstein’s later thought is sometimes likened to that of earlier authors in early modern or medieval times, I haven’t seen his ideas placed in a larger tradition. Perhaps, then, straightforward philosophies of language and mind are not the best place to look. But what should we turn to? If we look for historical clues to the two ideas sketched above, we should watch out for theories that construe mental events on the model of action rather than representation. But if you think that such theorising begins only with what is commonly called ‘pragmatism’, you miss out on a lot. Let’s focus on (a) first. We should begin by giving up on the assumption that the representational model of the mind is the traditional one. Of course, representation looms large, but it is not always the crucial explanans of correct vs. erroneous thinking or speaking. Good places to start are discussions that tie error to acts of will. Why not try Descartes’ famous explanation of human error? In the Fourth Meditation, Descartes claims that error does not arise from misrepresentation as such. Rather, I can err because my will reaches farther than my intellect. So my will might extend to the unknown, deviating from the true and good. And thus I am said to err and sin. Bringing together error and sin, Descartes appeals to a long-standing tradition that places error on the level of voluntary judgment and action. Accordingly, there is no sharp distinction between moral and epistemic errors. I can fail to act in the right way or I can fail to think in the right way. The source of my error is, then, not that I misrepresent objects but rather that I deviate from the way that God ordained. This is the way in which even perfect cognitive agents such as fallen angels and demons can err.

What is significant for the question at hand is that God is taken as presenting us with a standard to which we can conform or from which we can deviate when representing objects. Thus, error is explained through deviation from the divine standard, not through a representational model. Of course, you might object that divine standards are a far cry from social standards and linguistic rules.*** But what might have served as crucial inspiration are the following three points: putting mental acts on a par with actions, explaining error and correctness through a non-representational standard, and invoking a non-individualistic standard, for it is the relation of humans to God that enforces the standard on us. In this sense, error cannot be ascribed to a single individual that misrepresents an object; it must be ascribed to a mind that is related to the standards set by God.

If we accept this historical comparison at least as a suggestion, we might say that divine standards play a theoretical role that is similar to the social practice in Wittgenstein. However, divine standards come in different guises. Not all philosophers who discuss error in relation to deviant acts of will are automatically committed to the thesis that the divine standards have no further foundation. Theological rationalists assume that divine standards can be justified, such that God wills the Good because it is good. By contrast, voluntarists assume that something is good because God wills it. Thus, rationalistic conceptions could allow for an explanation of error that is not ultimately explained by reference to the divine standard. In this sense, rationalism would clash with Wittgenstein’s anti-foundationalism, called (b) above, according to which rules have no further foundation over and above the practice. As Wittgenstein puts it in Philosophical Investigations, § 206: “Following a rule is analogous to obeying an order.”

How, then, does Wittgenstein see the traditional theological distinction? Given his numerous discussions of the will even in his early writings, it is clear that his work is informed by such considerations. Most striking is his remark on voluntarism reported in Waismann’s “Notes on Talks with Wittgenstein” (Philosophical Review 74 [1965]): “I think that the first conception is the deeper one: Good is what God orders. For this cuts off the path to any and every explanation ‘why’ it is good …” Here, Wittgenstein clearly sides with the voluntarists.**** Indeed, the idea of rule-following as obedience can be seen as perfectly in line with the assumption that erring consists in violating a shared practice, just as the voluntarist tradition to which Descartes belongs deems erring a deviation from divine standards.

If these suggestions are pointing in a fruitful direction, they could open a path to relocating Wittgenstein’s thought in the context of the long tradition of voluntarism. They might downplay his claims to originality, but at the same time they might render both his work and the tradition more accessible.

___

* Originally posted on the blog of the Groningen Centre for Medieval and Early Modern Thought

** This is my variant of Davidson’s ketch-yawl example in his “On the very idea of a conceptual scheme”. I’d like to thank Laura Georgescu, Lodi Nauta and Tamer Nawar, who kindly heard me out when I introduced them to the ideas suggested here.

*** Thanks to Martin Kusch, who raised this objection in an earlier discussion on Facebook.

**** See David Bloor, Wittgenstein, Rules and Institutions, Routledge 2002, 126-133, who also discusses Wittgenstein’s voluntarism.

 

Abstract cruelty. On dismissive attitudes

Do you know the story about the PhD student whose supervisor overslept and refused to come to the defence, saying he had no interest in such nonsense? – No? I don’t know it either, by which I mean: I don’t know exactly what happened. However, some recurrent rumours have it that on the day of the PhD student’s defence, the supervisor didn’t turn up and was called by the secretary. After admitting that he overslept, he must indeed have said that he didn’t want to come because he wasn’t convinced that the thesis was any good. Someone else took over the supervisor’s role in the defence, and the PhD was ultimately conferred. I don’t know the details of the story but I have a vivid imagination. There are many aspects to this story that deserve attention, but in the following I want to concentrate on the dismissive attitude of the supervisor.

Let’s face it, we all might oversleep. But what on earth brings someone to say that they are not coming to the event because the thesis isn’t any good? The case is certainly outrageous. And I keep wondering why an institution like a university lets a professor get away with such behaviour. As far as I know the supervisor was never reprimanded, while the candidate increasingly went to bars rather than the library. I guess many people can tell similar stories, and we all know about the notorious discussions around powerful people in philosophy. Many of those discussions focus on institutional and personal failures or power imbalances. But while such points are doubtlessly worth addressing, I would like to focus on something else: What is it that enables such dismissive attitudes?

Although such and other kinds of unprofessional behaviour are certainly sanctioned too rarely, we have measures against them in principle. Oversleeping and refusing to fulfil one’s duties can be reprimanded effectively, but what can we do about the most damning part of it: the dismissive attitude according to which the thesis was just no good? Of course, using it as a reason to circumvent duties can be called out, but the problem is the attitude itself. I guess that all of us think every now and then that something is so bad that, at least in principle, it isn’t worth getting up for. What is more, there is in principle nothing wrong with finding something bad. Quite the contrary: we have every reason to be sincere interlocutors and call a spade a spade, and sometimes this involves severe criticism.

However, some cases do not merely constitute criticism but acts of cruelty. But how can we distinguish between the two? I have to admit that I am not entirely sure about this, but genuine criticism strikes me as an invitation to respond, while in the case under discussion the remark about the quality of the thesis was given as a reason to end the conversation.* Ending a conversation or dismissing a view like that is cruel. It leaves the recipient of the critique with no means to answer or account for their position. Of course, sometimes we might have good reasons for ending a conversation like that. I can imagine political contexts in which I see no other way than turning my back on people. But apart from the fact that a doctoral defence shouldn’t be such an occasion, I find it suspicious if philosophers end conversations like that. What is at stake here?

First of all, we should note that this kind of cruelty is much more common than meets the eye. Sure, we rarely witness a supervisor refusing to turn up for a defence. But anyone sitting in on seminars, faculty talks or lectures will have occasion to see that criticism is sometimes offered not as an invitation to respond but as a dismissal only thinly disguised as an objection. How can we recognise such a dismissal? The difference is that an opinion is not merely criticised but declared a waste of time. This and other such slogans effectively end a conversation. Rather than addressing what one might find wanting, the opponent’s view is belittled and portrayed as not worth taking seriously. As I see it, such speech acts are acts of cruelty because they are always (even if tacitly) ad hominem. The conjunction of critical remarks and the ending of a conversation shows that it is not merely the opinion that is rejected: there is no expectation that the argument could be improved by continuing the conversation. In this sense, ending a conversation betrays a severe lack of charity, ultimately dismissing the opponent as incapable or even irrational.

You would think that such behaviour gets called out quickly, at least among philosophers. But the problem is that this kind of intellectual bullying is actually rather widespread: whenever we say that an opinion isn’t worth listening to, when we say, for instance, that analytic or continental philosophy is just completely wrongheaded or something of the kind, we are at least in danger of engaging in it.** Often this goes unnoticed because we move within circles that legitimise such statements. Within such circles we enjoy privilege and status; outside them, our positions are belittled as a waste of time. And the transition from calling something bad to calling it a waste of time is rather smooth if no one challenges such a speech act.

Having said as much, you might think I am rather pessimistic about the profession. But I am not. In fact, I think there is a straightforward remedy: decouple criticism from ending conversations! But now you might respond that sometimes a conversation cannot continue because we really do not share standards of scholarship or argument. And we certainly shouldn’t give up our standards easily. – I totally agree, but I think that rather than being dismissive we might admit that we have a clash of intuitions. Generally speaking, we might distinguish between two kinds of critical opposition: disagreements and clashes of intuition. While disagreements are opposing views that can be plotted on common ground, clashes of intuition mark the lack of relevant common ground. In other words, we might distinguish between internal and external criticism, the latter rejecting the entire way of framing an issue. I think that it is entirely legitimate to utter external criticism and signal such a clash. It is another way of saying that one doesn’t share sufficient philosophical ground. But it also signals that the opposing view might still deserve to be taken seriously, provided one accepts different premises or priorities.*** Rather than bluntly dismissing a view because one feels safeguarded by the standards of one’s own community, diagnosing a clash respects that the opponent might have good reasons and is ultimately engaged in the same kind of enterprise.

The behaviour of the supervisor who overslept is certainly beyond good and evil. Why do I find this anecdote so striking? Because it’s so easy to call out the obvious failure on the part of the supervisor. It’s much harder to see how we or certain groups are complicit in legitimising the dismissive attitude behind it. While we might be quick to call out such brutality, the damning dismissive attitude is more widespread than meets the eye. Yet it could be amended by admitting to a clash of intuitions – though that requires some careful consideration of the nature of the clash and perhaps the decency of getting out of bed on time.

_____

* This post by Regina Rini must have been at the back of my mind when I thought about conversation-enders; not entirely the same issue but a great read anyway.

** A related instance is calling a contemporary or a historical view “weird”. See my post on relevance and othering.

*** Examples of rather respectable clashes are dualism vs. monism or representationalism vs. inferentialism. The point is that the debates run into a stalemate, and picking a side is a matter of decision rather than argument.

What are we on about? Making claims about claims

A: Can you see that?

B: What?

A: [Points to the ceiling:] That thing right there!

B: No. Could you point a bit more clearly?

You probably know this, too. Someone points somewhere, assuming that pointing gestures are sufficient. But they are not. If you’re pointing, you’re always pointing at a multitude of things. And we can’t see what’s meant unless we already know what kind of thing we’re supposed to look for. Pointing gestures might help, but without prior or additional information they are underdetermined. Of course, we can try and tell our interlocutor what kind of thing we’re pointing at. But the problem is that quite often we don’t know ourselves what kind of thing we’re pointing at. So we end up saying something like “the black one there”. Now, the worry I’d like to address today is that texts pose the same kind of challenge. What is this text about? What does it claim? These are recurrent and tricky questions. And if you want to produce silence in a lively course, just ask one of them.

But why are such questions so tricky? My hunch is that we notoriously mistake the question for something else. The question suggests that the answer could be discovered by looking into the text. In some sense, this is of course a good strategy. But without further information the question is as underdetermined as a pointing gesture. “Try some of those words” doesn’t help. We need to know what kind of text it is. But most things that can be said about the text are not to be found in the text. One might even claim that there is hardly anything to discover in the text. That’s why I prefer to speak of “determining” the claim rather than “finding out” what it is about.

In saying this, I don’t want to discourage you from reading. Read the text, by all means! But I think it’s important to take the question about the claim of a text in the right way. Let’s look at some tacit presuppositions first. The question will have a different ring in a police station than in a seminar room or lecture hall. If we’re in a seminar room, we might indeed assume that there is a claim to be found. So the very room matters. The date matters. The place of origin matters. Authorship matters. Sincerity matters. In addition to these non-textual factors, the genre and language matter. So what if we have a poem in front of us, perhaps a very prosaic poem? And is the author sincere or joking? How do you figure this out?

But, you will retort, there is the text itself. It does carry information. OK then. Let’s assume all of the above matters are settled. How do you get to the claim? A straightforward way seems to be to figure out what a text is intended to explain or argue for. To illustrate this exercise, I often like to pick Ockham’s Summa logicae. It’s a lovely text with a title and a preface indicating what it is about. So, it’s about logic, innit? Well, back in the day I read and even added to a number of studies determining what the first chapters of that book are about. In those chapters, Ockham talks about something called “mental propositions”, and my question is: what are mental propositions supposed to account for? Here are a few answers:

  • Peter Geach: Mental propositions are invoked to explain grammatical features of Latin (1957)
  • John Trentman: Mental propositions form an ideal language, roughly in the Fregean sense (1970)
  • Joan Gibson: Mental propositions form a communication system for angels (1976)
  • Calvin Normore: Mental propositions form a mental language, like Fodor’s mentalese (1990)
  • Sonja Schierbaum: Ockham isn’t Fodor (2014)

Now imagine this great group of people in a seminar and tell them who gave the right answer. Note that all of them have read more than one of Ockham’s texts carefully and provided succinct arguments for their readings. In fact, most of them are talking to one another and respectfully agree on many things before giving their verdicts on what the texts on mental propositions claim. All of them point at the same texts; what they “discover” there is quite different, though. And as you will probably know, by determining the claim you also settle what counts as support or argument for the claim. Depending on whether you look out for arguments supporting an angelic communication system or the mental language humans think in, you will judge what you discover to be better or worse support.

So what is it that determines the claim of a text?* By and large, it might be governed by what we find (philosophically) relevant. This is tied to the question of why a certain problem arises for you in the first place. While many factors are set by the norms and terms of the scholarly discussion that is already underway, the claims seem to go with the preferred or fashionable trends in philosophy. While John Trentman seems to have favoured early analytic ideal-language philosophy, Calvin Normore was clearly guided by one of the leading figures in the philosophy of mind. Although Peter Geach is rather dismissive, all of these works are intriguing interpretations of Ockham’s text. That said, we should all get together more often to discuss what we are actually on about when we determine the claims of texts. At least if we want to avoid being greeted mostly with the parroting of the most influential interpretations.

____

* You’ll find more on this question in my follow-up piece.