I don’t know what I think or feel. On psychological indeterminacy

Somewhere in his Metaphysics, Aristotle says that, if you don’t think something determinate, you think nothing at all. This assumption seems to have caught on, because among philosophers of mind it’s still common to say that beliefs and desires are individuated by their content. What makes your current mental state the state it is, then, is that it’s p rather than q you’re thinking or desiring. Although I can understand the idea, I have always found it odd in view of my actual mental life. I often find that I’m not sure what I believe or desire. In what follows, I’d like to suggest that this indeterminacy of mental states should perhaps be taken more seriously.* Why? Well, simply because I think it’s fairly pervasive. Our conversational maxims might demand that we be clear, but I think what’s actually going on is more like a duck-rabbit situation: given the context, we might be sad or angry, but we don’t really know, and there might not be a fact of the matter as to which is actually the case. So what’s going on?

“Do you love me?” This is a question we’d probably like to have a determinate answer to. But do we? Stating how we feel or what we think is common in our daily exchanges. If you asked me how I am now and what I think, I’d answer that I am fine, but a bit tired, and that I’m wondering whether to stay up or go to bed. It seems, then, that my mental states come in fairly clear categories: I feel fine in a certain way to a certain degree; I feel tired, and that makes me wonder whether I should go to bed. My feelings and thoughts seem determinate: I’m not angry or sad, but fine. My thought has a certain content: it’s about whether I want to go to bed, not about the aftertaste of the wine I had a moment ago. However, perhaps more often than I am aware, I don’t know how I feel and I don’t know what goes on in my mind. If you were to ask me in these moments how I am, I’d feel slightly embarrassed because I couldn’t tell. So my hunch is that we make our mental states seem more determinate than they actually are, not because we actually know how we are, but to spare ourselves and others embarrassment.

Now you might want to object that our own insecurity about what we think doesn’t actually matter. As a good content externalist, you might want to say that our thoughts are often about things we don’t know, but that doesn’t mean they are not determined by something definite; it just means that we don’t have the means to tell what that definite content is. To pick up an example my friend Markus Wild once gave me: You might be bitten by a poisonous or non-poisonous snake; even if you don’t know the least thing about snakes, it will definitely be one or the other. What matters is not what you know about snakes but the kind of snake that bit you. The upshot is that the content of our thoughts or desires or feelings might be determined whether we know it or not. In other words, the content that I am aware of might not at all be the content that my mental state is about. This is an important objection: I might want chocolate, but my body might in fact crave some sort of sugar, whether I know it or not.

That said, this externalist account might be important if we talk about beliefs and desires regarding natural kinds. I’m less sure this account figures in any instructive way when it comes to the question of whether we love someone or whether we have this or that opinion or association etc. What I mean is: even an externalist must accept that there are some thoughts and desires and feelings with regard to which it matters whether or not we are aware of their determinacy. If you ask me whether I love you, it’s no way out to say that I’m a content externalist…

So again: why doesn’t this figure in the philosophy of mind? If it does, please let me know. But as far as I can see, the fact of psychological indeterminacy is pretty underrepresented. That said, this is not quite true outside the narrow confines of philosophy. Although most philosophers (at least the ones I know, except perhaps for Wittgenstein) don’t seem to have picked up on it, literature and art are brimming with it. Thus, I’d like to close this post with one example.

Although there might be a number of instances, the short story “Suspicion” by my fellow medievalist and writer Evelina Miteva is the best illustration I can think of. It suggests psychological indeterminacy on four levels:

  • firstly, you don’t know what the main characters think of each other; so you don’t know whether they can ascribe determinate mental states to one another;
  • secondly, you as a reader cannot guess what the mental states of the protagonists are;
  • thirdly, the author does nothing decisive to make the mental states of the protagonists appear to be determinate;
  • fourthly, the protagonists themselves are portrayed as being unsure about their actual mental states.

Of course, the story offers cues as to what you (or the protagonists or the author) might believe, but it never reassures you about your guesses. I guess that is pretty much what our (mental) lives are like anyway. It’s not just that we don’t know what we think or feel; it’s indeterminate what the content of our mental states is. Given the complexity of thoughts, feelings and perhaps traumata that are present beneath the surface of what we are aware of, it is not surprising that many of our occurrent states appear to be indeterminate. But if this is so, why does it not receive more attention in theories of mind?


* Tim Crane kindly points out an intriguing paper on the issue. Here, the idea that mental states are determinate is succinctly questioned as a “textbook view”: “A lot of what we believe is incomplete, partial, confused and even contradictory. The single proposition-plus-individual belief state picture makes it hard to see how this can be the case, tending to attribute these features to our knowledge of our belief states, rather than to the states themselves. […] So we need to be able to say that it may simply be indeterminate whether Sam believes that his son is a great artist. But this is not because there are no psychological facts about what he believes — it’s rather because there are too many. Complexity and confusion can go right to the bottom of our worldview.”

On taking risks. With an afterthought on peer review

Jumping over a puddle is fun both to try and to watch. It’s a small risk to take, but some puddles are too large to cross… There are greater risks, but whatever the stakes, they create excitement. And in the face of possible failure, success feels quite different. If you play a difficult run on the piano, the listeners will share your relief when you manage to land on the right note in time. The same goes for academic research and writing. If you start out with a provocative hypothesis, people will get excited about the way you mount the evidence. Although at least some grant agencies ask about the risks taken in proposals, risk taking is hardly ever addressed in philosophy or writing guides. Perhaps people think it’s not a serious issue, but I believe it might be one of the crucial elements.

In philosophy, every move worth our time probably involves a risk. Arguing that mistakes or successes depend on their later contextualisation, I already looked at the “fine line between mistake and innovation.” But how do we get onto that fine line? This, I think, involves taking a risk. Taking a risk in philosophy means saying or doing something that will likely be met with objections. That’s probably why criticising interlocutors is so widespread. But there are many ways of taking risks. Sitting in a seminar, it might already feel risky just to raise your voice and ask a question. You feel you might make a fool of yourself and lose the respect of your fellow students or instructor. But if you make the effort, you might also be met with admiration for pressing what is only a seemingly trivial point. I guess it’s that oscillation between the possibility of failure and success that also moves the listeners or readers. It’s important to note that risk taking has a decidedly emotional dimension. Jumping across the puddle might land you in the puddle. But even if you don’t make it all the way, you’ll have moved more than yourself.

In designing papers or research projects, risk taking is most of the time rewarded, at least with initial attention. You can make an outrageous-sounding claim like “thinking is being” or “panpsychism is true”. You can present a non-canonical interpretation of a historical figure, like “Hume was a racist” or “Descartes was an Aristotelian”. You can edit or write on the work of a non-canonical figure or provide an uncommon translation of a technical term. This list is not exhaustive, and depending on the conventions of your audience all sorts of moves might be risky. Of course, then there is work to be done. You’ve got to make your case. But if you’re set to make a leap, people will often listen more diligently than when you merely promise to summarise the state of the art. In other words, taking a risk will be seen as original. That said, the leap has to be well prepared. It has to work from elements that are familiar to your audience. Otherwise the risk cannot be appreciated for what it is. On the other hand, mounting the evidence must appear feasible. Otherwise you’ll come across as merely ambitious.

Whatever you do, in taking a risk you’ll certainly antagonise some people. Some will be cheering and applauding your courage and originality. Others will shake their heads and call you weird or other endearing things. What to do? It might feel difficult to live with opposition. But if you have two opposed groups, one positive, one negative, you can be sure you’re onto something. Go for it! It’s important to trust your instincts and intuitions. You might make it across the puddle, even if half of your peers don’t believe it. If you fail, you’ve just attempted what everyone else should attempt, too. Unless it’s part of the job to stick to reinventing the wheel.

Now the fact that risks will be met with much opposition but might indicate innovation should give us pause when it comes to peer review. In view of the enormous competition, journals seem to require that authors comply with the demands of two reviewers. (Reviewer #2 is a haunting meme by now.) A paper that gets one wholly negative review will often be rejected. But if it’s true that risks, while indicative of originality, will incur strong opposition, should we not think that a paper is particularly promising when met with two opposing reviews? Compliance with every possible reviewer seems to encourage risk aversion. Conversely, looking out for opposing reviews would probably change a number of things in our current practice. I guess managing such a process wouldn’t be any easier. So it’s not surprising if things don’t change anytime soon. But such change, if considered desirable, is probably best incentivised bottom-up. And that would mean beginning in teaching.

The fact, then, that a claim or move provokes opposition or even refutation should not be seen as a negative trait. Rather, it indicates that something is at stake. It is important, I believe, to convey this message, especially to beginners, who should learn to enjoy taking risks and watching others take them.

Embracing mistakes in music and speech

Part of what I love about improvised music is the special relation to mistakes. If you listen to someone playing a well-known composition, a deviation from the familiar melody, harmony or perhaps even from the rhythm might appear to be a mistake. But what if the “mistake” is played with confidence and perhaps even repeated? Compare: “An apple a day keeps the creeps away.” Knowing the proverb, you will instantly recognise that something is off. But did I make a downright mistake or did I play around with the proverb? That depends, I guess. But what does it depend on? On the proverb itself? On my intentions? Or does it depend on your charity as a listener? It’s hard to tell. The example is silly and simple, but the phenomenon is rather complex if you think about mistakes in music and speech. What I would like to explore in the following is what constitutes the fine line between mistake and innovation. My hunch is that there is no such thing as a mistake (or an innovation). Yes, I know what you’re thinking, but you’re mistaken. Please hear me out.

Like much else, the appreciation of music is based on conventions that guide our expectations. Even if your musical knowledge is largely implicit (in that you might have had no exposure to theory), you’ll recognise variations or oddities – even if you don’t know the piece in question. The same goes for speech. Even if you don’t know the text in question and wouldn’t recognise it if the speaker messed up a quotation, you will recognise mispronunciations, oddities in rhythm and syntax, and the like. We often think of such deviations from conventions as mistakes. But while you might assume that the speaker sounds somewhat odd, they might in fact be North Americans intoning statements as if they were questions, performing funny greeting rituals or even singing rap songs. Some things might strike people as odd while others catch on, so much so that they end up turning into conventions. – But why do we classify one thing as a variation and the other as a mistake?

Let’s begin with mistakes in music. You might assume that a mistake is, for instance, a note that shouldn’t be played. We speak of a “wrong note” or a “bum note”. Play an F# with much sustain over a C Major triad and you get the idea. Even in the wildest jazz context that could sound off. But what if you hold that F# for half a bar and then add a Bb to the C Major triad? All else being equal, the F# will sound just fine (because the C Major can be heard as a C7 and the F# as the root note of the tritone substitution F#7), and our ear might expect the resolution to an F Major triad.* Long story short: whether something counts as a mistake does not depend on the note in question, but on what is played afterwards.**
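For readers who don’t have the harmony at their fingertips, the interval arithmetic behind this reinterpretation can be spelled out. The following is a minimal sketch of my own (the note spelling and the `semitones` helper are illustrative choices, not anything from music-theory canon or this post):

```python
# Illustrative sketch of the interval arithmetic behind the tritone
# substitution described above. Pitch classes are spelled with one
# arbitrary enharmonic choice each (A# written as Bb, etc.).
NOTES = ["C", "C#", "D", "Eb", "E", "F", "F#", "G", "Ab", "A", "Bb", "B"]

def semitones(a, b):
    """Interval in semitones from note a up to note b (mod 12)."""
    return (NOTES.index(b) - NOTES.index(a)) % 12

# F# lies a tritone (6 semitones) above C, so F#7 is the tritone
# substitute of C7:
assert semitones("C", "F#") == 6

# The substitution works because the two dominants share one tritone:
# the 3rd and 7th of C7 (E and Bb) are, enharmonically, the 7th and
# 3rd of F#7 (E and A#/Bb):
assert {"E", "Bb"} == {"Bb", "E"}

# And where C7 resolves down a fifth to F, the substitute F#7 resolves
# down a single semitone to the same F:
assert semitones("F", "F#") == 1
```

This is why adding the Bb rescues the F#: it supplies the shared tritone, letting the ear re-hear the chord as a dominant headed for F rather than as a C Major triad with a wrong note on top.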

Let this thought sink in and try to think through situations in which something sounding off was resolved. If you’re not into music, you might begin with a weird noise that makes you nervous until you notice that it’s just rain hitting the rooftop. Of course, there are a number of factors that matter, but the upshot is that a seemingly wrong note will count as fine or even as an impressive variation if it’s carried on in an acceptable way. This may be through a resolution (which allows for a reinterpretation of the note), through repetition (which allows for interpreting it as an intended or new element in its own right), or through some other measure. Repetition, for example, might turn a strange sequence into an acceptable form, even if the notes in question would not count as acceptable if played only once. It’s hard to say what exactly will win us over (and in fact some listeners might never be convinced). But the point is not that the notes themselves are altered; rather, repetition is a form of creating a meaningful structure, while a one-off does not afford anything recognisable. That is, repetition is a means to turn mistakes into something acceptable, a pattern. If this is correct, then it seems sensible to say that the process of going through (apparent) mistakes is not only something that can lead to an amended take on the music, but also something that leads to originality. After all, it’s turning apparent mistakes into something acceptable that makes us see them as legitimate variations.

I guess the same is true of speech. Something might start out striking you as unintelligible, but will be reinterpreted as a meaningful pattern if it is resolved into something acceptable. But how far does this go? You might think that the phenomenon is merely of an aesthetic nature, pertaining to the way we hear and recontextualise sounds in the light of what comes later. We might initially hear a string of sounds that we identify as language once we recognise a pattern in the light of what is uttered later. But isn’t this also true of the way we understand thoughts in general? If so, then making (apparent) mistakes is the way forward – even in philosophy.

Now you might object that the fact that something can be identified as an item in a language (or in music) does not mean that the content of what is said makes sense or is true. If I make a mistake in thinking, it will remain a mistake, even if the linguistic expression can be amended. – Although it might seem this way, I’d like to claim that the contrary is true: what goes for music and basic speech comprehension also goes for thought. Thoughts that would seem wrong at the time of utterance can be adjusted in the light of what comes later. Listening to someone, we will do everything to try and make their thoughts come out true. Trying to understand a thought that might sound unintelligible and wrong in the beginning might lead us to new insights, once we find ways in which it rhymes with things we find acceptable. “Ah, that is what you mean!” As Donald Davidson put it, charity is not optional.*** And yes, bringing Davidson into the picture should make it clear that my idea is not new. Thoughts that strike us as odd might turn out fine or even original once we identify a set of beliefs that makes them coherent. — Only among professional philosophers, it seems, are we all too often inclined to make the thoughts of our interlocutors come out false. But seen in analogy to musical improvisation, the talk of mistakes is perhaps just conservatism. Branding an idea as mistaken might merely reveal our clinging to familiar patterns.


* Nicer still is this resolution: you hold that F# for half a bar and then add an F# in the bass. All else being equal, the F# will sound just fine (because the C Major can be heard as a D7add9/11 without the root note) and our ear might expect the resolution to a G Major triad.

** See also Daniel Martin Feige’s Philosophie des Jazz, p. 77, where I found some inspiration for my idea: “Das, was der Improvisierende tut, erhält seinen spezifischen Sinn erst im Lichte dessen, was er später getan haben wird.” (“What the improviser does acquires its specific sense only in the light of what he will later have done.”)

*** The basic idea is illustrated by the example at the beginning of an older post on the nature of error.

The purpose of the canon

Inspired by a blog post by Lisa Shapiro and a remark by Sandra Lapointe, I began to think about the point of (philosophical) canons again: in view of various attempts to diversify the canon in philosophy, Sandra Lapointe pointed out that we shouldn’t do anything to the canon before we understand its purpose. That demand strikes me as very timely. In what follows, I’d like to look at some loose ends and argue that we might not be able to diversify the canon in any straightforward manner.

Do canons have a purpose? I think they do. In a broad sense, I assume that canons have the function of coordinating educational needs. In philosophy, we think of canons as something that should be known. The same goes for literature, visual arts or music. Someone who claims to have studied music is taken to have heard of, say, Bach. Someone who claims to have studied philosophy is taken to have heard of, say, Margaret Cavendish. Wait! What? – Off the top of my head, I could name quite a few people who won’t have heard of Cavendish, but they will have heard of Plato or Descartes and recognise them as philosophers. But why is someone like Cavendish not canonical? Why hasn’t the attempt to diversify the canon already taken some hold?

If you accept my attempt at pinning down a general purpose, the interesting question with regard to specific canons is: why should certain things be known? A straightforward answer would be: because someone, say, your teacher, wanted you to know. But I don’t think that we can rely on the intentions of individuals or even groups to pin down a canon. Aquinas is not canonical because your professor likes him. – How, then, do canons evolve? I tend to think of canons as part of larger systems like (political) ideologies. Adapting David L. Smith’s account of ideology, I would endorse a teleofunctional account of canons. (Yes, I think what Ruth Millikan said about language as a biological category can be applied to canons.) Canons survive or have stability at least so long as they promote specific educational purposes linked to a system or ideology. Just think of the notorious Marx-Engels editions in Western second-hand bookshops.

One of the crucial features of a teleofunctional understanding of canons is that they are not decided on by a person or a group of people, not even by the proverbial “old white men”. Rather they grow, get stabilised and perhaps decline again through historical periods that transcend the lives of individuals or groups. If canons get stabilised by promoting certain educational purposes, then the evolution of a canon will depend on the persistence of the educational purposes that they promote. I don’t know what would tip the balance in favour of a certain diversification, but at the moment I rather fear that philosophy itself might lose the status of serving an educational purpose. At least, if the dominant political climate is anything to go on.

If any of this is remotely correct, what are we to think of attempts to diversify the canon? I am not sure. I am myself in favour of challenging the canon. I’m not sure that this will alter the canon. It might or might not, depending perhaps on how much potential for challenge is built into the canon already. We are currently witnessing a number of very laudable attempts to make new material and interpretations available. And as Lisa Shapiro argues, the sheer availability might alter what gets in. At the end of the day, we can make a difference in our courses and in what we write. How that relates to the evolution of the canon is an intriguing question – and one that I’d like to think about more in the near future. But what we should watch out for, too, is how the (political) climate will affect the very status of philosophy as a canonical subject in universities and societies.

What are you good at?

Many philosophy papers have a similar structure. That is quite helpful, since you quickly know your way around. It’s like walking through a pedestrian zone: even if you are in a completely unfamiliar town, you immediately know where to find the kinds of shops you’re looking for. But apart from the macro-structure, this is often also true of the micro-structure: the way the sentences are phrased, the vocabulary is employed, the rhythm of the paragraphs. “I shall argue” in every introduction.

I don’t mean this as a criticism. I attempt to write like that myself, and I also try to teach it. Our writing is formulaic, and that’s fine. But what I would like to suggest is that we try to teach and use more ingredients in that framework. I vividly remember those moments when a fellow student or colleague got up and, instead of stringing yet another eloquent sentence together, drew something on the blackboard or attempted to impose order by presenting some crucial concepts in a table. For some strange reason, these ways of presenting a thought or some material rarely find their way into our papers. Why not?

I think of these and other means as styles of thinking. Visualising thoughts, for instance, is something that I’m not very good at myself. But that’s precisely why I learn so much from others’ visualisations. And even if I can’t draw, I can attempt to describe the visualisations. Describing a visualisation (or a sound or taste) is quite different from stringing arguments together. You might extend this point to all the other arts: literature, music, what have you!

Trying to think of language as one sense-modality amongst others might help to think differently about certain questions. Visit your phenomenologist! On the one hand, you can use such styles as aids in a toolkit that will not replace but enrich your ways of producing evidence or conveying an idea. On the other hand, they might actually enrich the understanding of an issue itself. In any case, such styles should be encouraged and find their way into our papers and books more prominently.

As I said, I’m not good at visualising, but it helps me enormously if someone else does it. Assuming that we all have somewhat different talents, I often ask students: “What are you good at?” Whatever the answer is, there is always something that will lend itself to promoting a certain style of thinking, ready to be exploited in the next paper to be written.