I don’t know what I think. A plea for unclarity and prophecy

Would you begin a research project if there were just one more day left to work on it? I guess I wouldn’t. Why? Well, my assumption is that the point of a research project is that we improve our understanding of a phenomenon. Improvement seems to be inherently future-directed, meaning that we understand x a bit better tomorrow than today. Therefore, I am inclined to think that we would not begin to do research, had we not the hope that it might lead to more knowledge of x in the future. I think this is true not only of research but of much thinking and writing in general. We wouldn’t think, talk or write certain things, had we not the hope that this leads to an improved understanding in the future. You might find this point trivial. But a while ago it began to dawn on me that the inherent future-directedness of (some) thinking and writing has a number of important consequences. One of them is that we are not the (sole) authors of our thoughts. If this is correct, it is time to rethink our ways of evaluating thoughts and their modes of expression. Let me explain.

So why am I not the (sole) author of my thoughts? Well, I hope you all know variations of the following situation: You try to express an idea. Your interlocutor frowns and points out that she doesn’t really understand what you’re saying. You try again. The frowning continues, but this time she offers a different formulation. “Exactly”, you shout, “this is exactly what I meant to say!” Now, who is the author of that thought? I guess it depends. Did she give a good paraphrase or did she also bring out an implication or a consequence? Did she use an illustration that highlights a new aspect? Did she perhaps even rephrase it in such a way that it circumvents a possible objection? And what about you? Did you mean just that? Or do you understand the idea even better than before? Perhaps you are now aware of an important implication. So whose idea is it now? Hers or yours? Perhaps you both should be seen as authors. In any case, the boundaries are not clear.

In this sense, many of my thoughts are not (solely) authored by me. We often try to acknowledge as much in forewords and footnotes. But some consequences of this fact might be more serious. Let me name three: (1) There is an obvious problem for the charge of anachronism in history of philosophy (see my inaugural lecture). If future explications of thoughts can be seen as improvements of these very thoughts, then anachronistic interpretations should perhaps not merely be tolerated but encouraged. Are Descartes’ Meditations complete without the Objections and Replies? Can Aristotle be understood without the commentary traditions? Think about it! (2) Another issue concerns the identity of thoughts. If you are a semantic holist of sorts, you might assume that a thought is individuated by numerous inferential relations. Is your thought that p really what it is without it entailing q? Is your thought that p really intelligible without seeing that it entails q? You might think so, but the referees of your latest paper might think that p doesn’t merit publication without considering q. (3) This leads to the issue of acceptability. Whose inferences or paraphrases count? You might say that p, but perhaps p is not accepted in your own formulation, while the expression of p in your supervisor’s form of words is greeted with great enthusiasm. In a similar spirit, Tim Crane has recently called for a reconsideration of peer review. Even if some of these points are controversial, they should at least suggest that authorship has rather unclear boundaries.

Now the fact that thoughts are often future-directed and have multiple authors has, in turn, a number of further consequences. I’d like to highlight two of them by way of calling for some reconsiderations: a due reconsideration of unclarity and what Eric Schliesser calls “philosophic prophecy”.*

  • A plea for reconsidering unclarity. Philosophers in the analytic tradition pride themselves on clarity. But apart from the fact that the recognition of clarity is necessarily context-dependent, clarity ought to be seen as the result of a process rather than a feature of the thought or its present expression. Most texts that are considered original or important, not least in the analytic tradition, are hopelessly unclear when read without guidance. Try Russell’s “On Denoting” or Frege’s “On Sense and Reference” and you know what I mean. Or try some other classics like Aristotle’s “De anima” or Hume’s “Treatise”. Oh, your own papers are exempt from this problem? Of course! Anyway, we all know this: we begin with a glimpse of an idea. And it’s the frowning of others that either makes us commit it to oblivion or try an improvement. But if this is remotely true, there is no principled reason to see unclarity as a downside. Rather it should be seen as a typical if perhaps early stage of an idea that wants to grow.
  • A plea for coining concepts or philosophic prophecy. Simplifying an idea by Eric Schliesser, we should see both philosophy and history of philosophy as involved in the business of coining concepts that “disclose the near or distant past and create a shared horizon for our philosophical future.”* As is well known, some authors (such as Leibniz, Kant or Nietzsche) have sometimes decidedly written for future audiences rather than present ones, trying to pave conceptual paths for future dialogues between religions, metaphysicians or Übermenschen. For historians of philosophy in particular this means that history is never just an antiquarian enterprise. By offering ‘translations’ and explanations we can introduce philosophical traditions to the future or silence them. In this sense, I’d like to stress, for example, that Ryle’s famous critique of Descartes, however flawed historically, should be seen as part of Descartes’ thought. In the same vein, Albert the Great or Hilary Putnam might be said to bring out certain versions of Aristotle. This doesn’t mean that they didn’t have any thoughts of their own. But their particular thoughts might not have been possible without Aristotle, who in turn might not be intelligible (to us) without the later developments. In this sense, much if not all philosophy is a prophetic enterprise.

If my thoughts are future-directed and multi-authored in such ways, this also means that I often couldn’t know at all what I actually think, if it were not for your improvements or refinements. This is of course one of the lessons learned from Wittgenstein’s so-called private language argument. But it concerns not only the possibility of understanding and knowing in general; a fortiori, it also concerns understanding our own public language and thought. As I said earlier, I take it to be a rationality constraint that I must agree to some degree with others in order to understand myself. This means that I need others to see the point I am trying to make. If this generalises, you cannot know thyself without listening to others.


* See Eric Schliesser, “Philosophic Prophecy”, in Philosophy and Its History, 209.



What are we on about? Making claims about claims

A: Can you see that?

B: What?

A: [Points to the ceiling:] That thing right there!

B: No. Could you point a bit more clearly?

You probably know this, too. Someone points somewhere assuming that pointing gestures are sufficient. But they are not. If you’re pointing, you’re always pointing at a multitude of things. And we can’t see unless we already know what kind of thing we’re supposed to look for. Pointing gestures might help, but without prior or additional information they are underdetermined. Of course we can try and tell our interlocutor what kind of thing we’re pointing at. But the problem is that quite often we don’t know ourselves what kind of thing we’re pointing at. So we end up saying something like “the black one there”. Now the worry I’d like to address today is that texts offer the same kind of challenge. What is this text about? What does it claim? These are recurrent and tricky questions. And if you want to produce silence in a lively course, just ask one of them.

But why are such questions so tricky? My hunch is that we notoriously mistake the question for something else. The question suggests that the answer could be discovered by looking into the text. In some sense, this is of course a good strategy. But without further information the question is as underdetermined as a pointing gesture. “Try some of those words” doesn’t help. We need to know what kind of text it is. But most things that can be said about the text are not to be found in the text. One might even claim that there is hardly anything to discover in the text. That’s why I prefer to speak of “determining” the claim rather than “finding out” what it is about.

In saying this I don’t want to discourage you from reading. Read the text, by all means! But I think it’s important to take the question about the claim of a text in the right way. Let’s look at some tacit presuppositions first. The question will have a different ring in a police station than in a seminar room or lecture hall. If we’re in a seminar room, we might indeed assume that there is a claim to be found. So the very room matters. The date matters. The place of origin matters. Authorship matters. Sincerity matters. In addition to these non-textual factors, the genre and language matter. So what if we have a poem in front of us, perhaps a very prosaic poem? And is the author sincere or joking? How do you figure this out?

But, you will retort, there is the text itself. It does carry information. OK then. Let’s assume all of the above matters are settled. How do you get to the claim? A straightforward way seems to be to figure out what a text is intended to explain or argue for. To illustrate this exercise, I often like to pick Ockham’s Summa logicae. It’s a lovely text with a title and a preface indicating what it is about. So, it’s about logic, innit? Well, back in the day I read and even added to a number of studies determining what the first chapters of that book are about. In those chapters, Ockham talks about something called “mental propositions”, and my question is: what are mental propositions supposed to account for? Here are a few answers:

  • Peter Geach: Mental propositions are invoked to explain grammatical features of Latin (1957)
  • John Trentman: Mental propositions form an ideal language, roughly in the Fregean sense (1970)
  • Joan Gibson: Mental propositions form a communication system for angels (1976)
  • Calvin Normore: Mental propositions form a mental language, like Fodor’s mentalese (1990)
  • Sonja Schierbaum: Ockham isn’t Fodor (2014)

Now imagine this great group of people in a seminar and tell them who gave the right answer. But note that all of them have read more than one of Ockham’s texts carefully and provided succinct arguments for their reading. In fact, most of them are talking to one another and respectfully agree on many things before giving their verdicts on what the texts on mental propositions claim. All of them point at the same texts; what they “discover” there, however, is quite different. And as you will probably know, by determining the claim you also settle what counts as support or argument for the claim. Depending on whether you look out for arguments supporting an angelic communication system or the mental language humans think in, you will judge what you discover to be better or worse.

So what is it that determines the claim of a text?* By and large it might be governed by what we find (philosophically) relevant. This is tied to the question why a certain problem arises for you in the first place. While many factors are set by the norms and terms of the scholarly discussion that is already underway, the claims seem to go with the preferred or fashionable trends in philosophy. While John Trentman seems to have favoured early analytic ideal language philosophy, Calvin Normore was clearly guided by one of the leading figures in the philosophy of mind. Although Peter Geach is rather dismissive, all of these works are intriguing interpretations of Ockham’s text. That said, we all should get together more often to discuss what we are actually on about when we determine the claims of texts. At least if we want to avoid being greeted mostly with the parroting of the most influential interpretations.


* You’ll find more on this question in my follow-up piece.

On saying “we” again. A response to Peter Adamson

Someone claiming that we today are interested in certain questions might easily obscure the fact that current interests are rather diverse. I called this phenomenon synchronic anachronism. While agreeing with the general point, Peter Adamson remarked that

“… as a pragmatic issue, at least professional philosophers who work, or want to work, in the English speaking world cannot easily avoid imagining a population of analytic philosophers who have a say in who gets jobs, etc. The historian is almost bound to speak to the interests of that imagined population, which is still a rough approximation of course but not, I think, a completely empty notion. In any case, whether it is empty or not, a felt tactical need to speak to that audience might explain why the “we” locution is so common.”

I think this is a rather timely remark and worth some further discussion. Clearly, it suggests a distinction between an indexical and a normative use of the word “we”. Used in the former sense, the word includes all the people who are reading or (in the given cases) studying history of philosophy. Thus, it might refer to a quite diverse set of individuals. In the latter sense, however, the word would pick out a certain group, specified as “analytic philosophers”. It is normative in that it does not merely pick out individuals who are interested in certain issues; rather it specifies what any individual should be interested in. Locutions with this kind of normative “we” are at once descriptive and directive. So the sentence “Currently, we have a heated debate about the trolley problem” has to be taken in the same vein as the sentence “We don’t eat peas with our fingers.” It states at once what we (don’t) do as well as what we should (or should not) do.*

Now where does the normative force of such locutions originate? Talking about historical positions, such “we” locutions seem to track the relevance of a given topic for a dominant group, the group of philosophers identifying as analytic. The relevance of such a topic, however, is not reinforced by the mere fact that all members of the dominant group are interested in that topic. Rather it is (also and perhaps crucially) reinforced by the fact that certain members of that group are quite powerful when it comes to the distribution of jobs, grants, publication space and other items relevant to survival in academia. This is worth noting because, if correct, it entails that the perceived relevance of topics is due to political power in academia. Some might say that this is a truism. Nevertheless, it is worth noting that topics of discussion are reinforced or excluded in this way. For if this is the case, then what Peter Adamson calls “tactical” and “pragmatic” has immediate repercussions on philosophy and historiography themselves. Being interested in topics that “we” are interested in might promote your career. Sidestepping them might be harmful. This being so, the career prospects related to a topic dictate its philosophical value.

Does this mean that someone writing “the history of the trolley problem” will merely do so out of tactical considerations? Or should we even encourage aspiring academics to go for such topics? It’s hard to tell, but it’s worth considering this practice and its implications. It might mean that our interest in certain topics, however genuine, is reinforced not because we all find these topics interesting, but because certain members of the dominant group are perceived as liking them. Successfully deviating from topics resonating with a dominant group, then, might require the privilege of a permanent job. Thus, if we really want to promote diversity in teaching and research on what is called the canon, it would be morally dubious to ask junior researchers to take the lead.


* Ruth Millikan discusses such pushmi-pullyu locutions at length in ch. 9 of her Language: A Biological Model.

Mistakes and objectivity. Myths in the history of philosophy (Part II)

“It’s raining.” While reading or writing this sentence now, I think many things. I think that the sentence is a rather common example in certain textbooks. I also think that it has a slightly sentimental ring. Etc. But there is one thing I can’t bring myself to think now: that it is true. Worse still, if someone sincerely uttered this sentence now in my vicinity, I would think that there is something severely wrong. A charitable view would be that I misheard or that he or she made a linguistic mistake. But I can’t bring myself to disagree with what I take to be the facts. The same is true when reading philosophy. If someone disagrees with what I take to be the facts, then … what?  – Since I am a historian of philosophy, people often seem to assume that I am able to suspend judgment in such cases. That is, I am taken to report what someone thought without judging whether the ideas in question are true or false. “Historians are interested in what people thought, not in the truth”, it is said. This idea of neutrality or objectivity is a rather pervasive myth. In what follows, I’d like to explain what I think is wrong with it.

Let’s begin by asking why this myth might be so pervasive. So why do we – wrongly – assume that we can think about the thoughts of others without judging them to be true or false? One reason might be the simple fact that we can use quotations. Accordingly, I’d like to trace this myth back to what I call the quotation illusion. Even if I believe that your claims are false or unintelligible, I can quote you – without adding my own view. I can say that you said “it’s raining”. Ha! Of course I can also use an indirect quote or a paraphrase, a translation and so on. Based on this convenient feature of language, historians of philosophy (often including myself) fall prey to the illusion that they can present past ideas without imparting judgment. What’s more, at least in the wake of Skinner, this neutral style is often taken as a virtue, and transgression is chided as anachronism (see my earlier post on this).

But the question is not whether you can quote without believing what you quote. Of course you can. The question is whether you can understand a sentence or passage without judging its truth. I think you can’t. (Yes, reading Davidson convinced me that the principle of charity is not optional.) However, some people will argue that you can. “Just like you can figure out the meaning of a sentence without judging its truth”, they will say, “you can understand and report sentences without judgment.” I beg to differ. You could not understand the sentence “It’s raining” without acknowledging that it is false, here and now at least. And this means that you can’t grasp the meaning without knowing what would have to be the case for it to be true. – The same goes for reading historical texts. Given certain convictions about, say, abstract objects, you cannot read, say, Frege without thinking that he must be wrong.

Did I just say that Frege was wrong? – I take that back. Of course, if a view does not agree with your beliefs, it seems a natural response to think that the author is wrong. But whenever people are quick to draw that conclusion, I start to feel uneasy. And this kind of hesitation might be another reason for why the myth of neutrality is so pervasive. On closer inspection, however, the feeling of uneasiness might not be owing to the supposed neutrality. Rather there is always the possibility that not the author but something else might be wrong. I might be wrong about the facts or I might just misunderstand the text. Even the text might be corrupt (a negation particle might be missing) or a pervasive canonical reading might prevent me from developing a different understanding.

The intriguing task is to figure out what exactly might be wrong. This is neither achieved by pretending to suspend judgment nor by calling every opponent wrong, but rather by exposing one’s own take to an open discussion. It is the multitude of different perspectives that affords objectivity, not their elimination.

Who are we? Myths in the history of philosophy (Part I)

“Instead of assuming that the historical figures we study are motivated by the same philosophical worries that worry us, we need to understand why they care about each issue they raise.” This is part of rule 7 from Peter Adamson’s Rules for the History of Philosophy, and it is very good advice indeed. When reading texts, we should try to be aware of certain traps of anachronism. (See the intriguing debate between Eric Schliesser and Peter Adamson.) People don’t always care about the same things, and if they do, they might do so for different reasons.

While I don’t believe that we can avoid anachronism, I think it is important to be aware of it. We can’t sidestep our interests, but it helps to make these interests explicit. What I would like to focus on today are some of the personal pronouns in the quoted passage: “us” and “we”. Saying that there are “worries that worry us” places the diversity in the past but seems to presuppose a fair amount of unity amongst us. But who are we? I picked rule 7, but it is safe to say that most historians of philosophy give in to this inclination of presupposing a “we”. I do find that funny. Historians (of philosophy) often like to mock people who indulge in generalised ideas about past periods such as the Middle Ages. “You wouldn’t believe”, they will say, “how diverse they were. The idea that all their philosophy is in fact about God is quite mistaken.” But then they turn around, saying that the medievals were quite different from us, where “us” indexes some unified idea of a current philosophical state of the art. What I find funny, then, is that historians will chide you for claiming something about the past that they are happy to claim about the present. Last time I checked there was no single “current philosophical debate”. At the same time, I should admit that I help myself a lot to these pronouns and generalisations. So if I sound like I’m ridiculing that practice, I should be taken as ridiculing myself most of all.

My point is simple. It’s not enough to be aware of diachronic anachronism; we also need to be aware of what I’d like to call synchronic anachronism. Why? Well, for one thing, claims about the “current debate” are supposed to track relevance in some domain. If something is debated currently or by us, it might signal that we have reason to study its history. Wanting to avoid anachronism, historians often use an inversion of this relevance tracker: facts about historical debates might be interesting precisely because they are not relevant today; in this sense they can teach us how the past is intriguingly and instructively different.

The second reason for highlighting synchronic anachronism is that ignoring it obscures the heterogeneity of current debates and the fact that we are anachronistic beings. Looking closely, we will find that we are a jumble of things that render us anachronistic: we are part of different generations, have different educational pasts and cling to various fashions; we might be nostalgic or prophetic, we live in different social situations and adhere to different authorities and canons. And sometimes we even have to defer to “the taxpayer” for relevance. So the idea that rationality requires agreement with others (the third “agreement constraint”, mentioned in one of my previous posts) should be seen in connection with the idea that such agreement might involve quite different and even opposing groups. The idealised present tense in which we talk about “us” and “our current interests” is merely a handy illusion. Acknowledging and respecting synchronic anachronism might seem tedious, but at least historians of philosophy should see the fun in it.