What are we on about? Making claims about claims

A: Can you see that?

B: What?

A: [Points to the ceiling:] That thing right there!

B: No. Could you point a bit more clearly?

You probably know this, too. Someone points somewhere assuming that pointing gestures are sufficient. But they are not. If you’re pointing, you’re always pointing at a multitude of things. And we can’t see unless we already know what kind of thing we’re supposed to look for. Pointing gestures might help, but without prior or additional information they are underdetermined. Of course we can try and tell our interlocutor what kind of thing we’re pointing at. But the problem is that quite often we don’t know ourselves what kind of thing we’re pointing at. So we end up saying something like “the black one there”. Now the worry I’d like to address today is that texts offer the same kind of challenge. What is this text about? What does it claim? These are recurrent and tricky questions. And if you want to produce silence in a lively course, just ask one of them.

But why are such questions so tricky? My hunch is that we notoriously mistake the question for something else. The question suggests that the answer could be discovered by looking into the text. In some sense, this is of course a good strategy. But without further information the question is as underdetermined as a pointing gesture. “Try some of those words” doesn’t help. We need to know what kind of text it is. But most things that can be said about the text are not to be found in the text. One might even claim that there is hardly anything to discover in the text. That’s why I prefer to speak of “determining” the claim rather than “finding out” what it is about.

In saying this I don’t want to discourage you from reading. Read the text, by all means! But I think it’s important to take the question about the claim of a text in the right way. Let’s look at some tacit presuppositions first. The question will have a different ring in a police station than in a seminar room or lecture hall. If we’re in a seminar room, we might indeed assume that there is a claim to be found. So the very room matters. The date matters. The place of origin matters. Authorship matters. Sincerity matters. In addition to these non-textual factors, the genre and language matter. So what if we have a poem in front of us, perhaps a very prosaic poem? And is the author sincere or joking? How do you figure this out?

But, you will retort, there is the text itself. It does carry information. OK then. Let’s assume all of the above matters are settled. How do you get to the claim? A straightforward way seems to be to figure out what a text is intended to explain or argue for. To illustrate this exercise, I often like to pick Ockham’s Summa logicae. It’s a lovely text with a title and a preface indicating what it is about. So, it’s about logic, innit? Well, back in the day I read and even added to a number of studies determining what the first chapters of that book are about. In those chapters, Ockham talks about something called “mental propositions”, and my question is: what are mental propositions supposed to account for? Here are a few answers:

  • Peter Geach: Mental propositions are invoked to explain grammatical features of Latin (1957)
  • John Trentman: Mental propositions form an ideal language, roughly in the Fregean sense (1970)
  • Joan Gibson: Mental propositions form a communication system for angels (1976)
  • Calvin Normore: Mental propositions form a mental language, like Fodor’s mentalese (1990)
  • Sonja Schierbaum: Ockham isn’t Fodor (2014)

Now imagine this great group of people in a seminar and tell them who gave the right answer. But note that all of them have read more than one of Ockham’s texts carefully and provided succinct arguments for their reading. In fact, most of them are talking to one another and respectfully agree on many things before giving their verdicts on what the texts on mental propositions claim. All of them point at the same texts; what they “discover” there is quite different, though. And as you will probably know, by determining the claim you also settle what counts as support or an argument for the claim. So depending on whether you look out for arguments supporting an angelic communication system or the mental language humans think in, you will find what you discover to be better or worse support.

So what is it that determines the claim of a text?* By and large it might be governed by what we find (philosophically) relevant. This is tied to the question of why a certain problem arises for you in the first place. While many factors are set by the norms and terms of the scholarly discussion that is already underway, the claims seem to go with the preferred or fashionable trends in philosophy. While John Trentman seems to have favoured early analytic ideal language philosophy, Calvin Normore was clearly guided by one of the leading figures in the philosophy of mind. Although Peter Geach is rather dismissive, all of these works are intriguing interpretations of Ockham’s text. That said, we all should get together more often to discuss what we are actually on about when we determine the claims of texts. At least if we want to avoid being greeted mostly with the parroting of the most influential interpretations.

____

* You’ll find more on this question in my follow-up piece.

Philosophy is History (Part I)

The relation between philosophy and history of philosophy is controversial. Some believe that history is of mere instrumental value; reading the odd classic might prevent us from reinventing the wheel and sharpen our wits. Others believe that history is an integral part of philosophy; working through the ideas of our ancestors is necessary for finding our own place. Let’s call them the instrumentalists and the integralists. Of course, there are good reasons for both views. Although I have slight leanings towards the latter view, I don’t want to argue for it. Rather, I wonder whether even instrumentalists engage in some sort of history when they practise philosophy without any obvious historical ties. What, you might ask, is the point of showing such a thing? Well, I think that all camps, historians and philosophers, no matter whether they are instrumentalists or integralists, can learn from one another. So my point is not that philosophy is inherently historical; my point is that doing philosophy involves doing (some) history.

Now how does philosophy involve history? I think a very basic issue that any philosopher will have addressed (at least tacitly) is the question of why a certain problem arises in the first place. Imagine someone claims that p. If you think or want to argue that not-p, you must have some idea why you find p problematic. It doesn’t matter whether “p” = “all humans are equal” or “the mind is like a computer”. Any claim needs justification, and if you want to offer such a justification you will need to begin with an understanding of what precisely needs justifying. This means you need some understanding of why a certain problem arises for you or for certain participants in a debate. Now the point is that the question of why a problem arises is always sensitive to a certain context, no matter whether you ask why a problem arises for you or your contemporaries or for Spinoza. History doesn’t need to be about dead people; it can be about you, too.

But why, you might ask, does that matter? Certain problems just never go away, do they? That problems arise is, you might say, a fact about certain concepts or their application, not about your personal take on the matter. At this point, philosophers often invoke the distinction between context of justification and context of discovery. It doesn’t matter whether you discovered your dislike of Fodor’s computational theory of mind while having a shower or during a walk to the pub. Justification is one thing; the history of a justification or a problem is another.

But my point is not that certain biographical details might lead to the discovery of a problem; the issue is why a problem or a certain set of concepts is relevant. In other words, the question is why something is a problem. When you spell out why a problem arises for you, you won’t tell me the story of your life but you will appeal to facts about concepts: your rejection of Fodor’s theory will perhaps rely on the concept of computation. But such facts about concepts are (partly) historical facts, unless you want to claim that Hobbes and Fodor use “computational” in the same sense. Historical facts about our understanding and the relevance of concepts and problems are not external to our current debates. They determine whether we find certain intuitions relevant, whether we speak about the same thing or past one another. Of course, we often don’t take notice of why things matter to us. But once we leave the boundaries of our specialisations or the field of philosophy, we are quickly reminded of our synchronic anachronisms. This is why context is tied up with relevance. Thus, the answer to the question of why a problem arises remains unsaturated so long as we don’t spell out why it is relevant, and to whom.

That said, there are also crucial differences between philosophy and its history. While certain philosophers stress that they are interested in making true claims, historians will point out that they are also interested in why certain claims stick around. Facts that make a claim true are (often) different from what makes the claim stick in our minds and debates. But the question why something matters to us involves both kinds of facts. This is why philosophy is always partly history.

Is there a difference between offline and online discussions? A response to Amy Olberding

“My trouble is usually… that I don’t entirely know what I think. And not knowing what to think is itself sometimes cast as shameful.”

Thanks to Daily Nous, I recently came across these sentences in a moving blog post by Amy Olberding. The message is clear and there is perhaps nothing substantial I have to add. As Justin Weinberg aptly notes, there is an “irony to philosophers in particular—whose job description has long included undermining certainty and complicating the obvious …” I agree that questioning one’s beliefs is one of the main points of doing philosophy. Having opinions and especially “defending” them is seriously overrated. But if Amy Olberding’s observations are correct, we are mainly trained to question others rather than our own beliefs.

However, I wonder why she restricts her observations to the “online discourse”. It seems to me that aggressive assertiveness is encouraged in many places, not least among philosophers. Of course, there is a particularly worrying trend in anonymous comments on social media, but the attitude seems to be a (perhaps somewhat inflated) reflection of our common modes of offline interaction.

This makes me wonder about the general distinction between online and offline discourse. It is now common to distinguish between the social media of the internet and our real life. But although online interaction requires technological aids and enables, among other things, anonymity, I don’t see a principled difference between the two. If I insult you, this is impertinent behaviour, no matter whether I do it online or offline. Yes, I have more options to hide or pretend when online, but that does not alter the moral dimension of interactions. Online interaction can be good or bad, because we behave well or badly. And despite all the hate there are good interactions online. They just don’t receive as much attention.

So the one thought I would like to add, after all, is that there might be good reasons to deflate the distinction between online and offline interactions. It’s not as if we were angels who happen to turn into moral monsters (only) when online. This is also why I have mixed feelings about the idea of “leaving” online discussions and “returning” to real life. Our lives and interactions are real wherever we are. Leaving an online discussion does not just mean switching off a machine. It means ceasing to interact with certain people (whom one can only reach online).*

___

* That said, I don’t question Amy Olberding’s reasons for leaving the discussions from which she resigned. I just think such a resignation is not all that different from leaving a discussion in a room full of people.

What is synthetic philosophy? A response to Eric Schliesser

Much current philosophy is done in what could be called a piecemeal fashion. Rather than plotting huge systems of thought, many of us work on details by trying to tackle issues that can be handled in the length of a paper. Of course, the details or pieces are still parts of larger projects, but most of the time these projects are not philosophical systems. Nevertheless, there are some philosophers whose approach strikes me as resulting in a sort of system. My favourite example is Ruth Millikan. Her work has shaped, and continues to shape, the landscape in philosophy of mind and language; moreover, her biofunctional approach provides underpinnings that run through all the details of her work and even inspire novel accounts in other fields of philosophy. Perhaps it is no coincidence that other philosophers working in the tradition of naturalism appear systematic in a similar fashion. In an intriguing blog post on the work of Daniel Dennett, Eric Schliesser has coined the term synthetic philosophy:*

“Synthetic philosophy is the enterprise of bringing together insights, knowledge, and arguments from the special sciences with the aim to offer a theoretically (reasonably) unified, coherent account of complex systems and connect these to a wider culture or other philosophical projects (or both). It may, in turn, generate new research in the special sciences or a new science connected to the framework adopted in the synthetic philosophy.”

Note that Eric Schliesser does not simply speak of a kind of survey work or journalism about science. Rather he takes this approach to be a philosophical category or trend in nuanced opposition to analytic philosophy. In highlighting the traits of unification and connection to a wider culture, his description strikes me as a tacit appeal to the old-fashioned idea of a philosophical system. However, a distinctive feature of synthetic philosophy is that it is to be informed by what he calls “special sciences”. Having a “kinship” with early modern natural philosophy, its current version promises “new cognitive tools … for the special sciences and philosophical reflection, including (ahh) the development of useful new myths.”

But what kind of category is synthetic philosophy? Given that we are still in the grip of a highly problematic divide between analytic and continental philosophy, baptising emerging trends is not an innocent matter. Such baptising or coinage is involved in canonising. So what are the boundaries of synthetic philosophy? Is it a trend within or going beyond the divide? In view of Millikan, Dennett and others that people have named in discussion, this trend seems to emerge from naturalism in its Darwinian branches. “Naturalism” is said in many ways, but if unification and being informed by the special sciences are its main traits then we should perhaps be hopeful that it surpasses some of the old divisions. That said, I have at least two worries about Eric Schliesser’s coinage:

  1. Which of the special sciences does Eric Schliesser have in mind? Are the humanities included?** Aiming at unified explanations, many branches of naturalism would probably tend to exclude them. The term “special sciences” is of course itself problematic in that it lends itself to restrictions or even reductionism. But if there are restrictions in place, they should be there for a reason. As it stands, it is unclear whether synthetic philosophy is supposed to be merely a certain way of doing philosophy (building explanatory systems and being informed by special sciences) or the systematic development of a programme (unifying certain branches of naturalism).
  2. If we want to think about categorising emerging branches of philosophy, it might be problematic to tie these branches too closely to the names of individual philosophers.*** In addition to the danger of feeding into the genius cult, there are good reasons to resist seeing philosophical or intellectual developments more generally as the achievements of a single person. One reason is that doing philosophy is essentially dialogical, happening between and not inside people. But if we accept this point, then what is it that distinguishes synthetic philosophy from the piecemeal fashion alluded to in the beginning?

Now if we accept that philosophy is not the work of geniuses, then what is it that creates the synthesis in synthetic philosophy? If it is not the person, is it perhaps a programme after all? Or the union of philosophy and other disciplines? But which ones? Why is synthetic philosophy not just philosophy? – One way of tackling these worries would perhaps be to drop the label “synthetic philosophy” and just continue to speak of (Darwinian) naturalism. Another way would be to see this trend indeed as a “way of doing” philosophy. But then there can’t be a principled reason to exclude any field of study. In this case philosophical conversations invoking arguments from history or literature would be in the same business as those invoking biological or physical theories: synthetic philosophy.

_____

* Note that he points out that Herbert Spencer might have introduced the term. See also ES’s previous posts on Rachel Carson and Peter Godfrey-Smith.

** ES suggests this when saying that “Dennett brings Darwinian theory to bear on and connects existing work in offering empirically informed, but still speculative accounts of the origins of mind, language, and life (most of which already deeply influenced by Darwinism) and to open up a new meta-science of culture, memetics, that can draw upon and re-orient existing cultural studies and human/social sciences.”

*** Of course, ES is well aware of this problem and alludes to this when noting that in “Dennett’s case, Darwinisms provides the synthesizing glue. This is no coincidence because Darwin himself is the hard-to-classify (among the) last natural philosopher(s)/naturalists or (among) the first synthetic philosopher(s) (if Spencer had not jumped ahead of him).”

How do you turn a half-baked idea into a paper?

Chatting about yesterday’s post on reducing one’s ideas to one single claim, I received the question of what to do in the opposite scenario. “It’s quite a luxury to have too many ideas. Normally, I have just about half an idea and an imminent deadline.” Welcome to my world! Although the problems of too many ideas and too little of an idea are closely related, I think this is worth an extra treatment.

Before trying to give some more practical advice, I think it’s important to see what it actually means to have a half-baked idea. So what is a half-baked idea or half an idea? What is it that is actually missing when we speak of such an idea? – The first thing that comes to mind is confidence. You might secretly like what you think but lack the confidence to go for it. What can you do about that? I think that the advice to work on one’s confidence is cold comfort. Contrary to common opinion, more often than not a lack of confidence is not about you but about a lack of legitimacy or authority. If you were an old don, you probably wouldn’t worry too much about whether people think your idea a bit underdeveloped. “Hey, it’s work in progress!” But if you are going to be marked or are on the market, then presenting work in progress is a privilege you don’t necessarily enjoy.

Now if you lack certain privileges, you can’t do much about that yourself. Luckily, this is not the end of the story. I think that what we call a “half-baked idea” lacks visible agreement with other ideas. In keeping with the three agreement constraints I mentioned earlier, your idea might lack (1) agreement with the ideas of others (authorities, secondary literature etc.), (2) agreement with the facts or – in this case – with (textual) material, or (3) agreement with your own other ideas. If you can’t see where your agreement or disagreement lies, this might affect your confidence quite drastically, because you don’t know where you actually are in the philosophical conversation. In view of these agreement relations, I’d take two steps to amend this.

The first thing I would advise is to figure out how your idea agrees on these different levels. So how does it relate to the literature, how does it relate to your material or the facts under discussion, and how does it relate to your common or former intuitions? If you make these relations clearer, your idea will certainly become a bit clearer, too. (We often do this by rushing through the secondary literature, trying to see whether what we say is off the mark. But it’s important to see that this is just one step.)

In a second and perhaps more crucial step, I would look for disagreements. Locating a disagreement within the literature will help you to work on the so-called USP, the “unique selling point” of your paper. If your idea doesn’t fit the material, it might be good to re-read the texts and see what makes you think about them in such a disparate way. If you disagree with your (former) intuitions, you might be onto something really intriguing, too. In any case, it’s crucial to locate your disagreements as clearly as possible, because it is those disagreements that might add precision to your idea.*

Another way in which ideas can be half-baked is if they are too broad. Yes, it might be right that, say, Ockham is a nominalist, but if that’s your main claim, no one will want to read on. (History of) Philosophy is a conversation, and you won’t feel like you’re contributing anything if you come up with too broad a claim. But how can you narrow down your claim to a size that makes it interesting and gives structure to your paper? I think this is one of the hardest tasks ever, but here is what I think might be a start. Write an introduction or abstract, using the following headers:

  1. Topic: If your claim is too broad, then you’re probably talking about a topic rather than your actual claim. If you can’t narrow it down, begin by writing about the topic (say, Ockham’s nominalism), bearing in mind that you will narrow it down later.
  2. Problem: If everything is fine, you won’t have anything to write on. But if your questions are too broad, they are probably still referring to a common problem discussed in the literature. It’s fine to write about this in order to say what the common problem is with, say, Ockham’s nominalism.
  3. Hypothesis: Only in the light of a common problem can you formulate a solution. If you find that your solution is in total agreement with the literature, then it might be better to go back and see where your solution disagrees. (Don’t be discouraged by that. Even if you agree with a common claim, you might have different paths to the same goal or think of different material.) Anyway, in keeping with the “one idea per paper” rule, now is the time to say what you think about one single aspect of Ockham’s nominalism! That’s your hypothesis.
  4. Question: If you have such a claim, you’re nearly there. Now you have to think again: Which question has to be answered in order to show that your hypothesis is correct? Is there a special feature of Ockham’s nominalism that has to be shown as being present in his texts? Or is there a common misunderstanding in the literature that has to be amended? Or is there a thesis that needs some refinement? Spelling out that question as precisely as possible gives you a research question or a set of them. Answering that set of questions will support your claim.

Going through these steps, you can draw on your insights regarding the disagreements mentioned earlier. But even then you might still have the impression that your thesis is too broad to be interesting or too broad to be pursued in a single paper. What then? I’d say, take what you call the hypothesis and make it your topic, and take what you call the question and make it your problem. Then try to narrow down again until you reach a workable size. If you have that, you have written a kind of introduction. That doesn’t yet give you a complete structure. But once you break down the research question into manageable parts, you might get the structure of your paper out of that, too.

____

* It’s important to note that the task of locating agreement and disagreement requires an explicit point of contact on which the (dis)agreement can be plotted. So you should make sure to find a concrete sentence or passage about which you (dis)agree. You’ll find more on points of contact here.

One idea per paper!

The new academic year is approaching rapidly and I’m thinking about student essays again. In Groningen, we now devote a certain amount of course hours to the actual writing of term papers. This has made me think not only about the kind of advice I want to give, but also about the kind of advice that actually works, in the sense that it can be implemented demonstrably. Given that I’m better at giving than following advice myself, that is quite a difficult question for me. One of the best pieces of advice I ever received came rather late, during my postdoc years in Berlin. I was discussing my worries about a paper with a good friend of mine. It was a paper on Ockham’s theory of mental language, and most of these worries concerned what I could possibly leave out. So much needed to be said – and he just stopped my flow by exclaiming: “one idea per paper!”

At first I thought that he was just trying to mock me. But thinking about my actual worries, I soon began to realise that this advice was pure gold. It settled quite a number of questions. Unfortunately, it also raised new obstacles. Nevertheless, I now think it’s good advice even for monographs and will try to go through some issues that it settled for me.

(1) What do I actually want to claim? – When writing the paper in question, I wanted to say a number of different things. I was proud that I had discovered a number of intriguing passages in Ockham that had not yet been taken seriously in the secondary literature. Reading these passages, I had a pile of ideas that I thought were new or deserved more attention, but I couldn’t quite put them into a proper sequence, let alone an argument. My new rule made me ask: what is it that I actually think is new? I initially came up with two and a half points, but soon realised that these points had different priorities. One and a half of them had to be shown in order to make the crucial point work. So the question I had asked myself had imposed an argumentative order onto my points. Now I was not just presenting bits of information, however new, but an argument for a single claim. (For the curious: this was the claim that Ockham’s mental language is conventional.)

(2) How much contextual information is required? – Once I had an argumentative order, a sequence for presenting the material suggested itself. But now that I had one single point at the centre of attention, another problem settled itself. Talking about any somewhat technical topic in a historically remote period requires invoking a lot of information. Even if you just want to explain what’s going on in some passages of a widely read text, you need to say at least a bit about the origin of the issue and the discussion it’s placed in. If you have more than one idea under discussion, this requires you to bring up multiple contexts. But if you’re confining yourself to one single claim, this narrows the demand considerably. As a rule of thumb I’d say: don’t bring up more than is required to make your one single claim intelligible.

(3) What do I have to argue for? – However, often the contextual information that makes a claim intelligible is in itself not well explored and might need further argument to establish why it works as support for your claim. This could of course get you into an infinity of further demands. How do you interrupt the chain sensibly? Often this issue is settled by the fact that scholars (or your supervisor) simply take certain things for granted: the conventions of your discipline settle some of these issues, then. But I don’t think that this makes for a helpful strategy. My rule is: you should only commit yourself to arguing for the one single claim at hand. – “But”, you will ask, “what about the intermediate claims that my argument depends on?” I’d say that you don’t have to argue for those. All you have to do is say that your argument is conditional on these further claims (and then name the claims in question). Rather than making the argument yourself, you can tackle these conditions by pointing out what you have to take for granted, what others have taken for granted (in the secondary literature), or what would have to be shown in order to take up these conditions individually. (To give a simple example: if you bring up textual evidence from Crathorn to address the consequences of Ockham’s theory, you don’t have to begin discussing Crathorn’s theory on its own terms. Why not? Well, because you are supporting a claim about Ockham rather than Crathorn.) Of course, someone might question the plausibility of your supporting evidence, but then you have a different claim under discussion. In sum, it’s crucial to distinguish between the claim you’re committed to arguing for and supporting evidence or information. For the latter you can shift the burden by indicating a possible route for tackling difficulties.

So the rule “one idea per paper” imposes structure in several ways: it provides an argumentative hierarchy, allows for restricting contextual information, and provides a distinction between tenets you’re committed to and tenets whose explication you can delegate to others (in the literature). Your paper may still contain many bits and pieces, but they are all geared towards supporting one single idea. If you’re revising your first draft, always ask yourself: how does this paragraph contribute to arguing for that claim? If you can say how, state this explicitly at the beginning of the paragraph. If you can’t say how, delete the paragraph and save it for a later day.

Mistakes and objectivity. Myths in the history of philosophy (Part II)

“It’s raining.” While reading or writing this sentence now, I think many things. I think that the sentence is a rather common example in certain textbooks. I also think that it has a slightly sentimental ring. Etc. But there is one thing I can’t bring myself to think now: that it is true. Worse still, if someone sincerely uttered this sentence now in my vicinity, I would think that there is something severely wrong. A charitable view would be that I misheard or that he or she made a linguistic mistake. But I can’t bring myself to disagree with what I take to be the facts. The same is true when reading philosophy. If someone disagrees with what I take to be the facts, then … what?  – Since I am a historian of philosophy, people often seem to assume that I am able to suspend judgment in such cases. That is, I am taken to report what someone thought without judging whether the ideas in question are true or false. “Historians are interested in what people thought, not in the truth”, it is said. This idea of neutrality or objectivity is a rather pervasive myth. In what follows, I’d like to explain what I think is wrong with it.

Let’s begin by asking why this myth might be so pervasive. So why do we – wrongly – assume that we can think about the thoughts of others without judging them to be true or false? One reason might be the simple fact that we can use quotations. Accordingly, I’d like to trace this myth back to what I call the quotation illusion. Even if I believe that your claims are false or unintelligible, I can quote you – without adding my own view. I can say that you said “it’s raining”. Ha! Of course I can also use an indirect quote or a paraphrase, a translation and so on. Based on this convenient feature of language, historians of philosophy (often including myself) fall prey to the illusion that they can present past ideas without imparting judgment. What’s more, at least in the wake of Skinner, this neutral style is often taken as a virtue, and transgression is chided as anachronism (see my earlier post on this).

But the question is not whether you can quote without believing what you quote. Of course you can. The question is whether you can understand a sentence or passage without judging its truth. I think you can’t. (Yes, reading Davidson convinced me that the principle of charity is not optional.) However, some people will argue that you can. “Just like you can figure out the meaning of a sentence without judging its truth”, they will say, “you can understand and report sentences without judgment.” I beg to differ. You could not understand the sentence “It’s raining” without acknowledging that it is false, here and now at least. And this means that you can’t grasp the meaning without knowing what would have to be the case for it to be true. – The same goes for reading historical texts. Given certain convictions about, say, abstract objects, you cannot read, say, Frege without thinking that he must be wrong.

Did I just say that Frege was wrong? – I take that back. Of course, if a view does not agree with your beliefs, it seems a natural response to think that the author is wrong. But whenever people are quick to draw that conclusion, I start to feel uneasy. And this kind of hesitation might be another reason why the myth of neutrality is so pervasive. On closer inspection, however, the feeling of uneasiness might not be owing to the supposed neutrality. Rather, there is always the possibility that something other than the author is wrong: I might be wrong about the facts, or I might just misunderstand the text. The text itself might even be corrupt (a negation particle might be missing), or a pervasive canonical reading might prevent me from developing a different understanding.

The intriguing task is to figure out what exactly might be wrong. This is neither achieved by pretending to suspend judgment nor by calling every opponent wrong, but rather by exposing one’s own take to an open discussion. It is the multitude of different perspectives that affords objectivity, not their elimination.

Ockham’s razor as a principle of (epistemic) agency

[ Since I’m officially on holiday, I take the liberty to reblog this post. However, the main idea expressed here is still not part of the canonical reading of Ockham. :) ]

During a recent workshop in Bucharest I asked the participants to connect two dots on a piece of paper.* Guess what! They all chose the simplest way of doing it and drew a perfectly straight line. This is perhaps not surprising. What I would like to suggest, however, is that this example might hint at a neglected way of understanding what is often called “Ockham’s razor”, the “principle of simplicity” or the “principle of parsimony”.

Along with the principle of non-contradiction and the principle of divine omnipotence, the principle of parsimony counts as one of the crucial principles in Ockham’s thought. Without much ado, he applies it to underpin his semantics, epistemology and ontology. But how, if at all, is the principle justified?

As Elliott Sober points out in a widely circulated article, the justification of Ockham’s razor and its variants is a matter of continuing debate. We encounter the simplicity principle in medieval discussions long before Ockham, and in a number of contexts. Echoing the Aristotelian idea that nature does nothing in vain, much of the debate before and after Ockham concerns the question whether the principle is founded on natural teleology. But Ockham, of all people, does not seem to offer any justification.

As I see it, the crucial context for this question is the debate about divine action and power. Comparing, for example, the positions of Thomas Aquinas and William of Ockham, we can clearly see two contrary versions of the simplicity principle. Aquinas endorses a teleological version when he states that “Deus et natura nihil frustra faciunt” (“God and nature do nothing in vain”) and that “natura non facit per duo, quod per unum potest facere” (“nature does not do by means of two what it can do by means of one”). Now, as is well known, Ockham often uses the simplicity principle in a merely explanatory sense, as when he writes: “frustra fit per plura quod fieri potest per pauciora” (“it is pointless to do with more what can be done with fewer”). Indeed, Ockham directly contradicts the claim of natural simplicity when he states that “frequenter facit Deus mediantibus pluribus quod posset facere mediantibus paucioribus, nec ideo male facit, quia eo ipso quod iste vult, bene et iuste facit” (“God frequently does by means of many things what he could do by means of fewer, and he does not thereby act badly, because by the very fact that he wills it, he acts well and justly”). (In I Sent., d. 17, q. 3)

So Ockham tells us that God often violates the principle of simplicity and takes diversions, even if there might be simpler ways. Now Ockham also clearly sees that, in claiming this, he might contradict the usual justification of simplicity. This is why he adds that God, in taking diversions, does not act without justification or badly. Rather it is the other way round: the fact that God wills to act thus and so makes it the case that it is good and apt.

What’s going on here? Although the distinction between rationalism and voluntarism is often misleading, it might help to use it for illustration. Aquinas is a rationalist, which means that for God reason is prior to will, not the other way round. God acts out of reasons that are at least partly determined by the way natural things and processes are set up. Doing “nothing in vain” means not to counter this order. Ockham takes the opposite position: something is rational or right because God wills it, not vice versa.

Now this result seems to render Ockham an outright opponent of what is called Ockham’s razor. For if God sets the standards, and God might often will complex diversions, then not only is there no justification for the simplicity principle; Ockham’s idea seems to undermine any epistemic value it might have.

So is there any non-teleological justification of the simplicity principle that Ockham could invoke? I think there might be an option once we consider the formulations of the principle. In the literature, discussions of the simplicity principle have often concentrated on the nouns: “natura”, “deus”, “entia”, “causae rerum” etc. But “frustra” is used as an adverb; it qualifies “facere”, “agere”, or “ponere” – making, acting, making assumptions. The point I want to urge, then, is that the razor is about action: if you do something, do it in the simplest way. This would make it a principle of means-ends rationality, as opposed to the divine or natural simplicity that Aquinas relies on.

While the natural-teleological version of the simplicity principle seems very much at home amongst fairly laden principles such as the principle of sufficient reason or the principle of the uniformity of nature, Ockham’s razor seems to resonate with a different set of principles, such as the idea that explanations have to end somewhere and that infinite regresses should be avoided. These principles weigh with us not solely because we might reach an epistemic goal by following them. Sometimes we don’t, and then we have to practise epistemic humility or agnosticism. It often makes sense for us limited beings to act with as little effort as possible, but this consideration is not always decisive.

Connecting these ideas to the discussion about divine action might be insightful. Ockham contends that God can do things in complex ways without acting improperly. The upshot might be that humans cannot do this in the same way, since the human will does not set the norms of how things should be. Thus, for us, it is important to come to an end, not in the natural-teleological sense but in the profane sense of finishing or stopping.

You might say this is too profane to justify the principle. But maybe the point is conceptual. Maybe the simplest way of performing an action is what defines a certain type of action in the first place. As soon as you pick a more complex way, you do it improperly, unless you are God. So if you’re asked to connect two dots, you might think the goal is to connect them in a perfect way, whatever that might mean. But you might also assume that the point is to get it done with the least effort. And if you take a diversion, you do it improperly. One might even argue that a diversion constitutes a different action altogether. Connecting three dots is different from connecting two.

In any case, I hope to have pointed to a promising way of justifying Ockham’s razor (in the medieval discussion) without invoking a supposed simplicity in nature. As I hope to work on a project on the simplicity principle in medieval and early modern philosophy soonish, I would be very grateful for any kind of feedback.

_______

*Thanks to the participants of this workshop I now can connect a few more historical and conceptual dots. Special thanks to Peter Anstey, Laura Georgescu, Madalina Giurgea, Dana Jalobeanu and Doina-Cristina Rusu as well as to many of my colleagues in Groningen.

Voices inside my head. On constructive criticism

Most of the time when writing or just thinking, I hear voices uttering what I (want to) write. Often this is a version of my own voice, but there are also numerous other voices: voices of friends, colleagues and students. Sometimes I hear them because I remember what they said during a discussion. But more often I imagine them saying things in the way they would phrase an objection or a refinement of what I wanted to say. Yet, although it is me who imagines them, it’s their convictions and style that determine the content and phrasing. If I succeed in my attempts to write in a dialogical fashion, it is the achievement of others, who have left their traces in my memory and eventually in my texts. It is this kind of experience that makes writing fun. But what I want to claim now is that this is also a good way of integrating criticism: this way, the strengths of others can become strengths in your own writing.

Why is this important? Philosophy is often taken to thrive on criticism. Some would even claim that criticism lies at the heart of intellectual exchange. Only if we take into account the critique of others can we expect to have considered an issue sufficiently. I agree. Assuming that reason is social, philosophers need to expose themselves to others and see what they have to say. However, it’s not clear that the social nature of reason requires criticism as the primary mode of exchange. There are various styles of thinking; understanding and engaging with ideas can happen in many different ways.

Some people will ask to be “destroyed” by their interlocutors, while others might think that any question amounts to an impertinent transgression. Might there be a middle ground between these extremes? What is telling is that the metaphors around philosophical argumentation mostly intimate opposition or even war. (See for instance the intriguing discussion in and of Catarina Dutilh Novaes’ great piece on metaphors for argumentation.) In view of this practice, I think it’s crucial to remember that spotting mistakes does not turn anything into a good idea. The fact that you know how to find flaws does not mean that you’re able to improve an idea. (See Maarten Steenhagen’s excellent clip on this point, and make sure to turn up the sound.) In any case, it’s not surprising that there is an ongoing debate and a bit of a clash of intuitions between philosophers who like and who dislike an adversarial style of conversation. Some think criticism fosters progress, while others think criticism blocks progress.

How can we move on? I think it’s crucial to consider the precise nature of the criticism in question. The point is not whether people are nice to one another; the point is whether criticism is genuine. But what is genuine criticism? I think genuine criticism takes a paper or talk on its own terms. Now what does that mean? Here, it helps to rely on a distinction between internal and external criticism. Internal criticism takes the premises of a contribution seriously and focuses on issues within an argument or view. A good example of a whole genre of internal criticism is the medieval commentary tradition: a commentary exposes the premises and aims at clarification and refinement without undermining the proposed idea. By contrast, external criticism often starts from the assumption that the whole way of framing an issue is mistaken. A good example of such external criticism is the debate between hylomorphists and mechanists in early modern philosophy.*

I think that only internal criticism is genuine. That doesn’t mean that external criticism is useless, but it is not an engagement with the opposing position; at least not in such a way that it attempts to leave the main claim intact. It is the view that the opponent’s position is not acceptable.** I think it is important to see that these kinds of criticism are completely different moves in the game, or even wholly different games. Internal criticism happens on common ground; external criticism is the denial of common ground. Both forms are legitimate. But I think a lot of discussions would go better if those involved were clear about whether their criticism is internal or external. Ideally, both kinds of criticism are presented along with an indication of what an answer to the challenge would actually look like.

How can we apply this to our writing? I think it is vital to include both kinds of criticism. But it helps me to mark the difference. If someone tells me that my argument for taking Locke as a social externalist about semantics might need more textual support or a refined exposition of how the evidence supports my claim, I will see their point as supporting my idea. (Of course, if I have no such evidence, this criticism would be fatal, however genuine.) If someone tells me that Locke’s semantics isn’t worth studying in the first place, their point is clearly external. That doesn’t mean that I don’t need to reply to the challenge of external criticism. But the latter targets my endeavour in a different way: external criticism questions the very point of doing what I do and cannot be addressed by amending this or that argument. Responding to internal criticism happens within a shared set of ideas. Responding to a clash of intuitions means deciding for or against a whole way of framing an issue. Only internal criticism is constructive, but we need to respond to external criticism in order to see why it is constructive. So when you work on your argument, don’t bother with external criticism. When you write the introduction or conclusion, by contrast, reach out to those who question your project entirely.

How then should we deal with such criticisms in practice? It’s sometimes difficult to deal with either. This is why I ultimately like the approach of internalising all kinds of criticism into the choir of voices inside my head. Once they are in my head, it feels like I can control them. They become part of my thinking and my chops, as it were. I can turn up the volume of any voice, and in writing it’s me who is in charge of the volume.*** Thus, I’d suggest we should appropriate both kinds of criticism. It’s just crucial to recognise them for what they are. Appropriating all the voices gives you some control over each of them.

To be sure, at the end of the day it’s important to see that we’re all in this together. We’re doing philosophy. And even if people don’t agree with our projects, they endorse that larger project called philosophy. So even in external criticism there must be some sort of common ground. Most of the time, I can’t see what the precise features of this common ground are, but being lost in that way makes me feel at home.

______

* Of course, there are tipping points at which internal can turn into external criticism and vice versa.

** This doesn’t mean that external criticism is hostile or not genuine in other ways. One can criticise externally out of genuine concern, assuming perhaps that an idea requires a different kind of framework or that work on a seriously flawed position might prove a waste of time for the addressee.

*** Reaching a state of control or even balance is not easy, though. It is often the most critical voices that are loudest. In such cases, it might be best to follow Hume’s advice and seek good company.

The purpose of the canon

Inspired by a blog post by Lisa Shapiro and a remark by Sandra Lapointe, I began to think about the point of (philosophical) canons again: in view of various attempts to diversify the canon in philosophy, Sandra Lapointe pointed out that we shouldn’t do anything to the canon before we understand its purpose. That demand strikes me as very timely. In what follows, I’d like to look at some loose ends and argue that we might not be able to diversify the canon in any straightforward manner.

Do canons have a purpose? I think they do. In a broad sense, I assume that canons have the function of coordinating educational needs. In philosophy, we think of canons as comprising what should be known. The same goes for literature, visual arts or music. Someone who claims to have studied music is taken to have heard of, say, Bach. Someone who claims to have studied philosophy is taken to have heard of, say, Margaret Cavendish. Wait! What? – Off the top of my head, I could name quite a few people who won’t have heard of Cavendish, but they will have heard of Plato or Descartes and recognise them as philosophers. But why is someone like Cavendish not canonical? Why hasn’t the attempt to diversify the canon already taken some hold?

If you accept my attempt at pinning down a general purpose, the interesting question with regard to specific canons is: why should certain things be known? A straightforward answer would be: because someone, say, your teacher, wanted you to know them. But I don’t think that we can rely on the intentions of individuals or even groups to pin down a canon. Aquinas is not canonical because your professor likes him. – How, then, do canons evolve? I tend to think of canons as part of larger systems like (political) ideologies. Adapting David L. Smith’s account of ideology, I would endorse a teleofunctional account of canons. (Yes, I think what Ruth Millikan said about language as a biological category can be applied to canons.) Canons survive or remain stable at least so long as they promote specific educational purposes linked to a system or ideology. Just think of the notorious Marx-Engels editions in Western antiquarian bookshops.

One of the crucial features of a teleofunctional understanding of canons is that they are not decided on by a person or a group of people, not even by the proverbial “old white men”. Rather, they grow, get stabilised and perhaps decline again over historical periods that transcend the lives of individuals or groups. If canons get stabilised by promoting certain educational purposes, then the evolution of a canon will depend on the persistence of the educational purposes that it promotes. I don’t know what would tip the balance in favour of a certain diversification, but at the moment I rather fear that philosophy itself might lose its status as serving an educational purpose. At least, if the dominant political climate is anything to go on.

If any of this is remotely correct, what are we to think of attempts to diversify the canon? I am not sure. I am myself in favour of challenging the canon, but I’m not sure that this will alter it. It might or might not, depending perhaps on how much potential for challenge is built into the canon already. We are currently witnessing a number of very laudable attempts to make new material and interpretations available. And as Lisa Shapiro argues, the sheer availability might alter what gets in. At the end of the day, we can make a difference in our courses and in what we write. How that relates to the evolution of the canon is an intriguing question – and one that I’d like to think about more in the near future. But what we should watch out for, too, is how the (political) climate will affect the very status of philosophy as a canonical subject in universities and societies.