On saying “we” again. A response to Peter Adamson

Someone claiming that we today are interested in certain questions might easily obscure the fact that current interests are rather diverse. I called this phenomenon synchronic anachronism. While agreeing with the general point, Peter Adamson remarked that

“… as a pragmatic issue, at least professional philosophers who work, or want to work, in the English speaking world cannot easily avoid imagining a population of analytic philosophers who have a say in who gets jobs, etc. The historian is almost bound to speak to the interests of that imagined population, which is still a rough approximation of course but not, I think, a completely empty notion. In any case, whether it is empty or not, a felt tactical need to speak to that audience might explain why the “we” locution is so common.”

I think this is a rather timely remark and worth some further discussion. Clearly, it suggests a distinction between an indexical and a normative use of the word “we”. Used in the former sense, the word includes all the people who are reading or (in the cases at hand) studying the history of philosophy. Thus, it might refer to a quite diverse set of individuals. In the latter sense, however, the word would pick out a certain group, specified as “analytic philosophers”. It is normative in that it does not merely pick out individuals who are interested in certain issues; rather, it specifies what any individual should be interested in. Locutions with this kind of normative “we” are at once descriptive and directive. So the sentence “Currently, we have a heated debate about the trolley problem” has to be taken in the same vein as the sentence “We don’t eat peas with our fingers.” It states at once what we (don’t) do as well as what we should (or should not) do.*

Now where does the normative force of such locutions originate? Talking about historical positions, such “we” locutions seem to track the relevance of a given topic for a dominant group, the group of philosophers identifying as analytic. The relevance of such a topic, however, is reinforced not merely by the fact that all members of the dominant group are interested in it. Rather, it is (also and perhaps crucially) reinforced by the fact that certain members of that group are quite powerful when it comes to the distribution of jobs, grants, publication space and other items relevant to survival in academia. This is worth noting because, if correct, it entails that the perceived relevance of topics is due to political power in academia. Some might say that this is a truism. Nevertheless, it is worth noting that topics of discussion are reinforced or excluded in this way. For if this is the case, then what Peter Adamson calls “tactical” and “pragmatic” has immediate repercussions on philosophy and historiography themselves. Being interested in the topics that “we” are interested in might promote your career. Sidestepping them might be harmful. This being so, the career prospects attached to a topic dictate its perceived philosophical value.

Does this mean that someone writing “the history of the trolley problem” will do so merely out of tactical considerations? Or should we even encourage aspiring academics to go for such topics? It’s hard to tell, but it’s worth considering this practice and its implications. It might mean that our interest in certain topics, however genuine, is reinforced not because we all find these topics interesting, but because certain members of the dominant group are perceived as liking them. Successfully deviating from topics that resonate with a dominant group, then, might require the privilege of a permanent job. Thus, if we really want to promote diversity in teaching and research on what is called the canon, it would be morally dubious to ask junior researchers to take the lead.

___

* Ruth Millikan discusses such pushmi-pullyu locutions at length in ch. 9 of her Language: A Biological Model.

Learning from medievalists. A response to Robert Pasnau

In a recent blog post, Robert Pasnau makes a strong case for designing a canonical survey in medieval philosophy. He rightly points out that there is a striking shortage of specialists in medieval philosophy in philosophy departments:

“As things are, it seems to me that our colleagues in other fields have been persuaded that there’s a lot of interesting material in the medieval period. But they have not yet been persuaded that the study of medieval philosophy is obligatory, or that it’s obligatory for even a large department to have a specialist in the field. And no wonder this is so, given that we ourselves have failed to articulate a well-defined course of study that strikes us as having canonical status.”

If this is correct, the lack of jobs for medievalists is at least partly due to the lack of a medieval canon. While I agree that the lack of jobs is problematic, I am not entirely convinced by the reasons provided. Could we really expect the number of hires to go up if we had more agreement on a set of canonical texts? Of course, Robert Pasnau’s reasoning is not that simplistic. The idea is that presenting a set of good canonical texts could persuade our colleagues that we have an obligation to study and teach those texts. As he points out, such a set of texts would be canonical in that they are united by a “shared narrative”; but unlike early modernists, for instance, medievalists have “failed” to produce such a narrative.

Is it really true that we failed to produce such a narrative? I am not sure. Firstly, I don’t think that canons can be designed at will; rather, they evolve in conjunction with larger ideologies. Secondly, looking at histories of philosophy, there is an ample set of narratives surrounding the supposed rise and decline of the “scholastic synthesis”. These and other narratives are embedded in a larger story about the dominance of theology in the Middle Ages and the subsequent secularisation and scientific revolution. Of course, all these narratives are rightly contested, but they clearly form the basis of a canon that is still pervasive in our surveys. In this grand narrative, medieval thought is seen as theological rather than philosophical. Accordingly, I think that the shortage of jobs for medievalists is due not to the lack but to the dominance of a canon. What separates medieval from early modern studies is not that only the latter has a set of canonical texts. Rather, it’s the fact that only early modern philosophy is seen as bound up with the rise of science.

What to do? I think Robert Pasnau is right that we should think carefully about the texts that we want to make available and teach in our courses. But rather than introducing these texts as part of a new canon, it might be more persuasive to use them to challenge the existing narratives about medieval thought and the rest of philosophy. In this regard, it’s perhaps crucial to stress the continuities between the medieval and other periods when thinking about selections of texts.* One way to do this would be to challenge the supposed conjunction of early modernity and science by encouraging people to study more medieval natural philosophy – a field that seems enormously fruitful but remains largely understudied. The same goes for late scholasticism and the relation between discussions inside and outside the schools in the 16th and 17th centuries. The list of possible moves to look for continuities could be extended, but the central point is this: rather than designing a competing canon for medieval philosophy, we should convince our colleagues that their stories are not intelligible without invoking the medieval discussions.

_____

* It is worth noting that Robert Pasnau has contributed to this endeavour himself more than once, for instance, in his Metaphysical Themes 1274-1671, his works on Aquinas and his Theories of Cognition in the Later Middle Ages as well as in his translations for the Cambridge Translations Series.

Mistakes and objectivity. Myths in the history of philosophy (Part II)

“It’s raining.” While reading or writing this sentence now, I think many things. I think that the sentence is a rather common example in certain textbooks. I also think that it has a slightly sentimental ring. Etc. But there is one thing I can’t bring myself to think now: that it is true. Worse still, if someone sincerely uttered this sentence now in my vicinity, I would think that there is something severely wrong. A charitable view would be that I misheard or that he or she made a linguistic mistake. But I can’t bring myself to disagree with what I take to be the facts. The same is true when reading philosophy. If someone disagrees with what I take to be the facts, then … what? – Since I am a historian of philosophy, people often seem to assume that I am able to suspend judgment in such cases. That is, I am taken to report what someone thought without judging whether the ideas in question are true or false. “Historians are interested in what people thought, not in the truth”, it is said. This idea of neutrality or objectivity is a rather pervasive myth. In what follows, I’d like to explain what I think is wrong with it.

Let’s begin by asking why this myth might be so pervasive. Why do we – wrongly – assume that we can think about the thoughts of others without judging them to be true or false? One reason might be the simple fact that we can use quotations. Accordingly, I’d like to trace this myth back to what I call the quotation illusion. Even if I believe that your claims are false or unintelligible, I can quote you – without adding my own view. I can say that you said “it’s raining”. Ha! Of course, I can also use an indirect quote, a paraphrase, a translation and so on. Based on this convenient feature of language, historians of philosophy (often including myself) fall prey to the illusion that they can present past ideas without passing judgment. What’s more, at least in the wake of Skinner, this neutral style is often taken as a virtue, and transgression is chided as anachronism (see my earlier post on this).

But the question is not whether you can quote without believing what you quote. Of course you can. The question is whether you can understand a sentence or passage without judging its truth. I think you can’t. (Yes, reading Davidson convinced me that the principle of charity is not optional.) However, some people will argue that you can. “Just like you can figure out the meaning of a sentence without judging its truth”, they will say, “you can understand and report sentences without judgment.” I beg to differ. You could not understand the sentence “It’s raining” without acknowledging that it is false, here and now at least. And this means that you can’t grasp the meaning without knowing what would have to be the case for it to be true. – The same goes for reading historical texts. Given certain convictions about, say, abstract objects, you cannot read, say, Frege without thinking that he must be wrong.

Did I just say that Frege was wrong? – I take that back. Of course, if a view does not agree with your beliefs, it seems a natural response to think that the author is wrong. But whenever people are quick to draw that conclusion, I start to feel uneasy. And this kind of hesitation might be another reason why the myth of neutrality is so pervasive. On closer inspection, however, the feeling of uneasiness might not be owing to any supposed neutrality. Rather, there is always the possibility that something other than the author is wrong. I might be wrong about the facts, or I might just misunderstand the text. Even the text might be corrupt (a negation particle might be missing), or a pervasive canonical reading might prevent me from developing a different understanding.

The intriguing task is to figure out what exactly might be wrong. This is neither achieved by pretending to suspend judgment nor by calling every opponent wrong, but rather by exposing one’s own take to an open discussion. It is the multitude of different perspectives that affords objectivity, not their elimination.

Ockham’s razor as a principle of (epistemic) agency

[ Since I’m officially on holiday, I take the liberty of reblogging this post. However, the main idea expressed here is still not part of the canonical reading of Ockham. :) ]

During a recent workshop in Bucharest I asked the participants to connect two dots on a piece of paper.* Guess what! They all chose the simplest way of doing it and drew a perfectly straight line. This is perhaps not surprising. What I would like to suggest, however, is that this example might hint at a neglected way of understanding what is often called “Ockham’s razor”, the “principle of simplicity” or the “principle of parsimony”.

Along with the principle of non-contradiction and the principle of divine omnipotence, the principle of parsimony counts as one of the crucial principles in Ockham’s thought. Without much ado, he applies it to underpin his semantics, epistemology and ontology. But how, if at all, is the principle justified?

As Elliott Sober points out in a widely circulated article, the justification of Ockham’s razor and its variants is a matter of continuing debate. We encounter the simplicity principle in medieval discussions long before Ockham, and in a number of contexts. Echoing the Aristotelian idea that nature does nothing in vain, much of the debate before and after Ockham is about the question whether the principle is founded on natural teleology. But Ockham, of all people, does not seem to offer any justification.

As I see it, the crucial context for this question is the debate about divine action and power. Comparing, for example, the positions of Thomas Aquinas and William of Ockham, we can clearly see two contrary versions of the simplicity principle. Aquinas endorses a teleological version, when he states that “Deus et natura nihil frustra faciunt” and that “natura non facit per duo, quod per unum potest facere.” Now, as is well known, Ockham often uses the simplicity principle in a merely explanatory sense when he writes, for instance: “frustra fit per plura quod fieri potest per pauciora”. Indeed, Ockham directly contradicts the claim of natural simplicity when he states that “frequenter facit Deus mediantibus pluribus quod posset facere mediantibus paucioribus, nec ideo male facit, quia eo ipso quod iste vult, bene et iuste facit.” (In I Sent., d. 17, q. 3)

So Ockham tells us that God often violates the principle of simplicity and takes diversions, even if there might be simpler ways. Now Ockham also clearly sees that, in claiming this, he might contradict the usual justification of simplicity. This is why he adds that God, in taking diversions, does not act without justification or badly. Rather it is the other way round: the fact that God wills to act thus and so makes it the case that it is good and apt.

What’s going on here? Although the distinction between rationalism and voluntarism is often misleading, it might help to use it for illustration. Aquinas is a rationalist, which means that for God reason is prior to will, not the other way round. God acts out of reasons that are at least partly determined by the way natural things and processes are set up. Doing “nothing in vain” means not to counter this order. Ockham takes the opposite position: something is rational or right because God wills it, not vice versa.

Now this result seems to render Ockham an outright opponent of what is called Ockham’s razor. For if God sets the standards and might often will complex diversions, not only does there seem to be no justification for the simplicity principle; Ockham’s idea seems to undermine any epistemic value it might have.

So is there any non-teleological justification of the simplicity principle that Ockham could invoke? I think there might be an option once we consider the formulations of the principle. In the literature, discussions of the simplicity principle have often concentrated on the nouns “natura”, “deus”, “entia”, “causae rerum” etc. But “frustra” is used as an adverb; it qualifies “facere”, “agere”, or “ponere” – making, acting, making assumptions. The point I want to urge, then, is that the razor is about action: if you do something, there is a simplest way of doing it. This would make it a principle of means-ends rationality as opposed to the divine or natural simplicity that Aquinas relies on.

While the natural-teleological version of the simplicity principle seems very much at home amongst fairly laden principles such as the principle of sufficient reason or the principle of the uniformity of nature, Ockham’s razor seems to resonate with a different set of principles, such as the idea that explanations have to end somewhere and that infinite regresses should be avoided. These principles weigh with us not solely because they help us reach an epistemic goal. Sometimes we don’t reach it, and then we have to practise epistemic humility or agnosticism. It often makes sense for us limited beings to act with as little effort as possible, but such reasoning is not always conclusive.

Connecting these ideas to the discussion about divine action might be insightful. Ockham contends that God can do things in complex ways without acting improperly. The upshot might be that humans cannot do this in the same way, since the human will does not set the norms of how things should be. Thus, for us, it is important to come to an end, not in the natural-teleological sense but in the profane sense of finishing or stopping.

You might say this is too profane to justify the principle. But maybe the point is conceptual. Maybe the simplest way of performing an action is what defines a certain type of action in the first place. As soon as you pick a more complex way, you do it improperly, unless you are God. So if you’re asked to connect two dots, you might think the goal is to connect them in a perfect way, whatever that might mean. But you might also assume that the point is to get it done with the least effort. And if you take a diversion, you do it improperly. One might even argue that a diversion constitutes a different action altogether. Connecting three dots is different from connecting two.

In any case, I hope to have pointed to a promising way of justifying Ockham’s razor (in the medieval discussion) without invoking a supposed simplicity in nature. As I hope to work on a project on the simplicity principle in medieval and early modern philosophy soonish, I would be very grateful for any kind of feedback.

_______

*Thanks to the participants of this workshop, I can now connect a few more historical and conceptual dots. Special thanks to Peter Anstey, Laura Georgescu, Madalina Giurgea, Dana Jalobeanu and Doina-Cristina Rusu, as well as to many of my colleagues in Groningen.

Voices inside my head. On constructive criticism

Most of the time when writing or just thinking, I hear voices uttering what I (want to) write. Often this is a version of my own voice, but there are also numerous other voices: voices of friends, colleagues and students. Sometimes I hear them because I remember what they said during a discussion. But more often I imagine them saying things in the way they would phrase an objection or a refinement of what I wanted to say. Yet, although it is me who imagines them, it’s their convictions and style that determine the content and phrasing. If I succeed in my attempts to write in a dialogical fashion, it is the achievement of others who have left their traces in my memory and eventually in my texts. It is this kind of experience that makes writing fun. But what I want to claim now is that this is also a good way of integrating criticism. This way, the strengths of others can become strengths in your own writing.

Why is this important? Philosophy is often taken to thrive on criticism. Some would even claim that it lies at the heart of intellectual exchange. Only if we take into account the critique of others can we expect to have considered an issue sufficiently. I agree. Assuming that reason is social, philosophers need to expose themselves to others and see what they have to say. However, it’s not clear that the social nature of reason requires criticism as the primary mode of exchange. There are various styles of thinking; understanding and engaging with ideas can happen in many different ways.

Some people will ask to be “destroyed” by their interlocutors, while others might think that any question amounts to an impertinent transgression. Might there be a middle ground between these extremes? What is telling is that the metaphors around philosophical argumentation mostly intimate opposition or even war. (See for instance the intriguing discussion in and of Catarina Dutilh Novaes’ great piece on metaphors for argumentation.) In view of this practice, I think it’s crucial to remember that spotting mistakes does not turn anything into a good idea. The fact that you know how to find flaws does not mean that you’re able to improve an idea. (See Maarten Steenhagen’s excellent clip on this point, and make sure to turn up the sound.) In any case, it’s not surprising that there is an ongoing debate and a bit of a clash of intuitions between philosophers who like and those who dislike an adversarial style of conversation. Some think criticism fosters progress, while others think criticism blocks progress.

How can we move on? I think it’s crucial to consider the precise nature of the criticism in question. The point is not whether people are nice to one another; the point is whether criticism is genuine. But what is genuine criticism? I think genuine criticism takes a paper or talk on its own terms. Now what does that mean? Here, it helps to rely on a distinction between internal and external criticism. Internal criticism takes the premises of a contribution seriously and focuses on issues within an argument or view. A good example of a whole genre of internal criticism is the medieval commentary tradition. A commentary exposes the premises and aims at clarification and refinement without undermining the proposed idea. By contrast, external criticism often starts from the assumption that the whole way of framing an issue is mistaken. A good example of such external criticism is the debate between hylomorphists and mechanists in early modern philosophy.*

I think that only internal criticism is genuine. That doesn’t mean that external criticism is useless, but it is not an engagement with the opposing position; at least not in a way that attempts to leave the main claim intact. It is the view that the opponent’s position is not acceptable.** I think it is important to see that these two kinds of criticism are completely different moves in the game, or even wholly different games. Internal criticism happens on common ground; external criticism is the denial of common ground. Both forms are legitimate. But I think that a lot of discussions would go better if those involved were clear about whether their criticism is internal or external. Ideally, both kinds of criticism are presented along with an indication of what an answer to the challenge would actually look like.

How can we apply this to our writing? I think it is vital to include both kinds of criticism. But it helps me to keep the difference in mind. If someone tells me that my argument for taking Locke as a social externalist about semantics might need more textual support or a refined exposition of how the evidence supports my claim, I will see their point as supporting my idea. (Of course, if I have no such evidence, this criticism would be fatal, however genuine.) If someone tells me that Locke’s semantics isn’t worth studying in the first place, their point is clearly external. That doesn’t mean that I don’t need to reply to the challenge of external criticism. But the point is that the latter targets my endeavour in a different way. External criticism questions the very point of doing what I do and cannot be addressed by amending this or that argument. Responding to internal criticism happens within a shared set of ideas. Responding to a clash of intuitions means deciding for or against a whole way of framing an issue. Only internal criticism is constructive, but we need to respond to external criticism in order to see why the constructive work is worth doing at all. So when you work on your argument, don’t bother with external criticism. When you write the introduction or conclusion, by contrast, reach out to those who question your project entirely.

How then should we deal with such criticisms in practice? It’s sometimes difficult to deal with either. This is why I ultimately like the approach of internalising all kinds of criticism into the choir of voices inside my head. Once they are in my head, it feels like I can control them. They become part of my thinking and my chops, as it were. I can turn up the volume of any voice, and in writing it’s me who is in charge of the volume.*** Thus, I’d suggest we should appropriate both kinds of criticism. It’s just crucial to recognise them for what they are. Appropriating all the voices gives you some control over each of them.

To be sure, at the end of the day it’s important to see that we’re all in this together. We’re doing philosophy. And even if people don’t agree with our projects, they endorse that larger project called philosophy. So even in external criticism there must be some sort of common ground. Most of the time, I can’t see what the precise features of this common ground are, but being lost in that way makes me feel at home.

______

* Of course, there are tipping points at which internal can turn into external criticism and vice versa.

** This doesn’t mean that external criticism is hostile or not genuine in other ways. One can criticise externally out of genuine concern, assuming perhaps that an idea requires a different kind of framework or that work on a seriously flawed position might prove a waste of time for the addressee.

*** Reaching a state of control or even balance is not easy, though. It is often the most critical voices that are loudest. In such cases, it might be best to follow Hume’s advice and seek good company.

The purpose of the canon

Inspired by a blog post by Lisa Shapiro and a remark by Sandra Lapointe, I began to think about the point of (philosophical) canons again. In view of various attempts to diversify the canon in philosophy, Sandra Lapointe pointed out that we shouldn’t do anything to the canon before we understand its purpose. That demand strikes me as very timely. In what follows, I’d like to look at some loose ends and argue that we might not be able to diversify the canon in any straightforward manner.

Do canons have a purpose? I think they do. In a broad sense, I assume that canons have the function of coordinating educational needs. In philosophy, we think of canons as something that should be known. The same goes for literature, visual arts or music. Someone who claims to have studied music is taken to have heard of, say, Bach. Someone who claims to have studied philosophy is taken to have heard of, say, Margaret Cavendish. Wait! What? – Off the top of my head, I could name quite a few people who won’t have heard of Cavendish, but they will have heard of Plato or Descartes and recognise them as philosophers. But why is someone like Cavendish not canonical? Why hasn’t the attempt to diversify the canon already taken hold?

If you accept my attempt at pinning down a general purpose, the interesting question with regard to specific canons is: why should certain things be known? A straightforward answer would be: because someone, say, your teacher, wanted you to know them. But I don’t think that we can rely on the intentions of individuals or even groups to pin down a canon. Aquinas is not canonical because your professor likes him. – How, then, do canons evolve? I tend to think of canons as part of larger systems like (political) ideologies. Adapting David L. Smith’s account of ideology, I would endorse a teleofunctional account of canons. (Yes, I think what Ruth Millikan said about language as a biological category can be applied to canons.) Canons survive or remain stable at least as long as they promote specific educational purposes linked to a system or ideology. Just think of the notorious Marx-Engels editions in Western antiquarian bookshops.

One of the crucial features of a teleofunctional understanding of canons is that they are not decided on by a person or a group of people, not even by the proverbial “old white men”. Rather, they grow, get stabilised and perhaps decline again over historical periods that transcend the lives of individuals or groups. If canons get stabilised by promoting certain educational purposes, then the evolution of a canon will depend on the persistence of the educational purposes that it promotes. I don’t know what would tip the balance in favour of a certain diversification, but at the moment I rather fear that philosophy itself might lose its status of serving an educational purpose. At least, that is my fear if the dominant political climate is anything to go on.

If any of this is remotely correct, what are we to think of attempts to diversify the canon? I am not sure. I am myself in favour of challenging the canon, but I’m not sure that this will alter it. It might or might not, depending perhaps on how much potential for challenge is built into the canon already. We currently witness a number of very laudable attempts to make new material and interpretations available. And as Lisa Shapiro argues, the sheer availability might alter what gets in. At the end of the day, we can make a difference in our courses and in what we write. How that relates to the evolution of the canon is an intriguing question – and one that I’d like to think about more in the near future. But we should also watch out for how the (political) climate will affect the very status of philosophy as a canonical subject in universities and societies.

Getting started: exploiting embarrassment

You probably all know this moment at the beginning of almost every course. “Any questions?” – Silence. No one, it seems, has a question. The same thing might happen after a conference talk. – Silence. The silence after a talk, in a reading group or seminar is quite embarrassing. It is particularly embarrassing because it happens out of embarrassment. You know it: it’s not because there are no questions; it’s because no one wants to make a fool of themselves. – What I would like to suggest today is that there is a simple technique of exploiting this embarrassment in order to get started both with a discussion and with writing.

How then can we exploit this embarrassment? By making it worse, of course! We are embarrassed when we want to look smart and are unsure how to achieve that. Seeing other people stuck like that, who see you stuck like that, doesn’t help either. When we want to speak up or start writing, we probably focus too much on what we know (and then we think that everyone already knows it and that we’d look foolish by saying something trivial). My suggestion is: don’t focus on what you know; focus on what you don’t understand. That might seem worse, but that’s the point. To be sure, you should not just use the donnish phrase “I don’t understand” while implying that something is just stupid. What I mean is: focus on something you genuinely don’t understand; that way you’ll raise a genuine question. And everyone will be grateful to you for breaking the ice in a genuine way.

How then do you find something that you genuinely don’t understand? – You might be surprised to learn that this will require some practice. That is because it is often difficult to pin down what precisely it is that you don’t understand. Anyway, in philosophy we can be sure of one thing: nothing is ever (sufficiently) justified by itself. Starting from this premise, you can develop a question in two steps. Step one: you need to locate a phrase or passage that you find doubtful. (You don’t find one? Well, then ask yourself why everything is so incredibly clear. Are you omniscient?) Step two: ask yourself why you don’t understand that passage. Yes, you will deepen your embarrassment now, but only for a second. Because in asking that question, you will start looking for reasons for your lack of understanding. And reasons are a good thing in philosophy. The other upside of this technique is that you will begin phrasing a question in your own voice. Why? Well, because that question zooms in on the relation between the passage and yourself. But there is no need to fear exposure, for this “you” is not the personal “you”; it is the presuppositions, biases, and convictions of the epistemic culture you are part of.

Where to begin then? Quote the passage and highlight the move or term that you find doubtful. Then begin to spell out the presupposition that makes it seem doubtful to you. “This passage seems to presuppose that p. But I would presuppose that q. So why does it seem apt for the author to presuppose that p?” – Now you have a genuine question. And if you don’t know the answer, you can move on by exploring reasons for your own presupposition: “Why does it seem natural (to me) to presuppose that q?” And then you can ask what reasons there might be for giving up your presupposition.

If this is a helpful strategy, then why don’t we do it more often? I suppose that we assume something like the following: I don’t know why the author presupposes p, but the rest of my peers certainly know why! – Well, even if they do know the answer, the reasons for accepting the presupposition still need to be made explicit. Because in philosophy nothing is ever justified just by itself.

Who are we? Myths in the history of philosophy (Part I)

“Instead of assuming that the historical figures we study are motivated by the same philosophical worries that worry us, we need to understand why they care about each issue they raise.” This is part of rule 7 from Peter Adamson’s Rules for the History of Philosophy, and it is very good advice indeed. When reading texts, we should try to be aware of certain traps of anachronism. (See the intriguing debate between Eric Schliesser and Peter Adamson) People don’t always care about the same things, and if they do, they might do so for different reasons.

While I don’t believe that we can avoid anachronism, I think it is important to be aware of it. We can’t sidestep our interests, but it helps to make these interests explicit. What I would like to focus on today are some of the personal pronouns in the quoted passage: “us” and “we”. Saying that there are “worries that worry us” places the diversity in the past but seems to presuppose a fair amount of unity amongst us. But who are we? I picked rule 7, but it is safe to say that most historians of philosophy give in to this inclination of presupposing a “we”. I do find that funny. Historians (of philosophy) often like to mock people who indulge in generalised ideas about past periods such as the Middle Ages. “You wouldn’t believe”, they will say, “how diverse they were. The idea that all their philosophy is in fact about God is quite mistaken.” But then they turn around, saying that the medievals were quite different from us, where “us” indexes some unified idea of a current philosophical state of the art. What I find funny, then, is that historians will chide you for claiming something about the past that they are happy to claim about the present. Last time I checked, there was no unified “current philosophical debate”. At the same time, I should admit that I help myself a lot to these pronouns and generalisations. So if I sound like I’m ridiculing that practice, I should be taken as ridiculing myself most of all.

My point is simple. It’s not enough to be aware of diachronic anachronism; we also need to be aware of what I’d like to call synchronic anachronism. Why? Well, for one thing, claims about the “current debate” are supposed to track relevance in some domain. If something is debated currently or by us, it might signal that we have reason to study its history. Wanting to avoid anachronism, historians often use an inversion of this relevance tracker: facts about historical debates might be interesting precisely because they are not relevant today; in this sense, they can teach us how the past is intriguingly and instructively different.

The second reason for highlighting synchronic anachronism is that it obscures the heterogeneity of current debates and the fact that we are anachronistic beings. Looking closely, we will find that we are a jumble of things that render us anachronistic: we are part of different generations, have different educational pasts and cling to various fashions; we might be nostalgic or prophetic, we live in different social situations and adhere to different authorities and canons. And sometimes we even have to defer to “the taxpayer” for relevance. So the idea that rationality requires agreement with others (the third “agreement constraint”, mentioned in one of my previous posts) should be seen in connection with the idea that such agreement might involve quite different and even opposing groups. The idealised present tense in which we talk about “us” and “our current interests” is merely a handy illusion. Acknowledging and respecting synchronic anachronism might seem tedious, but at least historians of philosophy should see the fun in it.

On translating as a philosophical skill

Having been raised as a medievalist, I did translations as part of my education. I don’t think highly of my own few translations, but I think translating should figure in philosophical curricula. Yes, I mean philosophical, not merely historical curricula! The reason is that doing translations will familiarise you with what is often praised as “rigour” among philosophers. Although most philosophers think that logic (and sometimes statistics) is crucial, I think that the subtleties of ordinary language can be explored quite thoroughly by trying to translate a small piece of text. It keeps you pacing through all the nuances of formulations. But what is it that keeps you going? Perhaps it is the fact that you have to succeed somehow, while a perfect translation remains impossible.

Have you ever pored over a sentence for hours on end? Once you’ve figured out the grammatical construction and have an idea of the standard word meanings, the real fun might just begin. I remember finishing the translation of a short text by Ockham and not understanding a word of the German that I had just jotted down. It was on the question of whether articles of faith can be demonstrated. The Latin was easy, but the terminology remained a mystery, and it seemed as if a whole theory was lurking behind every expression. I had not read any other text dealing with the same problem. I had no real idea about the tradition of translations, i.e. other translations of such texts, and as far as I could see, there was no secondary literature available. Now I had a text in my native language and didn’t understand a word of it. – To cut a long story short: I turned my work on that text into my MA thesis. This way, I went from complete blankness (in my mind) to an attempt at actually explaining what I had found out. Yes, sometimes I even enjoyed it…

Now what is it that makes translating also a philosophical rather than merely a historical or philological skill? It is often assumed that translating requires a good command of the source language (i.e. the language you’re translating from). That might be true, but it’s your target language (i.e. the language you’re translating into, mostly your native language) that is truly challenging. Whatever you lack in your target language will be lost. The process of translating makes you believe in the possibility of a correct translation, perhaps because failure is always with you. And it is this belief that keeps you pacing through your mental lexicon until it “clicks”. Understanding or perhaps even justifying what that click means is where you begin to see the limits of your language as the limits of your world.

I think there are two extreme ways of viewing this clicking. (1) You might think that you finally found a translation that matches your source. But then doubts will arise as to what it actually is that guarantees that match. (Think of Quine’s Gavagai example, if you like.) Aren’t you bringing in your presuppositions? Should you not try to replace them with more knowledge about the context? This is the view that you need to figure out the meaning of sentences. (2) The other way is to think that you cannot even hope to rid yourself of your presuppositions. Rather, you have to embrace them. If you take the source to be sincere, any translation that makes the sentence come out true in the target language will be fine. And you will pick the context in accordance with what you believe to be true. (Think of Davidson’s critique of relativism, if you like.) This is the view that meaning presupposes an understanding of what makes a sentence true. – Yes, sorry, this paragraph is a bit dense. I’ll translate or reformulate it some other time.*

For the moment, I would just like to ask you to consider adding translating to the philosophical curriculum. In philosophy, language is crucial. And apart from writing and conversing, translating is perhaps the most intimate way of engaging with other people’s texts and one’s own shortcomings. In addition, our philosophical culture lacks respect for this work: translations are still too rarely acknowledged as serious work. Even if we can’t teach all the languages it takes to keep up the conversation in a global world, we need to teach the appropriate sensibilities that provide at least a glimpse of the efforts necessary for moving between languages.

______

*I think that Quine and Davidson can be read as endorsing two opposing ways of viewing the Gavagai example: a relativist one, prioritising meaning over truth, and an anti-relativist one, prioritising truth over meaning. I’ll happily go into that another time.

What are you good at?

Many philosophy papers have a similar structure. That is quite helpful, since you quickly know your way around. It’s like walking through a pedestrian zone: even if you are in a completely unfamiliar town, you immediately know where to find the kinds of shops you’re looking for. But apart from the macro-structure, this is often also true of the micro-structure: the way the sentences are phrased, the vocabulary is employed, the rhythm of the paragraphs. “I shall argue” in every introduction.

I don’t mean this as a criticism. I attempt to write like that myself, and I also try to teach it. Our writing is formulaic, and that’s fine. But what I would like to suggest is that we try to teach and use more ingredients within that framework. I vividly remember those moments when a fellow student or colleague got up and, instead of stringing yet another eloquent sentence together, drew something on the blackboard or attempted to impose order by presenting some crucial concepts in a table. For some strange reason, these ways of presenting a thought or some material rarely find their way into our papers. Why not?

I think of these and other means as styles of thinking. Visualising thoughts, for instance, is something that I’m not very good at myself. But that’s precisely why I learn so much from such visualisations. And even if I can’t draw, I can attempt to describe the visualisations. Describing a visualisation (or a sound or taste) is quite different from stringing arguments together. You might extend this point to all the other arts: literature, music, what have you!

Trying to think of language as one sense-modality amongst others might help us think differently about certain questions. Visit your phenomenologist! On the one hand, you can use such styles as aids in a toolkit that will not replace but enrich your ways of producing evidence or conveying an idea. On the other hand, they might actually enrich the understanding of an issue itself. In any case, such styles should be encouraged and should find their way into our papers and books more prominently.

As I said, I’m not good at visualising, but it helps me enormously if someone else does it. Assuming that we all have somewhat different talents, I often ask students: “What are you good at?” Whatever the answer is, there is always something that will lend itself to promoting a certain style of thinking, ready to be exploited in the next paper to be written.