Cavendish’s Triumvirate and the Writing Process

I’m working through Margaret Cavendish’s Observations upon Experimental Philosophy (1666) at the moment. It’s not the first time (in fact, I taught a course on it after Christmas), but her writing is dense, neither as systematic as someone like Descartes nor as succinct as someone like Berkeley. The pay-off, though, is a philosophy rich in insights that genuinely does seem to be, if not ahead of its time (I don’t want to be accused of anachronism), then idiosyncratic to its immediate historical context in some striking ways. For example, I’m reading Cavendish alongside Keith Allen’s A Naïve Realist Theory of Colour (OUP, 2016), and there are clear signs that she had thought deeply about phenomena such as colour constancy (whereby we take objects to have remained the same colour even though a differently coloured light is shining on them) and metamerism (objects with different microphysical qualities that appear to be the same colour), both of which are central to contemporary debates about perception (Colin Chamberlain has written a great article on Cavendish’s atypical philosophy of colour). As far as I am aware, these aren’t issues that the other canonical early moderns (Hobbes, Descartes, Berkeley, et al.) were much preoccupied with. And while reading and working through Cavendish’s philosophy is a bit like trying to untangle a charger cable that’s been kept in a box in a drawer too long – each time you think you’ve untangled all the knots another one appears – it tends to be rewarding, even if it is near impossible to pin down exactly what she thinks about any given issue ‘X’.

Perhaps because of the inevitable struggle that comes with defending an interpretation of Cavendish’s philosophy, I’m also thinking a lot about the trials and tribulations of the writing process (it may also be because I have literally nothing else to do). For a long time, I’ve thought that one of the best pieces of writing advice comes from Daniel Dennett who, on various platforms (including a keynote he gave here in Dublin last September), has encouraged writers to ‘blurt something out, and then you have something to work with’. I’ve regurgitated this advice to students several times, and it chimes with me because I find it much easier to shape and mould a pre-existing block of text than to face the task of squeezing something out of the ether (or my brain – wherever it comes from) and onto the page. Like Leibniz, I prefer a block to chip away at over a Lockean blank page. With that in mind, I’ve started to wonder whether a particular aspect of Cavendish’s metaphysics might provide us with a nice model for the writing process.

Perhaps one of the most interesting, and remarkable, aspects of Cavendish’s system of nature is her claim that all parts of nature contain what she calls a “triumvirate” of matter (note: Cavendish is a materialist; even the mind is composed of material substance in her system). She claims that each and every part of nature is made up of three kinds of matter: (1) rational matter, (2) sensitive matter, and (3) inanimate matter. Even if you could pick out an atomistic unit (although she herself rejects atomism), she thinks, you would find varying degrees of all three kinds of matter. Inanimate matter is matter as we would ordinarily think of it: bulky stuff that weighs the other kinds of matter down and does the important job of filling up space (a job I’ve gotten very good at myself during lockdown). Cavendish compares inanimate matter to the bricks and mortar used to build a house. Continuing this analogy, she suggests that sensitive matter plays the role of the team of builders, moving inanimate matter around and getting it to take up particular shapes and forms. The variety of ways that inanimate matter is put together, she thinks, explains the variety of things in the natural world around us. What’s more, if there were no sensitive matter to move inanimate matter around, she claims, the world would be entirely homogeneous. Finally, she compares rational matter to the architect responsible for it all, for the sensitive matter wouldn’t know what to do with all the inanimate matter if it weren’t directed by someone with a plan. In the section of the Observations entitled ‘An Argumental Discourse’ (one of the strangest philosophical dialogues out there, between two ‘halves’ of her own mind who are ‘at war’) she sums up the triumvirate of matter like so:

as in the exstruction of a house there is first required an architect or surveyor, who orders and designs the building, and puts the labourers to work; next the labourers or workmen themselves; and lastly the materials of which the house is built: so the rational part… in the framing of natural effects, is, as it were, the surveyor or architect; the sensitive, the labouring or working part; and the inanimate, the materials: and all these degrees are necessarily required in every composed action of nature.

Observations upon Experimental Philosophy (Cambridge Texts edition, ed. Eileen O’Neill, 2001), p. 24

This is, then, a top-down approach to understanding both the orderliness and the variety of things in nature. It’s all possible, Cavendish thinks, because there’s an ‘architect’ (the rational part of a thing in nature) that devises a plan and decides what to do with the bulky mass of inanimate matter. (Another note: Cavendish is a vitalist materialist or what we might retrospectively call a panpsychist: she thinks that every part of nature, from grains of sand to plants, animals, and people, has life and knowledge of things in the world around it.)

Right, so how does all this relate to the writing process? I don’t quite know whether this is intended as a helpful normative suggestion or just a descriptive claim, but I suggest that Cavendish’s triumvirate might provide a model for thinking about how writing works. In this case, the role of bulky, cumbersome inanimate matter is played by the words on the page you’ve managed to ‘blurt out’, to use Dennett’s technical terminology. Or perhaps it’s the thoughts and ideas you’ve still got in your head. Either way, it’s a mass of sentences, propositions, textual references, and so on, that you’ve got to do something with (another tangled charger cable, if you will). What options have you got? Well, structure and presentation are important – and while these are facilitated by your word processor (for example), they constitute a kind of medium between your thought and the words on the page. So I’d suggest that presentation, structure, perhaps even the phrasing of individual sentences, plays the role of sensitive matter: Cavendish’s labourers or workmen.

Finally, there’s the role of rational matter: the architect or surveyor whose plan the sensitive matter is just waiting to carry out. I actually think this may be the hardest comparison to draw. It would be easy to simply say ‘you’ are the architect of your writing, but once you’ve taken away the words and ideas as well as the way they are presented or structured, it’s hard to know exactly what’s doing the work or what’s left (just ask Hume). Last year, I saw Anna Burns, author of the brilliant Milkman, give a talk where she was asked about her writing process. Her answer, which in the mouth of another could have sounded pompous or pretentious, was honest and revealing: she had literally nothing to say. She couldn’t explain what the real source of her writing was and, even more remarkably, she wasn’t particularly interested. In any case, there’s something that’s grouping together, or paying selective attention to, some ideas or notions and advocating that they should become a piece of writing. Whatever that is, I suggest it plays the role of rational matter: Cavendish’s architect.

How might this be helpful to writers? I’m not sure it can be in any practical way, but I find it helpful when I hit upon a nice description of something I’ve grappled with, or when it seems that someone is describing my own experiences (it’s one of the reasons I like reading both philosophy and fiction). Perhaps Cavendish’s triumvirate model can be useful in this way. It may also – and I have begun to think in these terms myself – provide you with a measure of where you are in the writing process. Am I still sourcing the bricks and mortar? Are the labourers at work? Or are they waiting for instructions from the architect? Sometimes it’s helpful to know where you are, because it lets you take stock of what there is still to do – and, in keeping with Cavendish’s analogy, who’s going to do it.

Questions – an underrated genre

Looking at introductions to philosophy, I realise that we devote much attention to the reconstruction of arguments and the critical analysis of positions. Nothing wrong with that. Yet where are the questions? Arguably, we spend much of our time raising questions, but apart from a very few exceptions, questions are rarely treated as a genre of philosophy. (However, here is an earlier post, prompted by Sara Uckelman’s approach, on which she elaborates here. And Lani Watson currently runs a project on philosophical questions.) Everyone who has tried to articulate a question in public will have experienced that it is not all that simple, at least not if you want to go beyond “What do you mean?” or “What time is it?” In what follows, I hope to get a tentative grip on the issue by looking back at my recent attempt to teach students to ask questions.

This year, I gave an intense first-year course on medieval philosophy.* I say “intense” because it comprises eight hours per week: a two-hour lecture and a two-hour reading seminar on both Thursday and Friday mornings. It’s an ideal setting to do both: introduce material and techniques for approaching it, and apply those techniques through close reading in the seminars. Often students are asked to write a small essay as a midterm exam. Given the dearth of introductions to asking questions, I set a “structured question” instead. The exercise looks like this:

The question will have to be about Anselm’s Proslogion, chapters 2-4. Ideally, the question focuses on a brief passage from that text. It must be no longer than 500 words and contain the following elements:

– Topic: say what the question is about;
– Question: state the actual question (you can also state the presupposition before stating the question);
– Motivation: give a brief explanation why the question arises;
– Answer: provide a brief anticipation of at least one possible answer.

What did I want to teach them? My declared goal was to offer a way of engaging with all kinds of texts. In doing so, I assumed that understanding (a text) can be a general aim of asking questions. I often think of questions as a means of making contact with the text or interlocutor. For a genuine question brings two aspects together: on the one hand, there is your question; on the other, there is that particular bit of the text that you don’t understand or would like to hear more about. But … that’s more easily said than done. During the lectures and seminars we would use some questions from students to work through the process. What I noticed almost immediately is that this was obviously really hard. One day, a student came up and said:

“Look, this focus on questions strikes me as a bit much. I’m used to answering questions, not raising them. It seems to require knowledge that I don’t have. As it is, it’s rather confusing and I feel like I’m drowning out at sea.”

I’m quoting from memory, but the gist should be clear. And while I now think of a smallish group of students as particularly brave and open, this comment probably represents the attitude of the majority. The students wanted guidance, and what I wanted to offer them instead were tools to guide themselves. I had, and have, a number of different reactions to the student’s confession. My first thought was that this is a really brave stance to take: being so open about one’s own limits and confusion is rare even among established people. At the same time, I began to worry about my approach. To be sure, the confusion was caused intentionally to some degree, and I said so. But for this approach to work, one has to ensure that asking questions eventually provides tools to orient oneself and to recognise the reasons for the confusion. Students need to learn to consider questions such as: Why am I confused? Could it be that my own expectations send me astray? What am I expecting? What is it that the text doesn’t give me? Arguably, they need to understand their confusion to make contact with the text. In other words, questions need to be understood. But this takes time and, above all, trust that the confusion lands us somewhere in the end.

When I taught this kind of course in the past, I did what the student seemed to miss now: I gave them not only guiding questions to provide a general storyline through the material, but also detailed advice on what to look for in the texts. While that strikes me as a fine way of introducing material, it doesn’t help them develop questions autonomously. In any case, we had to figure out the details of this exercise. So what is behind the four elements in the task above?

Since questions are often used for other purposes, such as masking objections or conveying irritation, it is vital to be explicit about the aim of understanding. Thus, finding the topic had to be guided by a passage or concept that left the questioner genuinely confused. Admitting to such confusion is trickier than it might seem, because it requires you to zoom in on your lack of understanding or knowledge. You might think that the topic just is the passage. But it’s important to attempt a separate formulation, for two reasons: firstly, it tells the listener or reader what matters to you; secondly, it should provide coherence, in that the question, motivation and answer should all be on the same topic.

In the beginning, I spent most of the time analysing two items: the motivation and the formulation of the actual question. After setting out an initial formulation of the question, students had to spell out why the question arises. But why do questions arise? In a nutshell, most questions arise because we make a presupposition or have an expectation that the text does not meet. (Here is a recent post with more on such expectations.) A simple example is that you expect evidence or an argument for a claim p, while the author might simply say that p is self-evident. You can thus begin by jotting down something like “Why would p be self-evident, according to the author?” This means that now, at last, you can talk about something that you do know: your expectations. Ideally, this provides a way of spelling out what you expect and thus what the text lacks (from that perspective). Going from there, the tentative answer will have to provide a reason that shows why p is self-evident for the author. Put differently, while the motivation brings out your presuppositions, the answer is an attempt at spelling out the presuppositions guiding the text (or author). With hindsight, you can now also fix the topic, e.g. self-evidence.

But things are never that straightforward. What I noticed after a while was that many students went off in a quite different direction when it came to answering the question. Rather than addressing the author’s possible reasons, the students began to spell out why the author was wrong. At least during the first lectures, they would sometimes not try to see what reasons the author could invoke. Instead, they would begin by stating why their own presupposition was right and the author wrong, whatever the author’s reasons.

This is not surprising. Most discussions inside and outside of philosophy have exactly this structure. Arguably, most philosophy is driven by an adversarial culture rather than by the attempt to understand others. A question is asked, not to target a difficulty in understanding, but to justify the refutation of the interlocutor’s position. While this approach can be one legitimate way of interacting, it appears particularly forced in engaging with historical texts. Trying to say why Anselm or any other historical author was wrong, by contemporary standards, just is a form of avoiding historical analysis. You might as well begin by explaining your own ideas and leave Anselm out of the equation altogether.

But how can an approach to understanding the text (rather than refuting it) be encouraged? If you start out from the presupposition that Anselm is wrong, an obvious way would be to ask for the reasons that make his position seem right. It strikes me as obvious that this requires answering the question on Anselm’s behalf. It is at this point that we need to move from training skills (of asking questions) to imparting (historical) knowledge. Once the question arises why an author claims that p, and p does not match our expectations, we need to teach students to recognise certain moves as belonging to different traditions and ways of doing philosophy, ways that do not square with our current culture. My hope is that, if we begin with teaching to raise questions, it will become more desirable to acquire the knowledge relevant to providing answers and to understanding our own questions.

_____

* I’ve really enjoyed teaching this course and think I’ve learned a lot from it. Special thanks to my patient students, particularly to my great TAs, Elise van de Kamp and Mark Rensema, whose ideas helped me enormously in shaping the course. – Now, if you’ve read this far, I’d like to thank you, too, for bearing with me. Not only for the length of this post. Today is a special occasion: this is post number 101.

How philosophy does not make progress. A note on Scott Soames’ new book

Every now and then, philosophers like to discuss whether philosophy makes progress. Although the notion of progress is problematic, I often find these discussions rewarding, for they bring out how varied our understandings actually are. For me, “progress” is a term qualifying interaction, e.g. between interlocutors. In this sense, a conversation can be progressive in that it becomes more refined. And insofar as philosophy can be seen as a form of conversation, it certainly allows for progress. I don’t particularly care whether the progressive elements lie more in the problems or answers or in the methods of tackling them. After all, it depends on what the interlocutors make of them. On Twitter, Michael Schmitz recently suggested that the impact of philosophical ideas on other fields (sciences, arts, politics) might make for an interesting measure of progress, and I wondered whether there are histories of philosophy that put such impact centre stage. When studying linguistics, for instance, I was struck by how often Wittgenstein would be named as an inspiration, but my question of how exactly the interaction between linguists and philosophers went remained unanswered. While I have no doubts that there are crucial interactions between philosophy and other fields, I think the precise relation between them would be an intriguing topic for historical research: What was the impact of philosophy, perhaps decisive in the foundation of disciplines, policies or other developments? More than once, Scott Soames’ new book The World Philosophy Made: From Plato to the Digital Age was mentioned as an example of this kind of history. So I began to read. In what follows, I don’t want to present a thorough review. Rather, I want to point out in what ways this book is an exemplar of the kind of book that might block progress.

The book does indeed set out from what I’d call an interactionist account of progress. In the introduction, Soames notes that “this book is about the contributions philosophers have made, and continue to make, to our civilization.” (xi) On Daily Nous and elsewhere, Soames’ book has already been noted for its intriguing view on progress:

Philosophers help by giving us new concepts, reinterpreting old truths, and reconceptualizing questions to expand their solution spaces. Sometimes philosophers do this when sciences are born, but they also do it as disciplines mature. As science advances, there is more, not less, for philosophy to do. (ibid.)

Given the subtitle of the book, it’s clear that we should not expect a very detailed account of such interactions. Fair enough, it might be a start. However, we know that the notion of philosophy might have changed since Plato. Trying to depict any interaction between philosophers and other fields requires an idea of how to identify agents of different fields, doesn’t it? Soames spills no ink on demarcating philosophy from other endeavours. There are no remarks on the shaping of disciplines or even on research on such developments. For instance, the chapter on the “science of language” begins with Chomsky, whose work is deemed crucial for the empirical study of natural language (133). Is he to be seen as a linguist or a philosopher? We are not told. What did he draw on? Who cares? Not a single word about the Neogrammarians of the nineteenth century; nothing about the early and later works of Bloomfield or the relations between linguistics and warfare in the early twentieth century. Soames is known for his history of analytic philosophy and work in the philosophy of language, but one gets the impression that he is merely working off the top of his head when drawing distinctions or picking his heroes, even in his area of specialisation. To his credit, I should note that he was “initially not inclined to” write this book (ix). Rather, he portrays himself as having been persuaded by his editor at Princeton University Press.

Of course, I was particularly curious what Soames would make of medieval philosophy, for in this case we have numerous assumptions and prejudices about the relation between, for instance, philosophy and theology. The schematic “timeline” at the end of the introduction notes three stages in medieval philosophy: Aquinas’ Summa Theologica, the “revival of the Aristotelian study of nature” and “Ockham’s razor”. The chapter itself is called “A Truce between Faith and Reason” and also mentions Augustine, Avicenna, Averroes, Albert the Great, Bonaventure, Roger Bacon and Duns Scotus – a serious expansion of the canon relied on! Despite these honourable mentions, it’s clear that Aquinas is the hero of the chapter. Without questioning it, Soames repeats the story of the “grand synthesis” (39) between faith and reason, Augustine and Aristotle, that the Neothomists of the nineteenth and twentieth centuries have handed down to us. If you follow the sparse footnotes (409-410) you will find that, besides Aquinas himself, Copleston’s history (1946-1975) is the true source of erudition. And why bother with any later scholarship?

But what is Soames’ verdict? In his own words, the “genius of the High Christian Middle Ages – its foremost contribution to the world philosophy made – was in finding a way to give Greek philosophy a second chance …” (21-22) This might indeed pass as a witty remark. After all, medieval philosophy is known as a set of commentary traditions, and the talk of a “second chance” – isn’t it just? But substantially, what acclaim do you receive if I note that what you do best is give voice to someone else’s thoughts? Similarly great are the achievements of individual men (yes, what else?). Albert the Great’s “most lasting contribution was his influence on his brilliant student Thomas Aquinas.” Of course, the Editio Coloniensis, the critical edition of Albert’s works, is only roughly 50 percent complete, but Soames can already assess the lasting influence of this work.

These assertions of the traditional canon are so lame that even challenging them can by now count as canonical. As the résumé of his chapter makes clear (39), he merely repeats a teleological narrative of philosophy as striving towards a rational autonomy designed to foster the development of the sciences. Soames writes: “… as time wore on, philosophy asserted its natural critical autonomy, the synthesis [of faith and reason, M.L.] eroded, and philosophers created the intellectual space they needed to begin laying the foundations for the spectacular growth of mathematics and natural sciences that was to come.” (39, italics mine) I italicised the parts of the text that indicate the familiar teleological reasoning driving this well-worn idea. It’s a story of decline and growth, and its hero, or rather heroine, is philosophia, endowed with natural properties that come to flourish in the course of history. It’s a story with agents and detractors, a story of destiny, moving towards the fate at which we have now arrived.

It’s 2019, and we see a major academic publisher disseminating a piece of work, admittedly reluctantly composed, by someone who is not a specialist, who grounds at least part of his work on no research, and who does not pause to question the categories and descriptions he applies. You might object that this is not “scholarship” but intended for a larger audience. If so, what kind of audience is this? An audience that needs to be told that Plato was a philosopher or that Bach had an interest in organ music? Does that audience not deserve to be served state-of-the-art research? Or at least something based on research from the last thirty years? Or, if that would be asking too much, something that highlights caveats or open questions in the introduction?

I’ve read too many books to ignore the fact that Soames’ book is not the only exemplar of this kind of work. Nor is it a special problem that Soames publicly endorses the politics of Trump. Indeed, there are many books by famous philosophers who get to share their sometimes idiosyncratic views in an unquestioning manner with a major publishing house. The problem is not just that some of the chapters might be outdated. Given that the actual question of the project behind this book is rather interesting, this publication will represent the state of the art on this issue for years to come. Aspiring scholars wanting to engage with this kind of project will have to reference this work and discuss it, thereby perpetuating the impact of unquestioned teleological bullshit.

Dismissing (religious) belief. On a problematic kind of anachronism

I’m currently teaching an intro course on medieval philosophy. Although I really enjoy teaching medieval philosophy, I am always somewhat shocked at the generally dismissive attitude towards the religious or theological aspects of the material. A widespread assumption is that we can and should bypass such issues. Why bother with God or angels, if we can focus on philosophy of language or ethics? That said, there is no reason to blame students. Looking at various histories of philosophy, it’s clear that the selection of material often follows what is currently deemed most relevant. In fact, bits of my own work might serve as a case in point. However, in what follows I’d like to present three reasons for the claim that, in bypassing such aspects, we miss out on core ideas, not only in history of philosophy.

(1) The illusion of modernity. – If you ask people why they think we can happily ignore theological aspects, a common answer is that they are indeed no longer relevant, because the world is supposedly progressing towards an increasingly enlightened state, with a scientific rather than a religious view of the world. This is of course not the last word. Criticisms of progress narratives aside, it is also clear that we live in a world that is currently deeply conflicted between adherents of religion and adherents of a scientific worldview. Moreover, this assumption makes us overlook that this conflict is a deeply medieval one, attested already in the writings of Augustine, culminating perhaps in the famous condemnation of 1277, and continuing well into what is known as modern philosophy. Thus, the idea that dissociating reason from faith is a trait of Enlightenment or modernity is a cherished illusion. After deciding to address this issue head-on in my current course, I made the condemnation of 1277 the first focal point. Amongst other things, it clearly shows that the battlefield of faith versus reason, along with the discussion of different kinds of truth, not to speak of alternative facts, has venerable precedents in the 13th century. In other words, the distinction between adherents of faith and adherents of science is not a diachronic one (between medieval and modern) but a synchronic one.

(2) Theology is philosophy. – But even if you agree that conflicts of faith versus reason might be relevant even today, you might still deny that they are philosophically significant. If you turn to philosophers of the medieval or other periods, you might go straight to the philosophically interesting stuff. The assumption seems to be that certain problems or topics can be stripped of their theological content without much loss. On this assumption, material that cannot be stripped of such overtones is “not philosophy.” One problem with this view is that a number of philosophical systems have notions such as “god” at their core. For a number of medieval and early modern philosophers, their metaphysics is unintelligible without reference to a god. Trying to bypass this means bypassing these metaphysics. The idea of stripping such systems of theological notions strikes me as a consequence of the illusion of modernity. But in fact we find a number of 20th-century or present-day philosophers who rely on such notions. And as is well known by now, readers of Wittgenstein’s Tractatus Logico-Philosophicus, one of the foundational texts of the early analytic tradition, should not ignore his approach to the “mystical” and related ideas. This doesn’t mean that there is no philosophy without theology. But we are prone to serious misunderstanding if we wilfully ignore such foundations.

(3) The significance of belief. – My third and perhaps most important point is that the foundational role of belief is often ignored. When we read, for instance, that Anselm opens his Proslogion with the idea that we have to believe in order to understand, this and other remarks on (religious) belief are often taken as confessions that do not affect the arguments in question. As I see it, such assessments miss a crucial philosophical point: unquestioned belief is foundational for many further (mental or physical) acts. Arguably, there is a long tradition of philosophers (e.g. Augustine, Anselm, Gregory of Rimini, Spinoza, Hume, William James, Davidson) who exposed the foundational role of belief, showing that there are reasons to accept certain assumptions on faith. The need to rely on axioms is not only a trait of the special sciences. Indeed, many aspects of our life depend on the fact that we hold certain unquestioned beliefs. Unless we have startling evidence to the contrary, we’re inclined to believe whatever we perceive. We believe that we weren’t lied to when our parents or other people informed us about our date of birth, and we don’t normally question that there is an external world. Challenging certain beliefs would probably deeply unsettle us, and we certainly wouldn’t begin searching if we didn’t believe we had a chance of finding what we’re looking for. In this sense, certain beliefs are not optional.

The upshot is that the dismissal of (religious) belief is not only problematic in that it distorts some niches of medieval philosophy. Rather, it’s based on a misconception of our very own standards of rationality, which rely much more on unquestioned beliefs than might meet the eye. So if the dismissal of religious belief is anachronistic, it’s not only distorting our view of the past but also distorting our understanding of current discourses. In this regard, much medieval philosophy should not be seen as strangely invested in religion but rather as strangely familiar, even if unbeknownst to us. As Peter Adamson succinctly put it, for some “a proper use of reason is unattainable without religious commitment.” I agree, and would only add that we might recognise this attitude more readily as our own if we deleted the word “religious”. But that is perhaps more of a purely verbal matter than we like to believe.

How do I figure out what to think? (Part I)

Which view of the matter is right? When I started out studying philosophy, I had a problem that continues to haunt me. Reading a paper on a given topic, I thought: yes, that makes sense! Reading a counterargument the next day, I thought: right, that makes more sense! Reading a defence of paper one, I thought: oh, I had better swing back. Talking to others about it, I found there were two groups of people: those who had made up their minds for one side, and those who admitted to swinging back and forth just like I did. I guess we all experience this swinging back and forth in many aspects of life, but in philosophy it felt unsettling because there seemed to be the real possibility of just betting on the wrong horse. But there was something even worse than betting on the wrong horse and finding myself in disagreement with someone I respected. It was the insight that I had no clue how to make up my mind about such questions. How did people end up being compatibilists about freedom and determinism? Why do you end up calling yourself an externalist about meaning? Why do you think that Ruth Millikan or Nietzsche make more sense than Jerry Fodor or Kant? – I thought very hard about this and related questions and came up with different answers, but today I thought: right, I actually have something to say about it! So here we go.

Let’s first see how the unsettling feeling arises. The way much philosophy is taught is by setting out a problem and then presenting options to solve it. Sometimes they are presented more historically, like: Nietzsche tried to refute Schopenhauer. Sometimes they are presented as theoretical alternatives, like: this is an argument for compatibilism and here is a problem for that argument. I had a number of reactions to such scenarios, but my basic response was not: right, so these are the options. It was rather: I have no idea how to survey them. How was I supposed to make up my mind? Surely that would require surveying all the consequences and possible counterarguments, when I already had trouble grasping the presented position in the first place. I went away with three impressions: (1) a feeling of confusion, (2) the feeling that some of the views must be better than others, and (3) the assumption that I had to make up my mind about these options. But I couldn’t! Ergo, I sucked at philosophy.

In this muddle, history of philosophy seemed to come to the rescue. It seemed to promise that I didn’t have to make up my mind, but merely had to give accurate accounts of the views I encountered. – Ha! The sense of relief didn’t last long. First, you still have to make up your mind about interpretations, and somehow the views presented in primary texts still seemed to pull me in different directions. My problem wasn’t solved but worsened, because now you were supposed to figure out philological nuances and historical details on top of everything else. Ergo, the very idea of reporting ideas without picking a side turned out to be misguided.

Back to square one, I eventually made what I thought was a bold move: I just picked a side, more or less at random. The unease about not seeing through the view I had picked didn’t really go away, but who cares: we’re all just finite mortals! – Having picked a side gave me a new feeling: confidence. I had not seen the light, but hey, I belonged to a group, and some people in that group surely had advanced. Picking a side feels random only at the beginning: then things fall into place; soon you start to foresee and refute counterarguments; what your interlocutors say matters in a new way. You listen not just in an attempt to understand the view “an sich”, but you’re involved. Tensions arise. It’s fun, at least for a while. In any case, picking a side counters lack of confidence: it gives your work direction and makes exchanges meaningful.

For better or worse, I would recommend picking a side if your confusion gets the better of you all the time – at least as a pragmatic device. It’s how you make things fall into place and take your first steps. However, the unease doesn’t go away. At least for me it didn’t. Why? Let’s face it, I often felt like an actor who impersonates someone who has a view. Two questions remained: What if people could find out that I had just randomly picked a side? This is part of what nourished my impostor syndrome (for the wrong reasons, as may become clear later). And how could I work out what I should really think about certain things? – While getting a job partly helped with the first question, a lot of my mode of working revolves around the second question. I got very interested in questions of norms, of methodology and of the relation between philosophy and its history. And while these issues are intriguing in their own right, they also helped me with the questions of what to think and how to figure out what to think. So here are a few steps I’d like to consider.

Step one: You don’t have to pick a side. – It helps to look more closely at the effect of picking a side. I said that it gave direction and meaning to my exchanges. It did. But how? Picking a side means entering a game, by and large an adversarial game. If you pick a side, then it seems that there is a right and a wrong side, just as there is winning and losing in an argumentative setting. Well, I certainly think there is winning and losing. But I doubt that there is right and wrong involved in picking a side. So here is my thesis: Picking a side helps you to play the game. But it doesn’t help you in figuring out what you should think. In other words, in order to work out what to think, you don’t have to pick a side at all.

Step two: Picking a side does not lead you to the truth. – As I noted, the way much philosophy is taught to us is by setting out a problem and then presenting options to solve it. The options are set up as better or worse options. And now it seems that picking a side associates you not only with winning, say, a certain argument, but also with truth. And the truth is what you should think and be convinced of, right? But winning an argument doesn’t (necessarily) mean hitting on the truth of the matter. The fact that you win one exchange does not mean that you win the next crucial exchange. In fact, it’s at least possible that you win every argument and never hit on any truth. It’s merely the adversarial practice of philosophy that creates the illusion that winning is related to finding the truth.

Now you might want to object that I’ve got things the wrong way round. We argue, not to win, but about what’s true. That doesn’t make winning automatically true, but neither does it dissociate truth from arguing. Let’s look at an example: You can argue about whether it was the gardener or the butler who committed the murder. Of course, you might win but end up wrongly convicting the gardener. Now that does show that not all arguments bring out the truth. But they can still decide between true and false options. Let me address this challenge in the next step.

Step three: In philosophy, there are no sides. – It’s true that presenting philosophical theories as true or false, or at least as better or worse solutions to a given problem, makes them look like the gardeners or butlers of a whodunit. As in a crime novel, problems have solutions, and if not one solution, then at least one kind of solution. – This is certainly true of certain problems. Asking about an individual cause or element as being responsible or decisive is the sort of setting that allows for true and false answers. But the problems of philosophy are hardly ever of that sort. To see this, consider the example again. Mutatis mutandis, what matters to the philosopher is not mainly who committed the crime, but whether the gardener and the butler had reasons to commit the murder. And once someone pins down the gardener as the culprit, philosophers will likely raise the question of whether we have overlooked other suspects or whether the supposed culprit is really to blame (rather than, say, society). This might sound as if I were making fun of philosophy, but the point is that philosophers are more engaged in understanding than in providing the one true account.

How does understanding differ from solving a problem? Understanding involves grasping both or all of the options and trying to see where they lead. Understanding is a comprehensive analysis of an issue and an attempt to integrate as many facts as possible into that analysis. This actually involves translating contrary accounts into one another and seeing how different theories deal with the (supposedly) same facts. Rather than pinning down the murderer, you’ll be asking what murder is. But most of the time, it’s not your job to conclusively decide what murder is (in the sense of what should count as murder in a given jurisdiction), but to analyse the factual and conceptual space of murder. Yes, we can carve up that space differently. But this carving up is not competitive; rather, it tells us something about our carving tools. To use a different analogy, asking which philosophical theory is right is like asking whether you should play a certain melody on the piano or on the trombone. There are differences: the kinds of moves you need to make to produce the notes on a trombone differ vastly from those you need to make on the piano. Oh, and your preference might differ. But would you really want to say there is a side to be taken? – Ha! You might say that you can’t produce chords on a trombone, so it’s less suited to playing chord changes. Well, just get more trombone players then!

I know that the foregoing steps raise a number of questions, which is why I’d like to dedicate a number of posts to this issue. To return to swinging back and forth between contrary options: this feeling does not indicate that you are undecided. It indicates that you are trying to understand different options in a setting. Ultimately, this feeling measures our attempts to integrate new facts while we are confronted with the pressure of observing people who actually adhere to one side or another. For the time being, I’d like to conclude by repeating that it is the adversarial style that creates the illusion that winning and losing are related to giving true and false accounts. The very idea of having to pick a side is, while understandable given the current style of playing the game, misguided. If there are sides, they are already picked, rooted in what we call perspectives. In other words, you need not worry about which side to choose, but rather think through the side you already find yourself on. There are no wrong sides. Philosophy is not a whodunit. And the piano might be out of tune.

What is a debate? On the kinds of things we study in history of philosophy

Philosophers focus on problems; historians of philosophy also focus on texts. That’s what I sometimes say when I have to explain the difference between doing philosophy and doing history of philosophy. The point is that historians, in addition to trying to understand what’s going on in a text or between texts, also deal with the ‘material basis’ on which the problems are handed down to us: the genres, dates, production and dissemination, the language, style and what have you. But what is it that we actually find in the texts? Of course, we are used to offering interpretations, but I think that, before we even start reading, we all tend to have presumptions about what we will find. Now these presumptions can be quite different. And it matters greatly what we think we find. In the following, I want to say a few things about this issue, not to offer conclusions, but to get the ball rolling.

An assumption that is both common and rightly contested is that we might find the intention of the author. Wanting to get Aristotle, Cavendish or Fodor right seems to mean that we look for what the author meant to say. It’s understandable that this matters to us, but apart from the fact that such a search is often in vain, we can understand texts independently of intentions. – Another unit is of course the focus on arguments. We can read a text as an argument for a conclusion and thus analyse its internal structure. Getting into the details of arguments often involves unpacking and explaining claims, concepts, assumptions in the background, and examples. Evaluating the arguments will mean, in turn, assessing how well they support the claims (I like to think of an evaluation as indicating the distance between claim and argument). But while all this is a crucial part of the philosophical analysis, it does not explain what is going on in the text, that is: it does not explain why and on what basis an author might argue for a certain conclusion, reject a certain view, make a certain move, use a certain strategy, or use a certain term or concept. In other words, in addition to the internal analysis we need to invoke some of the so-called context.

As I see it, a fruitful approach to providing context, at least in the history of philosophy, is to study texts as elements of debates. One reason I like this is that it immediately opens up the possibility of locating the text (and the claims of an author) in a larger interaction. We hardly ever write just because we want to express a view. Normally we write in response to other texts, no matter whether we reply to a question, reject a claim, highlight a point of interest etc., and no matter whether that other text is a day or a thousand years old.

But even if you agree that debates are a helpful focus for studying historical or contemporary texts (in research as well as in teaching), there might be quite some disagreement as to what a debate actually is or what we are looking for in a debate. I think this matters not only for historians but also for understanding debates more generally. – Currently, for instance, we have a public debate about climate change. What kind of ‘unit’ is this? There are conditions under which the debate arose quite some decades ago, with claims being put forward in research contexts, schools and the media. These conditions vary greatly: there are political, technological, scientific, educational and many other kinds of conditions. Then there are different participants: many kinds of scientists, citizens, politicians, journalists. Then there are different genres: scientific publications, media outlets, referee reports for politicians, interviews, protests in the streets and online, etc. What is it that holds all this together and makes it part of a debate? My hunch is that it is a question. But which one? Here, I think it is important to get the priorities right. There are sub-questions, follow-up questions, all sorts, but is there a main question? This is tricky. But I guess it should be the most common and salient point of contact between all the items constituting the debate. For this debate, it is perhaps the question: How shall we respond to climate change?

Once we determine such a question, we can group the items, especially the texts, accordingly. The debate is one of the crucial factors that makes the text meaningful, that places it in a dialogical space, even if we do not understand very much of what it says (yet). Even if I am not a climate scientist, I understand the role of a paper within the debate and might be able to place it quite well just by reading the abstract. The same is true of a medieval treatise on logic or an early modern text on first philosophy. – So this is a good way in, I guess. But where do we go from here? You probably can already guess that I want to say something critical now. Yes, I do. The point I want to address is this: How is a debate structured?

When we think about debates in philosophy, we obviously start out from what we perceive debates to be nowadays. As pointed out earlier, much philosophical exchange is based on criticising others. Therefore, it seems fair to assume that debates are structured by opposition. There is a question and there are opposing answers to it. Indeed, many categories in philosophical historiography are ordered in oppositions, and it helps to understand one term by thinking of it in relation to its opposite. Just think of empiricism versus rationalism, realism versus nominalism etc. That’s all fine. But it only gets you so far. Understanding the content, motivation and addressees of a text as a response in an actual debate requires going far beyond such oppositions. Of course, we can place someone by saying he’s a climate change denier; but that doesn’t help us in understanding the motivations and contents of the text. It’s just a heuristic device to get started.

Today I had the pleasure of listening in on a meeting of Andrea Sangiacomo’s ERC project team, which is working on a large database to study trends in early modern natural philosophy.* It’s a very exciting project, not least in that they are trying to analyse the social and semantic networks in which some of the teaching took place. Not being well-versed in digital humanities myself, I was mainly in awe of the meticulous attention to the details of working with the data. But then it struck me: they are tracking teaching practices, and yet they were taking their first steps by tracing opposing views (on occasionalism). Why would you look for oppositions, I wondered half aloud. Of course, it is a heuristic way of structuring the field. It was then that I began to wonder how we should analyse debates, going beyond oppositions.

Now you might ask why one should go beyond opposition. My answer is that debates, as the term itself suggests, might well be structured by critical opposition over a question at the most general level. But the actual moves that explain what’s going on in a text at a more detailed level, that is: from one passage or even one sentence to the next, are far more fine-grained. Again, as in the case of straightforward opposition, these moves should be thought of as (implicit) responses to other texts.** Here is an ad hoc list of such moves:

  • reformulating a claim
  • quoting a claim (with or without acknowledgement)
  • paraphrasing a claim
  • translating a claim (into a different language, terminology)
  • formalising a claim
  • simplifying a claim
  • embedding a claim into a more complex one
  • ascribing a claim (to someone)
  • (intentionally) misascribing a claim
  • making up a claim (as a view of someone)
  • commenting on a claim
  • elaborating or developing an idea
  • locating a view in a context
  • deriving (someone’s) claim from another claim
  • deriving (someone’s) claim from the Bible
  • asserting that a claim, actually, is another claim
  • asserting that a claim is ambiguous
  • asserting that a claim is self-evident
  • asserting that a claim is true, false, paradoxical, contradictory, opposing another one, an axiom, demonstrable, not demonstrable
  • asserting that a claim is confirmed by experience
  • asserting that a claim is intuitive, plausible, implausible, unbelievable
  • raising (new) questions
  • answering a question raised by a claim
  • doubting and questioning a result
  • revising a claim
  • revising one’s own claim in view of another claim
  • understanding a view
  • failing to understand a view
  • misrepresenting a view
  • distorting a view
  • evaluating a view
  • dismissing a view
  • re-interpreting a (well-known) view
  • undermining a claim, one’s own claim
  • exposing assumptions
  • explaining an idea in view of its premises or implications
  • illustrating a view
  • finding (further) evidence for or against a view
  • transforming or applying a concept or view to a new issue, in philosophy or elsewhere
  • recontextualising a view
  • repairing a view or argument
  • popularising a view
  • trying to conserve a view
  • trying to advance a view
  • juxtaposing views
  • comparing views
  • resolving a tension between views
  • highlighting a tension between views
  • associating a view with another one
  • appropriating a view
  • pretending to merely repeat a traditional view, while presenting a bold re-interpretation of it [yes, what Ockham does to Aristotle]
  • explicitly accepting a view
  • pretending to accept a view
  • accepting a view, while condemning the proponent
  • rejecting a view, while praising the proponent
  • pretending to reject a view, while actually appropriating (part of) it [yes, I’m thinking of Reid]
  • pretending to accept a view, while rejecting its premises
  • highlighting relations between views (analogies etc.)
  • ridiculing a view
  • belittling a view
  • shunning a view
  • showing societal consequences of a view
  • suppressing or hiding a claim
  • disavowing a claim
  • retracting a claim
  • putting a view in euphemistic terms
  • showing that a claim is outrageous, heretical, controversial, complacent
  • polemicising against a view
  • etc.

This list is certainly not exhaustive. And “view” or “claim” might concern a whole text or a part, an argument, a term or a concept. Even if some of these forms of response are more positive or negative than others, we have to see that all of them go beyond mere opposition, counterargument or criticism. Sometimes the listed moves are made explicitly; sometimes a move in a text might be explicable as the result of such a move. What is perhaps most salient is that they often say as much about the commitments of the respondent as they are intended to say about the other text that is being responded to. While mere criticism of an opponent does not require us to expose our commitments, much of what we find in (historical) texts is owing to commitments. (In other words, adversarial communication in current professional settings, such as the Q&A after talks, might often be taken as people merely showing off their chops, without invoking their own commitments and vulnerabilities. But this is not what we should expect to find in historical texts.)*** So if we look at Spinoza as criticising Descartes, for instance, we should not overlook that the agreements between the commitments and interests of these authors are just as important as the tensions and explicit disagreements. Looking again at the issue of climate change, it is clear that most moves probably consist in understanding claims and their implications, establishing agreement and noting tensions, corroborating ideas, assessing consequences, providing evidence, trying to confirm results etc. So the focus on opposition might be said to give us a wrong idea of the real moves within a historical debate, and of the moves that stabilise a debate or make it stick.

Anyway, the main idea of beginning such a list is to see the variety of moves we might find in a text responding to someone else. Analysing a text merely as an opposing move with pertinent counterarguments, or as presenting a contrary theory, makes us overlook the richness of philosophical interactions.

____

*Here is a recent blog post by Raluca Tanasescu, Andrea Sangiacomo, Silvia Donker, and Hugo Hogenbirk on their work. I’m only beginning to learn about the methods and considerations in digital humanities. But I have to say that this field strikes me as holding a lot of (methodological) inspiration (for history of philosophy and science etc.) even if you continue to work mostly in more traditional ways.

** Besides texts by different authors, this might of course also concern other texts of one’s own, or parts or temporal stages (drafts) of the same text.

*** I’m grateful to Laura Georgescu for pointing out this difference between criticism in current professional settings as opposed to many historical texts.

Love, crime, sincerity and normality. Or: sameness claims in history

How do the things mentioned in the title hang together? – Read on, then! Think about this well-known illusion: You see a stick in the water; the stick seems to be bent. What can you do to check whether it is really bent? – Knowing that water influences visual perception, you can change the conditions: You take it out of the water and realise that it is straight. Taking it out also allows for confirmation through a different sense modality: Touching the stick, you can compare the visual impression with the tactile one. Checking sense modalities and/or conditions against one another establishes an agreement in judgment and thus objectivity. If you only had the visual impression of the stick in the water, you could not form an objective judgment. For all you knew, the stick might really be bent.

Now, objectivity is nice to have. But it requires a crucial presupposition that we have not considered so far: that the different perceptions are perceptions of the same thing. Identity assumptions about perceptual objects come easily. But, in principle, they could be challenged: How do you know that what you touch really is the same thing as the one you see? Normally, of course, you don’t ask that question. You presuppose that it’s the same thing. Of course, you might theorise about a wicked friend exchanging the sticks when you aren’t looking, but this is not the issue now. We need that presupposition; otherwise our world would fall apart. Cutting a long story short, to ‘have’ our world we need at least two things: (1) agreement in our tacit judgments (about perceptions) and agreement with the judgments of others – so when someone says it’s raining, that judgment should agree with our perceptual judgments: “it’s raining” must agree with the noise we hear of the drops hitting the rooftop and the drops we see hitting the window; and (2) the presupposition that all these judgments concern the same thing: the rain.

Now all hell breaks loose when such judgments are consistently challenged. What is it I hear, if not the rain? What do you mean when you say “it’s raining”, if not that it’s raining? Are you talking figuratively? Are you not sincere? – One might begin to distrust the speaker or even one’s senses (or the speaker’s senses). It might turn out that the sameness was but a presupposition. (Oh, and what guided the comparison between touch and vision in the first place? How do I know what it feels like to touch a thing looking like ‘that’? Best wishes from Mr Molyneux …)

Presuppositions about sameness and challenging them: this provides great plots for stories about love, crime, sincerity and normality. I leave it to your imagination to fill in the gaps now. Assumptions about sameness figure in judgments about sincerity, about objects, persons, about perceptions, just about everything. (Could it turn out that the Morning Star is not the Evening Star, after all?) It’s clear that we need such assumptions if we don’t want to go loopy, and it’s palpable what might happen if they are not confirmed. Disagreement in judgment can hurt and upset us greatly.

No surprise then that we read philosophical texts with similar assumptions. If your colleague writes a text entitled “on consciousness” or “on justice” you make assumptions about these ideas. Are these assumptions confirmed when you pick up a translation: “De conscientia” or “Über Bewusstsein”? Hmmm, does the Latin match? Let’s see! What you look for, at least when your suspicion is raised, is confirmation about the topic: Does it match what you take consciousness to be? But hang on! Perhaps you should check your linguistic assumptions first? Is it a good translation?

What you try to track is sameness, by tracking agreement in judgments about different kinds of facts. Linguistic facts have to match, but so do assumptions about the topic. Now a new problem emerges: it might be that the translation is a match, but that you genuinely disagree with your colleague about what consciousness is. Or it might be that you agree about consciousness, but that the translation is incorrect. – How are you going to find out which disagreement actually obtains? – You can ask your colleague: what do you mean by “conscientia”? She then tells you that she means that conscientia is given if p and q obtain. You might now disagree: I think consciousness obtains when p and r obtain. Now you have a disagreement about the criteria for consciousness. – Really? Perhaps you now have a disagreement about what “consciousness” means, or a disagreement about what “conscientia” means. How do you figure that out? Oh, look into a canonical book on consciousness! – Let’s assume it even notes certain disagreements: what are those disagreements about?

I guess the situation is not all that different when we read historical texts. Perhaps a bit worse, actually. We just invoke some more ways of establishing sameness: the so-called context. What is context? Let’s say we invoke a bunch of other texts. So we look at “conscientia” in Descartes. Should we look at Augustine? Some contemporaries? At Dennett? At some scholastic authors? Paulus? The Bible? How do we determine which context is the right one for establishing sameness? And is consciousness even a thing? A natural kind about which sameness claims can be well established? – Oh, and was Descartes sincere when he introduced God in the Meditations?

Sometimes disagreements among historians and philosophers remind me of the question of which interpretation of a piece of music is the proper one. There is a right answer: it’s whichever interpretation you listened to first. Everything else will sound more or less off, or at least different. That’s where all your initial presuppositions were rooted. Is it the same piece as the later interpretations? Is it better? How? Why do I like it? How do I recognise it as the same or similar? And I need a second coffee now!

I reach for my cup and find the coffee in it lukewarm – is it really my coffee, or indeed coffee?

____

Whilst I’m at it: Many thanks to all the students in my course on methodology in the history of philosophy, conveniently called “Core Issues: Philosophy and Its Past”. The recent discussions were very intriguing again. And over the years, the participants in this course inspired a lot of ideas going into this blog.

Ugly ducklings and progress in philosophy

Agnes Callard recently gave an entertaining interview at 3:16 AM. Besides her lovely list of views that should count as much less controversial than they do, she made an intriguing remark about her book:

“I had this talk on weakness of will that people kept refuting, and I was torn between recognizing the correctness of their counter-arguments (especially one by Kate Manne, then a grad student at MIT), and the feeling my theory was right. I realized: it was a bad theory of weakness of will, but a good theory of another thing. That other thing was aspiration. So the topic came last in the order of discovery.”

Changing the framing or framework of an idea might resolve seemingly persistent problems and make it shine in a new and favourable light. Reminded of Andersen’s fairy tale in which a duckling is considered ugly until it turns out that the poor animal is actually a swan, I’d like to call this the ugly duckling effect. In what follows, I’d like to suggest that this might be a good, if underrated, way of making progress in philosophy.

Callard’s description stirred a number of memories. You write and refine a piece, but something feels decidedly off. Then you change the title or topic or tweak the context ever so slightly and, at last, everything falls into place. It might happen in a conversation or during a run, but you’re lucky if it happens at all. I know all too well that I abandoned many ideas before eventually and accidentally stumbling on a change of framework that restored (the reputation of) the idea. As I argued in my last post, all too often criticism in professional settings provides incentives to tone down or give up on an idea. Perhaps unsurprisingly, many criticisms focus on the idea or argument itself, rather than on the framework in which the idea is supposed to function. My hunch is that we should pay more attention to such frameworks. After all, people might stop complaining about the quality of your hammer if you tell them that it’s actually a screwdriver.

I doubt that there is a precise recipe for this. I guess what helps most are activities that help you tweak the context, topic or terminology. This might be achieved by playful conversations or even by diverting your attention to something else. Perhaps a good start is to think of precedents in which this happened. So let’s look at some ugly duckling effects in history:

  • In my last post I already pointed to Wittgenstein’s picture theory of meaning. Recontextualising this as a theory of representation and connecting it to a use theory or a teleosemantic account restored the picture theory as a component that makes perfect sense.
  • Another precedent might be seen in the reinterpretations of Cartesian substance dualism. If you’re unhappy with the interaction problem, you might see the light when, following Spinoza, you reinterpret the dualism as a difference of aspects or perspectives rather than of substances. All of a sudden you can move from a dualist framework to monism while retaining an intuitively plausible distinction.
  • A less well-known case is the series of reinterpretations of Ockham’s theory of mental language, which has been read as a theory of ideal language, a theory of logical deep structure, a theory of angelic speech, etc.

I’m sure the list is endless, and I’d be curious to hear more examples. What’s perhaps important to note is that we can also reverse this effect and turn swans into ugly ducklings. That is, we can also use the strategy of recontextualisation when we want to debunk an idea or expose it as problematic:

  • An obvious example is Wilfrid Sellars’ myth of the given: arguing that reference to sense data or other supposedly immediate elements of perception cannot serve as a foundation or justification of knowledge, Sellars dismissed a whole strand of epistemology.
  • Similarly, Quine’s myth of the museum serves to dismiss theories of meaning invoking the idea that words serve as labels for (mental) objects.
  • Another interesting move can be seen in Nicholas of Cusa’s coincidentia oppositorum, restricting the principle of non-contradiction to the domain of rationality and allowing for the claim that the intellect transcends this domain.

If we want to assess such dismissals in a balanced manner, it might help to look twice at the contexts in which the dismissed accounts used to make sense. I’m not saying that the possibility of recontextualisation restores or relativises all our ideas. Rather I think of this option as a tool for thinking about theories in a playful and constructive manner.

Nevertheless, it is crucial to see that the ugly duckling effect works both ways: to dismiss and to restore ideas. In any case, we should try to consider a framework in which the ideas in question make sense. And sometimes dismissal is the way to go.

At the end of the day, it could be helpful to see that the ugly duckling effect might not be owing to the duck actually being a swan. Rather, we might be confronted with a duck-swan, or a duck-rabbit.

Naturalism as a bedfellow of capitalism? A note on the reception of early modern natural philosophy

Facing the consequences of anthropogenic climate change and pollution, the idea that a certain form of scientific naturalism goes hand in hand with an exploitative form of capitalism might (or might not) have an intuitive plausibility. But does the supposed relation between naturalism and capitalism have something like a historical origin? A set of conditions that tightened it? And can it be traced back to a set of sources? In what follows, I’d like to present a few musings on these kinds of questions.

What does it take to write or think about a history of certain ideas? Obviously, what you try to do is to combine certain events and think something like: “This was triggered by that or this thought relies on that assumption.” You might even be more daring and say: “Had it not been for X, Y would (probably) never have occurred.” Such claims are special in that they bind events or ideas together into a narrative, often designed to explain how it was possible that some event or an idea occurred. – The philosopher Akeel Bilgrami makes such a claim when he suggests that naturalism, taken as a certain way of treating nature scientifically and instrumentally, is tied to capitalism. In his “The wider significance of naturalism” (2010), Bilgrami writes:

“[D]issenters argued that it is only because one takes matter to be “brute” and “stupid,” to use Newton’s own term, that one would find it appropriate to conquer it with nothing but profit and material wealth as ends, and thereby destroy it both as a natural and a human environment for one’s habitation.
[…] Newton and Boyle’s metaphysical view of the new science won out over the freethinkers’ and became official only because it was sold to the Anglican establishment and, in an alliance with that establishment, to the powerful mercantile and incipient industrial interests of the period in thoroughly predatory terms that stressed that nature may now be transformed in our conception of it into the kind of thing that is indefinitely available for our economic gain…”

Bilgrami’s overall story is a genealogy of naturalism, or rather of scientism.* The paper itself makes some intriguing observations regarding narratives and historiography. But let’s look at his claim more closely. By appealing to Newton and the victory of his kind of naturalism, it is designed to explain how we got to scientism and to a certain understanding of nature. In doing so, it binds a number of highly complex events and ideas together: there is (1) a debate between “dissenters” and what he calls “naturalists”, whose ideas (2) became official, (3) “only because” they were “sold” to the Anglicans and to industrial stakeholders. Although this kind of claim is problematic for several reasons, it is quite interesting. One could now discuss why ideas about necessary connections between facts (“only because”) presuppose a questionable understanding of history tout court or seem to ignore viable alternatives. But for the time being I would like to focus on what I find interesting. For me, two aspects stand out in particular.

Firstly, Bilgrami’s thesis, and especially (3), seems to suggest a counterfactual causal claim: had the metaphysical view not been sold to the said stakeholders, it would not have become official. In other words, the scientific revolution, or Newton’s success, is owing to the rise of capitalism. Both cohere in that they seem to propagate a notion of nature as value-free, one that allows nature to be exploited and manipulated. Even if that notion of nature might not be Newton’s, it is interesting because it seems to be gaining new ground today: the widespread indifference to climate change and pollution for capitalist reasons suggests such a conjunction. Thus, a genealogy that traces the origin of that notion seems to ask at least an interesting question: which historical factors correlate with the rise of the currently fashionable notion of nature?

Secondly, the narrative Bilgrami appeals to has itself a history and is highly contested. But Bilgrami neither argues for the facts he binds together, nor does he appeal to any particular sources. This is striking, for although he is not alone in holding this thesis, people are not exactly buying into this narrative. If you read Steven Pinker, you’ll get instead a great success story about how science has liberated us. And even proper historians readily dismiss the relation between the rise of capitalism and science as “inadequate”. This raises another interesting question: why do we accept certain narratives (rather than others)?

This latter question seems to suggest a simple answer: we do or should accept only those narratives that are correct. As I see it, this is problematic. Narratives are plausible or implausible. But the complexity of the tenets they bind together makes it impossible to prove or refute them on ordinary grounds of evidence. Just try to figure out what sort of evidence you would need to show that the Newtonian view “won” or was “sold”! You might see who argued against whom; you might have evidence that some merchants expressed certain convictions; but the correlations suggested by these words can be construed and evidenced in all sorts of ways. Believing a narrative means believing that certain correlations (between facts) are more relevant than others. It means believing, for instance, that capitalism was a driving force for scientists to favour certain projects over others. But unless you show that certain supposed events did not occur or that certain beliefs were not asserted, it’s very hard to counter the supposed facts, let alone the belief in their correlation.

So I doubt that we simply choose to believe in certain narratives because we have grounds for believing they are true. My hunch is that they gain or lose plausibility along with the larger ideologies or belief systems that we adhere to. In this regard it is striking that Bilgrami goes for his thesis without much argument. While Bilgrami doesn’t give clear sources, his assumption bears a striking resemblance to the claims of Boris Hessen, who wrote (in 1931):

“The Royal Society brought together the leading and most eminent scientists in England, and in opposition to university scholasticism adopted as its motto ‘Nullius in verba’ (verify nothing on the basis of words). Robert Boyle, Brouncker, Brewster, Wren, Halley, and Robert Hooke played an active part in the society. One of its most outstanding members was Newton. We see that the rising bourgeoisie brought natural science into its service, into the service of developing productive forces. … And since … the basic problems were mechanical ones, this encyclopedic survey of the physical problems amounted to creating a consistent structure of theoretical mechanics which would supply general methods for solving the problems of celestial and terrestrial mechanics.”

The claim that “the rising bourgeoisie brought natural science into its service” is indeed similar to what Bilgrami seems to have in mind. As a new special issue on Boris Hessen’s work makes clear, these claims were widely disseminated.** At the same time, an encyclopedia from 2001 characterises Hessen’s view as “crude and dogmatically Marxist”.

Thus, the reception of Hessen’s claim is itself tied to larger ideological convictions. This might not be surprising, but it puts pressure on the reasons we give for favouring one narrative over another. While believing in certain narratives means believing that certain correlations (between facts) are more relevant than others, our choice and rejection of narratives might be driven by wider ideologies or belief systems. If this is correct, then the dismissal of Hessen’s insights might not be owing to flaws in his scholarship but rather to his supposed Marxism. So the question is: are cold-war convictions still alive, driving the choice of narratives? Or is the renewed interest in Marxism already a reason for a renewed interest in Hessen’s work? In any case, within the history of interpreting Newtonian naturalism, Akeel Bilgrami’s paper is striking because it bears witness to this reception without directly acknowledging it.*** Might this be because there are new reasons for being interested in the (history of the) relation between scientific naturalism and capitalism?

____

* It’s important to note that Bilgrami uses the term naturalism in a restricted sense: “I am using the term “naturalism” in a rather restricted way, limiting the term to a scientistic form of the philosophical position. So, the naturalism of Wittgenstein or John McDowell or even P. F. Strawson falls outside of this usage. In fact all three of these philosophers are explicitly opposed to naturalism in the sense that I am using the term. Perhaps “scientism” would be the better word for the philosophical position that is the center of the dispute I want to discuss.” – This problematically restricted use of naturalism is probably owing to Margaret Jacob’s distinction between a “moderate” and a “radical” enlightenment. The former movement is associated with writers like Newton and Boyle; the latter with the pantheist “dissenters” for whom nature is inseparable from the divine.

** I am very grateful to Sean Winkler, who not only edited the special issue on Hessen but kindly sent me a number of passages from his writings. I’m also grateful to all the kind people who patiently discussed some questions on Facebook (here with regard to Bilgrami; here with regard to Hessen).

*** The lines of reception are of course much more complex and, in Bilgrami’s case, perhaps more indirect than I have suggested. Bilgrami explicitly references Weber’s recourse to “disenchantment” and also acknowledges the importance of Marx for his view. Given these references, Bilgrami’s personal reception might owe more to Weber than to Hessen. That said, Merton (following Weber) clearly acknowledges his debt to Hessen. A further (unacknowledged but possible) source for this thesis is Edgar Zilsel. For more details on the intricate pathways of reception, see Gerardo Ienna and Giulia Rispoli’s paper in the special issue referenced above.

Let’s get rid of “medieval” philosophy!

“Your views are medieval.” Let’s face it: we often use the term “medieval” in a pejorative sense; and calling a line of thought “medieval” might be a good way of chasing away students who would otherwise have been interested in that line of thought. In what follows, I’d like to suggest that, in order to keep what we call medieval philosophy, we should stop talking about “medieval” philosophy altogether.

While no way of slicing history into periods is simply arbitrary, they all come with problems, as this blog post by Laura Sangha makes clear. So I don’t think that there will ever be a coherently or neatly justified periodisation of history, let alone of the history of philosophy. But while other period names are equally problematic, none of them is as degrading. Outside academia, the term “medieval” is mainly used to describe exceptionally cruel actions or backward policies. Often dubbed the “dark ages”, the years from roughly 500 to 1500 count as a period of religious indoctrination. This usage also shapes the perception in academic philosophy. Arguably, medieval philosophical thought is still seen as subordinate to theology. Historical surveys of philosophy often jump from ancient to early modern, and even specialists in history often make it sound as if the sole philosopher who existed in these thousand years had been Thomas Aquinas. This deplorable status has real-life consequences. Exceptions aside, there are very few jobs in medieval philosophy and a decreasing number of students interested in studying it.

You will rightly object that the problems described are not only owing to the name “medieval” and its cognates. I agree. First of all, the field of history of philosophy has not exactly been pampered in recent decades. Often, people working on contemporary issues are asked to do a bit of history on the side, or the relevant study programmes are catered for in other fields of the humanities (history, theology, languages). Secondly, and perhaps more importantly, the dominant research traditions in medieval philosophy often continue to represent the field in an esoteric manner. As a student, the first thing you are likely to hear is that it is almost impossible to study medieval thought unless you read Latin (at least!), learn to decipher illegible manuscripts, understand outlandish theological questions (angels on a pinhead, anyone?), and know Aristotle by heart. Thirdly, most historical narratives depict medieval thought as a backward counterpoint to what is taken to be the later rise of science, enlightenment and secularisation. While the first of these three problems is beyond the control of medievalists alone, the second and third issues are to some degree in our own hands.

Therefore, we can and should present our field as more accessible. A great part of this will consist in strengthening continuities with other periods. Thus, medieval philosophy should always be seen as continuous with what is called ancient or modern or even contemporary thought. This way, we can rid ourselves not only of this embarrassment of a name (“Middle Ages”) but also of the attempt to indicate what is typically medieval. I’m inclined to think that, whenever we find something “typical” of that period, it will also be typical of other periods. In other words, there is nothing specifically medieval about medieval philosophy.

While there are already a number of laudable attempts to renew approaches to teaching (see e.g. Robert Pasnau’s survey of surveys), my worry is that the more esoteric strands in our field, in both method and content, will be evoked whenever we talk about “medieval” philosophy. The term “medieval” is a sticky one and won’t go away, but in combination with “philosophy” it will continue to sound like an oxymoron. What shall we say instead, though? I’d suggest that we talk about what we really do: most of us study a handful of themes or topics in certain periods of time. So why not say that you study the eleventh and twelfth centuries (in the Latin West or wherever), or the history of thought from the thirteenth to the sixteenth century? If a more philosophical specification is needed, you might say that you study the history of, say, psychology, especially from the thirteenth to the seventeenth century. If you believe in the progress narrative, you might even use “pre-modern”. Or why not “post-ancient”?

By the way, if you are what is called a medievalist and you work on a certain topic, most of your work will be continuous with ancient or (early) modern philosophy. If there are jobs advertised in these areas, it’s not unlikely that they will be in your field. That might become more obvious if you call yourself a specialist in, say, the history of metaphysics from 400 to 500 AD or the history of ethics from 1300 to 1800. If this is the case, it would not seem illegitimate to apply for positions in such areas, too. – “Oh”, you might say, “won’t these periods sound outrageously long?” Then just remind people that the medieval period comprises at least a thousand years.

____

PS. I started this blog on 26 July 2018. So the blog is now over a year old. Let me take the opportunity to thank you all for reading, writing, and thinking along.