History is about you. On teaching outdated philosophy

Everything we take to be history is, in fact, present right now. Otherwise we wouldn’t think about it.

When I was little, I often perceived the world as an outcome of historical progress. I didn’t exactly use the phrase “historical progress” when talking to myself, but I thought I was lucky to grow up in the 20th century rather than, say, the Middle Ages. Why? Well, the most obvious examples were advances in technology. We have electricity; they didn’t. That doesn’t change everything, but it changes a lot. Thinking about supposedly distant times, then, my childhood mind conjured up an image of someone dragging themselves through a puddle of medieval mud, preferably while I was placed on the sofa in a cozy living room with the light switched on and the fridge humming in the adjacent kitchen. It took a while for me to realise that this cozy contrast between now and then is not really an appreciation of the present, but a prejudice about history, more precisely about what separates us from the past. For what my living room fantasy obscures is that this medieval mud is what a lot of people are dragging themselves through today. It would have taken a mere stroll through town to see how many people, homeless or otherwise, do not live in the same world that I identified as my present world. Indeed, most things that we call “medieval” are present in our current world. Listening to certain people today, I realise that the Enlightenment, the Light of Reason and Rationality are portrayed in much the same way as my living room fantasy. But as with the fruits of technology, I think the praise of the Enlightenment is not an appreciation of the present, but a prejudice about what separates us from the past. One reaction to this prejudice would be to chide the prejudiced minds (and my former self); another is to try and look more closely at our encounters with these prejudices when doing history.
That means trying to see them as encounters with ourselves, with the ideologies often tacitly drummed into us, and trying to understand how these prejudices shape our expectations when reading old texts. Approaching texts in this way means reading them as historical philosophical documents as much as encounters with ourselves. It is this latter approach that I want to suggest as a way of reading and teaching what could be called outdated philosophy. Judging by at least some of my students’ verdicts on last term, this might be worth pursuing.

Let’s begin with the way that especially medieval philosophy is often introduced. While it’s often called “difficult” and “mainly about religion”, it’s also said to require so much linguistic and other erudition that anyone will wonder why on earth they should devote much time to it. One of the main take-away messages this suggests is an enormous gap between being served some catchy chunks of, you know, Aquinas, on the one hand, and the independent or professional study of medieval texts, on the other. Quite unlike in ethics or social philosophy, hardly any student will see themselves as moving from the intro course to doing real research on a given topic in this field. While many medievalists and other historians work on developing new syllabi and approaches, we might not spend enough time on articulating what the point or pay-off of historical research might be. – I don’t profess to know what the point of it all is. But why would anyone buy into spending years on learning Latin or Arabic, palaeography or advanced logic, accepting the bleak academic job market and a philosophical community that dismisses much of its own history? For the sake of, yes, what exactly? Preparing the next edition of Aquinas or growing old over trying to get your paper on Hildegard of Bingen published in a top journal? I’m not saying that there is no fun involved in studying these texts and doing the work it takes; I’m wondering whether we make sufficiently explicit why this might be fun. Given the public image of history (of philosophy), we are studying what the world was like before there was electricity and how they then almost invented it but didn’t.

Trying to understand what always fascinated me about historical studies, I realised it was the fact that one learns as much about oneself as about the past. Studying seemingly outdated texts helped me understand how this little boy in the living room was raised into ideologies that made him (yes, me) cherish his world with the fridge in the adjacent kitchen, and think of history as linear progress towards the present. In this sense, that is, in correcting such assumptions, studying history is about me and you. But, you ask, even if this is true, how can we make it palpable in teaching? – My general advice is: try to connect to your student self; don’t focus on the supposed object of study, but on what it revealed about you. Often this isn’t obvious, because there is no obvious connection. Rather, there is disparity and alienation. It is an alienation that might be similar to moving to a different town or country. So try to capture explicitly what’s going on in the subject of study, too, in terms of the experience, resources and methods available. With such thoughts in mind, I designed a course on the Condemnation of 1277 and announced it as follows:

Condemned Philosophy? Reason and faith in medieval and contemporary thought

Why are certain statements condemned? Why are certain topics shunned? According to a widespread understanding of medieval cultures, especially medieval philosophy was driven and constrained by theological and religious concerns. Based on a close reading of the famous condemnation of 1277, we will explore the relation between faith and reason in the medieval context. In a second step we will look at contemporary constraints on philosophy and the role of religion in assessing such constraints. Here, our knowledge of the medieval context might help questioning current standards and prejudices. In a third step we will attempt to reconsider the role of faith and belief in medieval and contemporary contexts.

The course was aimed at BA students in their 3rd year. What I had tried to convey in the description is that the course should explore not only medieval ideas but also the prejudices through which they are approached. During the round of introductions many students admitted that they were particularly interested in this twofold focus on the object and the subject of study. I then explained to them that most things I talk about can be read somewhere else. What can’t be done somewhere else is to have them come alive by talking them through. I added that “most of the texts we discuss are a thousand years old. Despite that fact, these texts have never been exposed to you. That confrontation is what makes things interesting.” In my view, the most important tool for bringing out this confrontation lies in having students prepare and discuss structured questions about something that is hard to understand in the text. (See here for an extensive discussion.) The reason is that questions, while targeting something in the text, reveal the expectations of the person asking. Why does the question arise? Because something is lacking that I would expect to be present in the text. Most struggles with texts are struggles with our own expectations that the text doesn’t meet. Of course, there might be a term we don’t know or a piece of information lacking, but this is easily settled with an internet search these days. The more pervasive struggles often reveal that we encounter something unfamiliar in the sense that it runs counter to what we expect the text to say. This, then, is where a meeting between current students and historical figures takes place, making explicit our and their assumptions.

During the seminar discussions, I noticed that students, unlike in other courses, dared to target really tricky propositions that they couldn’t account for on the fly. Instead of trying to appear on top of the material, they delineated problems to be addressed and raised genealogical questions about how concepts might have developed between 1277 and 2020. Interestingly, the assumption was often not that we are more advanced. Rather, they were interested in giving reasons why someone would find a given idea worth defending. So my first impression after this course was that the twofold focus on the object and subject of study made the students’ approach more historical, in that they didn’t take their own assumptions as a yardstick for assessing ideas. Another outcome was that students criticised seeing our text as a mere “object of study”. In fact, I recall one student saying that “texts are hardly ever mere objects”. Rather, we should ultimately see ourselves as engaging in dialogue with other subjects, revealing their prejudices as much as our own.

The children in the living room were not chided. They were recognised in what they had taken over from their elders. Now they could be seen as continuing to learn – making, shunning and studying history.

The canon as a status symbol? White men, cancel culture, and the functions of history of philosophy

“I’m an avid reader of Locke.” “I love listening to Bach.” – Utterances like these often expose the canon – be it in philosophy, literature, music or other arts – as a status symbol. The specifics of our cultural capital might differ, but basically we might say that one man’s Mercedes-Benz is another’s readership of Goethe. What is often overlooked is that challenging the canon can look equally status-driven: “Oh, that’s another dead white man.” “I’m so excited about Caroline Shaw’s work.” – Spoken in the pertinent in-group, utterances like these are just as much an indication of status symbolism. Challenging the canon, then, can become as much of a worn trope as defending adherence to the traditional canon. Let me explain.

Functions of history. – For better or worse, the aims of our discipline are often portrayed in epistemic terms. We study history, we say, to understand or explain (the development of) ideas and events. And in doing that, we want to “get it right.” Arguably, the aim of getting it right obscures a whole set of quite different aims of history. More often than not, I think, history is done to (politically) justify or even legitimise one’s position. Just as talk about ancestors justifies inheritance, talk about philosophical predecessors is often invoked to legitimise why it’s worth thinking about something along certain lines. A question merely asked on the fly carries little weight, but continuing the tradition of inquiring about the criteria of knowledge not only justifies historical research; it also legitimises our current approaches. Seen this way, a historical canon legitimises one’s own interests. Likewise, the attack on a canonical figure can be seen as shaking such legitimacy, be it with regard to representative figures, topics or questions. Conversely, I might aim to adjust the canon to find and highlight the ancestry that legitimises a new field of study. This endeavour is not one of “getting it right”, though. Of course, we cannot change the past, but we can attempt to change the canon, or what we admit to it, so as to admit ancestors in line with new ways of thinking. As I see it, these are well-founded motivations to study and/or alter the study of canonical figures. – However, while such motivations might well drive our choices in doing history, they can also deteriorate into something like mere status symbolism. Let’s look at a concrete example.

Three kinds of debates. – I recently read a piece about Locke on slavery, making the point that Locke’s involvement in the American context is far more problematic than recent research has portrayed it to be.* The piece struck me as an interesting contribution to (1) the debate on Locke’s political ideas, but the title was jazzed up with the recommendation to leave “Locke in the dustbin of history”. Since the word “dustbin” doesn’t recur in the text, I’m not sure whether the title reflects the author’s choice. Be that as it may, in contrast to the piece itself (which is part of a series of texts on Locke’s political position), the title firmly places it in (2) a larger public debate about the moral status of canonical philosophers such as Hume, Berkeley or Aristotle. I think both the more scholarly and the more public debates are important and intertwined in various ways. We can be interested both in how Locke thought about slavery and in how we want to judge his involvement. Given what I said about the justifying function of history, it’s clear that we look at authors not only as ancestors. We also ask whether they do or do not support a line of thought we want to endorse. And if it turns out that Locke’s thought is compatible with advocating slavery, then we want to think again about how we relate to Locke, in addition to studying the pertinent documents again. However, in addition to these two debates, there is (3) yet another debate about the question whether we should be having these debates at all. This is the debate about so-called “cancel culture”. While some say we shouldn’t cancel philosophers like Locke, others challenge the omnipresence of the notorious old or dead white men. As I see it, this latter debate about cancellation is highly problematic insofar as its proponents often question the legitimacy of the former (scholarly) debates.

As I see it, debates (1) and (2) are scholarly debates about Locke’s position on slavery. (1) makes an internal case regarding Locke’s writings. (2) also zooms in on the contrast with current views on slavery. (3), however, is a different debate altogether. Here, the question is mainly whether it is legitimate to invoke Locke as an ancestor or as part of a canon we want to identify with. The main problem I see, though, is that the title “Leave John Locke in the historical dustbin” makes the whole piece ambiguous between (2) and (3). Given the piece itself, I’d think it works on level (2), but given how people responded to it and can use it, it becomes a hit piece on level (3) whose only aim seems to be to write Locke out of the (legitimate) canon. This ambiguity or continuity between the two kinds of debate is disastrous for the discipline. While on levels (1) and (2) the question of how Locke relates to slavery is an open question, dependent on interpretations of empirical evidence, Locke’s moral failure is already taken for granted on level (3). Here, the use of the canonical figure Locke stops being historical. It reduces to political partisanship. Why? Because history is then taken to be something already known, rather than something to be studied.

The irony is that each group, the defenders as well as the challengers of the canonical figure, questions the moral legitimacy of what they suppose the other group does by making a similar move, that is, by appealing to a status symbol that enjoys recognition in the pertinent in-group. One group shouts “Locke and Enlightenment”; the other shouts “Locke and Racism”. Neither approach to history strikes me as historical. Each deteriorates into a mere use of historical items as status symbols, providing shortcuts for political fights. All of this is perhaps not very surprising. The problem is that such status symbolism undermines scholarly debates and threatens to reduce historical approaches to political partisanship. My point, then, is not that all political or moral discussion of history reduces to status symbolism. But there is the danger that historical scholarship can appear to be continuous with mere status symbolism.

______

* I’d like to thank Nick Denyer, Natalia Milopolsky, Naomi Osorio, Tzuchien Tho, Anna Tropia, and Markus Wild for insightful remarks or exchanges on this matter.

Empiricism and rationalism as political ideas?

Are all human beings equal? – Of course, that’s why we call them human. – But how do we know? – Well, it’s not a matter of empirical discovery; it’s our premise. – I see. And so everything else follows?

The opposition between empiricism and rationalism is often introduced as an epistemological dispute, concerning primarily the ways knowledge is acquired, warranted and limited. This is what I learned as a student and what is still taught today. If you’ve studied philosophy for a bit, you will also have heard that this opposition is problematic and coarse-grained when taken as a historical category. But in my view the problem is not that this opposition is too coarse-grained (all categories of that kind are). Rather, the problem lies with introducing it as a mere epistemological dispute. As I see it,* the opposition casts a much wider conceptual net and is rooted in metaphysical and even political ideas. Thus, the opposition is to be seen in relation to a set of disagreements in both theoretical and practical philosophy. In what follows, I don’t want to present a historical or conceptual account, but merely to suggest means of recognising this wide-ranging set of ideas and to show how the distinction helps us see the metaphysical implications and political choices related to our epistemological leanings.

Let me begin with a simple question: Do you think there is, ultimately, only one true description of the world? If your answer is ‘yes’, I’d be inclined to think that you have rationalist commitments. Why? Well, because an empiricist would likely reject that assumption on the grounds that we might not be able to assess whether we lack important knowledge. Thus, we might miss out on crucial insights required to answer that question in the first place. This epistemological caution bears on metaphysical questions: Might the world be a highly contingent place, subject to sudden or constant change? If this is affirmed, it might not make sense to say that there is one true description of the world. How does this play out in political or moral terms? Rephrasing the disagreement a bit, we might say that rationalists are committed to the idea that the world is ordered in a certain way, while empiricists remain open as to whether such an order is available to us at all. Once we see explanatory order in relation to world order, it becomes clear that certain commitments might follow for what we are and, thus, for what is good for us. If you believe that we can attain the one true description of the world, you might also entertain the idea that this standard should inform our sciences and our conduct at large. – Of course, this is quite a caricature of what I have in mind. All I want to suggest is that it might be rewarding to look at whether certain epistemological leanings go hand in hand with metaphysical and practical commitments. So let’s zoom in on the different levels in a bit more detail.

(1) Epistemology: As I have already noted, the opposition is commonly introduced as concerning the origin, justification and limits of knowledge. Are certain ideas or principles innate or acquired through the senses? Where do we have to look in order to justify our assumptions? Can we know everything there is to be known, at least in principle, or are there realms that we cannot even sensibly hope to enter? – If we focus on the question of origin, we can already see how the opposition between empiricism and rationalism affects the pervasive nature-nurture debates: Are certain concepts and the related abilities owing to learning within a certain (social) environment, or are the crucial elements given to us from the get-go? Now, let’s assume you’re a rationalist and think that our conceptual activity is mostly determined from the outset. Doesn’t it follow that you also assume that we are equal in our conceptual capacities? And doesn’t it also follow that rules of reasoning and standards of rationality are the same for all (rather than owing, say, to cultural contexts)? – While the answers are never straightforward, I would expect at least certain leanings in one direction or another. But while such leanings might already inform political choices, it is equally important to see how they relate to other areas of philosophy.

(2) Metaphysics: If you are an empiricist and assume that the main sources of our knowledge are our (limited) senses, this often goes hand in hand with epistemic humility and the idea that we cannot explain everything. Pressed on why you think so, you might find yourself inclined to say that the limits of our knowledge have a metaphysical footing. After all, if we cannot say whether an event is fully explicable, might this not be due to the fact that the world is contingent? Couldn’t everything have been otherwise, for instance because God interferes in events here and there? In other words, if you don’t assume there to be a sufficient reason for everything, this might be because you accept brute facts. Accordingly, the world is a chancy place, and what our sciences track might be good enough to get by on, but it never provides the certainty that is promised by our understanding of natural laws. Depending on the historical period, such assumptions about natural laws often go hand in hand with more or less explicit forms of essentialism. The lawful necessities in nature might be taken to relate to the way things are. Now essences are not only taken to determine what things are, but also how they ought to be. – Once you enter the territory of essentialism, then, it is only a small step to leanings regarding norms of being (together), of rationality, and of goodness.

(3) Theology / Sources of Normativity: If you allow for an essentialist determination of how things are and ought to be, this immediately raises the question of the sources of such essences and norms. Traditionally, we often find this question addressed in the opposition between theological intellectualism (or rationalism) and voluntarism: Intellectualists assume that norms of being and acting are prior to what God wills. So even God is bound by an order prior to his will. God acts out of reasons that are at least partly determined by the way natural things and processes are set up. By contrast, voluntarists assume that something is rational or right because God wills it, not vice versa. It is clear how this opposition rhymes with that of rationalism and empiricism: The rationalist assumes one order that binds even God. The empiricist remains epistemically humble, because she believes that rationality is fallible. Perhaps she believes this because she assumes that the world is a chancy place, which in turn might be owing to the idea that the omnipotent God can intervene at any time. It is equally clear how this opposition might translate into (lacking) justifications of moral norms or political power. – Contrary to what is often assumed in the wake of Blumenberg and others, this doesn’t mean that voluntarism or empiricism straightforwardly translates into political absolutism. It is hardly ever a particular political idea that is endorsed as a result of empiricist or rationalist leanings. Nevertheless, we will likely find elements that play out in the justification of different systems.**

Summing up, we can see that certain ideas in epistemology go hand in hand with certain metaphysical as well as moral and political assumptions. The point is not to argue for systematically interwoven sets of doctrines, but to show that the opposition of empiricism and rationalism is much more than just a disagreement about whether our minds are “blank slates”. Our piecemeal approach to philosophical domains might have its upsides, but it blurs our vision when it comes to the tight connections between theoretical and practical questions, connections which were clearly more obvious to our historical predecessors. Seen this way, you might try and see whether you find correspondingly coherent sets of assumptions in historical or current authors, or in yourself. I’m not saying you’re inconsistent if you diverge from a certain set of assumptions. But it might be worth asking if and why you conform or diverge.

_____

* A number of ideas alluded to here would never have seen the light of day without numerous conversations with Laura Georgescu.

** See my posts on Ockham’s and Wittgenstein’s voluntarism for more details.

Music for chameleons or humans during pandemics

Surely nobody expected the Spanish Inquisition, but who would have expected the Covid-19 pandemic a year ago? More significantly for teachers, who would have expected the massive use of virtual platforms for contactless teaching? The first lockdown and the first “transfer” from real to virtual teaching was (to me, and I stress that this post relates to my personal experience) a real shock. I remember students checking their phones during my last “in person” class in March (again, my case), interrupting me to announce: “This will be our last class here for a while, it seems; they’re shutting down schools and the university as well.” It was at least as surprising as the Spanish Inquisition (until then, almost nobody was really taking Covid seriously) and brought a note of drama into the room; parting was strange, as was my inability to answer the practical question “what will we do next week?”. The rest is history: almost everybody teaching at a university or school moved online and had to re-adapt everything to the space of a laptop screen, framed by a more or less personal background, offered to the view of students for the first time (such a shame in my case: in order to show as little as possible of my room, I chose a strange angle and so had no massive, impressive rows of bookshelves to show off). Be this as it may, this is not a post on the aesthetics of Zoom or other tools. There are YouTube tutorials for that, I guess.

The point I want to raise is so evident that it might appear flat; but it was not evident to me until we entered the second lockdown and returned to a new, massive and continuous (up to the current day) use of Zoom. How much of ourselves do we bring into class whilst teaching? What exactly do we transmit to students along with our attempts at explaining God’s simplicity and the complex story of his attributes, when external conditions are of no help (poor concentration deriving from flat-sharing, for instance, or too much privacy: with cams shut off, how do you know students are still there, and not simply uninterested in the story of the divine attributes)? The first question is broad, of course. We bring a lot of ourselves into the classroom: our experience, from which we draw examples and images that can help clarify difficult abstract concepts. The same applies to Zoom, but in a slightly different way (see the previously mentioned focus difficulty). But let’s go back to the divine attributes. Whilst preparing PowerPoint presentation no. 1000 for the undergraduate survey class on medieval philosophy, I was choosing images to explain the absolute distance between creator and creatures. I put in the image of the Porphyrian tree, to show how God cannot figure there at all. At a certain point I thought: well, God is a total alien, that is the idea. A completely different being. I was looking for an image to fix this, for no one can think without images, right? And David Bowie as Ziggy Stardust popped into my mind, the cover of Life on Mars? Was there anyone like him before him? Of course not. So I put it in; it made me smile; I finished preparing the presentation and eventually taught the class. It worked out well, so well that we ended up listening to the song together. It sounds cheesy, I know, but the point is again a different one. What enters the screen? How do we reach each other? Focus is not self-evident these days.

Some weeks before I used David Bowie for Aquinas, in the same class, a smart student was smiling so much at the screen that I thought he was doing something else, so I asked him the reason for his broad smile. He told me that he was smiling at me, because I was there striving to explain Maimonides (again, on divine attributes) at a moment in which nothing made much sense (many of my students had Covid-19; many had lost their temporary jobs and ended up in financial trouble). I did not know what to reply immediately and just said something like: “Talking about Maimonides is our normality, mine and yours; think about it. Why are we here otherwise?” I could have done better, indeed, but found nothing better to say right there. And that student, Simon, is very smart. I doubled my efforts, tried to reach out to them as much as possible each time, tried to be clearer and clearer. I am of course not so cheesy as to make claims like: “Music is the answer; we listened together to Life on Mars? and this is the best you can do on Zoom”. I have colleagues who played music regularly during their classes even before the pandemic. But I realised how everything had become much more difficult and that I completely shared Simon’s difficulty (Simon is the smiling student above): finding the motivation to prepare classes and to enjoy my work. Having been home in strict lockdown for almost three months now, you basically go from your laptop to your laptop, either preparing courses or teaching them, to people who are as tired as you. Focusing is difficult. I decided to repeat the music experiment in the class on the Port-Royal thinkers. How to explain Jansenism and Pascal’s background? We listened to the Jansenist Sainte-Colombe’s music (e.g., Le tombeau des regrets), and I told the story of his pupil Marin Marais, who learned his virtuosic technique from him and then chose the world, becoming Versailles’ official composer.
This was immediately understood and triggered a lot of comments, ranging from Pascal to the Logic or Art of Thinking via observations on Montaigne. What brings you closer to the Port-Royalists’ spirit than a Leçon des ténèbres? Given the time challenge (how long will they remain focused once the meeting is launched?) and the necessity of transmitting some content, I think the music experiment worked. In the end, aren’t we all using YouTube when we sit in front of the computer? The medium is the same. And during a pandemic, we all struggle with attention problems. So maybe this explains the massive use of paintings in my PowerPoints (I have never drawn so much on my modest competence in the history of art before) or the references to literature and books, even the latest arrivals in my (to them invisible) library. I used to refer to books or show paintings even before the pandemic, of course. What is different now is that they are needed more than ever to create bridges from one desktop to others.

Are philosophical classics too difficult for students?

Say you would like to learn something about Kant: should you start by reading one of his books, or rather get a good introduction to Kant? Personally, I think it’s good to start with primary texts, get confused, ask questions, and then look at the introductions to see some of your questions discussed. Why? Well, I guess it’s better to have a genuine question before looking for answers. However, even before the latest controversy on Twitter (between Zena Hitz and Kevin Zollman, amongst others) took off, I had been confronted with quite different views. Taken as an opposition between extreme views, the question is whether you want to make philosophy about ideas or about people (and their writings). It’s probably inevitable that philosophy ends up being about both, but there is still the question of what we should prioritise.

Arguably, if you expose students to the difficult original texts, you might frighten them off. Thus, Kevin Zollman writes: “If I wanted someone to learn about Kant, I would not send them to read Kant first. Kant is a terrible writer, and is impossible for a novice to understand.” Accordingly, he argues that the ideas should be prioritised. In response, Zena Hitz raises a different educational worry: “You’re telling young people (and others) that serious reading is not for them, but only for special experts.” Accordingly, she argues for prioritising the original texts. As Jef Delvaux shows in an extensive reflection, both views touch on deeper problems relating to epistemic justice. A crucial point in his discussion is that we never come purely or unprepared to a primary text anyway. So an emphasis on the primary literature might be prone to a sort of givenism about original texts.

I think that all sides have a point, but when it comes to students wanting to learn about historical texts, there is no way around looking at the original. Let me illustrate my point with a little analogy:

Imagine you want to study music and your main instrument is guitar. It is with great excitement that you attend courses on the music of Bach, whom you adore. The first part is supposed to be on his organ works, but already the first day is a disappointment. Your instructor tells you that you shouldn’t listen to Bach’s organ pieces themselves, since they might be far too difficult. Instead you’re presented with a transcription for guitar. Well, that’s actually quite nice, because this is indeed more accessible even if it sounds a bit odd. (Taken as an analogy to reading philosophy, this could be a translation of an original source.) But then you look at the sheets. What is this? “Well”, the instructor goes on, “I’ve reduced the accompaniment to the three basic chords. That makes it easier to reproduce it in the exam, too. And we’ll only look at the main melodic motif. In fact, let’s focus on the little motif around the tonic chord. So, if you can reproduce the C major arpeggio, that will be good enough. And it will be a good preparation for my master class on tonic chords in the pre-classic period.” Leaving this music school, you’ll never have listened to any Bach pieces, but you have wonderful three-chord transcriptions for guitar, and after your degree you can set out to write three-chord pieces yourself. If only there were still people interested in Punk!

Of course, this is a bit hyperbolic. But the main point is that too much focus on cutting things to ‘student size’ will create an artificial entity that has no relation to anything outside the lecture hall. But while I thus agree with Zena Hitz that shunning the texts because of their difficulties sends all sorts of odd messages, I also think that this depends on the purpose at hand. If you want to learn about Kant, you should read Kant just like you should listen to Bach himself. But what if you’re not really interested in Kant, but in a sort of Kantianism under discussion in a current debate? In that case, the purpose is not to study Kant but some concepts deriving from a certain tradition, and you might be more like a jazz player who is interested in building a vocabulary. Then you might be interested, for instance, in how Bach dealt with phrases over diminished chords and focus on this aspect first. Of course, philosophical education should comprise both a focus on texts and on ideas, but I’d prioritise them in accordance with different purposes.

That said, everything in philosophy is quite difficult. As I see it, a crucial point in teaching is to convey means to find out where exactly the difficulties lie and why they arise. That requires all sorts of texts, primary, secondary, tertiary etc.

Why we shouldn’t study what we love

I recognize that I could only start to write about this … once I related to it. I dislike myself for this; my scholarly pride likes to think I can write about the unrelatable, too. Eric Schliesser

Philosophy students often receive the advice that they should focus on topics that they have a passion for. So if you have fallen for Sartre, ancient scepticism or theories of justice, the general advice is to go for one of those. On the face of it, this seems quite reasonable. A strong motivation might predict good results which, in turn, might motivate you further. However, I think that you might actually learn more by exposing yourself to material, topics and questions that you initially find remote, unwieldy or even boring. In what follows, I’d like to counter the common idea that you should follow your passions and interests, and try to explain why it might help to study things that feel remote.

Let me begin by admitting that this approach is partly motivated by my own experience as a student. I loved and still love to read Nietzsche, especially his aphorisms in The Gay Science. There is something about his prose that just clicks. Yet, I was always sure that I couldn’t write anything interesting about his work. Instead, I began to study Wittgenstein’s Tractatus and works from the Vienna Circle. During my first year, most of these writings didn’t make sense to me: I didn’t see why they found what they said significant; most of the terminology and writing style was unfamiliar. In my second year, I made things worse by diving into medieval philosophy, especially Ockham’s Summa Logicae and Quodlibeta. Again, not because I loved these works. In fact, I found them unwieldy and sometimes outright boring. So why would I expose myself to these things? Already at the time, I felt that I was actually learning something: I began to understand concerns that were alien to me; I learned new terminology; I learned to read Latin. Moreover, I needed to use tools, secondary literature and dictionaries. And for Ockham’s technical terms, there often were no translations. So I learned to move around in the dark. There was no passion for the topics or texts. But speaking with hindsight (and ignoring a lot of frustration along the way), I think I discovered techniques and ultimately even a passion for learning, for familiarising myself with stuff that didn’t resonate with me in the least. (In a way, it seemed to turn out that it’s a lot easier to say interesting things about boring texts than to say even boring things about interesting texts.)

Looking back at these early years of study, I’d now say that I discovered a certain form of scholarly explanation. While reading works I liked was based on a largely unquestioned understanding, reading these unwieldy new texts required me to explain them to myself. This, in turn, prompted two things: To explain these texts (to myself), I needed to learn about the new terminology etc. Additionally, I began to learn something new about myself. Discovering that certain things felt unfamiliar to me, while others seemed familiar, meant that I belonged to one kind of tradition rather than another. Make no mistake: Although I read Nietzsche with an unquestioned familiarity, this doesn’t mean that I could have explained, say, his aphorisms any better than the strange lines of Wittgenstein’s Tractatus. The fact that I thought I understood Nietzsche didn’t give me any scholarly insights about his work. So on top of my newly discovered form of explanation I also found myself in a new relation to myself or to my preferences. I began to learn that it was one thing to like Nietzsche and quite another to explain Nietzsche’s work, and still another to explain one’s own liking (perhaps as being part of a tradition).

So my point about not studying what you like is a point about learning, learning to get oneself into a certain mode of reading. Put more fancily: learning to do a certain way of (history of) philosophy. Being passionate about some work or way of thinking is something that is in need of explanation, just as much as not being passionate about and feeling unfamiliar with something needs explaining. Such explanations are greatly aided by alienation. As I said in an earlier post, a crucial effect of alienation is a shift of focus. You can concentrate on things that normally escape your attention: the logical or conceptual structures, for instance, or ambiguities; things that seemed clear get blurred and vice versa. In this sense, logical formalisation or translation are great tools of alienation that help you to raise questions, and generally take an explanatory stance, even towards your most cherished texts.

As a student, discovering this mode of scholarly explanation instilled pride, a pride that can be hurt when explanations fail or evade us. It was remembering this kind of pain, described in the motto of this post, that prompted these musings. There is a lot to be said for aloof scholarship and the pride that comes with it, but sometimes it just doesn’t add up. Because there are some texts that require a more passionate or intuitive relation before we can attain a scholarly stance towards them. If the passion can’t be found, it might have to be sought. Just like our ears have to be trained before we can appreciate some forms of, say, very modern music “intuitively”.

Must we claim what we say? A quick way of revising essays

When writing papers, students and advanced philosophers alike are often expected to take a position within a debate and to argue for or against a particular claim. But what if we merely wish to explore positions and look for hidden assumptions, rather than defend a claim? Let’s say you look at a debate and then identify an unaddressed but nevertheless important issue, a commitment left implicit in the debate, let’s call it ‘X’. Writing up your findings, the paper might take the shape of a description of that debate plus an identification of the implicit X. But the typical feedback to such an exploration can be discouraging: It’s often pointed out that the thesis could have been more substantive and that a paper written this way is not publishable unless supplemented with an argument for or against X. Such comments all boil down to the same problem: You should have taken a position within the debate you were describing, but you have failed to do so.

But hang on! We’re all learning together, right? So why is it not ok to have one paper do the work of describing and analysing a debate, highlighting, for instance, some unaddressed X, so that another paper may attempt an answer to the questions about X and come up with a position? Why must we all do the same thing and, for instance, defend an answer on top of everything else? Discussing this issue, we* wondered what this dissatisfaction meant and how to react to it. Is it true? Should you always take a position in a debate when writing a paper? Or is there a way of giving more space to other approaches, such as identifying an unaddressed X?

One way of responding to these worries is to dissect and extend the paper model, for instance, by having students try other genres, such as commentaries, annotated translations, reviews, or structured questions. (A number of posts on this blog are devoted to this.) However, for the purposes of this post, we’d like to suggest and illustrate a different idea. We assume that the current paper model (defending a position) does not differ substantially from other genres of scholarly inquiry. Rather, the difference between, say, a commentary or the description of a debate, on the one hand, and the argument for a claim, on the other, is merely a stylistic one. Now our aim is not to present an elaborate defense of this idea, but to try out how this might help in practice.

To test and illustrate the idea (below), we have dug out some papers and rewritten sections of them. Before presenting one sample, let’s provide a brief manual. The idea rests on the, admittedly somewhat contentious, tenets that

  • any description or analysis can be reformulated as a claim,
  • the evidence provided in a description can be dressed up as an argument for the claim.

But how do you go about it? In describing a debate, you typically identify a number of positions. So what if you don’t want to adopt and argue for one of them? There is something to be said for just picking a side anyway, but if that feels too random, here is a different approach:

(a) One thing you can always do is defend a claim about the nature of the disagreement in the debate. Taken this way, the summary of your description or analysis becomes the claim about the nature of the disagreement, while the analysis of the individual positions functions as an argument / evidence for this claim. This is not a cheap trick; it’s just a pointed way of presenting your material.

(b) A second step consists in actually labelling steps as claims, arguments, evaluations etc. Using such words doesn’t change the content, but it signals even to a hasty reader where your crucial steps begin and end.

Let’s now look at a passage from the conclusion of a paper. Please abstract away from the content of discussion. We’re just interested in identifying pertinent steps. Here is the initial text:

“… Thus, I have dedicated this essay to underscoring the importance of this problem. I have first discussed two of the most prominent levels accounts, namely O&P’s layer-cake account, and Craver and Bechtel’s mechanistic levels, and shown that they both provide radically different levels accounts. I addressed the problems with each account, and it became clear that what is considered to be a problem by some, is considered to be a virtue by others. This led us to uncover a deeper disagreement, namely about what the function of a levels account is supposed to be and what the term “level” means.”

Here is the rewritten version (underlined sections indicate more severe changes or additions):

“… But why is this problem significant? I have first discussed two of the most prominent levels accounts, namely O&P’s layer-cake account, and Craver and Bechtel’s mechanistic levels, and shown that they both provide radically different levels accounts. I addressed the problems with each account, and it became clear that what is considered to be a problem by some, is considered to be a virtue by others. This is in keeping with my second-order thesis that the dispute is less about content than about defining criteria. However, this raises the question of what to make of levels on any set of criteria. Answering this question led me to defend my main (first-order) thesis: If we look at the different sets of criteria, we uncover a deeper disagreement, namely about what the function of a levels account is supposed to be and what the term “level” means. Accordingly, I claim that disparate accounts of levels indicate different functions of levels.”

We consider neither passage a piece of beauty. The point is merely to take some work in progress and see what happens if you follow the two steps suggested above: (a) articulate claims; (b) label items as such. – What can we learn from this small exercise? We think that the contrast between these two versions shows just how big an impact the manner of presentation can have, not least on the perceived strength of a text. The desired effect would be that a reader can easily identify what is at stake for the author. Content-wise, both versions say the same thing. However, the first version strikes us as a bit detached and descriptive in character, whereas the second version seems more engaged and embraces a position. What used to be a text about a debate has now become a text partaking in a debate. (Of course, your impressions might differ. So we’d be interested to hear about them!) Another thing we saw confirmed in this exercise is that you always already have a position, because you end up highlighting what matters to you. Having something to say about a debate still amounts to a position. Arguably, it’s also worth presenting it as such.

Where do we go from here? Once you have reformulated such a chunk and labelled some of your ideas (say, as first and second order claims etc.), you can rewrite the rest of your text accordingly. Identify these items in the introduction, and clarify which of those items you argue for in the individual sections of your paper, such that they lead up to these final paragraphs. That will probably allow you (and the reader) to highlight the rough argumentative structure of your paper. Once this is established, it will be much easier to polish individual sections.

____

*Co-authored by Sabine van Haaren and Martin Lenz

Why using quotation marks doesn’t cancel racism or sexism. With a brief response to Agnes Callard

Would you show an ISIS video, depicting a brutal killing of hostages, to a survivor of their killings? Or if you prefer a linguistic medium: would you read Breivik’s Manifesto to a survivor of his massacre? – Asking these questions, I’m assuming that none of you would be inclined to endorse these items. That’s not the point. The question is why you would not present such items to a survivor or perhaps indeed to anyone. My hunch is that you would not want to hurt or harm your audience. Am I right? Well, if this is even remotely correct, why do so many people insist on continuing to present racist, sexist or other dehumanising expressions, such as the n-word, to others? And why do we decry the take-down of past authors as racists and sexists? Under the label of free speech, of all things? I shall suggest that this kind of insistence relies on what I call the quotation illusion, and I hope to show that the underlying use-mention distinction doesn’t really work for this purpose.

Many people assume that there is a clear distinction between use and mention. When saying that “stop” has four letters, I’m not using the expression (to stop or alert you). Rather, I am merely mentioning the word to talk about it. Similarly, embedding a video or passages from a text into a context in which I talk about these items is not a straightforward use of them. I’m not endorsing what these things supposedly intend to express or achieve. Rather, I am embedding them in a context in which I might, for instance, talk about the effects of propaganda. It is often assumed that this kind of “going meta” or mentioning is categorically different from using expressions or endorsing statements. As I noted in an earlier post, if I use an insult or sincerely threaten people by verbal means, I act and cause harm. But if I consider a counterfactual possibility or quote someone’s words, my expressions are clearly detached from action. However, the relation to possible action is what contributes to making language meaningful in the first place. Even if I merely quote an insult, you still understand that quotation in virtue of understanding real insults. In other words, understanding such embeddings or mentions rides piggy-back on understanding straightforward uses.

If this is correct, then the difference between use and mention is not a categorical one but one of degrees. Thus, the idea that quotations are completely detached from what they express strikes me as illusory. Of course, we can and should study all kinds of expressions, also expressions of violence. But their mention or embedding should never be casual or justified by mere convention or tradition. If you considered showing that ISIS video, you would probably preface your act with a warning. – No? You’re against trigger warnings? So would you explain to your audience that you were just quoting or ask them to stop shunning our history? And would you perhaps preface your admonitions with a defense of free speech? – As I see it, embedded mentions of dehumanising expressions do carry some of the demeaning attitudes. So exposing others to them merely to make a point about free speech strikes me as verbal bullying. However, this doesn’t mean that we should stop quoting or mentioning problematic texts (or videos). It just means that prefacing such quotations with pertinent warnings is an act of basic courtesy, not coddling.

The upshot is that we cannot simply rely on a clear distinction between quotation and endorsement, or mention and use. But if this is correct, then what about reading racist or sexist classics? As I have noted earlier, the point would not be to simply shun Aristotle or others for their bigotry. Rather, we should note their moral shortcomings as much as we should look into ours. For since we live in some continuity with our canon, we are to some degree complicit in their racism and sexism.

Yet instead of acknowledging our own involvement in our history, the treatment of problematic authors is often justified by claiming that we are able to detach ourselves from their involvement, usually by helping ourselves to the use-mention distinction. A recent and intriguing response to this challenge comes from Agnes Callard, who claims that we can treat someone like Aristotle as if he were an “alien”. We can detach ourselves, she claims, by interpreting his language “literally”, i.e. as a vehicle “purely for the contents of his belief” and as opposed to “messaging”, “situated within some kind of power struggle”. Taken this way, we can grasp his beliefs “without hostility”, and the benefits of reading come “without costs”. This isn’t exactly the use-mention distinction. Rather, it is the idea that we can entertain or consider ideas without involvement, force or attitude. In this sense, it is a variant of the quotation illusion: Even if I believe that your claims are false or unintelligible, I can quote you – without adding my own view. I can say that you said “it’s raining” without believing it. Of course I can also use an indirect quote or a paraphrase, a translation and so on. Based on this convenient feature of language, historians of philosophy (often including myself) fall prey to the illusion that they can present past ideas without imparting judgment. Does this work?

Personally, I doubt that the literal reading Callard suggests really works. Let me be clear: I don’t doubt that Callard is an enormously good scholar. Quite the contrary. But I’m not convinced that she does justice to the study that she and others are involved in when specifying it as a literal reading. Firstly, we don’t really hear Aristotle literally but mediated through various traditions, including quite modern ones, that partly even use his works to justify their bigoted views. Secondly, even if we could switch off Aristotle’s political attitudes and grasp his pure thoughts, without his hostility, I doubt that we could shun our own attitudes. Again, could you read Breivik’s Manifesto, ignoring Breivik’s actions, and merely grasp his thoughts? Of course, Aristotle is not Breivik. But if literal reading is possible for one, then why not for the other?

The upshot is: once I understand that a way of speaking is racist or sexist, I cannot unlearn this. If I know that ways of speaking hurt or harm others, I should refrain from speaking this way. If I have scholarly or other good reasons to quote such speech, I shouldn’t do so without a pertinent comment. But I agree with Callard’s conclusion: We shouldn’t simply “cancel” such speech or indeed its authors. Rather, we should engage with it, try and contextualise it properly. And also try and see the extent of our own involvement and complicity. The world is a messy place. So are language and history.

Cavendish’s Triumvirate and the Writing Process

I’m working through Margaret Cavendish’s Observations upon Experimental Philosophy (1666) at the moment. It’s not the first time (in fact, I taught a course on it after Christmas), but her writing is dense, neither as systematic as someone like Descartes nor as succinct as someone like Berkeley. But the pay-off is a philosophy rich in insights that genuinely does seem to be, if not ahead of its time (I don’t want to be accused of anachronism), then idiosyncratic to its immediate historical context in some striking ways. For example, I’m reading Cavendish alongside Keith Allen’s A Naïve Realist Theory of Colour (OUP, 2016), and there are clear signs that she had thought deeply about phenomena such as colour constancy (whereby we take objects to have remained the same colour even though a differently coloured light is shining on them) and metamerism (objects with different microphysical qualities that appear to be the same colour) that are central to contemporary perception debates (Colin Chamberlain has written a great article on Cavendish’s atypical philosophy of colour). As far as I am aware, these aren’t issues that her contemporaries (Hobbes, Descartes, Berkeley, et al.) were much preoccupied with. And while reading and working through Cavendish’s philosophy is a bit like trying to untangle a charger cable that’s been kept in a box in a drawer too long – each time you think you’ve untangled all the knots another one appears – it tends to be rewarding, even if it is near impossible to pin down exactly what she thinks about any given issue ‘X’.

Perhaps because of the inevitable struggle that comes with defending an interpretation of Cavendish’s philosophy, I’m also thinking a lot about the trials and tribulations of the writing process (it may also be because I have literally nothing else to do). For a long time, I’ve thought that one of the best pieces of writing advice came from Daniel Dennett who, on various platforms (including a keynote he gave here in Dublin last September), has encouraged writers to ‘blurt something out, and then you have something to work with’. I’ve regurgitated this advice to students several times, and it chimes well with me because I find it much easier to shape and mould a pre-existing block of text than to face the task of squeezing something out of the ether (or my brain – wherever it comes from) and onto the page. Like Leibniz, I prefer having a block to chip away at rather than a Lockean blank page. With that in mind, I’ve started to wonder whether a particular aspect of Cavendish’s metaphysics might provide us with a nice model for the writing process.

Perhaps one of the most interesting, and remarkable, aspects of Cavendish’s system of nature is her claim that all parts of nature contain what she calls a “triumvirate” of matter (note: Cavendish is a materialist, even the mind is composed of material substance in her system). She claims that each and every part of nature is made up of three kinds of matter: (1) rational matter, (2) sensitive matter, and (3) inanimate matter. Even if you could pick out an atomistic unit (although she rejects atomism herself), she thinks, you would find varying degrees of all three kinds of matter. Inanimate matter is matter as we would ordinarily think of it, bulky stuff that weighs the other kinds of matter down and does the important job of filling up space (a job I’ve gotten very good at myself during lockdown). Cavendish compares inanimate matter to the bricks and mortar used to build a house. Continuing this analogy, she suggests that sensitive matter plays the role of the team of builders, moving inanimate matter around and getting it to take up particular shapes and forms. The variety of ways that inanimate matter is put together, she thinks, explains the variety of things in the natural world around us. What’s more, if there were no sensitive matter to move inanimate matter around, she claims, the world would be entirely homogenous. Finally, she compares rational matter to the architect responsible for it all. For the sensitive matter wouldn’t know what to do with all the inanimate matter if it wasn’t told what to do by someone with a plan. In the section of the Observations entitled ‘An Argumental Discourse’ (one of the strangest philosophical dialogues out there, between two ‘halves’ of her own mind who are ‘at war’) she sums up the triumvirate of matter like so:

as in the exstruction of a house there is first required an architect or surveyor, who orders and designs the building, and puts the labourers to work; next the labourers or workmen themselves; and lastly the materials of which the house is built: so the rational part… in the framing of natural effects, is, as it were, the surveyor or architect; the sensitive, the labouring or working part; and the inanimate, the materials: and all these degrees are necessarily required in every composed action of nature.

Observations upon Experimental Philosophy (Cambridge Texts edition, edited by Eileen O’Neill (2001)), p. 24

This is, then, a top-down approach to understanding both the orderliness and variety of things in nature. It’s all possible, Cavendish thinks, because there’s an ‘architect’ (the rational part of a thing in nature) that devises a plan and decides what to do with the bulky mass of inanimate matter. (Another note: Cavendish is a vitalist materialist or what we might retrospectively call a panpsychist: she thinks that every part of nature, from grains of sand to plants, animals, and people, has life and knowledge of things in the world around it.)

Right, so how does all this relate to the writing process? I don’t quite know whether this is intended to be a helpful normative suggestion, or just a descriptive claim, but I suggest that Cavendish’s triumvirate might provide a model for thinking about how writing works. In this case, the role of bulky, cumbersome inanimate matter is played by the words on the page you’ve managed to ‘blurt out’, to use Dennett’s technical terminology. Or perhaps it’s the thoughts/ideas you’ve still got in your head. Either way, it’s a mass of sentences, propositions, textual references, and so on, that you’ve got to do something with (another tangled charger cable, if you will). What options have you got? Well, structure and presentation are important – and while these are facilitated by your word processor (for example), they constitute a kind of medium between your thought and the words on the page. So I’d suggest that presentation, structure, perhaps even the phrasing of individual sentences, is what plays the role of sensitive matter: Cavendish’s labourers or workmen.

Finally, there’s the role of rational matter: the architect or surveyor whose plan the sensitive matter is just waiting to carry out. I actually think this may be the hardest comparison to draw. It would be easy to simply say ‘you’ are the architect of your writing, but once you’ve taken away the words/ideas as well as the way they are presented or structured, it’s hard to know exactly what’s doing the work or what’s left (just ask Hume). Last year, I saw Anna Burns, author of the brilliant Milkman, give a talk where she was asked about her writing process. Her answer, which in the mouth of another could have sounded pompous or pretentious, was honest and revealing: she had literally nothing to say. She couldn’t explain what the real source of her writing was and, even more remarkably, she wasn’t particularly interested. In any case, there’s something that’s grouping together, or paying selective attention to, some ideas or notions and advocating that they should become a piece of writing. Whatever that is, I suggest it plays the role of rational matter: Cavendish’s architect.

How might this be helpful to writers? I’m not sure it can be in any practical way, but I find it helpful when I hit upon a nice description of something I’ve grappled with or when it seems that someone is describing my own experiences (it’s one of the reasons I like reading both philosophy and fiction). Perhaps Cavendish’s triumvirate model can be useful in this way. It may also, and I have begun to think in these terms myself, provide you with a measure of where you are in the writing process. Am I still sourcing the bricks and mortar? Are the labourers at work? Or are they waiting for instructions from the architect? Sometimes, it’s helpful to know where you are, because it lets you take stock of what there is still to do – and, in keeping with Cavendish’s analogy, who’s going to do it.

Questions – an underrated genre

Looking at introductions to philosophy, I realise that we devote much attention to the reconstruction of arguments and critical analysis of positions. Nothing wrong with that. Yet, where are the questions? Arguably, we spend much of our time raising questions, but apart from a very few exceptions, questions are rarely treated as a genre of philosophy. (However, here is an earlier post, prompted by Sara Uckelman’s approach, on which she elaborates here. And Lani Watson currently runs a project on philosophical questions.) Everyone who has tried to articulate a question in public will have experienced that it is not all that simple, at least not if you want to go beyond “What do you mean?” or “What time is it?” In what follows, I hope to get a tentative grip on it by looking back at my recent attempt to teach students to ask questions.

This year, I gave an intense first-year course on medieval philosophy.* I say “intense” because it comprises eight hours per week: two hours of lectures and two hours of reading seminars on both Thursday and Friday mornings. It’s an ideal setting to do both: introduce material and techniques of approaching it, and apply those techniques through close reading in the seminars. Often students are asked to write a small essay as a midterm exam. Given the dearth of introductions to asking questions, I set a “structured question” instead. The exercise looks like this:

The question will have to be about Anselm’s Proslogion, chapters 2-4. Ideally, the question focuses on a brief passage from that text. It must be no longer than 500 words and contain the following elements:

– Topic: say what the question is about;
– Question: state the actual question (you can also state the presupposition before stating the question);
– Motivation: give a brief explanation why the question arises;
– Answer: provide a brief anticipation of at least one possible answer.

What did I want to teach them? My declared goal was to offer a way of engaging with all kinds of texts. In doing so, I assumed that understanding (a text) can be a general aim of asking questions. I often think of questions as a means of making contact with the text or interlocutor. For a genuine question brings two aspects together: on the one hand, there is your question; on the other, there is that particular bit of the text that you don’t understand or would like to hear more about. But that’s more easily said than done. During the lectures and seminars we would use some questions from students to work through the process. What I noticed almost immediately is that this was obviously really hard. One day, a student came up and said:

“Look, this focus on questions strikes me as a bit much. I’m used to answering questions, not raising them. It seems to require knowledge that I don’t have. As it is, it’s rather confusing, and I feel like I’m drowning out at sea.”

I’m quoting from memory, but the gist should be clear. And while I now think of a smallish group of students as particularly brave and open, this comment probably represents the attitude of the majority. The students wanted guidance, and what I wanted to offer them instead was tools to guide themselves. I had and have a number of different reactions to the student’s confession. My first thought was that this is a really brave stance to take: being so open about one’s own limits and confusion is rare even among established people. At the same time, I began to worry about my approach. To be sure, the confusion was caused intentionally to some degree, and I said so. But for this approach to work, one has to ensure that asking questions eventually provides tools to orient oneself and to recognise the reasons for the confusion. Students need to learn to consider questions such as: Why am I confused? Could it be that my own expectations send me astray? What am I expecting? What is it that the text doesn’t give me? Arguably, they need to understand their confusion to make contact with the text. In other words, questions need to be understood. But this takes time and, above all, trust that the confusion lands us somewhere in the end.

When I taught this kind of course in the past, I did what the student seemed to miss now: I gave them not only guiding questions to provide a general storyline through the material, but also detailed advice on what to look for in the texts. While that strikes me as a fine way of introducing material, it doesn’t help them develop questions autonomously. In any case, we had to figure out the details of this exercise. So what is behind the four elements in the task above?

Since questions are often used for other purposes, such as masking objections or conveying irritation, it is vital to be explicit about the aim of understanding. Thus, finding the topic had to be guided by a passage or concept that left the questioner genuinely confused. Admitting to such confusion is trickier than it might seem, because it requires you to zoom in on your lack of understanding or knowledge. You might think that the topic just is the passage. But it’s important to attempt a separate formulation for two reasons: firstly, it tells the listener or reader what matters to you; secondly, it should provide coherence in that the question, motivation and answer should all be on the same topic.

In the beginning, I spent most of the time analysing two items: the motivation and the formulation of the actual question. After setting out an initial formulation of the question, students had to spell out why the question arises. But why do questions arise? In a nutshell, most questions arise because we make a presupposition or have an expectation that the text does not meet. (Here is a recent post with more on such expectations.) A simple example is that you expect evidence or an argument for a claim p, while the author might simply say that p is self-evident. You can thus begin by jotting down something like “Why would p be self-evident, according to the author?” This means that now, at last, you can talk about something that you do know: your expectations. Ideally, this provides a way of spelling out what you expect and thus what the text lacks (from that perspective). Going from there, the tentative answer will have to provide a reason that shows why p is self-evident for the author. Put differently, while the motivation brings out your presuppositions, the answer is an attempt at spelling out the presuppositions guiding the text (or author). With hindsight, you can now also fix the topic, e.g. self-evidence.

But things are never that straightforward. What I noticed after a while was that many students went off in a quite different direction when it came to answering the question. Rather than addressing the author’s possible reasons, the students began to spell out why the author was wrong. At least during the first lectures, they would sometimes not try to see what reasons the author could invoke. Instead, they would begin by stating why their own presupposition was right and the author wrong, whatever the author’s reasons.

This is not surprising. Most discussions inside and outside of philosophy have exactly this structure. Arguably, most philosophy is driven by an adversarial culture rather than by the attempt to understand others. A question is asked, not to target a difficulty in understanding, but to justify the refutation of the interlocutor’s position. While this approach can be one legitimate way of interacting, it appears particularly forced in engaging with historical texts. Trying to say why Anselm or any other historical author was wrong, by contemporary standards, just is a form of avoiding historical analysis. You might as well begin by explaining your own ideas and leave Anselm out of the equation altogether.

But how can an approach to understanding the text (rather than refuting it) be encouraged? If you start out from the presupposition that Anselm is wrong, an obvious way would be to ask for the reasons that make his position seem right. It strikes me as obvious that this requires answering the question on Anselm’s behalf. It is at this point that we need to move from training skills (of asking questions) to imparting (historical) knowledge. Once the question arises why an author claims that p, and p does not match our expectations, we need to teach students to recognise certain moves as belonging to different traditions and ways of doing philosophy, ways that do not square with our current culture. My hope is that, if we begin with teaching to raise questions, it will become more desirable to acquire the knowledge relevant to providing answers and to understanding our own questions.

_____

* I’ve really enjoyed teaching this course and think I’ve learned a lot from it. Special thanks to my patient students, particularly to my great TAs, Elise van de Kamp and Mark Rensema, whose ideas helped me enormously in shaping the course. – Now, if you’ve read this far, I’d like to thank you, too, for bearing with me. Not only for the length of this post. Today is a special occasion: this is post number 101.