Meditation in philosophy. A conversation with Andrea Sangiacomo (podcast)

This is the fourth installment of my still fairly new series Philosophical Chats. In this episode, I have a conversation with Andrea Sangiacomo, who is an associate professor of philosophy at Groningen University. In this conversation, we focus on meditation both as part of philosophical traditions and as an approach that might be a resource for (academic) philosophy, teaching and academic culture. While Cartesian and Buddhist ideas* form a continuous resource in the background of our discussion, here is a list of themes in case you are looking for something specific:

  • Introduction   0:00
  • Meditation and Descartes’ Meditations   2:20
  • The notion of experience – and objections against experience as a basis in philosophy   9:00
  • Meditation in teaching   21:14
  • Why aren’t we already using these insights in education?   37:00
  • How can we teach and learn effectively?   44:36
  • How can we guide and assess?   52:50
  • Where is this approach leading, also in terms of academic culture?   1:03:00

______

* The opening quotation is from Andrea’s blogpost What can we learn today from Descartes’ Meditations? Here is the passage: “Since last year, I appreciated the text of the Meditations as real meditation, namely, as a way of practicing a meditative kind of philosophy (for lack of better term), a philosophy more concerned with what it means to experience reality in this way or that way, rather than with what a certain set of propositions means.”

He has published four more posts on this topic on the blog of the Centre for Medieval and Early Modern Thought. They are:

History is about you. On teaching outdated philosophy

Everything we take to be history is, in fact, present right now. Otherwise we wouldn’t think about it.

When I was little, I often perceived the world as an outcome of historical progress. I didn’t exactly use the word “historical progress” when talking to myself, but I thought I was lucky to grow up in the 20th century rather than, say, the Middle Ages. Why? Well, the most obvious examples were advances in technology. We have electricity; they didn’t. That doesn’t change everything, but it changes a lot. Thinking about supposedly distant times, then, my childhood mind conjured up an image of someone dragging themselves through a puddle of medieval mud, preferably while I sat on the sofa in a cozy living-room with the light switched on and the fridge humming in the adjacent kitchen. It took a while for me to realise that this cozy contrast between now and then is not really an appreciation of the present, but a prejudice about history, more precisely about what separates us from the past. For what my living-room fantasy obscures is that this medieval mud is what a lot of people are dragging themselves through today. It would have taken a mere stroll through town to see how many homeless or other people do not live in the same world that I identified as my present world. Indeed, most things that we call “medieval” are present in our current world. Listening to certain people today, I realise that the Enlightenment, the Light of Reason and Rationality are portrayed in much the same way as my living-room fantasy. But as with the fruits of technology, I think the praise of the Enlightenment is not an appreciation of the present, but a prejudice about what separates us from the past. One reaction to this prejudice would be to chide the prejudiced minds (and my former self); another would be to try and look more closely at our encounters with these prejudices when doing history. That means trying to see them as encounters with ourselves, with the ideologies often tacitly drummed into us, and trying to understand how these prejudices form our expectations when reading old texts. Approaching texts in this latter way means reading them as historical philosophical documents as much as encounters with ourselves. It is this latter approach I want to suggest as a way of reading and teaching what could be called outdated philosophy. According to at least some of my students’ verdicts about last term, this might be worth pursuing.

Let’s begin with the way that medieval philosophy, especially, is often introduced. While it’s often called “difficult” and “mainly about religion”, it’s also said to require so much linguistic and other erudition that anyone will wonder why on earth they should devote much time to it. One of the main take-away messages this suggests is an enormous gap between being served some catchy chunks of, you know, Aquinas, on the one hand, and the independent or professional study of medieval texts, on the other hand. Quite unlike in ethics or social philosophy, hardly any student will see themselves as moving from the intro course to doing some real research on a given topic in this field. While many medievalists and other historians work on developing new syllabi and approaches, we might not spend enough time on articulating what the point or pay-off of historical research might be. – I don’t profess to know what the point of it all is. But why would anyone buy into spending years on learning Latin or Arabic, palaeography or advanced logic, accepting the dearth of academic jobs and a philosophical community dismissing much of its own history? For the sake of, yes, what exactly? Preparing the next edition of Aquinas or growing old over trying to get your paper on Hildegard of Bingen published in a top journal? I’m not saying that there is no fun involved in studying these texts and doing the work it takes; I’m wondering whether we make sufficiently explicit why this might be fun. Given the public image of history (of philosophy), we are studying what the world was like before there was electricity and how they then almost invented it but didn’t.

Trying to understand what always fascinated me about historical studies, I realised it was the fact that one learns as much about oneself as about the past. Studying seemingly outdated texts helped me understand how this little boy in the living room was raised into ideologies that made him (yes, me) cherish his world with the fridge in the adjacent kitchen, and think of history as linear progress towards the present. In this sense, that is, in correcting such assumptions, studying history is about me and you. But, you ask, even if this is true, how can we make it palpable in teaching? – My general advice is: Try to connect to your student-self; don’t focus on the supposed object of study, but on what it revealed about you. Often this isn’t obvious, because there is no obvious connection. Rather, there is disparity and alienation. It is an alienation that might be similar to moving to a different town or country. So, try to capture explicitly what’s going on in the subject of study, too, in terms of experience, resources and methods available. With such thoughts in mind, I designed a course on the Condemnation of 1277 and announced it as follows:

Condemned Philosophy? Reason and faith in medieval and contemporary thought

Why are certain statements condemned? Why are certain topics shunned? According to a widespread understanding of medieval cultures, medieval philosophy in particular was driven and constrained by theological and religious concerns. Based on a close reading of the famous condemnation of 1277, we will explore the relation between faith and reason in the medieval context. In a second step we will look at contemporary constraints on philosophy and the role of religion in assessing such constraints. Here, our knowledge of the medieval context might help us question current standards and prejudices. In a third step we will attempt to reconsider the role of faith and belief in medieval and contemporary contexts.

The course was aimed at BA students in their 3rd year. What I had tried to convey in the description is that the course should explore not only medieval ideas but also the prejudices through which they are approached. During the round of introductions many students admitted that they were particularly interested in this twofold focus on the object and the subject of study. I then explained to them that most things I talk about can be read about somewhere else. What can’t be done somewhere else is to have them come alive by talking them through. I added that “most of the texts we discuss are a thousand years old. Despite that fact, these texts have never been exposed to you. That confrontation is what makes things interesting.” In my view, the most important tool to bring out this confrontation lies in having students prepare and discuss structured questions about something that is hard to understand in the text. (See here for an extensive discussion.) The reason is that questions, while targeting something in the text, reveal the expectations of the person asking. Why does the question arise? Because there is something lacking that I would expect to be present in the text. Most struggles with texts are struggles with our own expectations that the text doesn’t meet. Of course, there might be a term we don’t know or a piece of information lacking, but this is easily settled with an internet search these days. The more pervasive struggles often reveal that we encounter something unfamiliar in the sense that it runs counter to what we expect the text to say. This, then, is where a meeting between current students and historical figures takes place, making explicit our and their assumptions.

During the seminar discussions, I noticed that students, unlike in other courses, dared to target really tricky propositions that they couldn’t account for on the fly. Instead of trying to appear on top of the material, they delineated problems to be addressed and raised genealogical questions of how concepts might have developed between 1277 and 2020. Interestingly, the assumption was often not that we were more advanced. Rather, they were interested in giving reasons why someone would find a given idea worth defending. So my first impression after this course was that the twofold focus on the object and subject of study made the students’ approach more historical, in that they didn’t take their own assumptions as a yardstick for assessing ideas. Another outcome was that students criticised seeing our text as a mere “object of study”. In fact, I recall one student saying that “texts are hardly ever mere objects”. Rather, we should ultimately see ourselves as engaging in dialogue with other subjects, revealing their prejudices as much as our own.

The children in the living room were not chided. They were recognised in what they had taken over from their elders. Now they could be seen as continuing to learn – making, shunning and studying history.

Empiricism and rationalism as political ideas?

Are all human beings equal? – Of course, that’s why we call them human. – But how do we know? – Well, it’s not a matter of empirical discovery, it’s our premise. – I see. And so everything else follows?

The opposition between empiricism and rationalism is often introduced as an epistemological dispute, concerning primarily the ways knowledge is acquired, warranted and limited. This is what I learned as a student and what is still taught today. If you’ve studied philosophy for a bit, you will also have heard that this opposition is problematic and coarse-grained when taken as a historical category. But in my view the problem is not that this opposition is too coarse-grained (all categories of that kind are). Rather, the problem lies with introducing it as a mere epistemological dispute. As I see it,* the opposition casts a much wider conceptual net and is rooted in metaphysical and even political ideas. Thus, the opposition is to be seen in relation to a set of disagreements in both theoretical and practical philosophy. In what follows, I don’t want to present a historical or conceptual account, but merely suggest means of recognising this wide-ranging set of ideas and show how the distinction helps us see the metaphysical implications and political choices related to our epistemological leanings.

Let me begin with a simple question: Do you think there is, ultimately, only one true description of the world? If your answer is ‘yes’, I’d be inclined to think that you are likely to have rationalist commitments. Why? Well, because an empiricist would likely reject that assumption for the reason that we might not be able to assess whether we lack important knowledge. Thus, we might miss out on crucial insights required to answer that question in the first place. This epistemological caution bears on metaphysical questions: Might the world be a highly contingent place, subject to sudden or constant change? If this is affirmed, it might not make sense to say that there is one true description of the world. How does this play out in political or moral terms? Rephrasing the disagreement a bit, we might say that rationalists are committed to the idea that the world is ordered in a certain way, while empiricists will remain open as to whether such an order is available to us at all. Once we see explanatory order in relation to world order, it becomes clear that certain commitments might follow for what we are and, thus, for what is good for us. If you believe that we can attain the one true description of the world, you might also entertain the idea that this standard should inform our sciences and our conduct at large. – Of course, this is quite a caricature of what I have in mind. All I want to suggest is that it might be rewarding to look at whether certain epistemological leanings go hand in hand with metaphysical and practical commitments. So let’s zoom in on the different levels in a bit more detail.

(1) Epistemology: As I have already noted, the opposition is commonly introduced as concerning the origin, justification and limits of knowledge. Are certain ideas or principles innate or acquired through the senses? Where do we have to look in order to justify our assumptions? Can we know everything there is to be known, at least in principle, or are there realms that we cannot even sensibly hope to enter? – If we focus on the question of origin, we can already see how the opposition between empiricism and rationalism affects the pervasive nature-nurture debates: Are certain concepts and the related abilities acquired through learning within a certain (social) environment, or are the crucial elements given to us from the get-go? Now, let’s assume you’re a rationalist and think that our conceptual activity is mostly determined from the outset. Doesn’t it follow from this that you also assume that we are equal in our conceptual capacities? And doesn’t it also follow that rules of reasoning and standards of rationality are the same for all (rather than owing, say, to cultural contexts)? – While the answers are never straightforward, I would assume at least certain leanings in one direction or another. But while such leanings might already inform political choices, it is equally important to see how they relate to other areas of philosophy.

(2) Metaphysics: If you are an empiricist and assume that the main sources of our knowledge are our (limited) senses, this often goes hand in hand with epistemic humility and the idea that we cannot explain everything. Pressed why you think so, you might find yourself inclined to say that the limits of our knowledge have a metaphysical footing. After all, if we cannot say whether an event is fully explicable, might this not be due to the fact that the world is contingent? Couldn’t everything have been otherwise, for instance because God interferes in events here and there? In other words, if you don’t assume there to be a sufficient reason for everything, this might be because you accept brute facts. Accordingly, the world is a chancy place, and what our sciences track might be good enough to get by, but never provide the certainty that is promised by our understanding of natural laws. Depending on the historical period, such assumptions often go hand in hand with more or less explicit forms of essentialism. The lawful necessities in nature might be taken to relate to the way things are. Now essences are not only taken to determine what things are, but also how they ought to be. – Once you enter the territory of essentialism, then, it is only a small step to leanings regarding norms of being (together), of rationality, and of goodness.

(3) Theology / Sources of Normativity: If you allow for an essentialist determination of how things are and ought to be, this immediately raises the question of the sources of such essences and norms. Traditionally, we often find this question addressed in the opposition between theological intellectualism (or rationalism) and voluntarism: Intellectualists assume that norms of being and acting are prior to what God wills. So even God is bound by an order prior to his will. God acts out of reasons that are at least partly determined by the way natural things and processes are set up. By contrast, voluntarists assume that something is rational or right because God wills it, not vice versa. It is clear how this opposition rhymes with that of rationalism and empiricism: The rationalist assumes one order that even binds God. The empiricist remains epistemically humble, because she believes that rationality is fallible. Perhaps she believes this because she assumes that the world is a chancy place, which in turn might be owing to the idea that the omnipotent God can intervene anytime. It is equally clear how this opposition might translate into (lacking) justifications of moral norms or political power. – Contrary to what is often assumed in the wake of Blumenberg and others, this doesn’t mean that voluntarism or empiricism straightforwardly translate into political absolutism. It is hardly ever a particular political idea that is endorsed as a result of empiricist or rationalist leanings. Nevertheless, we will likely find elements that play out in the justification of different systems.**

Summing up, we can see that certain ideas in epistemology go hand in hand with certain metaphysical as well as moral and political assumptions. The point is not to argue for systematically interwoven sets of doctrines, but to show that the opposition of empiricism and rationalism is so much more than just a disagreement about whether our minds are “blank slates”. Our piecemeal approach to philosophical domains might have its upsides, but it blurs our vision when it comes to the tight connections between theoretical and practical questions, connections that were clearly more obvious to our historical predecessors. Seen this way, you might try and see whether you’ll find pertinently coherent assumptions in historical or current authors or in yourself. I’m not saying you’re inconsistent if you diverge from a certain set of assumptions. But it might be worth asking if and why you conform or diverge.

_____

* A number of ideas alluded to here would never have seen the light of day without numerous conversations with Laura Georgescu.

** See my posts on Ockham’s and Wittgenstein’s voluntarism for more details.

Solitude standing. How I remain a solipsist (and probably you, too)

“… solipsism strictly carried out coincides with pure realism. The I in solipsism shrinks to an extensionless point and there remains the reality co-ordinated with it.” Wittgenstein, TLP 5.64

When was the last time you felt really and wholly understood? If this question is meaningful, then there are such moments. I’d say it does happen, but very rarely. If things move in a good direction, there is an overlap or some contiguity or a fruitful friction in your conversation. Much of the time, though, I feel misunderstood or I feel that I have misunderstood others. – Starting from such doubts, you could take this view to its extremes and argue that only you understand yourself or, more extreme still, that there is nothing external to your own mind. But I have to admit that I find these extreme brands of solipsism, as often discussed in philosophy, rather boring. They are highly implausible and don’t capture what I think is a crucial idea in solipsism. What I find crucial is the idea that each of us is fundamentally alone. However, it’s important to understand in what sense we are alone. As I see it, I am not alone in the sense that only I know myself or only my mind exists. Rather, I am alone insofar as I am different from others. Solitude, then, is not merely a feeling but also a fact about the way we are.* In what follows, I’d like to suggest reasons for embracing this view and to show how its acknowledgement might actually make us more social.

Throwing the baby out with the bathwater. – In 20th-century philosophy, solipsism has often had a bad name. Solipsism was and is mostly construed as the view that subjective experience is foundational. So you might think that you can only be sure about what’s going on in your own mind. If you hold that view, people will ridicule you as running into a self-defeating position, because subjective states afford no criteria to distinguish between what seems right and what is right. Having rejected subjective experience as a foundation for knowledge or theories of linguistic meaning, many people seemed to think it was a bad idea altogether. This led to an expulsion of experience from many fields in philosophy. Yes, it does seem misguided to build knowledge or meaning on subjective experience. But that doesn’t stop experience from playing an important part in our (mental) lives. Let me illustrate this issue a bit more so as to show where I see the problem. Take the word “station”. For the (public) meaning of this word, it doesn’t matter what your personal associations are. You might think of steam trains or find the sound of the word a bit harsh, but arguably nothing of this matters for understanding what the word means. And indeed, it would seem a bit much if my association with steam trains were a necessary ingredient for mastering the concept or using it in communication. This is a bit like saying: If we want to use the word “station” to arrange a meeting point, it doesn’t matter whether you walk to the station through the village or take the shortcut across the field. And yes, it doesn’t matter for the meaning or success of our use of the word whether you cut across the field. But hang on! While it doesn’t matter for understanding the use of the word, it does matter for understanding my interlocutor. Thinking of steam trains is different from not thinking of them. Cutting across the field is different from walking through the village. This is a clear way in which the experience of interlocutors matters. Why? Well, because it is different. As speakers, we have a shared understanding of the word “station”; as interlocutors we have different experiences and associations we connect with that word. As I see it, it’s fine to say that experience doesn’t figure in the (public) meaning. But it is problematic to deny that the difference in experience matters.

A typical objection to this point is that private or subjective experience cannot be constitutive for meaning. But this goes only so far. As interlocutors, we are not only interested in understanding the language that someone uses, but also the interlocutor who is using it. This is not an easy task. For understanding language is rooted in grasping sameness across different contexts, while understanding my interlocutor is rooted in acknowledging difference (in using the same words). This is not a point about emphatic privacy or the idea that our experience constitutes meaning (it doesn’t). It’s a point about how differences can play out in practical interaction. To return to the earlier example: “Let’s go to the station” can mean very different things if one of you wants to go jointly but it turns out you have different routes in mind. So understanding the interlocutor involves not only a parsing of the sentence, but an acknowledgement of the differences in association. It requires acknowledging that we relate different experiences or expectations to this speech act. So while we have a shared understanding of language, we often lack agreement in associations. It is this lack of agreement that can make me vastly different from others. Accordingly, what matters in my understanding of solipsism is not that we have no public language (we do), but that we are alone (to some degree) with our associations and experiences.

Arguably, these differences matter greatly in understanding or misunderstanding others. Let me give an example: Since I started blogging, I can see how often people pick one or two ideas and run with them. Social media allow you to test this easily. Express an opinion and try to predict whether you’ll find yourself in agreement with at least a fair number of people. Some of my predictions failed really miserably. But even if predictions are fulfilled, most communication situations lack a certain depth of understanding. Why is this the case? A common response (especially amongst analytically inclined philosophers) is that our communication lacks clarity. If this were true, we should improve our ways of communicating. But if I am right, this doesn’t help. What would help is acknowledging the differences in experience. Accordingly, my kind of solipsism is not saying: Only I know myself. Or: Only my mind exists. Rather it says: I am different (from others).

This “differential solipsism” is clearly related to perspectivism and even standpoint theory. However, in emerging from the acknowledgement of solitude, it has a decidedly existential dimension. If a bit of speculation is in order, I would even say that the tendency to shun solipsism might be rooted in the desire to escape from solitude by denying it. It’s one thing to acknowledge solitude (rooted in difference); it’s another thing to accept the solitary aspects of our (mental) lives. Let’s look more closely at how these aspects play out.

Even if philosophers think that experience doesn’t figure in the foundations of knowledge and meaning, it figures greatly in many of our interactions.** We might both claim to like jazz, but if we go to a concert, it might be a disappointment when it turns out that we like it for very different reasons. So you might like the improvisations, while I don’t really care about this aspect, but am keen on the typical sound of a jazz combo. If the concert turns out to feature one but not the other aspect, our differences will result in disagreement. Likewise, we might disagree about our way to the station, about ways of eating dinner, etc. Now as I see it, the solitude or difference we experience in such moments doesn’t sting because of the differences themselves. What makes such moments painful is rather that we endure and paper over these differences without acknowledging them.

If I am right, then I don’t feel misunderstood because you don’t happen to care about the sound of the combo. I feel misunderstood because the difference remains unacknowledged. Such a situation can typically spiral into a silly kind of argument about “what really matters”: the sound or the improvisation. But this is just silly: what matters for our mutual understanding is the difference, not one of the two perspectives. In a nutshell: True understanding does not lie in agreement, but in the detailed acknowledgement of disagreement.***

But why, you might ask, should this be right? Why would zooming in on differences in association or experience really amend the situation? The reason might be given in Wittgenstein’s claim that solipsism ultimately coincides with realism. How so? Well, acknowledging the different perspectives should hopefully end the struggle over the question of which perspective is more legitimate. Can we decide on the right way to the station? Or on the most salient aspect in a jazz concert? No. What we can do is articulate all the perspectives, acknowledging the reality that each view brings to the fore. (If you like, you can imagine all the people in the world articulating their different experiences, thereby bringing out “everything that is the case.”)

Writing this, I am reminded of a claim Evelina Miteva made in a conversation about writing literature: The more personal the description of events is, the more universal it might turn out to be. While this sounds paradoxical, the realism of differential solipsism makes palpable why this is true. The clear articulation of a unique experience does not block understanding. Quite the contrary: It allows us to localise that experience in opposition to different experiences of the same phenomenon. In all these cases, we might experience solitude through difference, but we will not feel lonely for being invisible.

_____

* Of course, the title “Solitude standing” is also a nod to the great tune by Suzanne Vega.

** In this sense, degrees of privacy can be cashed out in degrees of intimacy between interlocutors.

*** And once again, I am reminded of Eric Schliesser’s discussion of Liam Brights’s post on subjectivism, hitting the nail on the following head: “Liam’s post (which echoes the loveliest parts of Carnap’s program with a surprisingly Husserlian/Levinasian sensibility) opens the door to a much more humanistic understanding of philosophy. The very point of the enterprise would be to facilitate mutual understanding. From the philosophical analyst’s perspective the point of analysis or conceptual engineering, then, is not getting the concepts right (or to design them for ameliorative and feasible political programs), but to find ways to understand, or enter into, one’s interlocutor life world.”

Are philosophical classics too difficult for students?

Say you would like to learn something about Kant: should you start by reading one of his books or rather get a good introduction to Kant? Personally, I think it’s good to start with primary texts, get confused, ask questions, and then look at the introductions to see some of your questions discussed. Why? Well, I guess it’s better to have a genuine question before looking for answers. However, even before the latest controversy on Twitter (amongst others between Zena Hitz and Kevin Zollman) took off, I had been confronted with quite different views. Taking it as an opposition between extreme views, you could ask whether you want to make philosophy about ideas or about people (and their writings). It’s probably inevitable that philosophy ends up being about both, but there is still the question of what we should prioritise.

Arguably, if you expose students to the difficult original texts, you might frighten them off. Thus, Kevin Zollman writes: “If I wanted someone to learn about Kant, I would not send them to read Kant first. Kant is a terrible writer, and is impossible for a novice to understand.” Accordingly, he argues that what should be prioritised is the ideas. In response, Zena Hitz raises a different educational worry: “You’re telling young people (and others) that serious reading is not for them, but only for special experts.” Accordingly, she argues for prioritising the original texts. As Jef Delvaux shows in an extensive reflection, both views touch on deeper problems relating to epistemic justice. A crucial point in his discussion is that we never come purely or unprepared to a primary text anyway. So an emphasis on the primary literature might be prone to a sort of givenism about original texts.

I think that all sides have a point, but when it comes to students wanting to learn about historical texts, there is no way around looking at the original. Let me illustrate my point with a little analogy:

Imagine you want to study music and your main instrument is guitar. It is with great excitement that you attend courses on the music of Bach, whom you adore. The first part is supposed to be on his organ works, but already the first day is a disappointment. Your instructor tells you that you shouldn’t listen to Bach’s organ pieces themselves, since they might be far too difficult. Instead you’re presented with a transcription for guitar. Well, that’s actually quite nice because this is indeed more accessible even if it sounds a bit odd. (Taken as an analogy to reading philosophy, this could be a translation of an original source.) But then you look at the sheets. What is this? “Well”, the instructor goes on, “I’ve reduced the accompaniment to the three basic chords. That makes it easier to reproduce it in the exam, too. And we’ll only look at the main melodic motif. In fact, let’s focus on the little motif around the tonic chord. So, if you can reproduce the C major arpeggio, that will be good enough. And it will be a good preparation for my master class on tonic chords in the pre-classic period.” Leaving this music school, you’ll never have listened to any Bach pieces, but you have wonderful three-chord transcriptions for guitar, and after your degree you can set out on writing three-chord pieces yourself. If only there were still people interested in Punk!

Of course, this is a bit hyperbolic. But the main point is that too much focus on cutting things to ‘student size’ will create an artificial entity that has no relation to anything outside the lecture hall. But while I thus agree with Zena Hitz that shunning the texts because of their difficulties sends all sorts of odd messages, I also think that this depends on the purpose at hand. If you want to learn about Kant, you should read Kant just like you should listen to Bach himself. But what if you’re not really interested in Kant, but in a sort of Kantianism under discussion in a current debate? In this case, the purpose is not to study Kant, but some concepts deriving from a certain tradition. You might then be more like a jazz player who is interested in building a vocabulary: interested, for instance, in how Bach dealt with phrases over diminished chords, and keen to focus on this aspect first. Of course, philosophical education should comprise both a focus on texts and on ideas, but I’d prioritise them in accordance with different purposes.

That said, everything in philosophy is quite difficult. As I see it, a crucial point in teaching is to convey means of finding out where exactly the difficulties lie and why they arise. That requires all sorts of texts: primary, secondary, tertiary, etc.

Why we shouldn’t study what we love

I recognize that I could only start to write about this … once I related to it. I dislike myself for this; my scholarly pride likes to think I can write about the unrelatable, too. Eric Schliesser

Philosophy students often receive the advice that they should focus on topics that they have a passion for. So if you have fallen for Sartre, ancient scepticism or theories of justice, the general advice is to go for one of those. On the face of it, this seems quite reasonable. A strong motivation might predict good results, which, in turn, might motivate you further. However, I think that you might actually learn more by exposing yourself to material, topics and questions that you initially find remote, unwieldy or even boring. In what follows, I’d like to counter the common idea that you should follow your passions and interests, and try to explain why it might help to study things that feel remote.

Let me begin by admitting that this approach is partly motivated by my own experience as a student. I loved and still love to read Nietzsche, especially his aphorisms in The Gay Science. There is something about his prose that just clicks. Yet, I was always sure that I couldn’t write anything interesting about his work. Instead, I began to study Wittgenstein’s Tractatus and works from the Vienna Circle. During my first year, most of these writings didn’t make sense to me: I didn’t see why their authors found what they said significant; most of the terminology and writing style was unfamiliar. In my second year, I made things worse by diving into medieval philosophy, especially Ockham’s Summa Logicae and Quodlibeta. Again, not because I loved these works. In fact, I found them unwieldy and sometimes outright boring. So why would I expose myself to these things? Already at the time, I felt that I was actually learning something: I began to understand concerns that were alien to me; I learned new terminology; I learned to read Latin. Moreover, I needed to use tools, secondary literature and dictionaries. And for Ockham’s technical terms, there often were no translations. So I learned to move around in the dark. There was no passion for the topics or texts. But speaking with hindsight (and ignoring a lot of frustration along the way), I think I discovered techniques and ultimately even a passion for learning, for familiarising myself with stuff that didn’t resonate with me in the least. (In a way, it seemed to turn out that it’s a lot easier to say interesting things about boring texts than to say even boring things about interesting texts.)

Looking back at these early years of study, I’d now say that I discovered a certain form of scholarly explanation. While reading works I liked was based on a largely unquestioned understanding, reading these unwieldy new texts required me to explain them to myself. This, in turn, prompted two things: To explain these texts (to myself), I needed to learn about the new terminology etc. Additionally, I began to learn something new about myself. Discovering that certain things felt unfamiliar to me while others seemed familiar meant that I belonged to one kind of tradition rather than another. Make no mistake: Although I read Nietzsche with an unquestioned familiarity, this doesn’t mean that I could have explained, say, his aphorisms any better than the strange lines of Wittgenstein’s Tractatus. The fact that I thought I understood Nietzsche didn’t give me any scholarly insights about his work. So on top of my newly discovered form of explanation I also found myself in a new relation to myself or to my preferences. I began to learn that it was one thing to like Nietzsche and quite another to explain Nietzsche’s work, and still another to explain one’s own liking (perhaps as being part of a tradition).

So my point about not studying what you like is a point about learning, learning to get oneself into a certain mode of reading. Put more fancily: learning to do a certain way of (history of) philosophy. Being passionate about some work or way of thinking is something that is in need of explanation, just as much as not being passionate about something and finding it unfamiliar needs explaining. Such explanations are greatly aided by alienation. As I said in an earlier post, a crucial effect of alienation is a shift of focus. You can concentrate on things that normally escape your attention: the logical or conceptual structures, for instance, or ambiguities; things that seemed clear get blurred and vice versa. In this sense, logical formalisation or translation are great tools of alienation that help you to raise questions and generally take an explanatory stance, even towards your most cherished texts.

As a student, I found that discovering this mode of scholarly explanation instilled pride, a pride that can be hurt when explanations fail or evade us. It was remembering this kind of pain, described in the motto of this post, that prompted these musings. There is a lot to be said for aloof scholarship and the pride that comes with it, but sometimes it just doesn’t add up, because there are some texts that require a more passionate or intuitive relation before we can attain a scholarly stance towards them. If the passion can’t be found, it might have to be sought. Just like our ears have to be trained before we can appreciate some forms of, say, very modern music “intuitively”.

Why using quotation marks doesn’t cancel racism or sexism. With a brief response to Agnes Callard

Would you show an ISIS video, depicting a brutal killing of hostages, to a survivor of their attacks? Or if you prefer a linguistic medium: would you read Breivik’s Manifesto to a survivor of his massacre? – Asking these questions, I’m assuming that none of you would be inclined to endorse these items. That’s not the point. The question is why you would not present such items to a survivor or perhaps indeed to anyone. My hunch is that you would not want to hurt or harm your audience. Am I right? Well, if this is even remotely correct, why do so many people insist on continuing to present racist, sexist or other dehumanising expressions, such as the n-word, to others? And why do we decry the take-down of past authors as racists and sexists? Under the label of free speech, of all things? I shall suggest that this kind of insistence relies on what I call the quotation illusion, and I hope to show that the underlying distinction between use and mention doesn’t really work for this purpose.

Many people assume that there is a clear distinction between use and mention. When saying that “stop” has four letters, I’m not using the expression (to stop or alert you). Rather, I am merely mentioning the word to talk about it. Similarly, embedding a video or passages from a text into a context in which I talk about these items is not a straightforward use of them. I’m not endorsing what these things supposedly intend to express or achieve. Rather, I am embedding them in a context in which I might, for instance, talk about the effects of propaganda. It is often assumed that this kind of “going meta” or mentioning is categorically different from using expressions or endorsing statements. As I noted in an earlier post, if I use an insult or sincerely threaten people by verbal means, I act and cause harm. But if I consider a counterfactual possibility or quote someone’s words, my expressions seem clearly detached from action. However, the relation to possible action is what contributes to making language meaningful in the first place. Even if I merely quote an insult, you still understand that quotation in virtue of understanding real insults. In other words, understanding such embeddings or mentions rides piggy-back on understanding straightforward uses.

If this is correct, then the difference between use and mention is not a categorical one but one of degrees. Thus, the idea that quotations are completely detached from what they express strikes me as illusory. Of course, we can and should study all kinds of expressions, including expressions of violence. But their mention or embedding should never be casual or justified by mere convention or tradition. If you considered showing that ISIS video, you would probably preface your act with a warning. – No? You’re against trigger warnings? So would you explain to your audience that you were just quoting or ask them to stop shunning our history? And would you perhaps preface your admonitions with a defense of free speech? – As I see it, embedded mentions of dehumanising expressions do carry some of the demeaning attitudes. So exposing others to them merely to make a point about free speech strikes me as verbal bullying. However, this doesn’t mean that we should stop quoting or mentioning problematic texts (or videos). It just means that prefacing such quotations with pertinent warnings is an act of basic courtesy, not coddling.

The upshot is that we cannot simply rely on a clear distinction between quotation and endorsement, or mention and use. But if this is correct, then what about reading racist or sexist classics? As I have noted earlier, the point would not be to simply shun Aristotle or others for their bigotry. Rather, we should note their moral shortcomings as much as we should look into ours. For since we live in some continuity with our canon, we are to some degree complicit in their racism and sexism.

Yet instead of acknowledging our own involvement in our history, the treatment of problematic authors is often justified by claiming that we are able to detach ourselves from their involvement, usually by helping ourselves to the use-mention distinction. A recent and intriguing response to this challenge comes from Agnes Callard, who claims that we can treat someone like Aristotle as if he were an “alien”. We can detach ourselves, she claims, by interpreting his language “literally”, i.e. as a vehicle “purely for the contents of his belief” and as opposed to “messaging”, “situated within some kind of power struggle”. Taken this way, we can grasp his beliefs “without hostility”, and the benefits of reading come “without costs”. This isn’t exactly the use-mention distinction. Rather, it is the idea that we can entertain or consider ideas without involvement, force or attitude. In this sense, it is a variant of the quotation illusion: Even if I believe that your claims are false or unintelligible, I can quote you – without adding my own view. I can say that you said “it’s raining” without believing it. Of course I can also use an indirect quote or a paraphrase, a translation and so on. Based on this convenient feature of language, historians of philosophy (often including myself) fall prey to the illusion that they can present past ideas without imparting judgment. Does this work?

Personally, I doubt that the literal reading Callard suggests really works. Let me be clear: I don’t doubt that Callard is an enormously good scholar. Quite the contrary. But I’m not convinced that she does justice to the study that she and others are involved in when specifying it as a literal reading. Firstly, we don’t really hear Aristotle literally but mediated through various traditions, including quite modern ones, that partly even use his works to justify their bigoted views. Secondly, even if we could switch off Aristotle’s political attitudes and grasp his pure thoughts, without his hostility, I doubt that we could shun our own attitudes. Again, could you read Breivik’s Manifesto, ignoring Breivik’s actions, and merely grasp his thoughts? Of course, Aristotle is not Breivik. But if literal reading is possible for one, then why not for the other?

The upshot is: once I understand that a way of speaking is racist or sexist, I cannot unlearn this. If I know that ways of speaking hurt or harm others, I should refrain from speaking this way. If I have scholarly or other good reasons to quote such speech, I shouldn’t do so without a pertinent comment. But I agree with Callard’s conclusion: We shouldn’t simply “cancel” such speech or indeed its authors. Rather, we should engage with it, try and contextualise it properly. And also try and see the extent of our own involvement and complicity. The world is a messy place. So are language and history.

Two kinds of philosophy? A response to the “ex philosopher”

Arguably, there are at least two different kinds of philosophy: The first kind is what one might call a spiritual practice, building on exercises or forms of artistic expression and aiming at understanding oneself and others. The second kind is what one might call a theoretical endeavour, building on concepts and arguments and aiming at explaining the world. The first kind is often associated with traditions of mysticism, meditation and therapy; the second is related to theory-building, the formation of schools (scholasticism) and disciplines in the sciences (and humanities). If you open any of the so-called classics, you’ll find representations of both forms. Descartes’ Meditations offer you meditative exercises that you can try at home alongside a battery of arguments engaging with rival theories. Wittgenstein’s Tractatus closes with the mystical and the advice to shut up about the things that matter most, after opening with an account of how language relates to the world. However, while both kinds are present in many philosophical works, only the second kind gets recognition in professional academic philosophy. In what follows, I’d like to suggest that this lopsided focus might undermine our discipline.

Although I think that these kinds of philosophy are ultimately intertwined, I’d like to begin by trying to make the difference more palpable. Let’s start with a contentious claim: I think that most people are drawn into philosophy by the first kind, that is, by the desire to understand themselves, while academic philosophy trains people in the second kind, that is, in handling respectable theories. People enter philosophy with a first-person perspective and leave or become academics through mastering the third-person perspective. By the way, this is why most first-year students embrace subjectivism of all kinds and lecturers regularly profess to be “puzzled” by this. Such situations thrive on misunderstandings: for the most part, students don’t mean to endorse subjectivism as a theory; they simply and rightly think that perspective matters.* Now, this is perhaps all very obvious. But I do think that this transition from the one kind to the other kind could be made more transparent. The problem I see is not the transition itself, but the dismissal of the first kind of philosophy. As I noted earlier, the two kinds of philosophy require one another. We shouldn’t rip the Tractatus apart to exclude either the mysticism or the theory. Whether you are engaging in the first or second kind is more a matter of emphasis. However, interests in gatekeeping and unfounded convictions about what is and what isn’t philosophy often entail practices of exclusion, frequently with pernicious effects.

Such sentiments were stirred when I read the confessions of an ex philosopher that are currently making the rounds on social media. The piece struck many chords, quite different ones. I thought it was courageous and truthful as well as heart-breaking and enraging. Some have noted that the piece is perhaps more the complacent rant of someone who was never interested in philosophy or fellow philosophers to begin with. Others saw its value in highlighting what might be called a “phenomenology of failure” (as Dirk Koppelberg put it). These takes are not mutually exclusive. It’s not clear to me whether the author had the distinction between the two kinds of philosophy in mind, but the piece surely invokes something along these lines:

“Philosophy has always been a very personal affair. Well, not always. When it stopped being a personal affair, it also stopped being enjoyable. It became a performance.

… Somewhat paradoxically, academia made me dumber, by ripening an intellectual passion I loved to engage with into a rotten performance act I had to dread, and that I hurried to wash out of my mind (impossible ambition) when clocking out. Until the clocking out became the norm. Now I honestly do not have insightful opinions about anything — not rarefied philosophical problems nor products nor popular culture nor current events.”

What the author describes is not merely the transition from one approach to another; it is transition plus denial. It’s the result of the professional academic telling off the first-year student for their overly enthusiastic commitment to “subjectivism”. While we can sometimes observe this happening in the lecture hall, most of this denial happens within the same person: the supposed adult telling off themselves, that is, the playful child within. No doubt, sometimes such a transition is necessary and called for. But the denial can easily kill the initial motivation. – That said, the author also writes that he has “never enjoyed doing philosophy.” It is at this point (and other similar ones) that I am torn between different readings, but according to the reading I am now proposing, the “philosophy” he is talking about is a widespread type of academic philosophy.** What he is talking about, then, is that he never had an interest in a kind of philosophy that would deny the initial enthusiasm and turn it into a mere performance.

Now you might say that this is just the course of a (professionalised) life. But I doubt that we should go along with this dismissal too readily. Let me highlight two problems, namely unfounded gatekeeping and impoverished practices:

  • The gatekeeping has its most recognisable expression in the petulant question “Is this philosophy?” Of course, it depends on who is asking, but the fact that most texts from the mystic tradition or many decidedly literary expressions of philosophy are just ignored bears witness to the ubiquitous exclusion of certain philosophers. It certainly hit Hildegard of Bingen, parts of Nietzsche and bits of Wittgenstein. But if an exaggerated remark is in order, soon anything that doesn’t follow the current style of paper writing will be considered more or less “weird”. In this regard, the recent attempts at “diversifying the canon” often strike me as enraging. Why do we need to make a special case for re-introducing work that is perfectly fine? In any case, the upshot of dismissing the first kind of philosophy is that a lot of philosophy gets excluded, for unconvincing reasons.
  • You might think that such dismissal only concerns certain kinds of content or style. But in addition to excluding certain traditions of philosophy, there is a subtler sort of dismissal at work: As I see it, the denial of philosophy as a (spiritual) practice or a form of life (as Pierre Hadot put it) pushes personal involvement to the fringes. Arguably, this affects all kinds of philosophy. Let me give an example: Scepticism can be seen as a kind of method that allows us to question knowledge claims and eventually advances our knowledge. But it can also be seen as a personal mental state that affects our decisions. As I see it, the methodological approach is strongly continuous with, if not rooted in, the mental state. Of course, sometimes it is important to decouple the two, but a complete dismissal of the personal involvement cuts the method off from its various motivations. Arguably, the dismissal of philosophy as a spiritual (and also political) practice creates a fiction of philosophy. This fiction might be continuous with academic rankings and pseudo-meritocratic beliefs, but it is dissociated from the involvement that motivates all kinds of philosophical exchange.

In view of these problems, I think it is vital to keep a balance between what I called two kinds but what is ultimately one encompassing practice. Otherwise we undermine what motivates people to philosophise in the first place.

____

* Liam Bright has a great post discussing the often lame counterarguments to subjectivism, making the point that I want to make in a different way by saying that the view is more substantial than it is commonly given credit for: “The objection [to subjectivism] imagines a kind of God’s-eye-perspective on truth and launches their attack from there, but the kind of person who is attracted to subjectivism (or for that matter relativism) is almost certainly the kind of person who is suspicious of the idea of such a God’s eye perspective. Seen from within, these objections simply lose their force, they don’t take seriously what the subjectivist is trying to do or say as a philosopher of truth.”

Eric Schliesser provides a brief discussion of Liam’s post, hitting the nail on the following head: “Liam’s post (which echoes the loveliest parts of Carnap’s program with a surprisingly Husserlian/Levinasian sensibility) opens the door to a much more humanistic understanding of philosophy. The very point of the enterprise would be to facilitate mutual understanding. From the philosophical analyst’s perspective the point of analysis or conceptual engineering, then, is not getting the concepts right (or to design them for ameliorative and feasible political programs), but to find ways to understand, or enter into, one’s interlocutor life world.”

** Relatedly, Ian James Kidd distinguishes between philosophy and the performative craft of academic philosophy in his post on “Being good at being good at philosophy”.

Questions – an underrated genre

Looking at introductions to philosophy, I realise that we devote much attention to the reconstruction of arguments and critical analysis of positions. Nothing wrong with that. Yet, where are the questions? Arguably, we spend much of our time raising questions, but apart from very few exceptions, questions are rarely treated as a genre of philosophy. (However, here is an earlier post, prompted by Sara Uckelman’s approach, on which she elaborates here. And Lani Watson currently runs a project on philosophical questions.) Everyone who has tried to articulate a question in public will have experienced that it is not all that simple, at least not if you want to go beyond “What do you mean?” or “What time is it?” In what follows, I hope to get a tentative grip on this genre by looking back at my recent attempt to teach students to ask questions.

This year, I gave an intense first-year course on medieval philosophy.* I say “intense” because it comprises eight hours per week: two hours of lecture and two hours of reading seminar on both Thursday and Friday mornings. It’s an ideal setting to do both: introduce material and techniques for approaching it, and apply those techniques through close reading in the seminars. Often students are asked to write a small essay as a midterm exam. Given the dearth of introductions to asking questions, I set a “structured question” instead. The exercise looks like this:

The question will have to be about Anselm’s Proslogion, chapters 2-4. Ideally, the question focuses on a brief passage from that text. It must be no longer than 500 words and contain the following elements:

– Topic: say what the question is about;
– Question: state the actual question (you can also state the presupposition before stating the question);
– Motivation: give a brief explanation why the question arises;
– Answer: provide a brief anticipation of at least one possible answer.

What did I want to teach them? My declared goal was to offer a way of engaging with all kinds of texts. When doing so I assumed that understanding (a text) can be a general aim of asking questions. I often think of questions as a means of making contact with the text or interlocutor. For a genuine question brings two aspects together: on the one hand, there is your question, on the other, there is that particular bit of the text that you don’t understand or would like to hear more about. But … that’s more easily said than done. During the lectures and seminars we would use some questions from students to go through the motions. What I noticed almost immediately is that this was obviously really hard. One day, a student came up and said:

“Look, this focus on questions strikes me as a bit much. I’m used to answering questions, not raising them. It seems to require knowledge that I don’t have. As it is, it is rather confusing and I feel like I’m drowning out at sea.”

I’m quoting from memory, but the gist should be clear. And while I now think of a smallish group of students as particularly brave and open, this comment probably represents the attitude of the majority. The students wanted guidance, and what I wanted to offer them instead was tools to guide themselves. I had and have a number of different reactions to the student’s confession. My first thought was that this is a really brave stance to take: being so open about one’s own limits and confusion is rare even among established people. At the same time, I began to worry about my approach. To be sure, the confusion was caused intentionally to some degree, and I said so. But for this approach to work one has to ensure that asking questions eventually provides tools to orient oneself and to recognise the reasons for the confusion. Students need to learn to consider questions such as: Why am I confused? Could it be that my own expectations send me astray? What am I expecting? What is it that the text doesn’t give me? Arguably, they need to understand their confusion to make contact with the text. In other words, questions need to be understood. But this takes time and, above all, trust that the confusion lands us somewhere in the end.

When I taught this kind of course in the past, I did what the student seemed to miss now: I gave them not only guiding questions to provide a general storyline through the material, but also detailed advice on what to look for in the texts. While that strikes me as a fine way of introducing material, it doesn’t help them develop questions autonomously. In any case, we had to figure out the details of this exercise. So what is behind the four elements in the task above?

Since questions are often used for other purposes, such as masking objections or conveying irritation, it is vital to be explicit about the aim of understanding. Thus, finding the topic had to be guided by a passage or concept that left the questioner genuinely confused. Admitting to such confusion is trickier than it might seem, because it requires you to zoom in on your lack of understanding or knowledge. You might think that the topic just is the passage. But it’s important to attempt a separate formulation for two reasons: firstly, it tells the listener or reader what matters to you; secondly, it should provide coherence in that the question, motivation and answer should all be on the same topic.

In the beginning, I spent most of the time analysing two items: the motivation and the formulation of the actual question. After setting out an initial formulation of the question, students had to spell out why the question arises. But why do questions arise? In a nutshell, most questions arise because we make a presupposition or have an expectation that the text does not meet. (Here is a recent post with more on such expectations.) A simple example is that you expect evidence or an argument for a claim p, while the author might simply say that p is self-evident. You can thus begin by jotting down something like “Why would p be self-evident, according to the author?” This means that now, at last, you can talk about something that you do know: your expectations. Ideally, this provides a way of spelling out what you expect and thus what the text lacks (from that perspective). Going from there, the tentative answer will have to provide a reason that shows why p is self-evident for the author. Put differently, while the motivation brings out your presuppositions, the answer is an attempt at spelling out the presuppositions guiding the text (or author). With hindsight, you can now also fix the topic, e.g. self-evidence.

But things are never that straightforward. What I noticed after a while was that many students went off in a quite different direction when it came to answering the question. Rather than addressing the author’s possible reasons, the students began to spell out why the author was wrong. At least during the first lectures, they would sometimes not try to see what reasons the author could invoke. Instead, they would begin by stating why their own presupposition was right and the author wrong, whatever the author’s reasons.

This is not surprising. Most discussions inside and outside of philosophy have exactly this structure. Arguably, most philosophy is driven by an adversarial culture rather than by the attempt to understand others. A question is asked, not to target a difficulty in understanding, but to justify the refutation of the interlocutor’s position. While this approach can be one legitimate way of interacting, it appears particularly forced in engaging with historical texts. Trying to say why Anselm or any other historical author was wrong, by contemporary standards, just is a form of avoiding historical analysis. You might as well begin by explaining your ideas and leave Anselm out of the equation altogether.

But how can an approach to understanding the text (rather than refuting it) be encouraged? If you start out from the presupposition that Anselm is wrong, an obvious way forward is to ask for the reasons that make his position seem right. It seems clear to me that this requires answering the question on Anselm’s behalf. It is at this point that we need to move from training skills (of asking questions) to imparting (historical) knowledge. Once the question arises why an author claims that p, and p does not match our expectations, we need to teach students to recognise certain moves as belonging to different traditions and ways of doing philosophy, ways that do not square with our current culture. My hope is that, if we begin by teaching students to raise questions, it will become more desirable to acquire the knowledge relevant to providing answers and to understanding our own questions.

_____

* I’ve really enjoyed teaching this course and think I’ve learned a lot from it. Special thanks to my patient students, particularly to my great TAs, Elise van de Kamp and Mark Rensema, whose ideas helped me enormously in shaping the course. – Now, if you’ve read this far, I’d like to thank you, too, for bearing with me. Not only for the length of this post. Today is a special occasion: this is post number 101.

Dismissing (religious) belief. On a problematic kind of anachronism

I’m currently teaching an intro course on medieval philosophy. Although I really enjoy teaching medieval philosophy, I am always somewhat shocked at the generally dismissive attitude towards the religious or theological aspects of the material. A widespread assumption is that we can and should bypass such issues. Why bother with God or angels, if we can focus on philosophy of language or ethics? That said, there is no reason to blame students. Looking at various histories of philosophy, it’s clear that the selection of material often follows what is currently deemed most relevant. In fact, bits of my own work might serve as a case in point. However, in what follows I’d like to present three reasons for the claim that, in bypassing such aspects, we miss out on core ideas, not only in history of philosophy.

(1) The illusion of modernity. – If you ask people why they think we can happily ignore theological aspects, a common answer is that they are indeed no longer relevant, because the world is supposedly progressing towards an increasingly enlightened state of a scientific rather than a religious view of the world. This is of course not the last word. Criticisms of progress narratives aside, it is also clear that we live in a world that is currently deeply conflicted between adherents of religion and a scientific worldview. Moreover, this assumption makes us overlook that this conflict is a deeply medieval one, already attested in the writings of Augustine, culminating perhaps in the famous condemnation of 1277, and continuing well into what is known as modern philosophy. Thus, the idea that dissociating reason from faith is a trait of the Enlightenment or of modernity is a cherished illusion. After deciding to address this issue head-on in my current course, I made the condemnation of 1277 the first focal point. Amongst other things, it clearly shows that the battlefield of faith versus reason, along with the discussion of different kinds of truth, not to speak of alternative facts, has venerable precedents in the 13th century. In other words, the distinction between adherents of faith versus adherents of science is not a diachronic one (between medieval and modern) but a synchronic one.

(2) Theology is philosophy. – But even if you agree that conflicts of faith versus reason might be relevant even today, you might still deny that they are philosophically significant. If you turn to philosophers of the medieval or other periods, you might go straight to the philosophically interesting stuff. The assumption seems to be that certain problems or topics can be stripped of their theological content without much loss. Going from this assumption, material that cannot be stripped of such overtones is “not philosophy.” One problem with this view is that a number of philosophical systems have notions such as “god” at the core. For a number of medieval and early modern philosophers, their metaphysics are unintelligible without reference to a god. Trying to bypass this means bypassing these metaphysics. The idea of stripping such systems of theological notions strikes me as a consequence of the illusion of modernity. But in fact we find a number of 20th-century or present-day philosophers who rely on such notions. And as is well known by now, readers of Wittgenstein’s Tractatus Logico-Philosophicus, one of the foundational texts of the early analytic tradition, should not ignore his approach to the “mystical” and related ideas. This doesn’t mean that there is no philosophy without theology. But we are prone to serious misunderstanding if we wilfully ignore such foundations.

(3) The significance of belief. – My third and perhaps most important point is that the foundational role of belief is often ignored. Reading, for instance, that Anselm opens his Proslogion with the idea that we have to believe in order to understand, this and other remarks on (religious) belief are often taken as confessions that do not affect the arguments in question. As I see it, such assessments miss a crucial philosophical point: unquestioned belief is foundational for many further (mental or physical) acts. Arguably, there is a long tradition of philosophers (e.g. Augustine, Anselm, Gregory of Rimini, Spinoza, Hume, William James, Davidson) who exposed the foundational role of belief, showing that there are reasons to accept certain assumptions on faith. The need to rely on axioms is not only a trait of the special sciences. Indeed, many aspects of our life depend on the fact that we hold certain unquestioned beliefs. Unless we have startling evidence to the contrary, we’re inclined to believe whatever we perceive. We believe that we weren’t lied to when our parents or other people informed us about our date of birth, and we don’t normally question that there is an external world. Challenging certain beliefs would probably deeply unsettle us, and we certainly wouldn’t begin searching, if we didn’t believe we had a chance of finding what we’re looking for. In this sense, certain beliefs are not optional.

The upshot is that the dismissal of (religious) belief is not only problematic in that it distorts some niches of medieval philosophy. Rather, it’s based on a misconception of our very own standards of rationality, which rely much more on unquestioned beliefs than might meet the eye. So if the dismissal of religious belief is anachronistic, it’s not only distorting our view of the past but also distorting our understanding of current discourses. In this regard, much medieval philosophy should not be seen as strangely invested in religion but rather as strangely familiar, even if unbeknownst to ourselves. As Peter Adamson succinctly put it, for some “a proper use of reason is unattainable without religious commitment.” I agree, and would only add that we might recognise this attitude more readily as our own if we deleted the word “religious”. But that is perhaps more of a verbal matter than we like to believe.