What’s behind the veil of perception? A response to Han Thomas Adriaenssen

Imagine you’re wearing glasses: would you think that your grasp of reality is somehow indirect? I guess not. We assume that glasses aid our vision rather than distort or hinder it. The fact that our vision is mediated by glasses does not make it any less direct than the fact that it is mediated through our eyes. Now imagine that your perception is mediated by what early modern philosophers call “ideas”. Does it follow that our grasp of reality is indirect? Many philosophers think it does. By contrast, I would like to suggest that this is misleading. Ideas make our perceptions no less direct than glasses do.

Both early modern and contemporary critics often take the “way of ideas” to be a route to scepticism. The assumption seems to be that, once perception is mediated through ideas, our thoughts are about the ideas rather than about reality. Han Thomas Adriaenssen’s recent book is original in that it tells the story of this and related debates from Aquinas to the early modern period. In celebration of receiving the JHP book prize, Han Thomas gave a brief interview that aptly summarises the common line of criticism against ideas or the assumption of indirect perception associated with them:

“Okay. So you explore the philosophical problem of ‘perception and representation’ from Aquinas to Descartes; what exactly is the problem?

HTA: ‘Right. So it goes like this: what is going on in your mind when you see something in your environment? Take this chair, for instance. When you see this chair, what’s happening in your mind? One answer is that you form a kind of pictorial representation of the chair. You start drawing a mental image for yourself of the thing in front of you, and you label it: ‘chair’. … But then there is a worry: if this is how it works – if this is how we access the environment cognitively – then that means there is a sort of interface between us and reality. A veil of perceptions, if you will. So what we’re thinking about is not the chair itself, but our picture of the chair.– But that can’t be right!”

Besides summarising the historical criticisms, Han Thomas seems to go along with their doubts. He suggests that metaphors trick us into such problematic beliefs: the “mental image metaphor” comes “naturally”, but brings about “major problems”.

While I have nothing but admiration for the historical analysis presented, I would like to respond to this criticism on behalf of those assuming ideas or other kinds of representational media. Let’s look at the chided metaphor again. Yes, the talk of a mental image suggests that what is depicted is remote and behind the image. But what about the act of drawing the image? Something, presumably our sense organs, is exposed to the things and does some ‘drawing’. So the drawing is not done behind a veil. Rather, the act of drawing serves as a transformation of what is drawn into something that is accessible to other parts of our mind.* Thus, we should imagine a series of transformations until our minds end up with ideas. But if you think of it in those terms, the transformation is not akin to putting something behind a veil. Rather, it is a way of making sensory input available. The same goes for glasses or indeed our eyes. They do not put something behind a veil but make it available in an accessible form. My point is that the metaphor needs to be unpacked more thoroughly. We don’t only have the image; we have the drawing, too.

Following Ruth Millikan’s account of perception,** I would like to argue that the whole opposition of indirect vs direct perception is unhelpful. It has steered both early modern and 20th-century debates in epistemology in fruitless directions. Sense perception is direct (as long as it does not involve inferences through which we explicitly reason that the presence of an idea means the presence of a represented thing). At the same time, sense perception is indirect in that it requires means of transformation that make things available to different kinds of receptors. Thus, the kind of indirectness involved in certain cognitive vehicles does not lead to scepticism any more than the fact that we use eyes to see does.

What early modern philosophers call ideas are just cognitive vehicles, resulting from transformations that make things available to us. If an analogy is called for, I’d suggest relating them not to veils but to glasses. If we unpack the metaphor more thoroughly, what is behind the veil is not only the world but our very own sense organs, which make the world available by processing it through media accessible to our mind. If that evokes sceptical doubts, such doubts might be equally raised whenever you put your glasses on or indeed open your eyes to see.

___

* As Han Thomas himself notes (in the book, not the interview), many medieval authors do not think that representationalism leads to scepticism, and endorse an “epistemic optimism”. I guess these authors could be reconstructed as agreeing with my reply. After all, some stress that species (which could be seen as functionally equivalent to ideas) ought to be seen as a medium quo rather than that which is ultimately cognised.

** Ruth Millikan even claims that language is a means of direct perception: “The picture I want to leave you with, then, is that coming to believe, say, that Johnny has come in by seeing that he has come in, by hearing by his voice that he has come in, and by hearing someone say “Johnny has come in,” are normally equivalent in directness of psychological processing. There is no reason to suppose that any of these ways of gaining the information that Johnny has come in requires that one perform inferences. On the other hand, in all these cases it is likely that at least some prior dedicated representations must be formed. Translations from more primitive representations and combinations of these will be involved. If one insists on treating all translation as a form of inference, then all these require inference equally. In either event, there is no significant difference in directness among them. ”

On taking risks. With an afterthought on peer review

Jumping over a puddle is fun both to try and to watch. It’s a small risk to take, but some puddles are too large to cross… There are greater risks, but whatever the stakes, they create excitement. And in the face of possible failure, success feels quite different. If you play a difficult run on the piano, your listeners will share the relief when you manage to land on the right note in time. The same goes for academic research and writing. If you start out with a provocative hypothesis, people will get excited about the way you mount the evidence. Although at least some grant agencies ask about the risks taken in proposals, risk taking is hardly ever addressed in philosophy or writing guides. Perhaps people think it’s not a serious issue, but I believe it might be one of the crucial elements.

In philosophy, every move worth our time probably involves a risk. Arguing that mistakes or successes depend on their later contextualisation, I have already looked at the “fine line between mistake and innovation.” But how do we get onto that fine line? This, I think, involves taking a risk. Taking a risk in philosophy means saying or doing something that will likely be met with objections. That’s probably why criticising interlocutors is so widespread. But there are many ways of taking risks. Sitting in a seminar, it might already feel risky just to raise your voice and ask a question. You feel you might make a fool of yourself and lose the respect of your fellow students or instructor. But if you make the effort, you might also be met with admiration for going through with an only seemingly trivial point. I guess it’s that oscillation between the possibility of failure and success that also moves the listeners or readers. It’s important to note that risk taking has a decidedly emotional dimension. Jumping across the puddle might land you in the puddle. But even if you don’t make it all the way, you’ll have moved more than yourself.

In designing papers or research projects, risk taking is most of the time rewarded, at least with initial attention. You can make an outrageous-sounding claim like “thinking is being” or “panpsychism is true”. You can present a non-canonical interpretation of a historical figure, arguing, say, that “Hume was a racist” or “Descartes was an Aristotelian”. You can edit or write on the work of a non-canonical figure or provide an uncommon translation of a technical term. This list is not exhaustive, and depending on the conventions of your audience all sorts of moves might be risky. Of course, then there is work to be done. You’ve got to make your case. But if you’re set to make a leap, people will often listen more diligently than when you merely promise to summarise the state of the art. In other words, taking a risk will be seen as original. That said, the leap has to be well prepared. It has to work from elements that are familiar to your audience. Otherwise the risk cannot be appreciated for what it is. On the other hand, mounting the evidence must be presented as feasible. Otherwise you’ll come across as merely ambitious.

Whatever you do, in taking a risk you’ll certainly antagonise some people. Some will be cheering and applauding your courage and originality. Others will shake their heads and call you weird or other endearing things. What to do? It might feel difficult to live with opposition. But if you have two opposed groups, one positive, one negative, you can be sure you’re onto something. Go for it! It’s important to trust your instincts and intuitions. You might make it across the puddle, even if half of your peers don’t believe it. If you fail, you’ve just attempted what everyone else should attempt, too. Unless it’s part of the job to stick to reinventing the wheel.

Now the fact that risks will be met with much opposition but might indicate innovation should give us pause when it comes to peer review. In view of the enormous competition, journals seem to expect authors to comply with the demands of two reviewers. (Reviewer #2 is a haunting meme by now.) A paper that gets one wholly negative review will often be rejected. But if it’s true that risks, while indicative of originality, will incur strong opposition, should we not think that a paper is particularly promising when met with two opposing reviews? Compliance with every possible reviewer seems to encourage risk aversion. Conversely, looking out for opposing reviews would probably change a number of things in our current practice. I guess managing such a process wouldn’t be any easier. So it’s not surprising if things don’t change anytime soon. But such change, if considered desirable, is probably best incentivised bottom-up. And that would mean beginning with teaching.

The fact, then, that a claim or move provokes opposition or even refutation should not be seen as a negative trait. Rather it indicates that something is at stake. It is important, I believe, to convey this message, especially to beginners who should learn to enjoy taking risks and listening to others doing it.

The idea of “leaving” social media might be a category mistake. A response to Justin E. H. Smith

I guess you all know at least variants of this situation: You go to a talk; you really enjoy it and look forward to the discussion. But then there is that facepuller again, lining up to be the first to ask a question. And not only is he (yes, it’s invariably a “he”) going on forever, dulling the mood and offending the speaker, he also makes his question sound so decisively threatening that everyone after him will come back to his destructive point. That’s that. There were great ideas in the talk, but this bully managed to make it all about himself again. This is annoying enough, but imagine this guy is around you for a whole conference, or you even work with him. Or he is your teacher or supervisor. Whatever the situation, that guy is a pain, and he manages to make everything about himself, spoiling most of the fun for everyone else involved. But sure enough, the greatest disappointment is yet to come: when it’s promotion time, that guy won’t be sanctioned. No, he gets the top job. – Social media are a bit like that. We all could have a nice time, but then that guy joined and everything turned nasty. And then it turns out that Facebook (or some such company) is not banning him; it even pays this bully. – In what follows I want to suggest that it doesn’t make sense to leave social media, especially if you are interested in ameliorating the situation or countering such behaviour.

Let me begin with a plea: Don’t leave me alone with that guy! But sadly the number of people leaving social media, especially Facebook, seems to be increasing. I think they all have good reasons, just like they all have good reasons to find the world a nasty place: social media are full of bullshitting bullies, the people who run Facebook or Twitter are no saints either, and users are turned into hapless addicts withering away in their echo chambers. In other words, ordinary people are involved. And that guy is around all the time. My point is: the reasons for leaving social media apply to the world outside social media as well. But this is not because we are dealing with two realms (inside and outside); it is because the technological patterns that pervade social media pervade our lives anyway.

So what are social media? I guess a cynically inclined person might say that they are a by-product of accumulating data. In an intriguing blog post, Justin Smith spells out the dystopian idea of the “tech companies’ transformation of individuals into data sets”. One conclusion he draws strikes me as particularly important: that this transformation destroys “human subjecthood”. The point he makes is that you’re not treated as an individual, but as an item fitting certain patterns of predictions along the lines of: “customers who liked philosophical blog posts were also interested in Martin …”. The result is that, as a social media participant, you might be incentivised to present yourself in line with certain predictions. So should you sell yourself as the wholesome lefty package or is it better to add on some grumpy edges?

As I was thinking back and forth about Justin Smith’s post, it hit me like a hammer: if his observations about the transformation into data sets are apt, then the very idea of “leaving” social media might rest on a category mistake. In Gilbert Ryle’s illustration, a guest at Oxford University walks around all the colleges and the library, and then asks: “But where is the university?” The visitor mistakenly assumes that the university is one of the university buildings. People wanting to leave social media might be doing something similar. They stop using Facebook or Twitter and perhaps also switch on their computers or smartphones less frequently. Thus, they assume that social media are one of the various media or technological items on offer. Leaving Facebook, then, would mean ceasing to use that medium and choosing a different one, such that you might return to writing postcards or letters, or just go for a walk and talk at the people who come your way. But this is not possible. The technology at work in Facebook is not something you can choose to abandon; that technology organises a great part of our lives. It’s spread across every transaction that involves data accumulation. The technology at work is not like one of the colleges; it’s the university!

The technology of data accumulation is pervasive: it is not only ingrained in the way we shop on Amazon, but is the result of a long-standing practice of economising our interactions. Of course you can stop using Facebook or the internet altogether. But if I understand the people who want to leave correctly, they have moral or more personal reasons for doing so. This means they don’t just dislike Facebook; they reject the way of interaction incentivised by the pertinent technology. On the whole, they want to avoid the ills that go along with it. But now compare cars: of course you can stop using a car, but you cannot “leave” or “stop using” the car industry.

You all know that there is an obvious objection: someone’s got to take the first step. And if we all stop using Facebook, then … Then what? I tell you what: then we’ll all start using Bumbook or something else instead. (But Bumbook will be driven by the same technological patterns, not by our good intentions.) Or we just give up on the benefits of using such media altogether. I’m not convinced by either option. – The technology of data accumulation is systemic; like public transport or education it runs through and affects society as a whole. If it is not working properly or is subject to constant (political) abuse, it requires a collective effort to ameliorate the situation. Echoing an idea from Leopold Hess, I’d say that social media are too important to be privatised.

So if we want to minimise the influence of that guy, we shouldn’t tolerate his behaviour. Leaving the room and thus leaving the floor to the bullies won’t help. All the more because you cannot leave social media in the same way that you can leave a room. Of course, sometimes leaving the room is all you want, and it might cure your headache. But it won’t do much else. Countering the ills of social media is a collective political task. Not leaving but getting involved even more might help. What are the most important skills in this? Listening and reading carefully.

Who’s afraid of relativism?

In recent years, relativism has had a particularly bad press. Often chided along with what some call postmodernism, relativism is held responsible for certain politicians’ complacent ignorance or bullshitting. While I’m not alone in thinking that this scapegoating is due to a severe misunderstanding of relativism, even those who should know better join the chorus of condemnation:

“The advance of relativism – the notion that truth is relative to each individual’s standpoint – reached what might be seen as a new low with the recent claim by Donald Trump’s senior adviser Kellyanne Conway that there are such things as “alternative facts”. (She went so far as to cite a non-existent “Bowling Green massacre” to justify Trump’s refugee travel ban, something she later described as a “misspeak”.)” Joe Humphreys paraphrasing Timothy Williamson in the Irish Times, 5 July 2017

If this is what Williamson thinks, he confuses relativism with extreme subjectivism. But I don’t want to dismiss this view too easily, for the worry behind the accusation is real. If people do think that truth is relative to each individual’s standpoint, then “anything goes”. You can claim anything and there are no grounds for me to correct you. If this is truth, there is no truth; the word becomes a meaningless appeal. However, I don’t think that the politicians in question believe in anything as sophisticated as relativism. Following up on some intriguing discussions about the notion of “alternative facts”, I believe that the strategy is (1) to lie by (2) appealing to an (invented) set of states of affairs that has supposedly been ignored. Conway did not assume that she was in possession of her own subjective truth; quite the contrary. Everyone would have seen what she claimed to be the truth, had they cared to look at the right time in the right way. If I am right, her strategy depends on a shared notion of truth. In other words, I guess that Williamson and Conway roughly start out from the same understanding of truth. Bringing in relativism or postmodernism is not helpful when trying to understand the strategy of such politicians.

By introducing the term “alternative facts”, Conway reminds us of the fact (!) that we pick out truths relative to our interests. I think we are right to be afraid of certain politicians. But why are we afraid of relativism? We have to accept that truth, knowledge or morality are relative to a standard. Relativism is the view that there is more than one such standard.* This makes perfect sense. That 2 plus 2 equals 4 is not true absolutely. Arguably, this truth requires agreement on a certain arithmetic system. I think that arithmetic and other standards evolve relative to certain interests. Of course, we might disagree about the details of how to spell out such an understanding of relativism. But it is hard to see what makes us so afraid of it.

Perhaps an answer can be given by looking at how relativism evolved historically. If you look at early modern or medieval discussions of truth, knowledge and morality, there is often a distinction between divine and human concepts. Divine knowledge is perfect; human knowledge is partial and fallible. Divine knowledge sets an absolute standard against which human failure is measured. If you look at discussions in and around Locke, for instance, especially his agnosticism about real essences and divine natural law, divine knowledge is still assumed, but it loses the status of a standard for us. What we’re left with is human knowledge, in all its mediocrity and fallibility. Hume goes further and no longer even appeals to the divine as a remote standard. Our claims to knowledge are seen as rooted in custom. Now if the divine no longer serves as an absolute measure, human claims to knowledge, truth and morality constitute merely one possible standard. There is no absolute standard available. Nominal essences or customs are relative to the human condition: our biological make-up and our interests. The focus on human capacities, irrespective of the divine, is a growing tendency, going hand in hand with an idea of relativism. The “loss” of the absolute is thus owing to a different understanding of theological claims about divine standards. Human knowledge is relative in that it is no longer measured against divine knowledge. If this is correct, relativism emerged (also) as a result of a dissociation of divine and human standards. Why would we be afraid of that?

____

* I’m following Martin Kusch’s definition in his proposal for the ERC project on the Emergence of Relativism: “It is not easy to give a neutral definition of “relativism”: defenders and critics disagree over the question of what the relativist is committed to. Roughly put, the relativist regarding a given domain (e.g. epistemology) insists that judgments or beliefs in this domain are true or false, justified or unjustified, only relative to systems of standards. For the relativist there is more than one such system, and there is no neutral way of adjudicating between them. Some relativists go further and claim that all such systems are equally valid.”

How we unlearn to read

Having been busy with grading again, I noticed a strange double standard in our reading practice and posted the following remark on Facebook and Twitter:

A question for scholars. – How can we spend a lifetime on a chapter in Aristotle and think we’re done with a student essay in two hours? Both can be equally enigmatic.

Although it was initially meant as a joke of sorts, it got others and me thinking about various issues. Some people rightly pointed out that we mainly set essay tasks for the limited purpose of training people to write; others noted that they are expected to take even less than two hours (some take as little as 10 minutes per paper). Why do we go along with such expectations? Although our goals in assigning essays might be limited, the contrast with our critical and historical engagement with past or current texts of philosophers should give us pause. Let me list two reasons.

Firstly, we might overlook great ideas in contributions by students. I am often amazed at how some students manage to come up with all the crucial objections and replies to certain claims within 20 minutes, while these considerations took perhaps 20 years to evolve in the historical setting. Have them read, say, Putnam’s twin earth thought experiment and listen to all the major objections passing by in less than an hour. If they can do that, it’s equally likely that their work contains real contributions. But we’ll only notice those if we take our time and dissect sometimes clumsy formulations to uncover the ideas behind them. I’m proud to have witnessed quite a number of graduate students who have developed highly original interpretations and advanced discussions in ways that I hadn’t dreamt of.

Secondly, by taking comparatively little time we send a certain message both to our students and to ourselves. On the one hand, such a practice might suggest that their work doesn’t really matter. If that message is conveyed, then the effort on the part of the students might be equally low. Some students have to write so many essays that they don’t have time to read. And let’s face it, grading essays without proper feedback is equally a waste of time. If we don’t pay attention to detail, we are ultimately undermining the purpose of philosophical education. Students write more and more papers, while we have less and less time to read them properly. Like a machine running blindly, mimicking educational activity. On the other hand, this way of interacting with and about texts will affect our overall reading practice. Instead of trying to appreciate ideas and think them through, we just look for cues of familiarity or failure. Peer review is overburdening many of us in similar ways. Hence, we need our writing to be appropriately formulaic. If we don’t stick to certain patterns, we risk that our peers miss the cues and think badly of our work. We increasingly write for people who have no time to read, undermining engagement with ideas. David Labaree even claims that it’s often enough to produce work that “looks and feels” like a proper dissertation or paper.

The extreme result is an increasing mechanisation of mindless writing and reading. It’s not surprising that hoaxes involving automated or merely clichéd writing get through peer review. Of course this is not true across the board. People still write well and read diligently. But the current trend threatens to undermine educational and philosophical purposes. An obvious remedy would be to improve the student-teacher ratio by employing more staff. In any case, students and staff should write less, leaving more time to read carefully.

___

Speaking of reading, I’d like to thank all of you who continue reading or even writing for this blog. I hope you enjoy the upcoming holidays, wish you a very happy new year, and look forward to conversing with you again soon.

Embracing mistakes in music and speech

Part of what I love about improvised music is its special relation to mistakes. If you listen to someone playing a well-known composition, a deviation from the familiar melody, harmony or perhaps even from the rhythm might appear to be a mistake. But what if the “mistake” is played with confidence and perhaps even repeated? Compare: “An apple a day keeps the creeps away.” Knowing the proverb, you will instantly recognise that something is off. But did I make a downright mistake or did I play around with the proverb? That depends, I guess. But what does it depend on? On the proverb itself? On my intentions? Or does it depend on your charity as a listener? It’s hard to tell. The example is silly and simple, but the phenomenon is rather complex if you think about mistakes in music and speech. What I would like to explore in the following is what constitutes the fine line between mistake and innovation. My hunch is that there is no such thing as a mistake (or an innovation). Yes, I know what you’re thinking, but you’re mistaken. Please hear me out.

Like much else, the appreciation of music is based on conventions that guide our expectations. Even if your musical knowledge is largely implicit (in that you might have had no exposure to theory), you’ll recognise variations or oddities – and that even if you don’t know the piece in question. The same goes for speech. Even if you don’t know the text in question and wouldn’t recognise if the speaker messed up a quotation, you will recognise mispronunciations, oddities in rhythm and syntax, and suchlike. We often think of such deviations from conventions as mistakes. But while you might still be assuming that the speaker sounds somewhat odd, they might in fact be a North American intoning statements as if they were questions, performing a funny greeting ritual or even singing a rap song. Some things might strike people as odd while others catch on, so much so that they end up turning into conventions. – But why do we classify one thing as a variation and the other as a mistake?

Let’s begin with mistakes in music. You might assume that a mistake is, for instance, a note that shouldn’t be played. We speak of a “wrong note” or a “bum note”. Play an F# with much sustain over a C Major triad and you get the idea. Even in the wildest jazz context that could sound off. But what if you hold that F# for half a bar and then add a Bb to the C Major triad? All else being equal, the F# will sound just fine (because the C Major can be heard as a C7 and the F# as the root note of the tritone substitution F#7), and our ear might expect the resolution to an F Major triad.* Long story short: whether something counts as a mistake does not depend on the note in question, but on what is played afterwards.**
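If you like, the harmonic point can be made tangible with a minimal sketch (my own illustration, not from the original post; the helper names are made up): it computes the pitch classes of C7 and of its tritone substitution F#7 and shows that the two chords share the very tritone (E and Bb) that lets the ear rehear the sustained F# as a chord tone.

    # A rough illustration, assuming standard chord spellings:
    # C7 and its tritone substitution F#7 share the tritone E-Bb, which is why
    # the sustained F# can be reheard as the root of F#7 once the Bb is added,
    # and why the ear then expects a resolution down a half step to F major.

    PITCH_CLASSES = ["C", "C#", "D", "Eb", "E", "F", "F#", "G", "Ab", "A", "Bb", "B"]

    def dominant7(root):
        """Pitch classes of a dominant seventh chord: root, major third, fifth, minor seventh."""
        r = PITCH_CLASSES.index(root)
        return {r, (r + 4) % 12, (r + 7) % 12, (r + 10) % 12}

    c7 = dominant7("C")        # C, E, G, Bb: the C Major triad plus the added Bb
    fsharp7 = dominant7("F#")  # F#, Bb (A#), C#, E: the tritone substitution

    shared = sorted(c7 & fsharp7)
    print([PITCH_CLASSES[p] for p in shared])  # ['E', 'Bb'], the shared tritone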

Let this thought sink in and try to think through situations in which something that sounded off was resolved. If you’re not into music, you might begin with a weird noise that makes you nervous until you notice that it’s just rain hitting the rooftop. Of course, there are a number of factors that matter, but the upshot is that a seemingly wrong note will count as fine or even as an impressive variation if it’s carried on in an acceptable way. This may be through a resolution (which allows for a reinterpretation of the note), through repetition (which allows for interpreting it as an intended or new element in its own right), or through some other means. Repetition, for example, might turn a strange sequence into an acceptable form, even if the notes in question would not count as acceptable if played only once. It’s hard to say what exactly will win us over (and in fact some listeners might never be convinced). But the point is not that the notes themselves are altered; it is that repetition is a way of creating a meaningful structure, while a one-off does not afford anything recognisable. That is, repetition is a means of turning mistakes into something acceptable, a pattern. If this is correct, then it seems sensible to say that the process of going through (apparent) mistakes is not only something that can lead to an amended take on the music, but also something that leads to originality. After all, it’s turning apparent mistakes into something acceptable that makes us see them as legitimate variations.

I guess the same is true of speech. Something might start out striking you as unintelligible, but will be reinterpreted as a meaningful pattern if it is resolved into something acceptable. But how far does this go? You might think that the phenomenon is merely of an aesthetic nature, pertaining to the way we hear and recontextualise sounds in the light of what comes later. We might initially hear a string of sounds that we identify as language once we recognise a pattern in the light of what is uttered later. But isn’t this also true of the way we understand thoughts in general? If so, then making (apparent) mistakes is the way forward – even in philosophy.

Now you might object that the fact that something can be identified as an item in a language (or in music) does not mean that the content of what is said makes sense or is true. If I make a mistake in thinking, it will remain a mistake, even if the linguistic expression can be amended. – Although it might seem this way, I’d like to claim that the contrary is true: what goes for music and basic speech comprehension also goes for thought. Thoughts that would seem wrong at the time of utterance can be adjusted in the light of what comes later. Listening to someone, we will do everything to try and make their thoughts come out true. Trying to understand a thought that might sound unintelligible and wrong in the beginning might lead us to new insights, once we find ways in which it rhymes with things we find acceptable. “Ah, that is what you mean!” As Donald Davidson put it, charity is not optional.*** And yes, bringing Davidson into the picture should make it clear that my idea is not new. Thoughts that strike us as odd might turn out fine or even original once we identify a set of beliefs that makes them coherent. — Only among professional philosophers, it seems, are we all too often inclined to make the thoughts of our interlocutors come out false. But seen in analogy to musical improvisation, the talk of mistakes is perhaps just conservatism. Branding an idea as mistaken might merely reveal our clinging to familiar patterns.

___

* Nicer still is this resolution: You hold that F# for half a bar and then add a F# in the bass. All else being equal, the F# will sound just fine (because the C Major can be heard as a D7 add9/11 without the root note) and our ear might expect the resolution to a G Major triad.

** See also Daniel Martin Feige’s Philosophie des Jazz, p. 77, where I found some inspiration for my idea: “Das, was der Improvisierende tut, erhält seinen spezifischen Sinn erst im Lichte dessen, was er später getan haben wird.” (Roughly: “What the improviser does acquires its specific sense only in the light of what he will have done later.”)

*** The basic idea is illustrated by the example at the beginning of an older post on the nature of error.

“Is it ok if I still work on Descartes?” The canon does not have to be canonical

Browsing through the web today, I found the following passage on the webpage of one of the few leading journals in the history of philosophy:

“Ever since the founding of the Journal of the History of Philosophy, its articles (and its submissions) have been dominated by papers on a small, select coterie of philosophers. Not surprisingly, these are Plato, Aristotle, Descartes, Spinoza, Hume, and Kant.”

“Not surprisingly” can be said in many ways, but the place and phrasing of the passage suggest some sort of pride on the part of the author. But the “coterie” is so small that it still makes me chuckle. Given that this is one of the few leading general journals for the whole of the history of philosophy, this narrowness should be worrying. Posting this on Facebook led to some obvious entertainment. However, I also recognised some mild expressions of shame from those who work on canonical figures. And I sometimes caught myself wondering whether I should continue to work on figures such as Ockham, Locke, Spinoza and Hume. Should we feel ashamed of working on the canon? In the light of such questions, I would like to briefly talk about a different worry: that of throwing out the baby with the bathwater. More precisely, I worry that attempts at diversifying the canon can harm good work on and alongside the canon. Let me explain.

Currently, we are witnessing an enormous number of initiatives to diversify the canon, with regard to the inclusion both of women and of non-western traditions. The initiatives and projects I know are truly awe-inspiring. Not only do they open up new areas of research, they also affect the range of what is taught, even in survey courses. This is a great success for teaching and research in philosophy and its history. On the one hand, we learn more and more about crucial developments in the history of philosophy on a global level. On the other hand, this increase in knowledge also seems to set a moral record straight. In view of attempts to make our profession more inclusive in hiring, it’s obvious that we should also look beyond the narrow “coterie” when it comes to the content of research and teaching.

Now the moral dimension of diversification might embarrass those who continue to do teaching and research on canonical figures. “Is it ok”, one might wonder, “to teach Descartes rather than Elisabeth of Bohemia?” Of course, we might reply that it depends on one’s agenda. Yet, as much as diversification is a good thing, it will put pressure on those who choose otherwise. Given constraints of time and space, diversification might be perceived as adding to the competition. Will publishers and editors begin to favour the cool new work on non-canonical figures? Will I have to justify my canonical syllabus? While I wouldn’t worry too much about such issues, we know that our profession is rather competitive, and it wouldn’t be the first time that good ideas were abused for nasty ends. – This is why it’s vital to see the whole idea of diversification as one that enriches and complements our knowledge. Rather than seeing canonical figures as being pushed to the side, we should embrace the new lines of research and teaching as a way of learning new things also about canonical figures. In keeping with this spirit, I’d like to highlight two points that I find crucial in thinking about the canon and its diversification:

  • Firstly, there are non-canonical interpretations of the canon. The very idea of a canon suggests that we already know most things about certain figures and traditions. But we need to remind ourselves that the common doxography by no means exhausts what there is to be known about authors such as Plato or Kant. Rather, we need to see that most authors and debates remain largely unexplored. On the one hand, we gather new historical knowledge about these figures. On the other hand, each generation of scholars has to make up their minds anew. Thus, even if we work on the most canonical figures ever, we can challenge the common doxography and develop new knowledge.
  • Secondly, diversification should also concern neglected figures alongside the canon. Have you noticed that the Middle Ages are represented by three authors? Yes, Aquinas, Aquinas, and Aquinas! Almost every study dipping into medieval discussions mentions Aquinas, while his teacher Albert the Great is hardly known outside specialist circles. But when we talk of diversification, we usually don’t think of Albert the Great, Adam of Wodeham, Kenelm Digby or Bernard Bolzano. These authors are neglected, unduly so, but they normally aren’t captured by attempts at diversification either. They run alongside the canonical figures and weigh on our conscience, but they don’t have much of a moral lobby. Yet, as I see it, it’s equally important that the work on them be continued and that they be studied in relation to other canonical and non-canonical figures.

In other words, the canon does not have to be canonical. The upshot is that we need as much work on canonical as on non-canonical figures in all senses of the word. We hardly know anything about either set of figures. And we constantly need to renew our understanding. Competition between these two areas of research and teaching strikes me as nonsensical. There is nothing, absolutely nothing wrong with working on canonical figures.