Against history of philosophy: shunning vs ignoring history in the analytic traditions

Does history matter to philosophy? Some time ago I claimed that, since certain facts about concepts are historical, all philosophy involves history to some degree (see here and here). But this kind of view has long been, and still is, attacked by many. The relation to history is a kind of philosophical Gretchenfrage, a question that forces you to show your colours. If you think that philosophy is a historical endeavour, you’ll be counted among the so-called continental philosophers. If you think that philosophy can be done independently of (its) history, you’ll be counted among the analytic philosophers. Today, I’ll focus on the latter, that is, on analytic philosophy. What is rarely noted is that the reasons given against history are rather different and to some extent even contradictory. Roughly put, some think that history is irrelevant, while others think that it is so influential that it should be shunned. In keeping with this distinction, I would like to argue that the former group tends to ignore history, while the latter group tends to shun it. I believe that ignoring history is a relatively recent trend, while shunning history is foundational for what we call analytic philosophy. But how do these trends relate? Let’s begin with the current ignorance.

A few years ago, Mogens Laerke told me that he once encountered a philosopher who claimed that it wasn’t really worth going back any further in history than “to the early Ted Sider”. Indeed, it is quite common among current analytic philosophers to claim that the history of philosophy is wholly irrelevant for doing philosophy. Some educational exposure might count as good for preventing us from reinventing the wheel or for finding the odd interesting argument, but on the whole the real philosophical action takes place today. Various reasons are given for this attitude. Some claim that philosophy aims at finding the truth and that truth is non-historical. Others claim that you don’t need any historical understanding to do, say, biology or mathematics, and that, since philosophy is a similar endeavour, it’s equally exempt from its history. I’ll look at these arguments some other day. But they have to rely on the separability of historical factors from what is called philosophy. As a result, this position denies any substantial impact of history on philosophy. Whatever the merit of this denial, it has enormous political consequences. While the reasons given are often dressed up as apolitical, they have serious repercussions on the shape of philosophy in academic institutions. In Germany, for instance, you’ll hardly find a department that has a unit or chair devoted to the history of philosophy. Given the success of analytic practitioners through journal capture etc., history is a marginalised and merely instrumental part of philosophy.

Yet, despite the supposed irrelevance of history, many analytic philosophers do see themselves as continuous with a tradition that is taken to begin with Frege or Russell. To portray contemporary philosophical work as relevant, it is apparently not enough to trust in the truth-conduciveness of the current philosophical tools on display. Justifying current endeavours has to rely on some bits and bobs of history. For some colleagues, grant agencies and students it’s not sufficient to point to the early Ted Sider to highlight the relevance of a project. Pointing to early analytic philosophy is certainly not enough either; at least some continuity in terminology, arguments and claims is required. But do early analytic philosophers share the current understanding of history? As I said in the beginning, I think that many early figures in that tradition endorse a rather different view. As late as 1947, Ryle writes in a review of Popper in Mind, the top journal of analytic philosophy:

“Nor is it news to philosophers that Nazi, Fascist and Communist doctrines are descendants of the Hegelian gospel. … Dr Popper is clearly right in saying that even if philosophers are at long last immunized, historians, sociologists, political propagandists and voters are still unconscious victims of this virus …”*

Let me single out two claims from this passage: (1) Hegelian philosophy shaped pervasive political ideologies. (2) Philosophy has become immune against such ideologies. The first claim endorses the idea that historical positions of the past not only influence the philosophers adhering to them, but shape political ideologies. This is quite different from the assumption that history is irrelevant. But what about the second claim? The immunity claim seems to deny the influence of history. So on the face of it, the second claim seems to be similar to the idea that history is irrelevant. This would render the statements incongruent. But there is another reading: only a certain kind of philosophy is immune from the philosophical past and the related ideologies. And this is non-Hegelian philosophy. The idea is, then, not that history is irrelevant but, to the contrary, that history is so relevant that certain portions of the past should be shunned. Analytic philosophy is construed as the safe haven, exempt from historical influences that still haunt other disciplines.

Ryle is not entirely clear about the factors that would allow for such immunity. But if claim (2) is to be coherent with (1), then this might mean that we are to focus on certain aspects of philosophy and that we should see ourselves in the tradition of past philosophers working on these aspects. If this is correct, Ryle is not claiming that philosophy is separate from history and politics, but that it can be exempt from certain kinds of history and politics. As Akehurst argues**, this tradition was adamant about shunning German and British idealism as well as many figures who seemed to run counter to certain ideas. Whatever these precise ideas are, the assumption that (early) analytic philosophy is simply ahistorical or apolitical is a myth.

Whatever one thinks of Ryle’s claims, they are certainly expressive of a core belief in the tradition. At its heart we see a process of shunning with the goal of reshaping the canon. The idea of being selective about what counts as the canon is of course no prerogative of analytic philosophy. However, what seems to stand out is the assumption of immunity. While the attempt to immunise oneself or to counter one’s biases is a process that includes the idea that one might be in the grip of ideologies, the idea that one is already immune seems to be an ideology itself.

Now how does this shunning relate to what I called today’s ignorance? For better or worse, I doubt that these stances are easily compatible. At the same time, it seems likely that the professed ignorance is an unreflective outcome of the shunning in earlier times. If this is correct, then the idea of non-historicity has been canonised. In any case, it is time to reconsider the relation between analytic philosophy and the history of philosophy.***

____

* Thanks to Richard Creek, Nick Denyer, Stefan Hessbrüggen, Michael Kremer, and Eric Schliesser for some amusing online discussion of this passage.

** See T. Akehurst, The Cultural Politics of Analytic Philosophy: Britishness and the Spectre of Europe, London: Continuum 2010, esp. 58-60. I am grateful to Catarina Dutilh-Novaes for bringing this book to my attention. See also his brief blog post focussing on Russell.

*** Currently, Laura Georgescu and I are preparing a special issue on the Uses and Abuses of History in Analytic Philosophy for JHAP. Please contact us if you are interested in contributing!

Networks and friendships in academia

Recently, I came across an unwelcome reminder of my time as a graduate student and my early-career days. It came in the shape of a conference announcement that carries all the signs of a performative contradiction: it invites you by exclusion. What can we learn from such contradictions?

The announcement invites early-career people to attend seminars that run alongside a conference whose line-up is already fixed and seems to consist mainly of a circle of quite established philosophers who have been collaborating closely for a long time. Since the invitation is not presented as a “call”, it’s hard to feel invited in the first place. Worse still, you’re not asked to present at the actual conference but to attend “seminars” that are designed “to motivate students and young scholars from all over the world to do research in the field of medieval philosophy and to help them learn new scientific methodology and develop communication skills.” If you’re still interested in attending, you’ll look in vain for time slots dedicated to such seminars. Instead, there is a round table on the last day, scheduled for the same time the organising body holds its annual meeting, and thus probably without the established scholars.* You might say there is a sufficient number of events, so just go somewhere else. But something like the work on the “Dionysian Traditions” is rarely presented. In fact, medieval philosophy is often treated as a niche unto itself, so the choice is not as vast as for, say, analytic metaphysics.

If you think this is problematic, I’ll have to disappoint you. There is no scandal lurking here. Alongside all the great efforts within an increasingly inclusive infrastructure of early-career support, things like this happen all the time, and since my time as a professor I have myself been accused of organising events that at least sound “clubby”. Of course, I’m not saying that the actual event announced is clubby like that; it’s just that part of the description triggers old memories. When I was a graduate student, in the years before 2000, the academic culture, at least in Germany, seemed to be structured in a clubby fashion. By “structured” I mean that academic philosophy often seemed to function as a simple two-class system, established and not-established, and the not-established people had the status of onlookers. They were, it seemed, invited to watch the bigger figures and learn by exposure to greatness. But make no mistake; this culture did not (or not immediately) come across as exclusionary. The onlookers could feel privileged for being around. For, firstly, even if this didn’t feel like proper participation, it still felt like the result of a meritocratic selection. Secondly, the onlookers could feel elated, for there was an invisible third class, i.e. the class of all those who either were not selected or didn’t care to watch. The upshot is that part of the attraction of academia worked by exclusion. As an early-career person, you felt like you might belong, but you were not yet ready to participate properly.

Although this might come across as a bit negative, it is not meant that way. Academia never was a utopian place outside the structures that apply in the rest of the world. More to the point, the whole idea of what is now called “research-led teaching” grew out of the assumption that certain skills cannot be taught explicitly but have to be picked up by watching others, preferably advanced professionals, at work. Now my point is not to call out traditions of instructing scholars. Rather, this memory triggers a question that keeps coming back to me when advising graduate students. I doubt that research-led teaching requires the old class system. These days, we have a rich infrastructure that, at least on the surface, seems to counter exclusion. But have we overcome this two-class system, and if not, what lesson could it teach us?

Early-career people are constantly advised to improve their networking skills and extend their networks. On the whole, I think this is good advice. However, I also fear that one can spend a quarter of a lifetime on proper networking without realising that a network as such does not help. Networks are part of a professionalised academic environment. But while they might help in exchanging ideas and even offer frameworks for collaborative projects, they are not functional as such. They need some sort of glue that keeps them together. Some people believe that networks are selective by being meritocratic. But while merit or at least prestige might often be among the necessary conditions of getting in, it is probably not sufficient. My hunch is that this glue comes in the shape of friendship. By that I don’t necessarily mean deeply personal friendships but “academic friendships”: people like and trust each other to some degree, and build on that professionally. If true, this might be an unwelcome fact because it runs counter to our policies of inclusion and diversity. But people need to trust each other and thus also need something stronger than policies.

Therefore, the lesson is twofold: On the one hand, people need to see that sustainable networks require trust. On the other hand, we need functional institutional structures both to sustain such networks and to counterbalance the threat of nepotism that might come with friendship. We have, or should have, such structures in the shape of laws, universities, academic societies and a growing practice of mentoring. To be sure, saying that networks are not meritocratic does not mean that there is no such thing as merit. Thus, such institutions need to ensure that processes of reviewing are transparent and in keeping with commitments to democratic values as well as to the support of those still underrepresented, no matter whether this concerns written work, conferences or hiring. But the idea that networks as such are meritocratic makes their reliance on friendships invisible.

Now while friendships cannot be forced, they can be cultivated. If we wish to counter the pernicious class system and stabilise institutional remedies against it, we should advise people to extend (academic) friendships rather than competition. Competition fosters the false idea that getting into a network depends on merit. The idea of extending and cultivating academic friendship rests on the idea that merit in philosophy is a collective effort to begin with and that it needs all the people interested to keep weaving the web of knowledge. If meritocratic practices can be promoted at all, it is in this way, not by exclusion. You might object that we are operating with limited resources, but if demand is on the rise, we have to demand more resources rather than competing for less and less. That said, cultivating academic friendships needs to be counterbalanced by transparency. Yet while we continue to fail at this, friendships are not only the glue of networks, but might be what keeps you sane when academia seems to fall apart.

Postscriptum I: So what about the conference referred to above? The event is a follow-up to a conference in 1999, and quite a few of the former participants are present again. If it was, as it seems, based on academic friendships, isn’t that a reason to praise it? As I said and wish to emphasise again, academic friendships without institutional control do not foster the kinds of inclusive environments we should want. For neither can there be meritocratic procedures without the inclusion of underrepresented groups, nor can a two-class separation of established and not-established scholars lead to the desired extension of academic friendships. In addition to the memories triggered, one might note other issues. Given that there are comparatively many women working in this field, it is surprising that only three women are among the invited speakers. That said, the Gendered Conference Campaign has of course identified understandable reasons for such imbalances. A further point is the fact that early-career people wishing to attend have roughly two weeks after the announcement to register and apply. There is no reimbursement of costs, but one can apply for financial support after committing oneself to participate. – In view of these critical remarks, it should be noted again that this conference represents the status quo rather than the exception. The idea is not to criticise the fact that academic friendships lead to such events, but rather to stress the need to rethink how these can be joined with institutional mechanisms that counterbalance the downsides of tightening our networks.

___

* Postscriptum II (14 March 2019): Yes. Before writing this post, I sent an email to the S.I.E.P.M. inquiring about the nature of the seminars for early-career people. I asked:

(1) Are there any time slots reserved for this or are the seminars held parallel to the colloquium?
(2) What is the “new scientific methodology” referred to in the call?
(3) And is there any sort of application procedure?

The email was forwarded to the local organisers and prompted the following reply:

“Thank you for interest in the colloquium on the Dionysian Traditions!

The time for the seminars is Friday morning. The papers should not be longer than 20 minutes. You should send us a list with titles, and preferably – with abstracts too. We have a strict time limit and not everyone may have the opportunity to present. Travel and accommodation costs are to be covered by the participants.

The new scientific methodology is the methodology you deem commensurate with the current knowledge about the Corpus.”

Apart from the fact that the event runs from a Monday to a Wednesday, the main question about the integration and audience of these seminars remains unanswered. Assuming that “Friday” means the Wednesday, the seminars coincide with the announced round table, to be held at the same time at which the bureau of the S.I.E.P.M. holds its meeting. (This was confirmed by a further exchange of emails.) But unlike the announcement itself, the reply now speaks of “papers” that the attendees may present.

History of contemporary thought and the silence in Europe. A response to Eric Schliesser

What should go into a history of contemporary ideas or philosophy? Of course this is a question that is tricky to answer for all sorts of reasons. What makes it difficult is that we then tend to think mostly of canonical figures and begin to wonder which of those will be remembered in a hundred years. I think we can put an interesting spin on that question if we approach it in a more historical way. How did our current thoughts evolve? Who are the people who really influenced us? There will not only be people whose work we happen to read, but also those who interact or have interacted with us directly: our teachers, fellow students, friends and opponents. You might not think of them as geniuses, but we should drop that category anyway. These are likely the people who really made a difference to the way you think. So let’s scratch our heads a bit and wonder who gave us ideas directly. In any case, they should figure in the history of our thought.

You might object that these figures would not necessarily be recognised as influential at large. However, I doubt that this is a good criterion: our history is not chiefly determined by those we take to be generally influential, but more often than not by the people we speak to. If not, why would historians bother to identify the real interlocutors in letters and the like? This means that encounters between a few people might make quite a difference. You might also object that a history of contemporary philosophy is not about you. But why not? Why should it not include you at least? What I like about this approach is that it also serves as a helpful corrective to outworn assumptions about who is canonical. For even if certain figures are canonical, our interpretations of canonical figures are strongly shaped by our direct interlocutors.

Thinking about my own ideas in this way is a humbling experience. There are quite a number of people inside and outside my department to whom I owe many of my ideas. But this approach also reveals some of the conditions, political and other, that allow for such influence. I am painfully reminded of one such condition when observing the current political changes in Europe. No, I do not mean Brexit! Although I find these developments very sad and threatening indeed, most of the work done by friends and colleagues in Britain will reach me independently of those developments.

But Central and Eastern Europe is a different case. As it happens, the work that has affected my own research most in recent years is on the history of natural philosophy. It’s more than a guess when I say that I am not alone in this. Amongst other things, it made me rethink our current and historical ideas of the self. Given that quite a number of researchers who work on this happen to come from Central and Eastern Europe, much of this work probably wouldn’t have reached me, had it not been for the revolutions of 1989. This means that my thinking (and most likely that of others, too) would have been entirely different in many respects, had we not seen the Wall come down and communist regimes overthrown.

Why do I bring this up now? A brief exchange following up on an interesting post by Eric Schliesser* made it obvious that many Western Europeans, by and large, seem to assume that the revolutions of 1989 have had no influence on their thought. As he puts it, “the intellectual class kind of was conceptually unaffected” by them. And indeed, if we look at the way we cite and acknowledge the work of others, we regularly forget to credit many, if not most, of our interlocutors from less prestigious places. In this sense, people in what we call the Western world might be inclined to think that 1989 was not of significance in the history of thought. I think this is a mistake, one arising from the canonical way of thinking about the work that influences us. Instead of acknowledging the work of individuals who actually influence us, we continue citing the next big shot whom we take to be influential in general. By excluding the direct impact of our actual interlocutors, we make real impact largely invisible. Intellectually, the West behaves as if it were still living in Cold War times. But the fact that we continue to ignore or shun the larger patterns of real impact since 1989 does not entail that they are not there. Any claim to the contrary would, without further evidence at least, amount to an argument from ignorance.

The point I want to make is simple: we depend on other people for our thought. We need to acknowledge this if we want to understand how we come to think what we think. The fact that universities are currently set up like businesses might make us believe that the work people do can (almost) equally well be done by other people. But this is simply not true. The people influencing our thought are always particular people; they cannot be exchanged salva veritate. If we care about what we think, we should naturally care about the origin of our thought. We owe it to particular people, even if we sometimes forget the particular conversations in which our ideas were triggered, encouraged or refuted.

Now if this is correct, then it’s all the more surprising that we so easily let go of the conditions enabling much of this exchange in Europe. How is it possible, for instance, that most European academics remain quiet in the face of recent developments in Hungary? We have witnessed how the CEU was forced to move to Vienna in an unprecedented manner, and now the Hungarian Academy of Sciences is being targeted.**

While the international press reports every single remark (no matter how silly) that is made in relation to Brexit, and while I see many academics comment on this or that aspect (often for very good reasons), the silence after the recent events in Hungary is almost deafening. Of course, Hungary is not alone in this. Academic freedom is now targeted in many places inside and outside Europe. If we continue to let it happen, the academic community in Europe and elsewhere will disintegrate very soon. But of course we can continue to praise our entrepreneurial spirit in the business park of academia and believe that people’s work is interchangeable salva veritate; we can continue talking to ourselves, listen diligently to our echoes, and make soliloquies a great genre again.

____

* See also this earlier and very pertinent post by Eric Schliesser.

** See also this article. And this call for support.

Should contemporary philosophers read Ockham? Or: what did history ever do for us?

If you are a historian of philosophy, you’ve probably encountered the question of whether the stuff you’re working on is of any interest today. It’s the kind of question that awakens all the different souls in your breast at once. Your more enthusiastic self might think, “yes, totally”, while your methodological soul might shout, “anachronism ahead!” And your humbler part might think, “I don’t even understand it myself.” When exposed to this question, I often want to say many things at once, and out comes something garbled. But now I’d like to suggest that there is only one true reply to the main question in the title: “No, that’s the wrong kind of question to ask!” – But of course that’s not all there is to it. So please hear me out.

I’m currently revisiting an intriguing medieval debate between William of Ockham, Gregory of Rimini and Pierre d’Ailly on the question of how thought is structured. While I find the issue quite exciting in itself, it’s particularly interesting to see how they treat their different sources, Aristotle and Augustine. While they clearly all know the texts invoked, they emphasise different aspects. Thinking about thought, William harks back to Aristotle and clearly thinks that it’s structure that matters. By contrast, Gregory goes along with Augustine and emphasises thought as a mental action. – For these authors it was clear that their sources were highly relevant, both as inspiration and as authorities. At the same time, they had no qualms about appropriating them for their own uses. – In some respects we make similar moves today, when we call ourselves Humeans or Aristotelians. But since we also have professional historians of philosophy and look back at traditions of critical scholarship, both historians and philosophers are more cautious when it comes to the question of whether some particular author would be “relevant today”.

In view of this question, historians are trained to exhibit all kinds of (often dismissive) gut reactions, while philosophers working on contemporary themes don’t really have time for our long-winded answers. And so we end up talking past each other, happily ever after. That’s not a good thing. So here is why I think the question of whether any particular author could inform or improve current debates is wrongheaded.

Of course everyone is free to read Ockham. But I wouldn’t recommend doing so if you’re hoping to enrich the current debates. Yes, Ockham says a lot of interesting things. But you’d need a long time to translate them into contemporary terminology and still more time to find an argument that will look like an outright improvement on a current argument.* – My point is not that Ockham is not an interesting philosopher. My point is that Ockham (and many other past philosophers) doesn’t straightforwardly speak to any current concerns.

However … Yes, of course there was going to be a “however”! However, while we don’t need to ask whether any particular author is relevant today, we should be asking a different question. That Ockham doesn’t speak to current concerns doesn’t mean that historians of philosophy (studying Ockham or others) have nothing to say about current concerns. So it’s not that Ockham should be twisted to speak to current concerns; rather, historians and philosophers should be talking to each other! So the right question to ask is: how can historians speak to current issues?

The point is that historians study, amongst other things, debates on philosophical issues. “You say tomahto, I say tomato”, that sort of thing. Debates happen now as they happened then. What I find crucial is that studying debates reveals features that can be informative for understanding current debates. There are certain conditions that have to be met for a debate to arise. We’re not just moving through the space of reasons. Debates occur or decline because of various factors. What we find conceptually salient can be driven by available texts, literary preferences, other things we hold dear, theological concerns, technological inventions (just think of various computer models), emerging political pressures (you know what I mean), linguistic factors (what would happen if most philosophers were to write in Dutch?), and a lot of other factors, be they environmental, sociological, or what have you. Although we like to think that the pursuit of truth is central, it’s far from the only reason why debates arise and certain concepts are coined and stick around, while others are forgotten. Although contingent, such factors are recurrent. And this is something that affects our understanding of current as well as past debates. The historian can approach current debates as a historian in the same way that she can approach past debates. And this is, I submit, where historians can truly speak to current concerns.

Coming back to the debate I mentioned earlier, there is another issue (besides the treatment of sources) that I find striking. In their emphasis on Augustine, Gregory and Peter show a transition from a representational to an action model of thought. Why does this transition occur? Why do they find it important to emphasise action over representation against William? – Many reasons are possible. I won’t go into them now. But this might be an interesting point of comparison to the current debates over enactivism versus certain sorts of representationalism. Why do we have that debate now? Is it owing to influences like Ryle and Gibson? Certainly, they are points of (sometimes implicit) reference. Are there other factors? Again, while I think that these are intriguing philosophical developments, our understanding of such transitions and debates remains impoverished if we don’t look for other factors. Studying past transitions can reveal recurrent factors in contemporary debates. One factor might be that construing thoughts as acts rather than mere representations discloses their normative dimensions: acts are something for which we might be held responsible. There is a lot more to be said. For now, suffice it to say that it is in comparing debates and uncovering their conditions that historians of philosophy qua historians can really contribute.

At the same time, historians might also benefit from paying more attention to current concerns, not only to stay up to date, but also to sharpen their understanding of historical debates.** As we all know, historical facts don’t just pop up. They have to be seen. But this seeing is of course a kind of seeing-as. Thus, if we don’t just want to repeat the historiographical paradigms of, say, the eighties, it certainly doesn’t hurt if our seeing is trained in conversation with current philosophers.

____

* That said, this is of course an open question. So I’m happy to be shown a counterexample.

** More on the exchange between philosophers and historians of philosophy can be found in my inaugural lecture.

What’s behind the veil of perception? A response to Han Thomas Adriaenssen

Imagine you’re wearing glasses: would you think that your grasp of reality is somehow indirect? I guess not. We assume that glasses aid our vision rather than distort or hinder it. The fact that our vision is mediated by glasses does not make it less direct than the fact that our vision is mediated through our eyes. Now imagine your perception is mediated by what early modern philosophers call “ideas”. Does it follow that our grasp of reality is indirect? Many philosophers think that it does. By contrast, I would like to suggest that this is misleading. Ideas make our perceptions no less direct than glasses do.

Both early modern and contemporary critics often take the “way of ideas” as a route to scepticism. The assumption seems to be that the mediation of perception through ideas makes our thoughts not about reality but about the ideas themselves. Han Thomas Adriaenssen’s recent book is original in that it tells the story of this and related debates from Aquinas to the early modern period. In celebration of receiving the JHP book prize, Han Thomas gave a brief interview that aptly summarises the common line of criticism against ideas or the assumption of indirect perception related to them:

“Okay. So you explore the philosophical problem of ‘perception and representation’ from Aquinas to Descartes; what exactly is the problem?

HTA: ‘Right. So it goes like this: what is going on in your mind when you see something in your environment? Take this chair, for instance. When you see this chair, what’s happening in your mind? One answer is that you form a kind of pictorial representation of the chair. You start drawing a mental image for yourself of the thing in front of you, and you label it: ‘chair’. … But then there is a worry: if this is how it works – if this is how we access the environment cognitively – then that means there is a sort of interface between us and reality. A veil of perceptions, if you will. So what we’re thinking about is not the chair itself, but our picture of the chair. – But that can’t be right!”

Besides summarising the historical criticisms, Han Thomas seems to go along with these doubts. He suggests that metaphors trick us into such problematic beliefs: the “mental image metaphor” comes “naturally”, but brings about “major problems”.

While I have nothing but admiration for the historical analysis presented, I would like to respond to this criticism on behalf of those assuming ideas or other kinds of representational media. Let’s look at the chided metaphor again. Yes, the talk of the mental image suggests that what is depicted is remote and behind the image. But what about the act of drawing the image? Something, presumably our sense organs, is exposed to the things and does some ‘drawing’. So the drawing is not done behind a veil. Rather, the act of drawing serves as a transformation of what is drawn into something that is accessible to other parts of our mind.* Thus, we should imagine a series of transformations until our minds end up with ideas. But if you think of it in those terms, the transformation is not akin to putting something behind a veil. Rather, it is a way of making sensory input available. The same goes for glasses or indeed our eyes. They do not put something behind a veil but make it available in an accessible form. My point is that the metaphor needs to be unpacked more thoroughly. We don’t only have the image; we have the drawing, too.

Following Ruth Millikan’s account of perception,** I would like to argue that the whole opposition of indirect vs direct perception is unhelpful. It has steered both early modern and 20th-century debates in epistemology in fruitless directions. Sense perception is direct (as long as it does not involve inferences through which we explicitly reason that the presence of an idea means the presence of a represented thing). At the same time, sense perception is indirect in that it requires means of transformation that make things available to different kinds of receptors. Thus, the kind of indirectness involved in certain cognitive vehicles does not lead to scepticism any more than the fact that we use eyes to see does.

What early modern philosophers call ideas are just cognitive vehicles, resulting from transformations that make things available to us. If an analogy is called for, I’d suggest relating them not to veils but to glasses. If we unpack the metaphor more thoroughly, what is behind the veil is not only the world, but our very own sense organs, making the world available by processing it through media accessible to our mind. If that evokes sceptical doubts, such doubts might be equally raised whenever you put your glasses on or indeed open your eyes to see.

___

* As Han Thomas himself notes (in the book, not the interview), many medieval authors do not think that representationalism leads to scepticism, and endorse an “epistemic optimism”. I guess these authors could be reconstructed as agreeing with my reply. After all, some stress that species (which could be seen as functionally equivalent to ideas) ought to be seen as a medium quo rather than that which is ultimately cognised.

** Ruth Millikan even claims that language is a means of direct perception: “The picture I want to leave you with, then, is that coming to believe, say, that Johnny has come in by seeing that he has come in, by hearing by his voice that he has come in, and by hearing someone say “Johnny has come in,” are normally equivalent in directness of psychological processing. There is no reason to suppose that any of these ways of gaining the information that Johnny has come in requires that one perform inferences. On the other hand, in all these cases it is likely that at least some prior dedicated representations must be formed. Translations from more primitive representations and combinations of these will be involved. If one insists on treating all translation as a form of inference, then all these require inference equally. In either event, there is no significant difference in directness among them.”

Who’s afraid of relativism?

In recent years, relativism has had a particularly bad press. Often chided along with what some call postmodernism, relativism is held responsible for certain politicians’ complacent ignorance or bullshitting. While I’m not alone in thinking that this scapegoating is due to a severe misunderstanding of relativism, even those who should know better join the chorus of condemnation:

“The advance of relativism – the notion that truth is relative to each individual’s standpoint – reached what might be seen as a new low with the recent claim by Donald Trump’s senior adviser Kellyanne Conway that there are such things as “alternative facts”. (She went so far as to cite a non-existent “Bowling Green massacre” to justify Trump’s refugee travel ban, something she later described as a “misspeak”.)” (Joe Humphreys paraphrasing Timothy Williamson in The Irish Times, 5 July 2017)

If this is what Williamson thinks, he confuses relativism with extreme subjectivism. But I don’t want to dismiss this view too easily. The worry behind this accusation is real. If people do think that truth is relative to each individual’s standpoint, then “anything goes”: you can claim anything, and there are no grounds for me to correct you. If this is truth, there is no truth; the word is a meaningless appeal. However, I don’t think that the politicians in question believe in anything as sophisticated as relativism. Following up on some intriguing discussions about the notion of “alternative facts”, I believe that the strategy is (1) to lie by (2) appealing to an (invented) set of states of affairs that supposedly has been ignored. Conway did not assume that she was in possession of her own subjective truth; quite the contrary. Everyone would have seen what she claimed to be the truth, had they cared to look at the right time in the right way. If I am right, her strategy depends on a shared notion of truth. In other words, I guess that Williamson and Conway roughly start out from the same understanding of truth. To bring in relativism or postmodernism is not helpful when trying to understand the strategy of such politicians.

By introducing the term “alternative facts”, Conway reminds us of the fact (!) that we pick out truths relative to our interests. I think we are right to be afraid of certain politicians. But why are we afraid of relativism? We have to accept that truth, knowledge or morality are relative to a standard. Relativism is the view that there is more than one such standard.* This makes perfect sense. That 2 plus 2 equals 4 is not true absolutely. Arguably, this truth requires agreement on a certain arithmetic system. I think that arithmetic and other standards evolve relative to certain interests. Of course, we might disagree about the details of how to spell out such an understanding of relativism. But it is hard to see what makes us so afraid of it.
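To make the arithmetic point concrete, here is a toy illustration of my own (not one from Kusch): whether “2 + 2 = 4” comes out true depends on which arithmetic system serves as the standard of evaluation.

```latex
% A toy illustration (mine, not Kusch's): the same expression evaluated
% against two different arithmetic standards.
\[
  2 + 2 = 4 \quad \text{in } \mathbb{Z},
  \qquad\text{but}\qquad
  2 + 2 \equiv 1 \pmod{3} \quad \text{in } \mathbb{Z}_3 .
\]
% Neither system is mistaken; each fixes a standard relative to which
% the judgment is true or false, which is all the relativist (in the
% sense defined in the footnote) needs.
```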

Perhaps an answer can be given by looking at how relativism evolved historically. If you look at early modern or medieval discussions of truth, knowledge and morality, there is often a distinction between divine and human concepts. Divine knowledge is perfect; human knowledge is partial and fallible. Divine knowledge sets an absolute standard against which human failure is measured. If you look at discussions in and around Locke, for instance, especially his agnosticism about real essences and divine natural law, divine knowledge is still assumed, but it loses the status of a standard for us. What we’re left with is human knowledge, in all its mediocrity and fallibility. Hume goes further and no longer even appeals to the divine as a remote standard. Our claims to knowledge are seen as rooted in custom. Now if the divine no longer serves as an absolute measure, human claims to knowledge, truth and morality are merely one possible standard. There is no absolute standard available. Nominal essences or customs are relative to the human condition: our biological make-up and our interests. The focus on human capacities, irrespective of the divine, is a growing issue, going hand in hand with an idea of relativism. The “loss” of the absolute is thus owing to a different understanding of theological claims about divine standards. Human knowledge is relative in that it is no longer measured against divine knowledge. If this is correct, relativism emerged (also) as a result of a dissociation of divine and human standards. Why would we be afraid of that?

____

* I’m following Martin Kusch’s definition in his proposal for the ERC project on the Emergence of Relativism: “It is not easy to give a neutral definition of “relativism”: defenders and critics disagree over the question of what the relativist is committed to. Roughly put, the relativist regarding a given domain (e.g. epistemology) insists that judgments or beliefs in this domain are true or false, justified or unjustified, only relative to systems of standards. For the relativist there is more than one such system, and there is no neutral way of adjudicating between them. Some relativists go further and claim that all such systems are equally valid.”

“Is it ok if I still work on Descartes?” The canon does not have to be canonical

Browsing through the web today, I found the following passage on the webpage of one of the few leading journals in the history of philosophy:

“Ever since the founding of the Journal of the History of Philosophy, its articles (and its submissions) have been dominated by papers on a small, select coterie of philosophers. Not surprisingly, these are Plato, Aristotle, Descartes, Spinoza, Hume, and Kant.”

“Not surprisingly” can be said in many ways, but the place and phrasing of the passage suggest some sort of pride on the part of the author. But the “coterie” is so small that it still makes me chuckle. Given that this is one of the general top journals for the whole of the history of philosophy, this narrowness should be worrying. Posting this on Facebook led to some obvious entertainment. However, I also recognised some mild expressions of shame from those who work on canonical figures. And I sometimes caught myself wondering whether I should continue to work on figures such as Ockham, Locke, Spinoza and Hume. Should we feel ashamed of working on the canon? In the light of such questions, I would like to briefly talk about a different worry: that of throwing out the baby with the bathwater. More precisely, I worry that attempts at diversifying the canon can harm good work on and alongside the canon. Let me explain.

Currently, we are witnessing an enormous number of initiatives to diversify the canon, both with regard to the inclusion of women and of non-Western traditions. The initiatives and projects I know are truly awe-inspiring. Not only do they open up new areas of research, they also affect the range of what is taught, even in survey courses. This is a great success for teaching and research in philosophy and its history. On the one hand, we learn more and more about crucial developments in the history of philosophy on a global level. On the other hand, this increase in knowledge also seems to set the moral record straight. In view of attempts to make our profession more inclusive in hiring, it’s obvious that we should also look beyond the narrow “coterie” when it comes to the content of research and teaching.

Now the moral dimension of diversification might embarrass those who continue to do teaching and research on canonical figures. “Is it ok”, one might wonder, “to teach Descartes rather than Elisabeth of Bohemia?” Of course, we might reply that it depends on one’s agenda. Yet, as much as diversification is a good thing, it will put pressure on those who choose otherwise. Given constraints of time and space, diversification might be perceived as adding to the competition. Will publishers and editors begin to favour the cool new work on non-canonical figures? Will I have to justify my canonical syllabus? While I wouldn’t worry too much about such issues, we know that our profession is rather competitive, and it wouldn’t be the first time that good ideas are abused for nasty ends. – This is why it’s vital to see the whole idea of diversification as one that enriches and complements our knowledge. Rather than seeing canonical figures being pushed to the side, we should embrace the new lines of research and teaching as a way of learning new things about canonical figures, too. In keeping with this spirit, I’d like to highlight two points that I find crucial in thinking about the canon and its diversification:

  • Firstly, there are non-canonical interpretations of the canon. The very idea of a canon suggests that we already know most things about certain figures and traditions. But we need to remind ourselves that the common doxography by no means exhausts what there is to be known about authors such as Plato or Kant. Rather, we need to see that most authors and debates are still unknown. On the one hand, we gather new historical knowledge about these figures. On the other hand, each generation of scholars has to make up their minds anew. Thus, even if we work on the most canonical figures ever, we can challenge the common doxography and develop new knowledge.
  • Secondly, diversification should also concern neglected figures alongside the canon. Have you noticed that the Middle Ages are represented by three authors? Yes, Aquinas, Aquinas, and Aquinas! Almost every study dipping into medieval discussions mentions Aquinas, while his teacher Albert the Great is hardly known outside specialist circles. But when we talk of diversification, we usually don’t think of Albert the Great, Adam of Wodeham, Kenelm Digby or Bernard Bolzano. These authors are neglected, unduly so, but they normally aren’t captured by attempts at diversification either. They run alongside the canonical figures and weigh on our conscience, but they don’t have much of a moral lobby. Yet, as I see it, it’s equally important that the work on them be continued and that they are studied in relation to other canonical and non-canonical figures.

In other words, the canon does not have to be canonical. The upshot is that we need as much work on canonical as on non-canonical figures in all senses of the word. We hardly know anything about either set of figures. And we constantly need to renew our understanding. Competition between these two areas of research and teaching strikes me as nonsensical. There is nothing, absolutely nothing wrong with working on canonical figures.