Philosophical genres. A response to Peter Adamson

Would you say that the novel is a more proper literary genre than poetry? Or would you say that the pop song is less of a musical genre than the sonata? To me these questions make no sense. Both poems and novels form literary genres; both pop songs and sonatas form musical genres. And while you might have a personal preference for one over the other, I can’t see a justification for privileging one over the other in principle. The same is of course true of philosophical genres: A commentary on a philosophical text is no less of a philosophical genre than the typical essay or paper.* Wait! What?

Looking at current trends that show up in publication lists, hiring practices, student assignments etc., articles (preferably in peer-reviewed journals) are the leading genre. While books still count as important contributions in various fields, my feeling is that the paper culture is beginning to dominate everything else. But what about commentaries on texts, annotated editions, translations, or reviews? Although people in the profession still recognise that these genres involve work and (increasingly rare) expertise, they usually don’t count as important contributions, even in history of philosophy. I think this trend is highly problematic for various reasons. But most of all it really impoverishes the philosophical landscape. Not only will it lead to a monoculture in publishing; our teaching of philosophy, too, increasingly focuses on paper production. But what does this trend mean? Why don’t we hold other genres in at least equally high esteem?

What seemingly unites commentaries on texts, annotated editions, translations, and reviews is that they focus on the presentation of the ideas of others. Thus, my hunch is that we seem to think more highly of people presenting their own ideas than of those presenting the ideas of others. In a recent blog post, Peter Adamson notes the following:

“Nowadays we respect the original, innovative thinker more than the careful interpreter. That is rather an anomaly, though. […]

[I]t was understood that commenting is itself a creative activity, which might involve giving improved arguments for a school’s positions, or subtle, previously overlooked readings of the text being commented upon.”

Looking at ancient, medieval and even early modern traditions, the obsession with what counts as originality is an anomaly indeed. I say “obsession” because this trend is quite harmful. Not only does it impoverish our philosophical knowledge and skills, it also destroys a necessary division of labour. Why on earth should every one of us toss out “original claims” by the minute? Why not think hard about what other people wrote for a change? Why not train your philosophical chops by doing a translation? Of course the idea that originality consists in expressing one’s own ideas is fallacious anyway, since thinking is dialogical. If we stop trying to understand and uncover other texts, outside of our paper culture, our thinking will become more and more self-referential and turn into a freely spinning wheel… I’m exaggerating of course, but perhaps only a bit. We don’t even need the medieval commentary traditions to remind us of this. Just remember that it was, amongst other things, Chomsky’s review of Skinner that changed the field of linguistics. Today, writing reviews or working on editions and translations doesn’t get you a grant, let alone a job. While we desperately need new editions, translations and materials for research and teaching, these works are regarded more as a pastime or retirement hobby.**

Of course, many if not most of us know that this monoculture is problematic. I just don’t know how we got here so quickly. When I began to study, work on editions and translations still seemed to flourish, at least in Germany. But it quickly died out, history of philosophy was abandoned or ‘integrated’ into positions in theoretical or practical philosophy, and many people who then worked very hard on the texts that are now available in shiny editions are without a job.

If we go on like this, we’ll soon find that no one will be able to read or work on past texts. We should then teach our students that real philosophy didn’t begin to evolve before 1970 anyway. Until it gets that bad, I would plead for reintroducing a sensible division of labour, both in research and teaching. Next time you plan your assignments, don’t just have your students write an essay. Why not let them choose between an annotated translation, a careful commentary on a difficult passage, or a review? Oh, of course, they may write an essay, too. But it’s just one of many philosophical genres, many more than I listed here.

____

* In view of the teaching practice that follows from the focus on essay writing, I’d adjust the opening analogy as follows: Imagine the music performed by a jazz combo consisting solely of soloists and no rhythm section. And imagine that all music instruction were from now on geared towards soloing only… (Of course, this analogy would capture the skills rather than the genre.)

** See Eric Schliesser’s intriguing reply to this idea.

Against allusions

What is the worst feature of my writing? I can’t say what it is these days; you tell me please! But looking back at what I worked hardest to overcome in writing I’d say it’s using allusions. I would write things such as “in the wake of the debate on semantic externalism” or “given the disputes over divine omnipotence bla bla” without explaining what precise debate I actually meant or what kind of semantic externalism or notions of the divine I had in mind. This way, I would refer to a context without explicating it. I guess such allusions were supposed to do two things: on the one hand, I used them to abbreviate the reference to a certain context or theory etc., on the other hand, I was hoping to display my knowledge of that context. To peers, it was meant to signal awareness of the appropriate references without actually getting too involved and, most importantly, without messing up. If you don’t explicate or explain, you can’t mess things up all that much. In short, I used allusions to make the right moves. So what’s wrong with making the right moves?

Let me begin by saying something general about allusions. Allusions, also known as “hand waving”, are meant to refer to something without explicitly stating it. Thus, they are good for remaining vague or ambiguous and can serve various ends in common conversation or literature. Most importantly, their successful use presupposes sufficient knowledge on the part of the listener or reader, who has to have the means to disambiguate a word or phrase. Funnily enough, such presuppositions are often accompanied by phrases insinuating the contrary. Typical phrases are: “as we all know”, “as is well known”, “famously”, “obviously”, “clearly”, “it goes without saying” etc.

Such presuppositions flourish and work well among friends. Here, they form a code that often doesn’t require any of the listed phrases or other markers. They rather work like friendly nods or winks. But while they might be entertaining among friends, they often exclude other listeners in scholarly contexts. Now you might hasten to think that those excluded simply don’t ‘get it’, because they lack the required knowledge. But that’s not true. Disambiguation requires knowledge, yes, but it also and crucially requires confidence (since you always might make a fool of yourself after all) and an interest in the matter. To anyone unsure whether they’re really interested, allusions among scholars often sound like a couple of old blokes dominating a dinner party with insider jokes. Who wants to sound like that in writing?

Apart from sounding like a bad party guest, there is a deeper problem with allusions in scholarly contexts. They rely on the status quo of canonical knowledge. Since the presuppositions remain unspoken, the listener has to go by what he or she takes to be a commonly acceptable disambiguation. Of course, we have to take some things as given and we cannot explicate everything, but when it comes to important steps in our arguments or evidence, reliance on allusions is an appeal to the authority of the status quo rather than the signalling of scholarly virtue.

I began to notice this particularly in essays by students who were writing mainly for their professors. Assuming that professors know (almost) everything, nothing seems to need unpacking. But since almost all concepts in philosophy are essentially contested, such allusions often don’t work. As long as I don’t know which precise version of an idea I’m supposed to assume, I might be just as lost as if I didn’t know the first thing about it. Hence the common advice to write for beginners or fellow students. Explain and unpack at least all the things you’re committed to arguing for or using as evidence for a claim. Otherwise I, at least, often won’t get what’s going on.

The problem with that advice is that it remains unclear how much explanation is actually appropriate. Of course, we can’t do without presuppositions. And we cannot and should not write only for beginners. If allusions are a vice, endless explanations might fare no better. Aiming to avoid every possible misunderstanding can result in equally dull or unintelligible prose. So I guess we have to unpack some things and merely allude to others. But which ones do we explain in detail? It’s important to see that every paper or book has (or should have) a focus: the claim you ultimately want to argue for. At the same time, there will be many assumptions that you shouldn’t commit yourself to showing. I attempt to explain only those things that are part of the focus. That said, it sometimes really is tricky to figure out what that focus actually is. Unpacking allusions might help with finding it, though.

Why would we want to call people “great thinkers” and cite harassers? A response to Julian Baggini

If you have ever been at a rock or pop concert, you might recognise the following phenomenon: The band on stage begins playing an intro. Pulsing synths and roaring drums build up to an as yet unrecognisable tune. Then the band breaks into the well-known chorus of their greatest hit and the audience applauds frenetically. People become enthusiastic if they recognise something. Thus, part of the “greatness” is owing to the act of recognising it. There is nothing wrong with that. It’s just that people celebrate their own recognition at least as much as the tune performed. I think much the same is true of our talk of “great thinkers”. We applaud recognised patterns. But only applauding the right kinds of patterns and thinkers secures our belonging to the ingroup. Since academic applause signals and regulates who belongs to a group, such applause has a moral dimension, especially in educational institutions. Yes, you guessed right, I want to argue that we need to rethink whom and what we call great.

When we admire someone’s smartness or argument, an enormous part of our admiration is owing to our recognition of preferred patterns. This is why calling someone a “great thinker” is to a large extent self-congratulatory. It signals and reinforces canonical status. What’s important is that this works in three directions: it affirms the status of the figure, it affirms it for me, and it signals this affirmation to others. Thus, it signals where I (want to) belong and demonstrates which nuances of style and content are of the right sort. The more power I have, the more I might be able to reinforce such status. People speaking with the backing of an educational institution can help build canonical continuity. Now the word “great” is conveniently vague. But should we applaud bigots?

“Admiring the great thinkers of the past has become morally hazardous.” Thus opens Julian Baggini’s piece on “Why sexist and racist philosophers might still be admirable”. Baggini’s essay is quite thoughtful and I advise you to read it. That said, I fear it contains a rather problematic inconsistency. Arguing in favour of excusing Hume for his racism, Baggini makes an important point: “Our thinking is shaped by our environment in profound ways that we often aren’t even aware of. Those who refuse to accept that they are as much limited by these forces as anyone else have delusions of intellectual grandeur.” – I agree that our thinking is indeed very much shaped by our (social) surroundings. But while Baggini makes this point to exculpate Hume,* he clearly forgets all about it when he returns to calling Hume one of the “greatest minds”. If Hume’s racism can be excused by his embeddedness in a racist social environment, then surely much of his philosophical “genius” cannot be exempt from being explained through this embeddedness either. In other words, if Hume is not (wholly) responsible for his racism, then he cannot be (wholly) responsible for his philosophy either. So why call only him the “great mind”?

Now Baggini has a second argument for leaving Hume’s grandeur untouched. Moral outrage is wasted on the dead because, unlike the living, they can neither “face justice” nor “show remorse”. While it’s true that the dead cannot face justice, it doesn’t automatically follow that we should not “blame individuals for things they did in less enlightened times using the standards of today”. I guess we do the latter all the time. Even some court systems punish past crimes. Past Nazi crimes are still put on trial, even if the system under which they were committed had different standards and is a thing of the past (or so we hope). Moreover, even if the dead cannot face justice themselves, it does make a difference how we remember and relate to the dead. Let me make two observations that I find crucial in this respect:

(1) Sometimes we uncover “unduly neglected” figures. Thomas Hobbes, for instance, was long pushed to the side as an atheist. Margaret Cavendish is another case of a thinker whose work has been unduly neglected. When we start reading such figures again and begin to affirm their status, we declare that we see them as part of our ingroup and ancestry. In doing so, we try to amend an intellectual injustice. Someone has been wronged by not having been recognised. And although we cannot literally change the past, in reclaiming such figures we change our intellectual past, insofar as we change the patterns that our ingroup is willing to recognise. Now if we can decide to help change our past in that way, moral concerns apply. It seems we have a duty to recognise figures that have been shunned, unduly by our standards.**

(2) Conversely, if we do not acknowledge what we find wrong in past thinkers, we are in danger of becoming complicit in endorsing and amplifying the impact of certain wrongs or ideologies. But we have the choice of changing our past in these cases, too. This becomes even more pressing in cases where there is an institutional continuity between us and the bigots of the past. As Markus Wild points out in his post, Heidegger’s influence continues to haunt us when those exposing his Nazism are attacked. Leaving this unacknowledged in the context of university teaching might mean becoming complicit in amplifying the pertinent ideology. That said, the fact that we do research on such figures or discuss their doctrines does not automatically mean that we endorse their views. As Charlotte Knowles makes clear, it is important how we relate to or appropriate the doctrines of others. It’s one thing to appropriate someone’s ideas; it’s another thing to call that person “great” or a “genius”.

Now, how do these considerations fare with regard to current authors? Should we adjust, for instance, our citation practices in the light of cases of harassment or crimes? – I find this question rather difficult and think we should be open to all sorts of considerations.*** However, I want to make two points:

Firstly, if someone’s work has shaped a certain field, it would be wrong, both as a matter of scholarship and morally, to lie about this fact. But the crucial question, in this case, is not whether we should shun someone’s work. The question we have to ask is rather why our community recurrently endorses people who abuse their power. If Baggini has a point, then the moral wrongs committed in our academic culture are most likely not just the wrongs of individual scapegoats who happen to be found out. So if we want to change that, it’s not sufficient to change our citation practice. I guess the place to start is to stop endowing individuals with the status of “great thinkers” and begin to acknowledge that thinking is embedded in social practices and requires many kinds of recognition.

Secondly, trying to take the perspective of a victim, I would feel betrayed if representatives of educational institutions were simply to continue to endorse such voices and thus enlarge the impact of perpetrators who have harmed others in that institution. And victimhood doesn’t just mean “victim of overt harassment”. As I said earlier, there are intellectual victims of trends or systems that shun voices for various reasons, only to be slowly recovered by later generations who wish to amend the canon and change their past accordingly.

So the question to ask is not only whether we should change our citation practices. Rather we should wonder how many thinkers have not yet been heard because our ingroup keeps applauding one and the same “great mind”.

___

* Please note, however, that Hume’s racism was already criticised by Adam Smith and James Beattie, as Eric Schliesser notes in his intriguing discussion of Baggini’s historicism (from 26 November 2018).

** Barnaby Hutchins provides a more elaborate discussion of this issue: “The point is that a neutral approach to doing history of philosophy doesn’t seem to be a possibility, at least not if we care about, e.g., historical accuracy or innovation. Our approaches need to be responsive to the structural biases that pervade our practices; they need to be responsive to the constant threat of falling into this chauvinism. So it’s risky, at best, to take an indiscriminately positive approach towards canonical and non-canonical alike. We have an ethical duty (broadly construed) to apply a corrective generosity to the interpretation of non-canonical figures. And we also have an ethical duty to apply a corrective scepticism to the canon. Precisely because the structures of philosophy are always implicitly pulling us in favour of canonical philosophers, we need to be, at least to some extent, deliberately antagonistic towards them.”

In the light of these considerations, I now doubt my earlier conclusion that “attempts at diversifying our teaching should not be supported by arguments from supposedly different moral status”.

*** See Peter Furlong’s post for some recent discussion.

Kill your darlings! But how?

Why can’t you finish that paper? What’s keeping you? – There is something you still have to do. But where can you squeeze it in? Thinking about salient issues I want to address, I often begin to take the paper apart again, at least in my mind. – “Kill your darlings” is often offered as advice for writers in such situations. When writing or planning a paper, book or project you might be prone to stick to tropes, phrases or even topics and issues that you had better abandon. While you might love them dearly, the paper would be better off without them. So you might have your paper ready, but hesitate to send it off, because it still doesn’t address that very important issue. But does your paper really need to address this? – While I can’t give you a list of items to watch out for, I think it might help to approach this issue by looking at how it arises.

How do you pick your next topic for a project or paper? Advanced graduate students and researchers are often already immersed in their topics. At this level we often don’t realise how we got into these corners. Thus, I’d like to look at the situations in which I find BA students when they think about paper or thesis topics. What I normally do is ask the student for their ideas. What I try to assess, then, are two things: does the idea work for a paper, and is the student in a position to pursue it? In the following, I’ll focus on the ideas, but let’s briefly look at the second issue. Sometimes ideas are very intriguing but rather ambitious. In such cases, one might be inclined to discourage students from going through with them. But some people can make it work and shouldn’t be discouraged. You’ll notice that they have at least an inkling of a good structure, i.e. a path that leads palpably from a problem to a sufficiently narrow claim. However, more often people will say something like this: “I don’t yet know how to structure the argument, but I really love the topic.” At this point, the alarm bells should start ringing and you should look very carefully at the proposed idea. What’s wrong with darlings then?

(1) Nothing: A first problem is that nothing might seem wrong with them. Liking or being interested in a topic isn’t wrong. And it would be weird to say that someone should stop pursuing something because they like it. Liking something is in fact a good starting point. You’ve probably ended up studying philosophy because you liked something about it. (And as Sara Uckelman pointed out, thinking about your interests outside philosophy and then asking how they relate to philosophy might provide a good way of finding a dissertation topic.) At the same time, your liking something doesn’t necessarily track good paper topics. It’s a way into a field, but once you’re there, other things than your liking might decide whether something works. Compare: I really love the sound of saxophones; I listen to them a lot. Perhaps I should learn to play the saxophone; my love of the sound might get me somewhere. But should I start playing it live on stage now? Well …

(2) Missing tensions. What you like or love is likely to draw you in. That’s good. But it might draw you in in an explorative fashion. So you might think: “Oh, that’s interesting. I want to know all about it.” But that doesn’t give you something to work on. An explorative mood doesn’t get you a paper; you need to want to argue. Projects in philosophy and its history focus on tensions. If you want to write a paper, you’ve got to find something problematic that creates an urgent need for explanation, like an apparent contradiction or a text that does not seem to add up. Your love or interest in a topic doesn’t track tensions. If you want to find a workable idea, find a tension.

(3) Artificial tensions. Philosophy is full of tensions. When people want to “do what they love”, they often look for a tension in their field. Of course, there will be a lot of tensions discussed in the literature. But since people often believe they should be original, they will create a tension rather than pick up one already under discussion. This is where problems really kick in. You might for instance begin a thesis supervision and be greeted with a tentative “I’m interested in love and I always liked speech act theory. I would like to write about them.” I have to admit that it’s this kind of suggestion I hear most often. So what’s happening here? – What we’re looking at is not a tension but a (difficult) task. The task is created by combining two areas and hence creating the problem of applying the tools of one field to the issue of another. Don’t get me wrong: of course you can write intriguing stuff by applying speech act theory to the issue of love. But this usually requires some experience in both areas. Students often come up with some combination because they like both topics or had some good exposure to them. There might also be a vague idea of how to actually combine the issues, but there is no genuine tension. All there is is a difficult task, created ad hoc out of the need to come up with a tension.

Summing up, focusing on your interests alone doesn’t really guide you towards good topics to work on. What do I take home from these considerations? Dealing with darlings is a tricky business. Looking at my own work, I know that a strong interest in linguistics and a deep curiosity about the unity of sentences got me into my MA and PhD topics. But while these interests got me in, I had to let go of them when pursuing my actual work. So they shaped my approach, but they did not dictate the arguments. Motivationally, I could not have done without them. But in the place they actually took me, I would have been misguided by clinging to them.

Anyway, the moral is: let them draw you in, but then let go of them. Why is that worth adhering to? Because your darlings are about you, but your work should not be about yourself, at least not primarily. The tensions that you encounter will come out of existing discussions or texts, not out of tasks you create for yourself. How do you distinguish between the two? I’d advise looking for the actual point of contact that links all the issues that figure in your idea. This will most likely be a concrete piece of text or phrase or claim – the text that is central to your argument. Now ask yourself whether that piece of text really requires an answer to the question you can’t let go of. Conversely, if you have an idea but you can’t find a concrete piece of text to hang it on, let go of the idea or keep it for another day.

“Heidegger was a Nazi.” What now?

“B was a bigot” is a phrase that raises various questions. We can say it of various figures, both dead and alive. But this kind of phrase is used for various purposes. In what follows, I’d like to consider some implications of this phrase and its cognates. – Let me begin with what might seem a bit of a detour. Growing up in Germany, I learned that we are still carrying responsibility for the atrocities committed under the Nazi regime. Although some prominent figures declared otherwise even in the Eighties, I think this is true. Of course, one might think that one cannot have done things before one was born, but that does not mean that one is cut off from one’s past. Thinking historically means, amongst other things, thinking of yourself as determined by continuities that run right through you from the past into the options that make up your future horizon. The upshot is: we don’t start from scratch. It is with such thoughts that I look at the debates revolving around Heidegger and other bigots. Is their thought tainted by their views? Should we study and teach them? These are important questions that will continue to be asked and answered. Adding to numerous discussions, I’d like to offer three and a half considerations.*

(1) The question whether someone’s philosophical thought is tainted or even pervaded by their political views should be treated as an open question. There is no a priori consideration in favour of one answer. That said, “someone’s thought” is ambiguous. If we ask whether Heidegger’s or Frege’s (yes, Frege’s!) thought was pervaded by their anti-semitism, the notion is ambiguous between “thought” taken as an item in psychological relations and “thought” taken as an item in logical relations. The psychological aspects that explain why I reason the way I do often do not show up in the way a thought is presented or received. – Someone’s bigotry might motivate their thinking and yet remain hidden. But even if something remains hidden, that does not mean it carries no systematic weight. There is an old idea, pervasive in the analytic tradition, that logical and political questions are distinct. But the idea that logic and politics are distinct realms is itself a political idea. All such issues have to be studied philosophically and historically for each individual thinker. How, for instance, can Spinoza say what he says about humans and then say what he says about women? This seems glaringly inconsistent and deserves study rather than being brushed off. However, careful study should involve historically crucial ties beyond the question of someone’s thought. There are social, political and institutional continuities (and discontinuities) that stabilise certain views while disqualifying others.

(2) Should we study bigots? If the foregoing is acceptable, then it follows that we shouldn’t discourage the study of bigots. Quite the contrary! This doesn’t mean that I recommend the study of bigots in particular; there are enough understudied figures that you might turn to instead. It just means that their bigotry doesn’t disqualify them as topics of study and that, if you’re wondering whether you should, that might in itself be a good reason to get started. This point is of course somewhat delicate, since history of philosophy is not only studied by disinterested antiquarians, but also for reasons of justifying why we endorse certain views or because we hope to find good or true accounts of phenomena. – Do we endorse someone’s political views by showing continuities between their thoughts and ours? Again, that depends and should be treated as an open question. But I don’t think that shunning the past is a helpful strategy. After all, the past provides the premises we work from, whether we like it or not. Rather we should look carefully at possible implications. But the fact that we appropriate certain ideas does not entail that we are committed to such implications. As I said in my last post, we can adopt thoughts while changing and improving them. The fact that Heidegger was a Nazi does not turn his students or later exegetes into Nazis. However, once we know about the bigotry we should acknowledge as much in research and teaching.

(3) What about ourselves? Part of the reason for making the second remark was that I sometimes hear people say: “A was a bigot; so we shouldn’t teach A. Let’s rather teach B.” While I agree that there are huge numbers of understudied figures that might be taught instead of the same old classics, I don’t think that this line of argument helps. As I see it, it often comes out of the problematic idea that, ideally, we should study and teach only such figures that we consider morally pure. This is a doubtful demand not only because we might end up with very little material. It is also problematic because it suggests that we can change our past at will.** Therefore, attempts at diversifying our teaching should not be supported by arguments from supposedly different moral status; rather we should see that globalisation requires us to eventually acknowledge the impact of various histories and their entanglements. – We don’t teach Heidegger because we chose to ignore his moral status. We teach his and other works because our own thought is related to these works. This has an important consequence for our own moral status. Having the histories we do, our own moral status is tainted. In keeping with my introductory musings, I’d like to say that we are responsible for our past. The historical continuities that we like and wish to embrace are as much our responsibilities as those that we wish to disown. Structurally oppressive features of the past are not disrupted just because we change our teaching schedule.

I guess the general idea behind these considerations is this: The assumption that one can cut oneself off from one’s (philosophical) past is an illusion. As philosophers in institutional contexts we cannot deny that we might be both beneficiaries of a dubious heritage and bearers of burdens passed down. In other words, some of the bigotry will carry over. Again, this doesn’t mean that we are helpless continuants of past determinants, but it means that it is better to study our past and our involvements with it carefully rather than deny them and pretend to be starting from scratch.

___

* See especially the pieces by Peter Adamson and Eric Schliesser.

** Additional comment (25 Nov 2018): However, there is a sense in which we can change our intellectual past, namely by reassessing the canon: including neglected figures, on the one hand, while relativising the impact of others, on the other. – I have to admit that I now doubt the conclusion that “attempts at diversifying our teaching should not be supported by arguments from supposedly different moral status”.

I don’t know what I think. A plea for unclarity and prophecy

Would you begin a research project if there were just one more day left to work on it? I guess I wouldn’t. Why? Well, my assumption is that the point of a research project is that we improve our understanding of a phenomenon. Improvement seems to be inherently future-directed, meaning that we understand x a bit better tomorrow than today. Therefore, I am inclined to think that we would not begin to do research, had we not the hope that it might lead to more knowledge of x in the future. I think this is true not only of research but of much thinking and writing in general. We wouldn’t think, talk or write certain things, had we not the hope that this leads to an improved understanding in the future. You might find this point trivial. But a while ago it began to dawn on me that the inherent future-directedness of (some) thinking and writing has a number of important consequences. One of them is that we are not the (sole) authors of our thoughts. If this is correct, it is time to rethink our ways of evaluating thoughts and their modes of expression. Let me explain.

So why am I not the (sole) author of my thoughts? Well, I hope you all know variations of the following situation: You try to express an idea. Your interlocutor frowns and points out that she doesn’t really understand what you’re saying. You try again. The frowning continues, but this time she offers a different formulation. “Exactly”, you shout, “this is exactly what I meant to say!” Now, who is the author of that thought? I guess it depends. Did she give a good paraphrase or did she also bring out an implication or a consequence? Did she use an illustration that highlights a new aspect? Did she perhaps even rephrase it in such a way that it circumvents a possible objection? And what about you? Did you mean just that? Or do you understand the idea even better than before? Perhaps you are now aware of an important implication. So whose idea is it now? Hers or yours? Perhaps you both should be seen as authors. In any case, the boundaries are not clear.

In this sense, many of my thoughts are not (solely) authored by me. We often try to acknowledge as much in forewords and footnotes. But some consequences of this fact might be more serious. Let me name three: (1) There is an obvious problem for the charge of anachronism in history of philosophy (see my inaugural lecture). If future explications of thoughts can be seen as improvements of these very thoughts, then anachronistic interpretations should perhaps not merely be tolerated but encouraged. Are Descartes’ Meditations complete without the Objections and Replies? Can Aristotle be understood without the commentary traditions? Think about it! (2) Another issue concerns the identity of thoughts. If you are a semantic holist of sorts, you might assume that a thought is individuated by numerous inferential relations. Is your thought that p really what it is without it entailing q? Is your thought that p really intelligible without seeing that it entails q? You might think so, but the referees of your latest paper might think that p doesn’t merit publication without considering q. (3) This leads to the issue of acceptability. Whose inferences or paraphrases count? You might say that p, but perhaps p is not accepted in your own formulation, while the expression of p in your supervisor’s form of words is greeted with great enthusiasm. In a similar spirit, Tim Crane has recently called for a reconsideration of peer review. Even if some of these points are controversial, they should at least suggest that authorship has rather unclear boundaries.

Now the fact that thoughts are often future-directed and have multiple authors has, in turn, a number of further consequences. I’d like to highlight two of them by way of calling for some reconsiderations: a due reconsideration of unclarity and what Eric Schliesser calls “philosophic prophecy”.*

  • A plea for reconsidering unclarity. Philosophers in the analytic tradition pride themselves on clarity. But apart from the fact that the recognition of clarity is necessarily context-dependent, clarity ought to be seen as the result of a process rather than a feature of the thought or its present expression. Most texts that are considered original or important, not least in the analytic tradition, are hopelessly unclear when read without guidance. Try Russell’s “On Denoting” or Frege’s “On Sense and Reference” and you know what I mean. Or try some other classics like Aristotle’s “De anima” or Hume’s “Treatise”. Oh, your own papers are exempt from this problem? Of course! Anyway, we all know how it goes: we begin with a glimpse of an idea. And it’s the frowning of others that either makes us commit it to oblivion or try an improvement. But if this is remotely true, there is no principled reason to see unclarity as a downside. Rather, it should be seen as a typical, if perhaps early, stage of an idea that wants to grow.
  • A plea for coining concepts, or philosophic prophecy. Simplifying an idea by Eric Schliesser, we should see both philosophy and history of philosophy as involved in the business of coining concepts that “disclose the near or distant past and create a shared horizon for our philosophical future.”* As is well known, some authors (such as Leibniz, Kant or Nietzsche) have at times written decidedly for future audiences rather than present ones, trying to pave conceptual paths for future dialogues between religions, metaphysicians or Übermenschen. For historians of philosophy in particular this means that history is never just an antiquarian enterprise. By offering ‘translations’ and explanations we can introduce philosophical traditions to the future or silence them. In this sense, I’d like to stress, for example, that Ryle’s famous critique of Descartes, however flawed historically, should be seen as part of Descartes’ thought. In the same vein, Albert the Great or Hilary Putnam might be said to bring out certain versions of Aristotle. This doesn’t mean that they didn’t have any thoughts of their own. But their particular thoughts might not have been possible without Aristotle, who in turn might not be intelligible (to us) without these later developments. In this sense, much if not all philosophy is a prophetic enterprise.

If my thoughts are future-directed and multi-authored in such ways, this also means that I often couldn’t know what I actually think, were it not for your improvements or refinements. This is of course one of the lessons of Wittgenstein’s so-called private language argument. But it concerns not only the possibility of understanding and knowing a private language. A fortiori it also concerns understanding our own public language and thought. As I said earlier, I take it to be a rationality constraint that I must agree to some degree with others in order to understand myself. This means that I need others to see the point I am trying to make. If this generalises, you cannot know thyself without listening to others.

___

* See Eric Schliesser, “Philosophic Prophecy”, in Philosophy and Its History, 209.


Why do we share the vulgar view? Hume on the medical norms of belief*

We tend to think that beliefs are opinions that we form in the light of certain evidence. But perhaps most beliefs are not like that. Perhaps most beliefs are like contagious diseases that we catch. – When philosophers talk like that, it’s easy to think that they are speaking metaphorically. Looking at debates around Hume and other philosophers, I’ve begun to doubt that. There is good reason to see references to physiology and medical models as a genuine way of philosophical explanation. As I hope to suggest now, Hume’s account of beliefs arising from sympathy is a case in point.

Seeing the table in front of me, I believe that there is a table. Discerning the table’s colour, I believe that the table is brown. It is my philosophical education that made me wonder whether what I actually perceive might not be the table and a colour but mental representations of such things. Taking things to be as they appear to us, without wondering about cognitive intermediaries, that is what is often called the vulgar view or naïve realism. Now you might be inclined to think that this view is more or less self-evident or natural, but if you look more carefully, you’ll quickly see that it does need explaining.

As far as I know there is no historical study of the vulgar view, but I found various synonyms for this view or its adherents: Ockham, for instance, speaks of the “layperson” (laicus), Bacon, Berkeley and Hume of the “vulgar view” or “system”, Reid and Moore of “common sense”. When it is highlighted, it is often spelled out in opposition to a “philosophical view” such as representationalism, the “way of ideas” or idealism. Today, I’d like to briefly sketch what I take to be Hume’s account of this view. Not only because I like Hume, but because I think his account is both interesting and largely unknown. As I see it, Hume thinks that we adhere to the vulgar view because others around us hold it. But why, you might ask, would other people’s views affect our attitudes so strongly? If I am right, Hume holds that deviating from this view – for instance by taking a sceptical stance – will be seen as not normal and make us outsiders. Intriguingly, this normality is mediated by our physiological dispositions. Deviation from the vulgar view means deviation from the common balance of humours and, for instance, suffering from melancholy.** In this sense, the vulgar view we share is governed by medical norms, or so I argue.

The vulgar view is often explicitly discussed because it raises problems. If we want to explain false beliefs or hallucinations, it seems that we need to have recourse to representations: seeing a bent stick in water can’t be seeing a real stick, but only some sort of representation or idea. Why? Because reference to the (straight) stick cannot explain why we see it as bent. Since the vulgar view doesn’t posit cognitive representations, it cannot account for erroneous perceptions. What is less often addressed, however, is that the vulgar view or realism is not at all plain or empirical in nature. The vulgar view is not a view that is confirmed empirically; rather, it is a view about the nature of empirical experience. It’s not that we experience that objects are as they appear. So the source of the vulgar view cannot be given in experience or any empirical beliefs. Now if this is correct, we have to ask what it is that makes us hold this view. There is nothing natural or evident about it. But if this view is not self-evident, why do we hold it and why is it so widespread?

Enter Hume: According to Hume, most of the beliefs, sentiments and emotions we have are owing to our social environment. Hume explains this by referring to the mechanism of sympathy: “So close and intimate is the correspondence of human souls, that no sooner any person approaches me, than he diffuses on me all his opinions, and draws along my judgment in a greater or lesser degree.” (Treatise 3.3.2.1) Many of the beliefs we hold, then, are not (merely) owing to exposure to similar experiences, but to exposure to other people. Being with others affords a shared mentality. In his essay “Of National Characters”, Hume writes: “If we run over the globe, or revolve the annals of history, we shall discover every where signs of a sympathy or contagion of manners, none of the influence of air or climate.” What is at stake here? Arguing that sympathy and contagion explain the sociological and historical facts better, Hume dismisses the traditional climate theory in favour of his account of sympathy. Our mentalities are not owing to the conditions of the place we live in but to the people that surround us.***

Now how exactly is the “contagion” of manners and opinions explained? Of course, a large part of our education is governed by linguistic and behavioural conventions. But at bottom, there is a physiological kind of explanation that Hume could appeal to. Corresponding to our mental states are physiological dispositions, the temperature of the blood etc., the effects of which are mediated through the air via vapours which, in turn, affect the imagination of the recipient. Just as the material properties of things affect our sense organs, the states of other bodies can affect our organs and yield pertinent effects. When Hume speaks of the “contagion” of opinion, it is not unlikely that he has something like Malebranche’s account in mind. According to this account, opinions and emotions can be contagious and spread just like diseases.

In the Search after Truth, Malebranche writes: “To understand what this contagion is, and how it is transmitted from one person to another, it is necessary to know that men need one another, and that they were created that they might form several bodies, all of whose parts have a mutual correspondence. … These natural ties we share with beasts consist in a certain disposition of the brain all men have to imitate those with whom they converse, to form the same judgments they make, and to share the same passions by which they are moved.” (SAT 161) The physiological model of sympathetic contagion, then, allows for the transmission of mental states alluded to above. This is why Hume can claim that a crucial effect of sympathy lies in the “uniformity of humours and turn of thinking”. In this sense, a certain temperament and set of beliefs might count as pertinent to a view shared by a group.

Of course, this mostly goes unnoticed. It only becomes an issue if we begin to deviate from a common view, be it out of madness or a sceptical attitude:  “We may observe the same effect of poetry in a lesser degree; and this is common both to poetry and madness, that the vivacity they bestow on the ideas is not derive’d from the particular situations or connexions of the objects of these ideas, but from the present temper and disposition of the person.” (T 1.3.10.10)

The point is that the source of a certain view might not be the object perceived but the physiological dispositions which, in turn, are substantially affected by our social environment. If this is correct, Hume’s account of sympathy is ultimately rooted in a medical model. The fact that we share the vulgar view and other attitudes can be explained by appealing to physiological interactions between humans.

As I see it, this yields a medical understanding of the normality we attribute to a view. Accordingly, Hume’s ultimate cure from scepticism is not afforded by argument but by joining the crowd and playing a game of backgammon. The supposed normality of common sense, then, is not owing to the content of the view but to the fact that it is widespread.

____

* This is a brief sketch of my Hume interpretation as defended in my book Socialising Minds: Intersubjectivity in Early Modern Philosophy, the manuscript of which I’m currently finalising. – Together with Evelina Miteva, I also co-organise a conference on “Medicine and Philosophy”. The CFP is still open (till December 15, 2018): please apply if you’re interested.

** Donald Ainslie makes a nice case for this in his Hume’s True Scepticism, but claims that Hume’s appeal to humoral theory might have to be seen as metaphorical. — I realise that proper acknowledgements to Humeans would take more than one blog post in itself:) Stefanie Rocknak’s work has been particularly important for getting to grips with Hume’s understanding of the vulgar view. – Here, I’m mainly concerned with the medical model in the background. Marina Frasca-Spada’s work has helped with that greatly. But what we’d need to understand better still is the medical part in relation to the notion of imagination, as spelled out in Malebranche, for instance. Doina Rusu and Koen Vermeir have done some great work on transmission via vapours, but the picture we end up with is still somewhat coarse-grained, to put it mildly.

*** I am grateful to Evelina Miteva for sharing a preliminary version of her paper on Climata et temperamenta, which provides a succinct account of the medieval discussion. Hume should thus be seen as taking sides in an ongoing debate about whether traits and mentalities arise from climate or from sympathy.