Embracing mistakes in music and speech

Part of what I love about improvised music is its special relation to mistakes. If you listen to someone playing a well-known composition, a deviation from the familiar melody, harmony or perhaps even from the rhythm might appear to be a mistake. But what if the “mistake” is played with confidence and perhaps even repeated? Compare: “An apple a day keeps the creeps away.” Knowing the proverb, you will instantly recognise that something is off. But did I make a downright mistake or did I play around with the proverb? That depends, I guess. But what does it depend on? On the proverb itself? On my intentions? Or does it depend on your charity as a listener? It’s hard to tell. The example is silly and simple, but the phenomenon is rather complex if you think about mistakes in music and speech. What I would like to explore in the following is what constitutes the fine line between mistake and innovation. My hunch is that there is no such thing as a mistake (or an innovation). Yes, I know what you’re thinking, but you’re mistaken. Please hear me out.

Like much else, the appreciation of music is based on conventions that guide our expectations. Even if your musical knowledge is largely implicit (in that you might have had no exposure to theory), you’ll recognise variations or oddities – and that even if you don’t know the piece in question. The same goes for speech. Even if you don’t know the text in question and wouldn’t recognise it if the speaker messed up a quotation, you will recognise mispronunciations, oddities in rhythm and syntax, and the like. We often think of such deviations from conventions as mistakes. But while you might assume that the speaker simply sounds odd, they might in fact be North Americans intoning statements as if they were questions, performing funny greeting rituals or even singing rap songs. Some things might strike people as odd while others catch on, so much so that they end up turning into conventions. – But why do we classify one thing as a variation and the other as a mistake?

Let’s begin with mistakes in music. You might assume that a mistake is, for instance, a note that shouldn’t be played. We speak of a “wrong note” or a “bum note”. Play an F# with much sustain over a C Major triad and you get the idea. Even in the wildest jazz context that could sound off. But what if you hold that F# for half a bar and then add a Bb to the C Major triad? All else being equal, the F# will sound just fine (because the C Major can be heard as a C7 and the F# as the root note of the tritone substitution F#7), and our ear might expect the resolution to an F Major triad.* Long story short: Whether something counts as a mistake does not depend on the note in question, but on what is played afterwards.**

Let this thought sink in and try to think through situations in which something sounding off was resolved. If you’re not into music, you might begin with a weird noise that makes you nervous until you notice that it’s just rain hitting the rooftop. Of course, there are a number of factors that matter, but the upshot is that a seemingly wrong note will count as fine or even as an impressive variation if it’s carried on in an acceptable way. This may be through a resolution (that allows for a reinterpretation of the note), through repetition (allowing for interpreting it as an intended or new element in its own right), or through some other means. Repetition, for example, might turn a strange sequence into an acceptable form, even if the notes in question would not count as acceptable if played only once. It’s hard to say what exactly will win us over (and in fact some listeners might never be convinced). But the point is not that the notes themselves are altered; rather, repetition is a way of creating a meaningful structure, while a one-off does not afford anything recognisable. That is, repetition is a means to turn mistakes into something acceptable, a pattern. If this is correct, then it seems sensible to say that the process of going through (apparent) mistakes is not only something that can lead to an amended take on the music, but also something that leads to originality. After all, it’s turning apparent mistakes into something acceptable that makes us see them as legitimate variations.

I guess the same is true of speech. Something might start out striking you as unintelligible, but will be reinterpreted as a meaningful pattern if it is resolved into something acceptable. But how far does this go? You might think that the phenomenon is merely of an aesthetic nature, pertaining to the way we hear and recontextualise sounds in the light of what comes later. We might initially hear a string of sounds that we identify as language once we recognise a pattern in the light of what is uttered later. But isn’t this also true of the way we understand thoughts in general? If so, then making (apparent) mistakes is the way forward – even in philosophy.

Now you might object that the fact that something can be identified as an item in a language (or in music) does not mean that the content of what is said makes sense or is true. If I make a mistake in thinking, it will remain a mistake, even if the linguistic expression can be amended. – Although it might seem this way, I’d like to claim that the contrary is true: What goes for music and basic speech comprehension also goes for thought. Thoughts that would seem wrong at the time of utterance can be adjusted in the light of what comes later. Listening to someone, we will do everything to try and make their thoughts come out true. Trying to understand a thought that might sound unintelligible and wrong in the beginning might lead us to new insights, once we find ways in which it rhymes with things we find acceptable. “Ah, that is what you mean!” As Donald Davidson put it, charity is not optional.*** And yes, bringing Davidson into the picture should make it clear that my idea is not new. Thoughts that strike us as odd might turn out fine or even original once we identify the set of beliefs that makes them coherent. — Only among professional philosophers, it seems, are we all too often inclined to make the thoughts of our interlocutors come out false. But seen in analogy to musical improvisation, the talk of mistakes is perhaps just conservatism. Branding an idea as mistaken might merely reveal our clinging to familiar patterns.

___

* Nicer still is this resolution: You hold that F# for half a bar and then add an F# in the bass. All else being equal, the F# will sound just fine (because the C Major can be heard as a D7 add9/11 without the root note), and our ear might expect the resolution to a G Major triad.

** See also Daniel Martin Feige’s Philosophie des Jazz, where this idea is discussed.

*** The basic idea is illustrated by the example at the beginning of an older post on the nature of error.

“Is it ok if I still work on Descartes?” The canon does not have to be canonical

Browsing through the web today, I found the following passage on the webpage of one of the few leading journals in the history of philosophy:

“Ever since the founding of the Journal of the History of Philosophy, its articles (and its submissions) have been dominated by papers on a small, select coterie of philosophers. Not surprisingly, these are Plato, Aristotle, Descartes, Spinoza, Hume, and Kant.”

“Not surprisingly” can be said in many ways, but the place and phrasing of the passage suggest some sort of pride on the part of the author. But the “coterie” is so small that it still makes me chuckle. Given that this is one of the few general top journals for the whole of the history of philosophy, this narrowness should be worrying. Posting this on facebook led to some obvious entertainment. However, I also recognised some mild expressions of shame from those who work on canonical figures. And I sometimes caught myself wondering whether I should continue to work on figures such as Ockham, Locke, Spinoza and Hume. Should we feel ashamed of working on the canon? In the light of such questions, I would like to briefly talk about a different worry: that of throwing out the baby with the bathwater. More precisely, I worry that attempts at diversifying the canon can harm good work on and alongside the canon. Let me explain.

Currently, we are witnessing an enormous number of initiatives to diversify the canon, with regard to the inclusion both of women and of non-western traditions. The initiatives and projects I know are truly awe-inspiring. Not only do they open up new areas of research, they also affect the range of what is taught, even in survey courses. This is a great success for teaching and research in philosophy and its history. On the one hand, we learn more and more about crucial developments in the history of philosophy on a global level. On the other hand, this increase of knowledge also seems to set a moral record straight. In view of attempts to make our profession more inclusive in hiring, it’s obvious that we should also look beyond the narrow “coterie” when it comes to the content of research and teaching.

Now the moral dimension of diversification might embarrass those who continue to do teaching and research on canonical figures. “Is it ok”, one might wonder, “to teach Descartes rather than Elisabeth of Bohemia?” Of course, we might reply that it depends on one’s agenda. Yet, as much as diversification is a good thing, it will put pressure on those who choose otherwise. Given constraints of time and space, diversification might be perceived as adding to the competition. Will publishers and editors begin to favour the cool new work on non-canonical figures? Will I have to justify my canonical syllabus? While I wouldn’t worry too much about such issues, we know that our profession is rather competitive, and it wouldn’t be the first time that good ideas are abused for nasty ends. – This is why it’s vital to see the whole idea of diversification as one of enriching and complementing our knowledge. Rather than seeing canonical figures being pushed to the side, we should embrace the new lines of research and teaching as a way of learning new things about canonical figures, too. In keeping with this spirit, I’d like to highlight two points that I find crucial in thinking about the canon and its diversification:

  • Firstly, there are non-canonical interpretations of the canon. The very idea of a canon suggests that we already know most things about certain figures and traditions. But we need to remind ourselves that the common doxography by no means exhausts what there is to be known about authors such as Plato or Kant. Rather, we need to see that most authors and debates are still unknown. On the one hand, we gather new historical knowledge about these figures. On the other hand, each generation of scholars has to make up their minds anew. Thus, even if we work on the most canonical figures ever, we can challenge the common doxography and develop new knowledge.
  • Secondly, diversification should also concern neglected figures alongside the canon. Have you noticed that the Middle Ages are represented by three authors? Yes, Aquinas, Aquinas, and Aquinas! Almost every study dipping into medieval discussions mentions Aquinas, while his teacher Albert the Great is hardly known outside specialist circles. But when we talk of diversification, we usually don’t think of Albert the Great, Adam of Wodeham, Kenelm Digby or Bernard Bolzano. These authors are neglected, unduly so, but they normally aren’t captured by attempts at diversification either. They run alongside the canonical figures and weigh on our conscience, but they don’t have much of a moral lobby. Yet, as I see it, it’s equally important that the work on them be continued and that they be studied in relation to other canonical and non-canonical figures.

In other words, the canon does not have to be canonical. The upshot is that we need as much work on canonical as on non-canonical figures in all senses of the word. We hardly know anything about either set of figures. And we constantly need to renew our understanding. Competition between these two areas of research and teaching strikes me as nonsensical. There is nothing, absolutely nothing wrong with working on canonical figures.

Philosophical genres. A response to Peter Adamson

Would you say that the novel is a more proper literary genre than poetry? Or would you say that the pop song is less of a musical genre than the sonata? To me these questions make no sense. Both poems and novels form literary genres; both pop songs and sonatas form musical genres. And while you might have a personal preference for one over the other, I can’t see a justification for privileging one over the other in principle. The same is of course true of philosophical genres: A commentary on a philosophical text is no less of a philosophical genre than the typical essay or paper.* Wait! What?

Judging by current trends in publication lists, hiring practices, student assignments and so on, articles (preferably in peer-reviewed journals) are the leading genre. While books still count as important contributions in various fields, my feeling is that the paper culture is beginning to dominate everything else. But what about commentaries on texts, annotated editions and translations, or reviews? Although people in the profession still recognise that these genres involve work and (increasingly rare) expertise, they usually don’t count as important contributions, even in the history of philosophy. I think this trend is highly problematic for various reasons. But most of all it really impoverishes the philosophical landscape. Not only will it lead to a monoculture in publishing; our teaching of philosophy also increasingly focuses on paper production. But what does this trend mean? Why don’t we hold other genres in at least equally high esteem?

What seemingly unites commentaries on texts, annotated editions and translations, and reviews is that they focus on presenting the ideas of others. Thus, my hunch is that we think more highly of people presenting their own ideas than of those presenting the ideas of others. In a recent blog post, Peter Adamson notes the following:

“Nowadays we respect the original, innovative thinker more than the careful interpreter. That is rather an anomaly, though. […]

[I]t was understood that commenting is itself a creative activity, which might involve giving improved arguments for a school’s positions, or subtle, previously overlooked readings of the text being commented upon.”

Looking at ancient, medieval and even early modern traditions, the obsession with what counts as originality is an anomaly indeed. I say “obsession” because this trend is quite harmful. Not only does it impoverish our philosophical knowledge and skills, it also destroys a necessary division of labour. Why on earth should every one of us toss out “original claims” by the minute? Why not think hard about what other people wrote for a change? Why not train your philosophical chops by doing a translation? Of course, the idea that originality consists in expressing one’s own ideas is fallacious anyway, since thinking is dialogical. If we stop trying to understand and uncover other texts outside of our paper culture, our thinking will become more and more self-referential and turn into a freely spinning wheel… I’m exaggerating of course, but perhaps only a bit. We don’t even need the medieval commentary traditions to remind ourselves. Just remember that it was, amongst other things, Chomsky’s review of Skinner that changed the field of linguistics. Today, writing reviews or working on editions and translations doesn’t get you a grant, let alone a job. While we desperately need new editions, translations and materials for research and teaching, these works are esteemed as little more than a pastime or retirement hobby.**

Of course, many if not most of us know that this monoculture is problematic. I just don’t know how we got there that quickly. When I began to study, the work on editions and translations still seemed to flourish, at least in Germany. But it quickly died out, the history of philosophy was abandoned or ‘integrated’ into positions in theoretical or practical philosophy, and many people who then worked very hard on the texts that are now available in shiny editions are without a job.

If we go on like this, we’ll soon find that no one will be able to read or work on past texts. We should then teach our students that real philosophy didn’t begin to evolve before 1970 anyway. Until it gets that bad, I would plead for reintroducing a sensible division of labour, both in research and teaching. Next time you plan your assignments, don’t just ask your students to write an essay. Why not have them choose between an annotated translation, a careful commentary on a difficult passage, or a review? Oh, of course, they may write an essay, too. But it’s just one of many philosophical genres, many more than I listed here.

____

* In view of the teaching practice that follows from the focus on essay writing, I’d adjust the opening analogy as follows: Imagine the music performed by a jazz combo solely consisting of soloists and no rhythm section. And imagine that all music instruction would from now on be geared towards soloing only… (Of course, this analogy would capture the skills rather than the genre.)

** See Eric Schliesser’s intriguing reply to this idea.

Against allusions

What is the worst feature of my writing? I can’t say what it is these days; you tell me please! But looking back at what I worked hardest to overcome in writing I’d say it’s using allusions. I would write things such as “in the wake of the debate on semantic externalism” or “given the disputes over divine omnipotence bla bla” without explaining what precise debate I actually meant or what kind of semantic externalism or notions of the divine I had in mind. This way, I would refer to a context without explicating it. I guess such allusions were supposed to do two things: on the one hand, I used them to abbreviate the reference to a certain context or theory etc., on the other hand, I was hoping to display my knowledge of that context. To peers, it was meant to signal awareness of the appropriate references without actually getting too involved and, most importantly, without messing up. If you don’t explicate or explain, you can’t mess things up all that much. In short, I used allusions to make the right moves. So what’s wrong with making the right moves?

Let me begin by saying something general about allusions. Allusions, also known as “hand waving”, are meant to refer to something without explicitly stating it. Thus, they are good for remaining vague or ambiguous and can serve various ends in common conversation or literature. Most importantly, their successful use presupposes sufficient knowledge on the part of the listener or reader, who has to have the means to disambiguate a word or phrase. Funnily enough, such presuppositions are often accompanied by phrases insinuating the contrary. Typical phrases are: “as we all know”, “as is well known”, “famously”, “obviously”, “clearly”, “it goes without saying” etc.

Such presuppositions flourish and work well among friends. Here, they form a code that often doesn’t require any of the listed phrases or other markers. They rather work like friendly nods or winks. But while they might be entertaining among friends, they often exclude other listeners in scholarly contexts. Now you might hasten to think that those excluded simply don’t ‘get it’, because they lack the required knowledge. But that’s not true. Disambiguation requires knowledge, yes, but it also and crucially requires confidence (since you might, after all, make a fool of yourself) and an interest in the matter. And if you’re unsure whether you’re really interested, allusions among scholars can easily resemble the tone of a couple of old blokes dominating a dinner party with insider jokes. Who wants to sound like that in writing?

Apart from sounding like a bad party guest, there is a deeper problem with allusions in scholarly contexts. They rely on the status quo of canonical knowledge. Since the presuppositions remain unspoken, the listener has to go by what he or she takes to be a commonly acceptable disambiguation. Of course, we have to take some things as given and we cannot explicate everything, but when it comes to important steps in our arguments or evidence, reliance on allusions is an appeal to the authority of the status quo rather than a signal of scholarly virtue.

I began to notice this particularly in essays that students wrote mainly for their professors. Assuming that professors know (almost) everything, nothing seems to need unpacking. But since almost all concepts in philosophy are essentially contested, such allusions often don’t work. As long as I don’t know which precise version of an idea I’m supposed to assume, I might be just as lost as if I didn’t know the first thing about it. Hence the common advice to write for beginners or fellow students. Explain and unpack at least all the things you’re committed to arguing for or using as evidence for a claim. Otherwise I, at least, often won’t get what’s going on.

The problem with that advice is that it remains unclear how much explanation is actually appropriate. Of course, we can’t do without presuppositions. And we cannot and should not write only for beginners. If allusions are a vice, endless explanations might fare no better. Aiming at avoiding every possible misunderstanding can result in an equally dull or unintelligible prose. So I guess we have to unpack some things and merely allude to others. But which ones do we explain in detail? It’s important to see that every paper or book has (or should have) a focus: this is the claim you ultimately want to argue for. At the same time, there will be many assumptions that you shouldn’t commit yourself to showing. I attempt to explain only those things that are part of the focus. That said, it sometimes really is tricky to figure out what that focus actually is. Unpacking allusions might help with finding it, though.

Why would we want to call people “great thinkers” and cite harassers? A response to Julian Baggini

If you have ever been at a rock or pop concert, you might recognise the following phenomenon: The band on the stage begins playing an intro. Pulsing synths and roaring drums build up to a yet unrecognisable tune. Then the band breaks into the well-known chorus of their greatest hit and the audience applauds frenetically. People become enthusiastic if they recognise something. Thus, part of the “greatness” is owing to the act of recognising it. There is nothing wrong with that. It’s just that people celebrate their own recognition at least as much as the tune performed. I think much the same is true of our talk of “great thinkers”. We applaud recognised patterns. But only applauding the right kinds of patterns and thinkers secures our belonging to the ingroup. Since academic applause signals and regulates who belongs to a group, such applause has a moral dimension, especially in educational institutions. Yes, you guessed right: I want to argue that we need to rethink whom and what we call great.

When we admire someone’s smartness or argument, an enormous part of our admiration is owing to our recognition of preferred patterns. This is why calling someone a “great thinker” is to a large extent self-congratulatory. It signals and reinforces canonical status. What’s important is that this works in three directions: it affirms the status of the figure, it affirms it for me, and it signals this affirmation to others. Thus, it signals where I (want to) belong and demonstrates which nuances of style and content are of the right sort. The more power I have, the more I might be able to reinforce such status. People speaking with the backing of an educational institution can help build canonical continuity. Now the word “great” is conveniently vague. But should we applaud bigots?

“Admiring the great thinkers of the past has become morally hazardous.” Thus opens Julian Baggini’s piece on “Why sexist and racist philosophers might still be admirable”. Baggini’s essay is quite thoughtful and I advise you to read it. That said, I fear it contains a rather problematic inconsistency. Arguing in favour of excusing Hume for his racism, Baggini makes an important point: “Our thinking is shaped by our environment in profound ways that we often aren’t even aware of. Those who refuse to accept that they are as much limited by these forces as anyone else have delusions of intellectual grandeur.” – I agree that our thinking is indeed very much shaped by our (social) surroundings. But while Baggini makes this point to exculpate Hume,* he clearly forgets all about it when he returns to calling Hume one of the “greatest minds”. If Hume’s racism can be excused by his embeddedness in a racist social environment, then surely much of his philosophical “genius” cannot be exempt from being explained through this embeddedness either. In other words, if Hume is not (wholly) responsible for his racism, then he cannot be (wholly) responsible for his philosophy either. So why call only him the “great mind”?

Now Baggini has a second argument for leaving Hume’s grandeur untouched. Moral outrage is wasted on the dead because, unlike the living, they can neither “face justice” nor “show remorse”. While it’s true that the dead cannot face justice, it doesn’t automatically follow that we should not “blame individuals for things they did in less enlightened times using the standards of today”. I guess we do the latter all the time. Even some court systems punish past crimes. Past Nazi crimes are still put on trial, even if the system under which they were committed had different standards and is a thing of the past (or so we hope). Moreover, even if the dead cannot face justice themselves, it does make a difference how we remember and relate to the dead. Let me make two observations that I find crucial in this respect:

(1) Sometimes we uncover “unduly neglected” figures. Thomas Hobbes, for instance, was pushed to the side as an atheist for a long time. Margaret Cavendish is another case of a thinker whose work has been unduly neglected. When we start reading such figures again and begin to affirm their status, we declare that we see them as part of our ingroup and ancestry. Accordingly, we try to redress an intellectual injustice. Someone has been wronged by not having been recognised. And although we cannot literally change the past, in reclaiming such figures we change our intellectual past, insofar as we change the patterns that our ingroup is willing to recognise. Now if we can decide to help change our past in that way, moral concerns apply. It seems we have a duty to recognise figures that have been shunned, unduly by our standards.**

(2) Conversely, if we do not acknowledge what we find wrong in past thinkers, we are in danger of becoming complicit in endorsing and amplifying the impact of certain wrongs or ideologies. But we have the choice of changing our past in these cases, too. This becomes even more pressing in cases where there is an institutional continuity between us and the bigots of the past. As Markus Wild points out in his post, Heidegger’s influence continues to haunt us when those exposing his Nazism are attacked. Leaving this unacknowledged in the context of university teaching might mean becoming complicit in amplifying the pertinent ideology. That said, the fact that we do research on such figures or discuss their doctrines does not automatically mean that we endorse their views. As Charlotte Knowles makes clear, it is important how we relate to or appropriate the doctrines of others. It’s one thing to appropriate someone’s ideas; it’s another thing to call that person “great” or a “genius”.

Now, how do these considerations fare with regard to current authors? Should we adjust, for instance, our citation practices in the light of cases of harassment or crimes? – I find this question rather difficult and think we should be open to all sorts of considerations.*** However, I want to make two points:

Firstly, if someone’s work has shaped a certain field, it would be both scholarly and morally wrong to lie about this fact. But the crucial question, in this case, is not whether we should shun someone’s work. The question we have to ask is rather why our community recurrently endorses people who abuse their power. If Baggini has a point, then the moral wrongs that are committed in our academic culture are most likely not just the wrongs of individual scapegoats who happen to be found out. So if we want to change that, it’s not sufficient to change our citation practice. I guess the place to start is to stop endowing individuals with the status of “great thinkers” and begin to acknowledge that thinking is embedded in social practices and requires many kinds of recognition.

Secondly, trying to take the perspective of a victim, I would feel betrayed if representatives of educational institutions simply continued to endorse such voices and thus enlarged the impact of perpetrators who have harmed others in that institution. And victimhood doesn’t just mean “victim of overt harassment”. As I said earlier, there are intellectual victims of trends or systems that shun voices for various reasons, only to be slowly recovered by later generations who wish to amend the canon and change their past accordingly.

So the question to ask is not only whether we should change our citation practices. Rather we should wonder how many thinkers have not yet been heard because our ingroup keeps applauding one and the same “great mind”.

___

* Please note, however, that Hume’s racism was already criticised by Adam Smith and James Beattie, as Eric Schliesser notes in his intriguing discussion of Baggini’s historicism (from 26 November 2018).

** Barnaby Hutchins provides a more elaborate discussion of this issue: “The point is that a neutral approach to doing history of philosophy doesn’t seem to be a possibility, at least not if we care about, e.g., historical accuracy or innovation. Our approaches need to be responsive to the structural biases that pervade our practices; they need to be responsive to the constant threat of falling into this chauvinism. So it’s risky, at best, to take an indiscriminately positive approach towards canonical and non-canonical alike. We have an ethical duty (broadly construed) to apply a corrective generosity to the interpretation of non-canonical figures. And we also have an ethical duty to apply a corrective scepticism to the canon. Precisely because the structures of philosophy are always implicitly pulling us in favour of canonical philosophers, we need to be, at least to some extent, deliberately antagonistic towards them.”

In the light of these considerations, I now doubt my earlier conclusion that “attempts at diversifying our teaching should not be supported by arguments from supposedly different moral status”.

*** See Peter Furlong’s post for some recent discussion.

Heidegger: Uses and Abuse(s)

Following his post ‘‘Heidegger was a Nazi’ What now?’, Martin Lenz invited me to join the discussion.

There has been a lot written about whether we can separate out Heidegger’s philosophical work from his politics, in particular whether Being and Time – which is often seen as his most significant contribution – can be ‘saved’. There is a lot of excellent scholarship in this area (see for example the work of Mahon O’Brien), but this is not my particular field of expertise. Nevertheless, while I do not feel I can speak directly to the historical question, I would say that, personally, when I first encountered Being and Time as an undergraduate, I didn’t read it and think ‘this guy is definitely a Nazi’. However, once you have this knowledge, it obviously makes you reflect on the writing, and there are certain points in the text (the issue of destiny, etc.) which can be read as problematic in light of his Nazism. That said, I do wonder to what extent these things are read into the text in light of knowledge of his politics. I would also add that these more problematic aspects are, to my mind, not the key contributions of Being and Time and that what I take to be the more important concepts and ideas can be employed in other contexts without being ‘infected’ by his politics. In this vein, one must also note the influence of Heideggerean ideas, not only on the French tradition, but also, for example, on Arendt’s work. If Heidegger’s oeuvre is infected by his politics, does this mean that any work, or any thinker, that draws on his ideas is similarly infected? I think not.

Knowledge of Heideggerean ideas can help to enhance our understanding of other key thinkers, as I argue in my paper Beauvoir and Women’s Complicity in their own Unfreedom. Reading the notion of complicity in The Second Sex in light of the notions of falling and fleeing in Being and Time helps to bring about new ways of thinking about complicity that are not available if we just understand the notion of complicity with regard to the Sartrean idea of bad faith, or in light of the Republican tradition.

With regard to the broader debate about philosophers with, to put it mildly, ‘dodgy politics’, I think it is very striking that Frege, for example (whom Martin does note in his original blog post), is so often not mentioned in this context, and that these debates appear to be had almost exclusively in relation to Heidegger and not other thinkers who would also serve to make the same point. I would not in any way want to defend Heidegger’s politics, but I do think appeal to his politics is often used as a way to dismiss his work because people have other reasons for not wanting to engage with it, and this is an easy way to dismiss him. I’ve had people dismiss questions I’ve asked at conferences because (after a couple of follow-up questions) it’s become apparent that I might be using Heideggerean ideas as a touchstone. In the formal discussion they’ve said ‘oh, I don’t know anything about him’ and then shut down the discussion, even though knowledge of Heidegger wasn’t necessary to engage with the point. I don’t think anyone would dream of dismissing the same point in this way if it were made using, for example, Kantian ideas or something inspired by Descartes. I’ve also had senior people tell me ‘you shouldn’t work on Heidegger, you’ll never get a job’. I think this attitude is unhelpful. Yes, his political views are abhorrent, but given his influence on other key thinkers and traditions I don’t think we can just dismiss his work.

I also think there seems to be an underlying assumption that anyone who works on Heidegger just uncritically accepts his ideas and worships him as a god, which is perhaps true of some (bad) Heidegger scholarship. But my own work, which draws on Heideggerean resources to make points in feminist philosophy, does not treat him in this way. Among people who are critical of Heidegger scholarship, one often encounters the attitude that anyone who works on him has been inducted into a kind of cult and completely lacks agency, that they can’t separate out the potentially fruitful ideas from those that may be politically compromised. Or that if a particular concept or idea does have some problematic elements, the scholar in question just wouldn’t be able to see it or critique it.

Aristotle, Hegel, and Nietzsche all say some pretty problematic things about women, but this hasn’t stopped feminist philosophers from using their ideas, and it doesn’t make the feminist scholarship that arises from this work somehow compromised, tainted, or anti-women. I think the point should be about how we engage with these thinkers and what we can do with them, rather than just dismissing them out of hand (often by people without a sufficient understanding of their work).

Charlotte Knowles, University of Groningen.


Kill your darlings! But how?

Why can’t you finish that paper? What’s keeping you? – There is something you still have to do. But where can you squeeze it in? Thinking about salient issues I want to address, I often begin to take the paper apart again, at least in my mind. – “Kill your darlings” is often offered as advice for writers in such situations. When writing or planning a paper, book or project you might be prone to stick to tropes, phrases or even topics and issues that you had better abandon. While you might love them dearly, the paper would be better off without them. So you might have your paper ready, but hesitate to send it off, because it still doesn’t address that very important issue. But does your paper really need to address this? – While I can’t give you a list of items to watch out for, I think it might help to approach this issue by looking at how it arises.

How do you pick your next topic for a project or paper? Advanced graduate students and researchers are often already immersed in their topics. At this level we often don’t realise how we get into these corners. Thus, I’d like to look at situations that I find BA students in when they think about papers or thesis topics. What I normally do is ask the student for their ideas. What I try to assess, then, are two things: does the idea work for a paper and is the student in a position to pursue it? In the following, I’ll focus on the ideas, but let’s briefly look at the second issue. Sometimes ideas are very intriguing but rather ambitious. In such cases, one might be inclined to discourage students from going through with it. But some people can make it work and shouldn’t be discouraged. You’ll notice that they have at least an inkling of a good structure, i.e. a path that leads palpably from a problem to a sufficiently narrow claim. However, more often people will say something like this: “I don’t yet know how to structure the argument, but I really love the topic.” At this point, the alarm bells should start ringing and you should look very carefully at the proposed idea. What’s wrong with darlings then?

(1) Nothing: A first problem is that nothing might seem wrong with them. Liking or being interested in a topic isn’t wrong. And it would be weird to say that someone should stop pursuing something because they like it. Liking something is in fact a good starting point. You’ve probably ended up studying philosophy because you liked something about it. (And as Sara Uckelman pointed out, thinking about your interests outside philosophy and then asking how they relate to philosophy might provide a good way to finding a dissertation topic.) At the same time, your liking something doesn’t necessarily track good paper topics. It’s a way into a field, but once you’re there, things other than your liking might decide whether something works. Compare: I really love the sound of saxophones; I listen to them a lot. Perhaps I should learn to play the saxophone. So it might get me somewhere. But should I start playing it live on stage now? Well …

(2) Missing tensions. What you like or love is likely to draw you in. That’s good. But it might draw you in in an explorative fashion. So you might think: “Oh, that’s interesting. I want to know all about it.” But that doesn’t give you something to work on. An explorative mood doesn’t get you a paper; you need to want to argue. Projects in philosophy and its history focus on tensions. If you want to write a paper, you’ve got to find something problematic that creates an urgent need for explanation, like an apparent contradiction or a text that does not seem to add up. Your love or interest in a topic doesn’t track tensions. If you want to find a workable idea, find a tension.

(3) Artificial tensions. Philosophy is full of tensions. When people want to “do what they love”, they often look for a tension in their field. Of course, there will be a lot of tensions discussed in the literature. But since people often believe they should be original, they will create a tension rather than pick up one already under discussion. This is where problems really kick in. You might for instance begin a thesis supervision and be greeted with a tentative “I’m interested in love and I always liked speech act theory. I would like to write about them.” I have to admit that it’s this kind of suggestion I hear most often. So what’s happening here? – What we’re looking at is not a tension but a (difficult) task. The task is created by combining two areas and hence creating the problem of applying the tools of one field to the issue of another. Don’t get me wrong: of course you can write intriguing stuff by applying speech act theory to the issue of love. But this usually requires some experience in both areas. Students often come up with some combination because they like both topics or had some good exposure to them. There might also be a vague idea of how to actually combine the issues, but there is no genuine tension. All there is is a difficult task, created ad hoc out of the need to come up with a tension.

Summing up, focusing on your interests alone doesn’t really guide you towards good topics to work on. What do I take home from these considerations? Dealing with darlings is a tricky business. Looking at my own work, I know that a strong interest in linguistics and a deep curiosity about the unity of sentences got me into my MA and PhD topics. But while these interests got me in, I had to let go of them when pursuing my actual work. So they shaped my approach, but they did not dictate the arguments. Motivationally, I could not have done without them. But once they had taken me to my actual work, I would have been misguided by clinging to them.

Anyway, the moral is: let them draw you in, but then let go of them. Why is that worth adhering to? Because your darlings are about you, but your work should not be about yourself, at least not primarily. The tensions that you encounter will come out of existing discussions or texts, not out of tasks you create for yourself. How do you distinguish between the two? I’d advise looking for the actual point of contact that links all the issues that figure in your idea. This will most likely be a concrete piece of text or phrase or claim – the text that is central in your argument. Now ask yourself whether that piece of text really requires an answer to the question you can’t let go of. Conversely, if you have an idea but you can’t find a concrete piece of text to hang it onto, let go of the idea or keep it for another day.