Spotting mistakes and getting it right

“Know thyself” is probably a fairly well-known maxim among philosophers. But the maxim we actually live by seems to be one along the lines of “know the mistakes of others”. In calling this out I am of course no better. What prompts me to write about this now is a recent observation, not new but clearly refreshed with the beginning of the academic year: the obvious desire of students to “get it right”, right from the start. But what could be wrong with desiring to be right?

Philosophers these days don’t love wisdom but truth. Now spotting the mistakes of others is often presented as truth-conducive. If we refute and exclude the falsehoods of others, it seems, we’re making progress on our way to finding out the truth. This seems to be the reason why most papers in philosophy build their cases on refuting opposing claims and why most talks are met with unwavering criticism of the view presented. Killing off all the wrongs must leave you with the truth, no? I think this exclusion principle has all sorts of effects, but I doubt that it helps us in making the desired progress. Here is why.

A first set of reasons relates to the pragmatic aspects of academic exchange: I believe that the binary distinction between getting it right or wrong is misleading. More often than not, the views offered to us are neither right nor wrong. This is owing to the fact that we have to present views successively, by putting forward a claim and then explaining and arguing for it. What such a process exposes is normally not the truth or falsity of the view, but a need for further elaboration: unpacking concepts and consequences, ruling out undesired implications, clarifying assumptions etc.

Now you might object that calling a view false is designed to prompt exactly that: clarification and exploration. But I doubt that this is the case. After all, much of academic exchange is driven by perceived reputation: more often than not, criticism drives the speaker into defensive moves, if it doesn’t paralyse them. Rather than exploring the criticised view, speakers will be tempted to use strategies to immunise their paper against further criticism. If speakers don’t retract, they might at least reduce the scope of their claims and align themselves with more accepted tenets. This, I believe, blocks further exploration and creates an incentive for damage control and conformism. If you doubt this, just go and tell a student (or colleague) that they got it wrong and see what happens.

Still, you might object, such initial responses can be overcome. It might take time, but eventually the criticised speaker will think again and learn to argue for their view more thoroughly. – I wish I could share this optimism. (And I sometimes do.) But I guess the reason that this won’t happen, or at least not very often, is simply this: What counts in scholarly exchange is the publicly observable moment. Someone criticised by an opponent will see themselves as challenged not only as a representative of a view but as a member of the academic community. Maintaining or restoring our reputation will thus seem vital in contexts in which we consider ourselves judged and questioned: even if we’re not actually being graded, under review or in a job talk, we will still anticipate or imagine such situations. What counts in these moments is not the truth of our accounts, but whether we convince others of the account and, in the process, of our competence. If you go home defeated, your account will be seen as defeated too, no matter whether you simply didn’t muster the courage or concentration to make a more convincing move.

A second set of reasons concerns the conviction that spotting falsehoods is just that: spotting falsehoods. As such, it’s not truth-conducive. Refuting claims does not (or at least not necessarily) lead to any truth. Why? Spotting a falsehood or problem does not automatically make any opposing claim true. Let me give an example: It is fairly common to call the so-called picture theory of meaning, as presented in Wittgenstein’s Tractatus, a failure. The perhaps intuitive plausibility that sentences function as pictures of states of affairs seems quickly refuted once we ask how such pictures can be said to be true or false of a supposed chunk of reality. What do you do? Step out of the picture and compare it with the proper chunk? Haha! – Refuting the picture theory, then, seems to bring us one step closer to an appropriate theory of meaning. But such a dismissal makes us overlook that the picture theory has enormous merits. Once you see it as a theory of representation and stop demanding that it also account for the truth and falsity of representations, you begin to realise that it can work very well when combined with a theory of use or a teleosemantic theory. (See e.g. Ruth Millikan’s recontextualisation.) The upshot is that our dismissals often result from overlooking crucial further assumptions that would reinstate the dismissed account.

Now you might object that an incomplete account is still a bad account. Pointing this out is not per se wrong but will eventually prompt a recontextualisation that works. In this sense, you might say, the criticism becomes part of the recontextualised account. – With this I agree. I also think that such dialogues can prompt more satisfying results. But bearing the pragmatic aspects of academic exchange in mind, I think that such results are more likely if we present our criticism for what it is: not as barking at falsehoods but as attempts to clarify, complete or complement ideas.

Now you might object that the difference between barking at falsehoods and attempting to clarify amounts to just a matter of style. – But why would you think that this is an objection? Style matters. Much more than is commonly acknowledged.

Diversifying scholarship. Or how the paper model kills history

Once upon a time a BA student handed in a proposal for a paper on Hume’s account of substance. The student proposed to show that Hume’s account was wrong, and that Aristotle’s account was superior to Hume’s. If memory serves, I talked the student out of this idea and suggested that he build his paper around an analysis of a brief passage in Hume’s Treatise. – The proposal was problematic for several reasons. But what I want to write about is not the student or his proposal. Rather I want to zoom in on our way of approaching historical texts (in philosophy). The anecdote about the proposal can help to show what the problem is. As I see it, the standard journal article has severe repercussions on the way we teach and practise scholarship in the history of philosophy. It narrows our way of reading texts and counters attempts at diversification of the canon. If we want to overcome these repercussions, it will help to reinstate other forms of writing, especially the form of the commentary.

So what’s wrong with journal articles? Let me begin by saying that there is nothing wrong with articles themselves. The problem is that articles have become the decisive and almost the only form of disseminating scholarship. The typical structure of a paper is governed by two elements: the claim, and arguments for that claim. So a historian typically articulates a claim about a text (or more often about claims in the secondary literature about a text) and provides arguments for embracing that claim. This way we produce a lot of fine scholarship and discussion. But if we make it the leading format, a number of things fall through the cracks.

An immediate consequence is that the historical text has the status of evidence for the claim. So the focus is not on the historical material but on the claim of the historian. If we teach students to write papers of this sort, we teach them to focus on their claims rather than on the material. You can see this in the student’s approach to Hume: the point was to evaluate Hume’s account. Rather than figuring out what was going on in Hume’s text and what it might be responding to, the focus is on making a claim about the supposed doctrine. The latter approach immediately abstracts away from the text and thus from the material of discussion. What’s wrong with that? Of course, such an abstract approach is fine if you’re already immersed in an on-going discussion or perhaps even a tradition of discussions about the text. In that case you’re mainly engaging with the secondary literature. But this abstract approach does not work for beginners. Why? Arguably, the text itself sets constraints that have to be observed if the discussion is to make sense. What are these constraints? I’m not saying they are fixed once and for all. Quite the contrary! But they have to be established in relation to the text. So before you can say anything about substance in Hume, you have to see where and how the term is used and whether it makes sense to evaluate it in relation to Aristotle. (My hunch is that, in Treatise 1.1.6.1-2, Hume rejects the Aristotelian idea of substance altogether; thus saying that Aristotle’s notion is superior is like saying that apples are superior to bananas.) The upshot is: before you can digest the secondary literature, you have to understand how the textual constraints are established that guide the discussions in the secondary literature.

What we might forget, then, if we teach on the basis of secondary literature, is how these constraints were established in the long tradition of textual scholarship. When we open an edition of the Critique of Pure Reason, we see the text through the lens of thick layers of scholarship. When we say that certain passages are “dark”, “difficult” or “important”, we don’t just speak our mind. Rather we echo many generations of diligent scholarship. We might hear that a certain passage is tricky before we even open the book. But rather than having students parrot that Kant writes “difficult prose”, we should teach them to find their way through that prose. That requires engagement with the text: line by line, word by word, translation by mistranslation. Let’s call this mode of reading linear reading as opposed to abstract reading. It is one thing to say what “synthetic apperception” is. It’s quite another thing to figure out how Kant moves from one sentence to the next. The close and often despair-inducing attention to the details of the text is necessary for establishing an interpretation. Of course, it is fine to resort to guidance, but we have to see the often tenuous connection between the text and the interpretation, let alone the claim about a text. In other words, we have to see how abstract reading emerges from linear reading.

My point is not that we shouldn’t read (or teach what’s in the) secondary literature. My point is that secondary literature or abstract reading is based on a linear engagement with the text that is obscured by the paper model. The paper model suggests that you read a bit and then make a fairly abstract claim (about the text or, more often, about an interpretation of the text). But the paper model obscures hundreds of years, or at least decades, of linear reading. What students have to learn (and what perhaps even we, as teachers, need to remind ourselves of) is how one sentence leads to the next. Only then does the abstract reading presented in the secondary literature become visible for what it is: the outcome of a particular linear reading.

But how can we teach linear reading? My suggestion is quite simple: Rather than essay writing, students in the history of philosophy should begin by learning to write commentaries on texts. As I argued earlier, there is a fair number of philosophical genres beyond the paper model. At least part of our education should consist in being confronted with a piece of text (no more than half a page) and learning to comment on that piece, perhaps translating it first, going through it line by line, pointing out claims as well as obscurities and raising questions that point to desirable explanations. This way, students will learn to approach the texts independently. While it might be easy to parrot that “Hegel is difficult to read”, it takes courage to say that a concrete piece of text is difficult to understand. In the latter case, the remark is not a judgment but the starting point of an analysis that might allow for a first tentative explanation (e.g. of why the difficulty arises).

Ultimately, my hope is that this approach, i.e. the linear commentary on concrete pieces of text, will lead (back) to a diversification of scholarship. Of course, it’s nice to read, for instance, the next paper on Hume claiming that he is an idealist or whatever. But it would help if that scholarship were (again) complemented by commentaries on the texts. Nota bene: such scholarship is available even today. But we don’t teach it very much.

Apart from teaching students how to read linearly and closely, such training is a precondition of what is often called the diversification of the canon. If we really want to expand the boundaries of the canon, the paper model will restrain us (too much) in what we find acceptable. Before we even open a page of Kant, our lens is shaped by layers of linear reading. But when we open the books of authors that are still fairly new to us, we have hardly any traditions of reading to fall back on. If we start writing the typical papers before establishing constraints through careful linear reading, we are prone simply to carry over the claims and habits of familiar scholarship. I’m not saying that this is bound to happen, but diligent textual commentaries would provide a firmer grasp of the texts on their own terms. In this sense, diversification of the canon requires diversification of scholarship.

Surprises. (The fancy title would have been: “On the phenomenology of writing”)

It’s all worked out in my head: the paper is there, the literature is reviewed, I’ve found a spot where my claim makes sense without merely repeating what has been said, my argument is fairly clear, the sequence of sections seems fine. But there is a problem: the darn thing still needs to be written! What’s keeping me? The deadline is approaching. Why can’t I just do it? Everyone else can do it! Just like that: sit down, open the file and write down what’s in your head.

I’ve had sufficiently many conversations and looks at “shit academics say” to know that I am not alone in this. But it’s both irritating and surprising that it keeps recurring. So why is it happening? I know there are many possible explanations. There is the shaming one: “You’re just lazy; stop procrastinating!” Then there is the empathetic one: “You’re overworked; take a break!” And there is probably a proper scientific one, too, but I’m too lazy to even google it. Without wanting to counter any of these possible explanations, it might be rewarding to reflect on personal experience. So what’s going on?

If I am really in the state described in the first paragraph (or at least close to it), I have some sort of awareness of what I would be doing were I to write. This experience has two intriguing features:

  • One aspect is that it is enormously fast: the actual thinking, or the internal verbalisations of what I would (want to) write, pass by quickly, much faster than the actual writing would or eventually will. (There are intriguing studies on this phenomenon; see for instance Charles Fernyhough’s The Voices Within.)* This pace of thought is often pleasing. I can rush through a great amount of stuff, often enlarged by association, in very little time. But at the same time this experience might be part of the reason why I don’t get to write. If everything seems (yes, seems!) worked out like that, it feels as if it were done. But why should I do work that is already done? The idea of merely writing down what I have worked out is somewhat boring. Of course, this doesn’t mean that the actual writing is boring. It just means that I take it to be boring. This brings me to the second aspect:
  • The actual writing often comes with surprises. Yes, it’s often worked out in my head, but when I try to phrase the first sentence, I notice two other things. Firstly, I’m less sure about the actual formulations on the page than I was when merely verbalising them internally. Secondly, seeing what I write triggers new associations and perhaps even new insights. It’s as if the actual writing were based on a different logic (or dynamic), pushing me somewhere else. Then it feels like I cannot write what I want to write, at least not before I write down what the actual writing pushes me to write first: Yes, I want to say p, but first I have to explain q. Priorities seem to change. Something like that. These surprises are at once frustrating and motivating or pleasing. They are frustrating, because I realise I hadn’t worked it out properly. I just thought I was done. Now the real work begins. On the other hand, they are motivating because I feel like I’m actually learning or understanding something I had not seen before.

I don’t know whether this experience resonates with you, but I guess I need such surprises. I can’t sit down and merely write what I thought out beforehand. And I don’t know whether that is the case because I want to avoid the boredom of merely ‘writing it up’ or whether ‘merely writing it up’ is actually not doable. Not doable because it wouldn’t convince me or surprise me or perhaps even because it is impossible or whatever. The moral is that you should start writing early. Because if you’re a bit like me, part of the real work only starts during the actual writing. Then you face the real problems that aren’t visible when all the neat sentences are just gliding through your head. – Well, I guess I should get going.

___

* And this post just came out today.

On taking risks. With an afterthought on peer review

Jumping over a puddle is fun both to try and to watch. It’s a small risk to take, but some puddles are too large to cross… There are greater risks, but whatever the stakes, they create excitement. And in the face of possible failure, success feels quite different. If you play a difficult run on the piano, the listeners will share your relief when you manage to land on the right note in time. The same goes for academic research and writing. If you start out with a provocative hypothesis, people will get excited about the way you marshal the evidence. Although at least some grant agencies ask about the risks taken in proposals, risk taking is hardly ever addressed in philosophy or writing guides. Perhaps people think it’s not a serious issue, but I believe it might be one of the crucial elements.

In philosophy, every move worth our time probably involves a risk. Arguing that mistakes or successes depend on their later contextualisation, I already looked at the “fine line between mistake and innovation.” But how do we get onto that fine line? This, I think, involves taking a risk. Taking a risk in philosophy means saying or doing something that will likely be met with objections. That’s probably why criticising interlocutors is so widespread. But there are many ways of taking risks. Sitting in a seminar, it might already feel risky just to raise your voice and ask a question. You feel you might make a fool of yourself and lose the respect of your fellow students or instructor. But if you make the effort, you might also be met with admiration for following through on a point that only seemed trivial. I guess it’s that oscillation between the possibility of failure and success that also moves the listeners or readers. It’s important to note that risk taking has a decidedly emotional dimension. Jumping across the puddle might land you in the puddle. But even if you don’t make it all the way, you’ll have moved more than yourself.

In designing papers or research projects, risk taking is rewarded most of the time, at least with initial attention. You can make an outrageous-sounding claim like “thinking is being” or “panpsychism is true”. You can present a non-canonical interpretation of a historical figure, like “Hume was a racist” or “Descartes was an Aristotelian”. You can edit or write on the work of a non-canonical figure or provide an uncommon translation of a technical term. This list is not exhaustive, and depending on the conventions of your audience all sorts of moves might be risky. Of course, then there is work to be done. You’ve got to make your case. But if you’re set to make a leap, people will often listen more diligently than when you merely promise to summarise the state of the art. In other words, taking a risk will be seen as original. That said, the leap has to be well prepared. It has to work from elements that are familiar to your audience. Otherwise the risk cannot be appreciated for what it is. On the other hand, marshalling the evidence must be presented as feasible. Otherwise you’ll come across as merely ambitious.

Whatever you do, in taking a risk you’ll certainly antagonise some people. Some will be cheering and applauding your courage and originality. Others will shake their heads and call you weird or other endearing things. What to do? It might feel difficult to live with opposition. But if you have two opposed groups, one positive, one negative, you can be sure you’re onto something. Go for it! It’s important to trust your instincts and intuitions. You might make it across the puddle, even if half of your peers don’t believe it. If you fail, you’ve just attempted what everyone else should attempt, too. Unless it’s part of the job to stick to reinventing the wheel.

Now the fact that risks will be met with much opposition but might indicate innovation should give us pause when it comes to peer review. In view of the enormous competition, journals seem to encourage authors to comply with the demands of two reviewers. (Reviewer #2 is a haunting meme by now.) A paper that gets one wholly negative review will often be rejected. But if it’s true that risks, while indicative of originality, will incur strong opposition, should we not think that a paper is particularly promising when met with two opposing reviews? Compliance with every possible reviewer seems to encourage risk aversion. Conversely, looking out for opposing reviews would probably change a number of things in our current practice. I guess managing such a process wouldn’t be any easier. So it’s not surprising that things won’t change anytime soon. But such change, if considered desirable, is probably best incentivised bottom-up. And that would mean beginning with teaching.

The fact, then, that a claim or move provokes opposition or even refutation should not be seen as a negative trait. Rather it indicates that something is at stake. It is important, I believe, to convey this message, especially to beginners who should learn to enjoy taking risks and listening to others doing it.

Philosophical genres. A response to Peter Adamson

Would you say that the novel is a more proper literary genre than poetry? Or would you say that the pop song is less of a musical genre than the sonata? To me these questions make no sense. Both poems and novels form literary genres; both pop songs and sonatas form musical genres. And while you might have a personal preference for one over the other, I can’t see a justification for privileging one over the other in principle. The same is of course true of philosophical genres: A commentary on a philosophical text is no less of a philosophical genre than the typical essay or paper.* Wait! What?

Looking at current trends that show up in publication lists, hiring practices, student assignments etc., articles (preferably in peer-reviewed journals) are the leading genre. While books still count as important contributions in various fields, my feeling is that the paper culture is beginning to dominate everything else. But what about commentaries on texts, annotated editions and translations, or reviews? Although people in the profession still recognise that these genres involve work and (increasingly rare) expertise, they usually don’t count as important contributions, even in the history of philosophy. I think this trend is highly problematic for various reasons. But most of all it really impoverishes the philosophical landscape. Not only will it lead to a monoculture in publishing; our teaching of philosophy also increasingly focuses on paper production. But what does this trend mean? Why don’t we hold other genres in at least equally high esteem?

What seemingly unites commentaries on texts, annotated editions and translations, and reviews is that they focus on presenting the ideas of others. Thus, my hunch is that we think more highly of people presenting their own ideas than of those presenting the ideas of others. In a recent blog post, Peter Adamson notes the following:

“Nowadays we respect the original, innovative thinker more than the careful interpreter. That is rather an anomaly, though. […]

[I]t was understood that commenting is itself a creative activity, which might involve giving improved arguments for a school’s positions, or subtle, previously overlooked readings of the text being commented upon.”

Looking at ancient, medieval and even early modern traditions, the obsession with what counts as originality is an anomaly indeed. I say “obsession” because this trend is quite harmful. Not only does it impoverish our philosophical knowledge and skills, it also destroys a necessary division of labour. Why on earth should every one of us toss out “original claims” by the minute? Why not think hard about what other people wrote for a change? Why not train your philosophical chops by doing a translation? Of course, the idea that originality consists in expressing one’s own ideas is fallacious anyway, since thinking is dialogical. If we stop trying to understand and uncover other texts, outside of our paper culture, our thinking will become more and more self-referential and turn into a freely spinning wheel… I’m exaggerating of course, but perhaps only a bit. We don’t even need the medieval commentary traditions to remind ourselves. Just remember that it was, amongst other things, Chomsky’s review of Skinner that changed the field of linguistics. Today, writing reviews or working on editions and translations doesn’t get you a grant, let alone a job. While we desperately need new editions, translations and materials for research and teaching, these works are treated more as a pastime or retirement hobby.**

Of course, many if not most of us know that this monoculture is problematic. I just don’t know how we got here so quickly. When I began to study, work on editions and translations still seemed to flourish, at least in Germany. But it quickly died out, history of philosophy was abandoned or ‘integrated’ into positions in theoretical or practical philosophy, and many of the people who then worked very hard on the texts now available in shiny editions are without a job.

If we go on like this, we’ll soon find that no one will be able to read or work on past texts. We should then teach our students that real philosophy didn’t begin to evolve before 1970 anyway. Until it gets that bad, I would plead for reintroducing a sensible division of labour, both in research and teaching. When you plan your assignments next time, don’t just offer your students the option of writing an essay. Why not have them choose between an annotated translation, a careful commentary on a difficult passage or a review? Oh, of course, they may write an essay, too. But it’s just one of many philosophical genres, many more than I listed here.

____

* In view of the teaching practice that follows from the focus on essay writing, I’d adjust the opening analogy as follows: Imagine the music performed by a jazz combo consisting solely of soloists and no rhythm section. And imagine that all music instruction were from now on geared towards soloing only… (Of course, this analogy would capture the skills rather than the genre.)

** See Eric Schliesser’s intriguing reply to this idea.

Against allusions

What is the worst feature of my writing? I can’t say what it is these days; you tell me, please! But looking back at what I worked hardest to overcome in my writing, I’d say it’s using allusions. I would write things such as “in the wake of the debate on semantic externalism” or “given the disputes over divine omnipotence bla bla” without explaining what precise debate I actually meant or what kind of semantic externalism or notions of the divine I had in mind. This way, I would refer to a context without explicating it. I guess such allusions were supposed to do two things: on the one hand, I used them to abbreviate the reference to a certain context or theory; on the other hand, I was hoping to display my knowledge of that context. To peers, this was meant to signal awareness of the appropriate references without actually getting too involved and, most importantly, without messing up. If you don’t explicate or explain, you can’t mess things up all that much. In short, I used allusions to make the right moves. So what’s wrong with making the right moves?

Let me begin by saying something general about allusions. Allusions, also known as “hand waving”, are meant to refer to something without explicitly stating it. Thus, they are good for remaining vague or ambiguous and can serve various ends in ordinary conversation or literature. Most importantly, their successful use presupposes sufficient knowledge on the part of the listener or reader, who has to have the means to disambiguate a word or phrase. Funnily enough, such presuppositions are often accompanied by phrases insinuating the contrary. Typical phrases are: “as we all know”, “as is well known”, “famously”, “obviously”, “clearly”, “it goes without saying” etc.

Such presuppositions flourish and work well among friends. Here, they form a code that often doesn’t require any of the listed phrases or other markers. They rather work like friendly nods or winks. But while they might be entertaining among friends, they often exclude other listeners in scholarly contexts. Now you might hasten to think that those excluded simply don’t ‘get it’, because they lack the required knowledge. But that’s not true. Disambiguation requires knowledge, yes, but it also and crucially requires confidence (since you always might make a fool of yourself after all) and an interest in the matter. To someone unsure whether they’re really interested, allusions used among scholars often closely resemble the tone of a couple of old blokes dominating a dinner party with old insider jokes. Who wants to sound like that in writing?

Apart from sounding like a bad party guest, there is a deeper problem with allusions in scholarly contexts. They rely on the status quo of canonical knowledge. Since the presuppositions remain unspoken, the listener has to go by what he or she takes to be a commonly acceptable disambiguation. Of course, we have to take some things as given and we cannot explicate everything, but when it comes to important steps in our arguments or evidence, reliance on allusions is an appeal to the authority of the status quo rather than a signal of scholarly virtue.

I began to notice this particularly in essays by students who were writing mainly for their professors. Assuming that professors know (almost) everything, nothing seems to need unpacking. But since almost all concepts in philosophy are essentially contested, such allusions often don’t work. As long as I don’t know which precise version of an idea I’m supposed to assume, I might be just as lost as if I didn’t know the first thing about it. Thus the common advice to write for beginners or fellow students. Explain and unpack at least all the things you’re committed to arguing for or using as evidence for a claim. Otherwise at least I often won’t get what’s going on.

The problem with that advice is that it remains unclear how much explanation is actually appropriate. Of course, we can’t do without presuppositions. And we cannot and should not write only for beginners. If allusions are a vice, endless explanations might fare no better. Aiming to avoid every possible misunderstanding can result in equally dull or unintelligible prose. So I guess we have to unpack some things and merely allude to others. But which ones do we explain in detail? It’s important to see that every paper or book has (or should have) a focus: this is the claim you ultimately want to argue for. At the same time, there will be many assumptions that you shouldn’t commit yourself to showing. I attempt to explain only those things that are part of the focus. That said, it sometimes really is tricky to figure out what that focus actually is. Unpacking allusions might help with finding it, though.

Kill your darlings! But how?

Why can’t you finish that paper? What’s keeping you? – There is something you still have to do. But where can you squeeze it in? Thinking about salient issues I want to address, I often begin to take the paper apart again, at least in my mind. – “Kill your darlings” is often offered as advice for writers in such situations. When writing or planning a paper, book or project, you might be prone to stick to tropes, phrases or even topics and issues that you had better abandon. While you might love them dearly, the paper would be better off without them. So you might have your paper ready, but hesitate to send it off, because it still doesn’t address that very important issue. But does your paper really need to address it? – While I can’t give you a list of items to watch out for, I think it might help to approach this issue by looking at how it arises.

How do you pick your next topic for a project or paper? Advanced graduate students and researchers are often already immersed in their topics. At this level we often don’t realise how we get into these corners. Thus, I’d like to look at the situations in which I find BA students when they think about paper or thesis topics. What I normally do is ask the student for their ideas. What I try to assess, then, are two things: does the idea work for a paper, and is the student in a position to pursue it? In the following, I’ll focus on the ideas, but let’s briefly look at the second issue. Sometimes ideas are very intriguing but rather ambitious. In such cases, one might be inclined to discourage students from going through with them. But some people can make it work and shouldn’t be discouraged. You’ll notice that they have at least an inkling of a good structure, i.e. a path that leads palpably from a problem to a sufficiently narrow claim. However, more often people will say something like this: “I don’t yet know how to structure the argument, but I really love the topic.” At this point, the alarm bells should start ringing and you should look very carefully at the proposed idea. What’s wrong with darlings then?

(1) Nothing: A first problem is that nothing might seem wrong with them. Liking or being interested in a topic isn’t wrong. And it would be weird to say that someone should stop pursuing something because they like it. Liking something is in fact a good starting point. You’ve probably ended up studying philosophy because you liked something about it. (And as Sara Uckelman pointed out, thinking about your interests outside philosophy and then asking how they relate to philosophy might provide a good way of finding a dissertation topic.) At the same time, your liking something doesn’t necessarily track good paper topics. It’s a way into a field, but once you’re there, things other than your liking might decide whether something works. Compare: I really love the sound of saxophones; I listen to them a lot. Perhaps I should learn to play the saxophone. My love of the sound might get me somewhere. But should I start playing it live on stage now? Well …

(2) Missing tensions. What you like or love is likely to draw you in. That’s good. But it might draw you in in an explorative fashion. So you might think: “Oh, that’s interesting. I want to know all about it.” But that doesn’t give you something to work on. An explorative mood doesn’t get you a paper; you need to want to argue. Projects in philosophy and its history focus on tensions. If you want to write a paper, you’ve got to find something problematic that creates an urgent need for explanation, like an apparent contradiction or a text that does not seem to add up. Your love or interest in a topic doesn’t track tensions. If you want to find a workable idea, find a tension.

(3) Artificial tensions. Philosophy is full of tensions. When people want to “do what they love”, they often look for a tension in their field. Of course, there will be a lot of tensions discussed in the literature. But since people often believe they should be original, they will create a tension rather than pick up one already under discussion. This is where the problems really kick in. You might for instance begin a thesis supervision and be greeted with a tentative “I’m interested in love and I always liked speech act theory. I would like to write about them.” I have to admit that it’s this kind of suggestion I hear most often. So what’s happening here? – What we’re looking at is not a tension but a (difficult) task. The task arises from combining two areas, which creates the problem of applying the tools of one field to the issues of another. Don’t get me wrong: of course you can write intriguing stuff by applying speech act theory to the issue of love. But this usually requires some experience in both areas. Students often come up with such a combination because they like both topics or had some good exposure to them. There might also be a vague idea of how to actually combine the issues, but there is no genuine tension. All there is is a difficult task, created ad hoc out of the need to come up with a tension.

Summing up, focusing on your interests alone doesn’t really guide you towards good topics to work on. What do I take home from these considerations? Dealing with darlings is a tricky business. Looking at my own work, I know that a strong interest in linguistics and a deep curiosity about the unity of sentences got me into my MA and PhD topics. But while these interests got me in, I had to let go of them when pursuing my actual work. So they shaped my approach, but they did not dictate the arguments. Motivationally, I could not have done without them. But in the place they eventually took me, clinging to them would have led me astray.

Anyway, the moral is: let them draw you in, but then let go of them. Why is that worth adhering to? Because your darlings are about you, but your work should not be about yourself, at least not primarily. The tensions that you encounter will come out of existing discussions or texts, not out of tasks you create for yourself. How do you distinguish between the two? I’d advise looking for the actual point of contact that links all the issues that figure in your idea. This will most likely be a concrete piece of text or phrase or claim – the text that is central to your argument. Now ask yourself whether that piece of text really requires an answer to the question you can’t let go of. Conversely, if you have an idea but can’t find a concrete piece of text to hang it on, let go of the idea or keep it for another day.