How do you turn a half-baked idea into a paper?

Chatting about yesterday’s post on reducing one’s ideas to one single claim, I was asked what to do in the opposite scenario. “It’s quite a luxury to have too many ideas. Normally, I have just about half an idea and an imminent deadline.” Welcome to my world! Although the problems of too many ideas and too little of an idea are closely related, I think the latter deserves a separate treatment.

Before trying to give some more practical advice, I think it’s important to see what it actually means to have a half-baked idea. So what is a half-baked idea or half an idea? What is it that is actually missing when we speak of such an idea? – The first thing that comes to mind is confidence. You might secretly like what you think but lack the confidence to go for it. What can you do about that? I think that the advice to work on one’s confidence is cold comfort. Contrary to common opinion, more often than not a lack of confidence is not about you but about a lack of legitimacy or authority. If you were an old don, you probably wouldn’t worry too much about whether people think your idea a bit underdeveloped. “Hey, it’s work in progress!” But if you are going to be marked or are on the market, then presenting mere work in progress is a privilege you don’t necessarily enjoy.

Now if you lack certain privileges, you can’t do much about that yourself. Luckily, this is not the end of the story. I think that what we call a “half-baked idea” is one that lacks visible agreement with other ideas. In keeping with the three agreement constraints I mentioned earlier, your idea might lack (1) agreement with the ideas of others (authorities, secondary literature etc.), (2) agreement with the facts or – in this case – with (textual) material, or (3) agreement with your own other ideas. If you can’t see where your agreement or disagreement lies, this might affect your confidence quite drastically, because you don’t know where you actually are in the philosophical conversation. In view of these agreement relations, I’d take two steps to amend this. The first thing I would advise is to figure out how your idea agrees on these different levels. So how does it relate to the literature, how does it relate to your material or the facts under discussion, and how does it relate to your common or former intuitions? If you make these relations clearer, your idea will certainly become a bit clearer, too. (We often do this by rushing through the secondary literature, trying to see whether what we say is off the mark. But it’s important to see that this is just one step.) In a second and perhaps more crucial step, I would look for disagreements. Locating a disagreement within the literature will help you to work on the so-called USP, the “unique selling point” of your paper. If your idea doesn’t fit the material, it might be good to re-read the texts and see what makes you think about them in such a disparate way. If you disagree with your (former) intuitions, you might be onto something really intriguing, too. In any case, it’s crucial to locate your disagreements as clearly as possible, because it is those disagreements that might add precision to your idea.*

Another way in which ideas can be half-baked is if they are too broad. Yes, it might be right that, say, Ockham is a nominalist, but if that’s your main claim, no one will want to read on. (History of) Philosophy is a conversation, and you won’t feel like you’re contributing anything if you come up with too broad a claim. But how can you narrow down your claim to a size that makes it interesting and gives structure to your paper? I think this is one of the hardest tasks ever, but here is what I think might be a start. Write an introduction or abstract, using the following headers:

  1. Topic: If your claim is too broad, then you’re probably talking about a topic rather than your actual claim. If you can’t narrow it down, begin by writing about the topic (say, Ockham’s nominalism), bearing in mind that you will narrow it down later.
  2. Problem: If everything is fine, you won’t have anything to write about. But if your questions are too broad, they are probably still referring to a common problem discussed in the literature. It’s fine to write about this in order to say what the common problem is, say, with Ockham’s nominalism.
  3. Hypothesis: Only in the light of a common problem can you formulate a solution. If you find that your solution is in total agreement with the literature, then it might be better to go back and see where your solution disagrees. (Don’t be discouraged by that. Even if you agree with a common claim, you might have different paths to the same goal or think of different material.) Anyway, in keeping with the “one idea per paper” rule, now is the time to say what you think about one single aspect of Ockham’s nominalism! That’s your hypothesis.
  4. Question: If you have such a claim, you’re nearly there. Now you have to think again: Which question has to be answered in order to show that your hypothesis is correct? Is there a special feature of Ockham’s nominalism that has to be shown as being present in his texts? Or is there a common misunderstanding in the literature that has to be amended? Or is there a thesis that needs some refinement? Spelling out that question as precisely as possible gives you a research question or a set of them. Answering that set of questions will support your claim.

Going through these steps, you can draw on your insights regarding the disagreements mentioned earlier. But even then you might still have the impression that your thesis is too broad to be interesting or too broad to be pursued in a single paper. What then? I’d say, take what you call the hypothesis and make it your topic, and take what you call the question and make it your problem. Then try to narrow down again until you reach a workable size. If you have that, you have written a kind of introduction. That doesn’t yet give you a complete structure. But once you break down the research question into manageable parts, you might get the structure of your paper out of that, too.

____

* It’s important to note that the task of locating agreement and disagreement requires an explicit point of contact on which the (dis)agreement can be plotted. So you should make sure to find a concrete sentence or passage about which you (dis)agree. You’ll find more on points of contact here.

One idea per paper!

The new academic year is approaching rapidly and I’m thinking about student essays again. In Groningen, we now devote a certain amount of course hours to the actual writing of term papers. This has made me think not only about the kind of advice I want to give, but also about the kind of advice that actually works, in the sense that it can be implemented demonstrably. Given that I’m better at giving than following advice myself, that is quite a difficult question for me. One of the best pieces of advice I ever received came rather late, during my postdoc years in Berlin. I was discussing my worries about a paper with a good friend of mine. It was a paper on Ockham’s theory of mental language, and most of these worries concerned what I could possibly leave out. So much needed to be said – and he just stopped my flow by exclaiming: “one idea per paper!”

At first I thought that he was just trying to mock me. But thinking about my actual worries, I soon began to realise that this advice was pure gold. It settled quite a number of questions. Unfortunately, it also raised new obstacles. Nevertheless, I now think it’s good advice even for monographs and will try to go through some issues that it settled for me.

(1) What do I actually want to claim? – When writing the paper in question, I wanted to say a number of different things. I was proud that I had discovered a number of intriguing passages in Ockham that had not yet been taken seriously in the secondary literature. Reading these passages, I had a pile of ideas that I thought were new or deserved more attention, but I couldn’t quite put them into a proper sequence, let alone an argument. My new rule made me ask: what is it that I actually think is new? I initially came up with two and a half points, but soon realised that these points had different priorities. The one and a half had to be shown in order to make the crucial point work. So the question I had asked myself had imposed an argumentative order onto my points. Now I was not just presenting bits of information, however new, but an argument for a single claim. (For the curious: this was the claim that Ockham’s mental language is conventional.)

(2) How much contextual information is required? – Once I had an argumentative order, a sequence of presenting the material suggested itself. But now that I had one single point at the centre of attention, another problem settled itself. Talking about any somewhat technical topic in a historically remote period requires invoking a lot of information. Even if you just want to explain what’s going on in some passages of a widely read text, you need to say at least a bit about the origin of the issue and the discussion it’s placed in. If you have more than one idea under discussion, this requires you to bring up multiple contexts. But if you’re confining yourself to one single claim, this narrows the demand considerably. As a rule of thumb I’d say: don’t bring up more than is required to make your one single claim intelligible.

(3) What do I have to argue for? – However, often the contextual information that makes a claim intelligible is in itself not well explored and might need further argument to establish why it works as support for your claim. This could of course get you into an infinity of further demands. How do you interrupt the chain sensibly? Often this issue is settled by the fact that scholars (or your supervisor) simply take certain things for granted: the conventions of your discipline settle some of these issues, then. But I don’t think that this makes for a helpful strategy. My rule is: you should only commit yourself to arguing for the one single claim at hand. – “But”, you will ask, “what about the intermediate claims that my argument depends on?” I’d say that you don’t have to argue for those. All you have to do is say that your argument is conditional on these further claims (and then name the claims in question). Rather than making the argument yourself, you can tackle these conditions by pointing out what you have to take for granted, what others have taken for granted (in the secondary literature), or what would have to be shown in order to take up these conditions individually. (To give a simple example, if you bring up textual evidence from Crathorn to address the consequences of Ockham’s theory, you don’t have to begin discussing Crathorn’s theory on its own terms. Why not? Well, because you are supporting a claim about Ockham rather than Crathorn.) Of course, someone might question the plausibility of your supporting evidence, but then you have a different claim under discussion. In sum, it’s crucial to distinguish between the claim you’re committed to arguing for and supporting evidence or information. For the latter you can shift the burden by indicating a possible route for tackling difficulties.

So the rule “one idea per paper” imposes structure in several ways: it provides an argumentative hierarchy, allows for restricting contextual information, and draws a distinction between tenets you’re committed to and tenets whose explication you can delegate to others (in the literature). Your paper may still contain many bits and pieces, but they are all geared towards supporting one single idea. If you’re revising your first draft, always ask yourself: how does that paragraph contribute to arguing for that claim? If you can say how, state this explicitly at the beginning of the paragraph. If you can’t say how, delete the paragraph and save it for a later day.

Voices inside my head. On constructive criticism

Most of the time when writing or just thinking, I hear voices uttering what I (want to) write. Often this is a version of my own voice, but there are also numerous other voices. These are voices of friends, colleagues and students. Sometimes I hear them because I remember what they said during a discussion. But more often I imagine them saying things in the way they would phrase an objection or a refinement of what I wanted to say. Yet, although it is me who imagines them, it’s their convictions and style that determine the content and phrasing. If I succeed in my attempts to write in a dialogical fashion, it is the achievement of others in leaving their traces in my memory and eventually in my texts. It is this kind of experience that makes writing fun. But what I want to claim now is that this is also a good way of integrating criticism. This way the strengths of others can become strengths in your own writing.

Why is this important? Philosophy is often taken to thrive on criticism. Some would even claim that it lies at the heart of intellectual exchange. Only if we take into account the critique of others can we expect to have considered an issue sufficiently. I agree. Assuming that reason is social, philosophers need to expose themselves to others and see what they have to say. However, it’s not clear that the social nature of reason requires criticism as the primary mode of exchange. There are various styles of thinking; understanding and engaging with ideas can happen in many different ways.

Some people will ask to be “destroyed” by their interlocutors, while others might think that any question might amount to an impertinent transgression. Might there be a middle ground between these extremes? What is telling is that the metaphors around philosophical argumentation mostly intimate opposition or even war. (See for instance the intriguing discussion in and of Catarina Dutilh Novaes’ great piece on metaphors for argumentation.) In view of this practice, I think it’s crucial to remember that spotting mistakes does not turn anything into a good idea. The fact that you know how to find flaws does not mean that you’re able to improve an idea. (See Maarten Steenhagen’s excellent clip on this point and make sure to turn up the sound.) In any case, it’s not surprising that there is an on-going debate and a bit of a clash of intuitions between philosophers who like an adversarial style of conversation and those who dislike it. Some think criticism fosters progress, while others think criticism blocks progress.

How can we move on? I think it’s crucial to consider the precise nature of the criticism in question. The point is not whether people are nice to one another; the point is whether criticism is genuine. But what is genuine criticism? I think genuine criticism takes a paper or talk on its own terms. Now what does that mean? Here, it helps to rely on a distinction between internal and external criticism. Internal criticism takes the premises of a contribution seriously and focuses on issues within an argument or view. A good example of a whole genre of internal criticism is the medieval commentary tradition. A commentary exposes the premises and aims at clarification and refinement without undermining the proposed idea. By contrast, external criticism often starts from the assumption that the whole way of framing an issue is mistaken. A good example of such external criticism is the debate between hylomorphists and mechanists in early modern philosophy.*

I think that only internal criticism is genuine. That doesn’t mean that external criticism is useless, but it is not an engagement with the opposing position; at least not in such a way that it attempts to leave the main claim intact. It is the view that the opponent’s position is not acceptable.** I think it is important to see that these kinds of criticism are completely different moves in the game or even wholly different games. Internal criticism happens on common ground; external criticism is the denial of common ground. Both forms are legitimate. But I think that a lot of discussions would be better if those involved were clear about whether their criticism is internal or external. Ideally, both kinds of criticism are presented along with an indication of what an answer to the challenge would actually look like.

How can we apply this to our writing? I think it is vital to include both kinds of criticism. But it helps me to keep the distinction in mind. If someone tells me that my argument for taking Locke as a social externalist about semantics might need more textual support or a more refined exposition of how the evidence supports my claim, I will see their point as supporting my idea. (Of course, if I have no such evidence, this criticism would be fatal, however genuine.) If someone tells me that Locke’s semantics isn’t worth studying in the first place, their point is clearly external. That doesn’t mean that I don’t need to reply to the challenge of external criticism. But the point is that the latter targets my endeavour in a different way. External criticism questions the very point of doing what I do and cannot be addressed by amending this or that argument. Responding to internal criticism happens within a shared set of ideas. Responding to a clash of intuitions means deciding for or against a whole way of framing an issue. Only internal criticism is constructive, but we need to respond to external criticism in order to see why it is constructive. So when you work on your argument, don’t bother with external criticism. If you write the introduction or conclusion, by contrast, reach out to those who question your project entirely.

How then should we deal with such criticisms in practice? It’s sometimes difficult to deal with either. This is why I ultimately like the approach of internalising all kinds of criticism into the choir of voices inside my head. Once they are in my head, it feels like I can control them. They become part of my thinking and my chops, as it were. I can turn up the volume of any voice, and in writing it’s me who is in charge of the volume.*** Thus, I’d suggest we should appropriate both kinds of criticism. It’s just crucial to recognise them for what they are. Appropriating all the voices gives you some control over each of them.

To be sure, at the end of the day it’s important to see that we’re all in this together. We’re doing philosophy. And even if people don’t agree with our projects, they endorse that larger project called philosophy. So even in external criticism there must be some sort of common ground. Most of the time, I can’t see what the precise features of this common ground are, but being lost in that way makes me feel at home.

______

* Of course, there are tipping points at which internal can turn into external criticism and vice versa.

** This doesn’t mean that external criticism is hostile or not genuine in other ways. One can criticise externally out of genuine concern, assuming perhaps that an idea requires a different kind of framework or that work on a seriously flawed position might prove a waste of time for the addressee.

*** Reaching a state of control or even balance is not easy, though. It is often the most critical voices that are loudest. In such cases, it might be best to follow Hume’s advice and seek good company.

Getting started: exploiting embarrassment

You probably all know this moment at the beginning of almost every course. “Any questions?” – Silence. No one, it seems, has a question. The same thing might happen after a conference talk. – Silence. The silence after a talk, in a reading group or seminar is quite embarrassing. It is particularly embarrassing because it happens out of embarrassment. You know it: it’s not because there are no questions; it’s because no one wants to make a fool of themselves. – What I would like to suggest today is that there is a simple technique of exploiting this embarrassment in order to get started both with a discussion and with writing.

How then can we exploit this embarrassment? By making it worse of course! We are embarrassed when we want to look smart and are unsure how to achieve that. Seeing other people stuck like that who see you stuck like that doesn’t help either. When we want to speak up or start writing, we probably focus too much on what we know (and then we think that everyone already knows that and that we’d look foolish by saying something trivial). My suggestion is: don’t focus on what you know; focus on what you don’t understand. That might seem worse, but that’s the point. To be sure, you should not just use the donnish phrase “I don’t understand”, while implying that something is just stupid. What I mean is: focus on something you genuinely don’t understand; that way you’ll raise a genuine question. And everyone will be grateful to you for breaking the ice in a genuine way.

How then do you find something that you genuinely don’t understand? – You might be surprised to learn that this will require some practice. That is because it is often difficult to pin down what precisely it is that you don’t understand. Anyway, in philosophy we can be sure of one thing: nothing is ever (sufficiently) justified by itself. Starting from this premise, you can develop a question in two steps. Step one: you need to locate a phrase or passage that you find doubtful. (You don’t find one? Well, then ask yourself why everything is so incredibly clear. Are you omniscient?) Step two: ask yourself why you don’t understand that passage. Yes, you will deepen your embarrassment now, but only for a second. Because in asking that question, you will start looking for reasons for your lack of understanding. And reasons are a good thing in philosophy. The other upside of this technique is that you will begin phrasing a question with your own voice. Why? Well, because that question zooms in on the relation between the passage and yourself. But there is no need to fear exposure, for this “you” is not the personal “you”; it is the presuppositions, biases, and convictions of the epistemic culture you are part of.

Where to begin then? Quote the passage and highlight the move or term that you find doubtful. Then begin to spell out the presupposition that makes it seem doubtful to you. “This passage seems to presuppose that p. But I would presuppose that q. So why does it seem apt for the author to presuppose that p?” – Now you have a genuine question. And if you don’t know the answer, you can move on by exploring reasons for your own presupposition: “Why does it seem natural (to me) to presuppose that q?” And then you can ask what reasons there might be for giving up your presupposition.

If this is a helpful strategy, then why don’t we do this more often? I suppose that we assume something like the following: I don’t know why the author presupposes p, but the rest of my peers certainly know why! – Well, even if they do know the answer, it is still necessary to make the reasons for accepting the presupposition explicit. Because in philosophy nothing is ever justified just by itself.

Who are we? Myths in the history of philosophy (Part I)

“Instead of assuming that the historical figures we study are motivated by the same philosophical worries that worry us, we need to understand why they care about each issue they raise.” This is part of rule 7 from Peter Adamson’s Rules for the History of Philosophy, and it is very good advice indeed. When reading texts, we should try to be aware of certain traps of anachronism. (See the intriguing debate between Eric Schliesser and Peter Adamson.) People don’t always care about the same things, and if they do, they might do so for different reasons.

While I don’t believe that we can avoid anachronism, I think it is important to be aware of it. We can’t sidestep our interests, but it helps to make these interests explicit. What I would like to focus on today are some of the personal pronouns in the quoted passage: “us” and “we”. Saying that there are “worries that worry us” places the diversity in the past but seems to presuppose a fair amount of unity amongst us. But who are we? I picked rule 7, but it is safe to say that most historians of philosophy give in to this inclination of presupposing a “we”. I do find that funny. Historians (of philosophy) often like to mock people who indulge in generalised ideas about past periods such as the Middle Ages. “You wouldn’t believe”, they will say, “how diverse they were. The idea that all their philosophy is in fact about God is quite mistaken.” But then they turn around, saying that the medievals were quite different from us, where “us” indexes some unified idea of a current philosophical state of the art. What I find funny, then, is that historians will chide you for claiming something about the past that they are happy to claim about the present. Last time I checked, there was no “current philosophical debate”. At the same time, I should admit that I help myself a lot to these pronouns and generalisations. So if I sound like I’m ridiculing that practice, I should be taken as ridiculing myself most of all.

My point is simple. It’s not enough to be aware of diachronic anachronism; we also need to be aware of what I’d like to call synchronic anachronism. Why? Well, for one thing, claims about the “current debate” are supposed to track relevance in some domain. If something is debated currently or by us, it might signal that we have reason to study its history. Wanting to avoid anachronism, historians often use an inversion of this relevance tracker: facts about historical debates might be interesting because they are not relevant today; in this sense, they can teach us how the past is intriguingly and instructively different.

The second reason for highlighting synchronic anachronism is that it obscures the heterogeneity of current debates and the fact that we are anachronistic beings. Looking closely, we will find that we are a jumble of things that render us anachronistic: we are part of different generations, have different educational pasts and cling to various fashions; we might be nostalgic or prophetic, we live in different social situations and adhere to different authorities and canons. And sometimes we even have to defer to “the taxpayer” for relevance. So the idea that rationality requires agreement with others (the third “agreement constraint”, mentioned in one of my previous posts) should be seen in connection with the idea that such agreement might involve quite different and even opposing groups. The idealised present tense in which we talk about “us” and “our current interests” is merely a handy illusion. Acknowledging and respecting synchronic anachronism might seem tedious, but at least historians of philosophy should see the fun in it.

What are you good at?

Many philosophy papers have a similar structure. That is quite helpful, since you know your way around quickly. It’s like walking through a pedestrian zone: even if you are in a completely unfamiliar town, you immediately know where to find the kinds of shops you’re looking for. But apart from the macro-structure, this is often also true of the micro-structure: the way the sentences are phrased, the vocabulary is employed, the rhythm of the paragraphs. “I shall argue” in every introduction.

I don’t mean this as a criticism. I attempt to write like that myself, and I also try to teach it. Our writing is formulaic, and that’s fine. But what I would like to suggest is that we try to teach and use more ingredients within that framework. I vividly remember those moments when a fellow student or colleague got up and, instead of stringing yet another eloquent sentence together, drew something on the blackboard or attempted to impose order by presenting some crucial concepts in a table. For some strange reason, these ways of presenting a thought or some material rarely find their way into our papers. Why not?

I think of these and other means as styles of thinking. Visualising thoughts, for instance, is something that I’m not very good at myself. But that’s precisely why I learn so much when others do it. And even if I can’t draw, I can attempt to describe the visualisations. Describing a visualisation (or a sound or taste) is quite different from stringing arguments together. You might extend this point to all the other arts: literature, music, what have you!

Trying to think of language as one sense-modality amongst others might help to think differently about certain questions. Visit your phenomenologist! On the one hand, you can use such styles as aids in a toolkit that will not replace but enrich your ways of producing evidence or conveying an idea. On the other hand, they might actually enrich the understanding of an issue itself. In any case, such styles should be encouraged and find their way into our papers and books more prominently.

As I said, I’m not good at visualising, but it helps me enormously if someone else does it. Assuming that we all have somewhat different talents, I often ask students: “What are you good at?” Whatever the answer is, there is always something that will lend itself to promoting a certain style of thinking, ready to be exploited in the next paper to be written.

Originality: What is a reformulation? (Part II)

In my last post, I claimed that originality amounts to nothing but the reformulation of theses or arguments. Although that might sound dismissive, I’m afraid I have to say quite a bit more about the topic of originality. So more posts will follow in due course. It worries me that such a central concept is still much in the grip of an unfounded genius cult. Being as unclear as the notion of clarity itself, it creates anxieties in students and gives undue power to examiners and reviewers. On the other hand, I would like to stress that I think very highly of reformulations and thus of what I call originality. In what follows, I’d like to say a bit more about reformulations.

Let me start with a clarification. I’m talking about originality in philosophy. Once you move outside that narrow field, there are more ways of being original. Historians of philosophy, for example, can already be original by starting to work on a new text or a forgotten author, or by invoking new technology such as distant reading. Moreover, recombinations of technologies and traditional approaches in the humanities can bring about a lot of new insights. But there are limits. Once we return to the business of asking questions and giving reasons, we are back to our linguistic basis. – Let’s now move on to reformulations.

Before any reformulation can count as original, it has to count as rational, at least in the sense that it is accepted by our interlocutors. To count as rational, any formulation has to meet three agreement constraints. One’s claim has to agree

(1) with facts (i.e. non-textual phenomena)

(2) with oneself (i.e. with one’s own other beliefs etc.)

(3) with others (fellow academics, canons, authorities)

Constraint (3) is crucial. I might take myself to be in agreement with the facts or with myself as much as I want, but being rational is a matter of being in agreement with a community. This is why originality can’t completely transcend the community. Being original is not something you can ascribe to yourself; it’s the community that attributes that status to you.

Within these constraints, we might encounter various kinds of reformulations. Starting from a repetition (in a different context), a reformulation might be a variation, an opposition (in the sense that saying “not-p” requires saying “p”) or a recontextualisation. In this sense you might say that Descartes’ cogito is a variation on Augustine’s cogito, or that Walter Chatton’s anti-razor is an original opposition to William of Ockham’s razor. What makes these items original? I’d say it’s the fact that these theses have been given a decided new twist or turn. Their originality can be seen, as it were, because the initial thesis is still identifiable. They changed the topic or direction of the conversation while remaining in agreement with a community.

Personally, I think the most interesting cases of originality occur when a claim is reformulated such that it is received by different communities. The point is that constraint (3) might work for more than one community. I can think of quite a number of cases where this happened. John Locke combined bits of an Aristotelian theory of language with Pufendorf’s political theory. This way, his theory of language became relevant for different philosophical topics and communities. Another example is Robert Brandom’s reformulation of (a Habermasian) Kant and Hegel that migrated into new communities, even in Germany. The most recent (and for me rather impressive) example is David Livingstone Smith’s reformulation of Ruth Millikan’s teleosemantics within the context of a theory of ideology. (By contrast, I find that attempts to shun another community are often rather uninspiring: hello, continental-analytic divide…)

So, yes, I’m not trying to be dismissive when construing originality as a kind of reformulation. Quite the contrary! But I find it helpful to consider the social constraints that govern the notion of rationality and originality, not least to explore the possibilities of transcending or merging communities.

___

On a personal note, given the time of the year, I’ll have to reduce the frequency of my posts for the following weeks. But I’ll be back soon with more on these issues.

Originality? – Don’t make a fool of yourself! (Part I)

What is originality? I have been studying and even teaching philosophy for quite some time now, but I still don’t know what fellow philosophers really mean when they say that something is original. Kurt Flasch, my thesis advisor in the nineties, used to say that you become original once you forget where you’ve read your claims. I am myself a bit more positive. I think one can be original in finding a good reformulation of an existing claim or argument. But that’s all there is to it, really. So if you think that originality has to do with novelty, think again.

Why do I believe that originality is not about novelty? Well, I assume that philosophy is an on-going conversation. And in a conversation, conversational rules apply. Reformulating a point is great. It might highlight unexpected aspects or trigger interesting associations. But don’t start talking about things that don’t relate to the current exchange. People will just think you’re weird.

I’m not saying this to discourage anyone from trying to be original. But originality is always listed as a crucial assessment criterion, no matter whether it’s about student essays, PhD dissertations or grant applications. Yet, as far as I can see, it doesn’t amount to more than this: the reviewer has not thought of the idea in quite those terms. – Again, that’s fine. But let’s be clear about what it amounts to.

When I ask students what they want to achieve in their work, they often reply that they wish to say something original. In order to find out what they mean by that, I have designed a little test. I let them write a small paragraph on a topic of their choice. When I look at what they’ve written, I almost always find something that sounds like it’s coming straight out of a handbook on the issue. – Why, I ask, did you write this? We knew that already. A particularly ambitious and honest student once replied: “Well, I didn’t want to make a fool of myself.” – I guess that is what it comes down to. Wanting to be original might just mean wanting to belong. Belong to that club in which everyone is original.

Procrastination as conversation, really?

Writing the first post to my blog, I hesitated a lot. Should I really say this? Should I put it like that? – My point was that such hesitations can be seen as conversations with our potential readers and former selves. You might burst out with an idea and then refine it in the light of second thoughts or amend it because you remember someone saying that this idea was no good. If we take our hesitations seriously, they might actually turn into interesting philosophy. Why? Because hesitations are often dialogical and such dialogues display more of the actual thought process, providing refinements that sharpen our understanding of an issue. In the following, I’d like to give some hints at what this might mean and how this can be turned into writing.

Let me give you an example: Initially I wrote above “My point was that procrastination can be seen as a conversation … ” – But then I thought: No, what I mean is hesitation. – But in my last post I also spoke of procrastination, didn’t I? So is procrastination a form of hesitation? – Well, I suppose some is and some isn’t. So, some forms of procrastination might qualify (I guess watching telly doesn’t qualify, but reading blog posts or staring into the distance might). So, sometimes when I procrastinate I engage in a dialogue. — OK, this is a lame example. But now you’ve seen more of my thought process than in the first paragraph. The upshot is: Not only hesitation but also some forms of procrastination might be dialogical. You wouldn’t have got that refinement, had I not added this paragraph. Now, if you want a really thrilling example of the phenomenon, go and read Wittgenstein’s Philosophical Investigations again. – Still, you might ask: what’s so great about hesitation?

I said that hesitation is dialogical. But dialogical writing, it seems, isn’t much encouraged in academia. So how can this idea be applied? – One of my greatest worries in writing was and often still is that I can’t say everything at once. I’m not joking! Having to write a paragraph about x and leave you, dear reader, with the impression that this is really all I have to say about x might be embarrassing. To amend this impression before it could even arise, I initially wrote very long paragraphs. Yes, horrible. But then I noticed that other people don’t do this. Good writers have no qualms about saying very little or even something blatantly false about x. How do they get away with it? – Well, they write a second paragraph! And then they challenge what they said in the beginning. This simple scheme of thesis-question-refinement-question not only displays a thought process but often provides very intriguing refinements. (If you look at scholastic quaestiones you can see how it’s turned into a labyrinthine art.)

Of course, this is a simple technique of implementing dialogue. But it can be applied easily to regular papers without having to bring in Theaitetos or Socrates. What’s tricky about it is that some readers still stop reading after the first paragraph…

Procrastinating? Hesitating as engaging in conversation

Great timing… While most of you are on holiday, I’m starting a blog on (writing) philosophy. Yes, I know! There are so many blogs, even on philosophy, and we don’t really need another one… Actually, I’m just finalising a book, but since that’s rather scary and torturous, I thought about my new book project called Handling Ideas: it’s supposed to be a (fairly popular, yes!) book about understanding, expressing and applying ideas. And since that is scary, too, why not start by writing about writing instead? Also, it’s 37 degrees in Groningen.

On a more serious note (don’t think you can skip this!), I think that the practice of philosophy and writing are intimately connected. You all know how long the distance between the thought in your head and the page or screen in front of you really is. And you already know that before you actually finish this damn sentence that you started crafting yesterday, you’ll soon rush to the delete button to change a few words again…

Writing – that is, amongst other things, deciding on the ultimate way of expressing a thought – is scary for many of us, but I think that it is an integral part of an important process: when we rush to change a word before we settle on a formulation, we actually engage in a conversation with our readers and former selves. You might think something along the lines of “you won’t like this, so…” or “why did I come up with that?” or “no, I should have put this differently.” – But what is actually going on in these moments? – I think that what we often call procrastination or hesitation is part of a conversational exchange or thought process: it’s part of practising philosophy. It’s all the back and forth that you might remember from Plato’s dialogues. Just a little less elaborate perhaps, but certainly just as interesting.

More often than not, these conversations are suppressed, though. They might seem imperfect or whatever. So in many of the following posts I would like to invite you and myself to bring these conversations to the fore. There are excellent guides on writing and philosophy, but most of them aim at good products. I’m more interested in the doubtful stages that all too often fall through the cracks. This concerns both the writing and the actual philosophy.

In keeping with the conversational spirit, I not only hope for comments on posts but for many guest posts. Enjoy the summer and see you around!

***

PS. I’d like to thank my former student assistant César Reigosa. It was in conversation with him that I decided to settle on the title “Handling Ideas”.