How do I figure out what to think? (Part I)

Which view of the matter is right? When I started out studying philosophy, I had a problem that still haunts me every so often. Reading a paper on a given topic, I thought: yes, that makes sense! Reading a counterargument the next day, I thought: right, that makes more sense! Reading a defence of paper one, I thought: oh, I had better swing back. Talking to others about it, I found there were two groups of people: those who had made up their minds for one side, and those who admitted to swinging back and forth just like I did. I guess we all experience this swinging back and forth in many aspects of life, but in philosophy it felt unsettling because there seemed to be the risk of just betting on the wrong horse. But there was something even worse than betting on the wrong horse and finding myself in disagreement with someone I respected. It was the insight that I had no clue how to make up my mind about such questions. How did people end up being compatibilists about freedom and determinism? Why do you end up calling yourself an externalist about meaning? Why do you think that Ruth Millikan or Nietzsche makes more sense than Jerry Fodor or Kant? – I thought very hard about this and related questions and came up with different answers, but today I thought: right, I actually have something to say about it! So here we go.

Let’s first see how the unsettling feeling arises. The way much philosophy is taught is by setting out a problem and then presenting options to solve it. Sometimes the options are presented historically, like: Nietzsche tried to refute Schopenhauer. Sometimes they are presented as theoretical alternatives, like: this is an argument for compatibilism and here is a problem for that argument. I had a number of reactions to such scenarios, but my basic response was not: right, so these are the options. It was rather: I have no idea how to survey them. How was I supposed to make up my mind? Surely that would require surveying all the consequences and possible counterarguments, when I already had trouble grasping the presented positions in the first place. I went away with three impressions: (1) a feeling of confusion, (2) the feeling that some of the views must be better than others, and (3) the assumption that I had to make up my mind about these options. But I couldn’t! Ergo, I sucked at philosophy.

In this muddle, history of philosophy seemed to come to the rescue. It seemed to promise that I didn’t have to make up my mind, but merely give accurate accounts of the views I encountered. – Ha! The sense of relief didn’t last long. For one thing, you still have to make up your mind about interpretations; for another, the views presented in primary texts still seemed to pull me in different directions. My problem wasn’t solved but worsened, because now I was supposed to figure out philological nuances and historical details on top of everything else. Ergo, the very idea of reporting ideas without picking a side turned out to be misleading.

Back to square one, I eventually made what I thought was a bold move: I just picked a side, more or less at random. The unease about not seeing through the view I had picked didn’t really go away, but who cares: we’re all just finite mortals! – Having picked a side gave me a new feeling: confidence. I had not seen the light, but hey, I belonged to a group, and some people in that group had surely advanced further. Picking a side feels random only at the beginning: then things fall into place; soon you start to foresee and refute counterarguments; what your interlocutors say matters in a new way. You listen not just in an attempt to understand the view “an sich”; you’re involved. Tensions arise. It’s fun, at least for a while. In any case, picking a side counters a lack of confidence: it gives your work direction and makes exchanges meaningful.

For better or worse, I would recommend picking a side if your confusion gets the better of you all the time. At least as a pragmatic device. It’s how you make things fall into place and take your first steps. However, the unease doesn’t go away. At least for me it didn’t. Why? Let’s face it, I often felt like an actor impersonating someone who has a view. Two questions remained: What if people found out that I had just randomly picked a side? This is part of what nourished my impostor syndrome (for the wrong reasons, as might become clear later). And how could I work out what I should really think about certain things? – While getting a job partly took care of the first question, much of my way of working revolves around the second. I got very interested in questions of norms, of methodology and of the relation between philosophy and its history. And while these issues are intriguing in their own right, they also helped me with the questions of what to think and how to figure out what to think. So here are a few steps I’d like to consider.

Step one: You don’t have to pick a side. – It helps to look more closely at the effect of picking a side. I said that it gave direction and meaning to my exchanges. It did. But how? Picking a side means entering a game, by and large an adversarial game. If you pick a side, then it seems that there is a right and a wrong side, just as there is winning and losing in an argumentative setting. Well, I certainly think there is winning and losing. But I doubt that there is right and wrong involved in picking a side. So here is my thesis: Picking a side helps you play the game. But it doesn’t help you figure out what you should think. In other words, in order to work out what to think, you don’t have to pick a side at all.

Step two: Picking a side does not lead you to the truth. – As I noted, the way much philosophy is taught is by setting out a problem and then presenting options to solve it. The options are set up as better or worse options. And now it seems that picking a side associates you not only with winning, say, a certain argument, but also with truth. And the truth is what you should think and be convinced of, right? But winning an argument doesn’t (necessarily) mean hitting on the truth of a matter. The fact that you win one exchange does not mean that you win the next crucial exchange. In fact, it’s at least possible that you win every argument and never hit on any truth. It’s merely the adversarial practice of philosophy that creates the illusion that winning is related to finding the truth.

Now you might want to object that I’ve got things the wrong way round. We argue, not to win, but about what’s true. That doesn’t make winning automatically true, but neither does it dissociate truth from arguing. Let’s look at an example: You can argue about whether it was the gardener or the butler who committed the murder. Of course, you might win and yet end up wrongly convicting the gardener. Now that does show that not all arguments bring out the truth. But they can still decide between true and false options. Let me address this challenge in the next step.

Step three: In philosophy, there are no sides. – It’s true that presenting philosophical theories as true or false, or at least as better or worse solutions to a given problem, makes them look like gardeners or butlers in a whodunit. As in a crime novel, problems have solutions, and if not one solution, then at least one kind of solution. – This is certainly true of certain problems. Asking about an individual cause or element as being responsible or decisive is the sort of setting that allows for true and false answers. But the problems of philosophy are hardly ever of that sort. To see this, consider the example again. Mutatis mutandis, what matters to the philosopher is not mainly who committed the crime, but whether the gardener and the butler had reasons to commit the murder. And once someone pins down the gardener as the culprit, philosophers will likely raise the question of whether we have overlooked other suspects or whether the supposed culprit is really to blame (rather than, say, society). This might sound as if I were making fun of philosophy, but the point is that philosophers are more engaged in understanding than in providing the one true account.

How does understanding differ from solving a problem? Understanding involves grasping all the options and trying to see where they lead. It is a comprehensive analysis of an issue and an attempt to integrate as many facts as possible into that analysis. This actually involves translating contrary accounts into one another and seeing how different theories deal with the (supposedly) same facts. Rather than pinning down the murderer, you’ll be asking what murder is. But most of the time, it’s not your job to conclusively decide what murder is (in the sense of what should count as murder in a given jurisdiction), but to analyse the factual and conceptual space of murder. Yes, we can carve up that space differently. But this carving up is not competitive; rather it tells us something about our carving tools. To use a different analogy, asking which philosophical theory is right is like asking whether you should play a certain melody on the piano or on the trombone. There are differences: the kinds of moves you need to make to produce the notes on a trombone differ vastly from those you need to make on the piano. Oh, and your preference might differ. But would you really want to say there is a side to be taken? – Ha! You might say that you can’t produce chords on a trombone, so it’s less suited to playing chord changes. Well, just get more trombone players then!

I know that the foregoing steps raise a number of questions, which is why I’d like to dedicate a series of posts to this issue. To return to swinging back and forth between contrary options: this feeling does not indicate that you are undecided. It indicates that you are trying to understand the different options in a setting. Ultimately, this feeling tracks our attempts to integrate new facts, while we are confronted with pressures arising from observing people who actually adhere to one side or another. For the time being, I’d like to conclude by repeating that it is the adversarial style that creates the illusion that winning and losing are related to giving true and false accounts. The very idea of having to pick a side is, while understandable in the current style of playing the game, misguided. If there are sides, they are already picked, rooted in what we call perspectives. In other words, you need not worry about which side to choose, but rather think through the side you already find yourself on. There are no wrong sides. Philosophy is not a whodunit. And the piano might be out of tune.

What is a debate? On the kinds of things we study in history of philosophy

Philosophers focus on problems; historians of philosophy also focus on texts. That’s what I sometimes say when I have to explain the difference between doing philosophy and doing history of philosophy. The point is that historians, in addition to trying to understand what’s going on in a text or between texts, also deal with the ‘material basis’ on which the problems are handed down to us: the genres, dates, production and dissemination, the language, style and what have you. But what is it that we actually find in the texts? Of course, we are used to offering interpretations, but I think that, before we even start reading, we all tend to have presumptions about what we will find. Now these presumptions can be quite different. And it matters greatly what we think we find. In the following, I want to say a few things about this issue, not to offer conclusions, but to get the ball rolling.

An assumption that is both common and rightly contested is that we might find the intention of the author. Wanting to get Aristotle, Cavendish or Fodor right seems to mean that we look for what the author meant to say. It’s understandable that this matters to us, but apart from the fact that such a search is often in vain, we can understand texts independently of intentions. – Another unit is of course the focus on arguments. We can read a text as an argument for a conclusion and thus analyse its internal structure. Getting into the details of arguments often involves unpacking and explaining claims, concepts, background assumptions, and examples. Evaluating the arguments will mean, in turn, assessing how well they support the claims (I like to think of an evaluation as indicating the distance between claim and argument). But while all this is a crucial part of philosophical analysis, it does not explain what is going on in the text, that is: it does not explain why and on what basis an author might argue for a certain conclusion, reject a certain view, make a certain move, employ a certain strategy, or use a certain term or concept. In other words, in addition to the internal analysis we need to invoke some of the so-called context.

As I see it, a fruitful approach to providing context, at least in the history of philosophy, is to study texts as elements of debates. One reason I like this is that it immediately opens up the possibility of locating the text (and the claims of an author) in a larger interaction. We hardly ever write just because we want to express a view. Normally we write in response to other texts, no matter whether we reply to a question, reject a claim, highlight a point of interest etc., and no matter whether that other text is a day or a thousand years old.

But even if you agree that debates are a helpful focus for studying a historical or contemporary text (in research as well as in teaching), there might be quite some disagreement as to what a debate actually is or what we are looking for in a debate. I think this matters not only for historians but also for understanding debates more generally. – Currently, for instance, we have a public debate about climate change. What kind of ‘unit’ is this? There are conditions under which the debate arose quite some decades ago, with claims being put forward in research contexts, schools and the media. These conditions vary greatly: there are political, technological, scientific, educational and many other kinds of conditions. Then there are different participants: many kinds of scientists, citizens, politicians, journalists. Then there are different genres: scientific publications, media outlets, advisory reports for politicians, interviews, protests in the streets and online etc. What is it that holds all this together and makes it part of one debate? My hunch is that it is a question. But which one? Here, I think it is important to get the priorities right. There are sub-questions, follow-up questions, all sorts, but is there a main question? This is tricky. But I guess it should be the most common and salient point of contact between all the items constituting the debate. For this debate, it is perhaps the question: How shall we respond to climate change?

Once we determine such a question, we can group the items, especially the texts, accordingly. The debate is one of the crucial factors that make the text meaningful, that place it in a dialogical space, even if we do not understand very much of what it says (yet). Even if I am not a climate scientist, I understand the role of a paper within the debate and might be able to place it quite well just by reading the abstract. The same is true of a medieval treatise on logic or an early modern text on first philosophy. – So this is a good way in, I guess. But where do we go from here? You can probably already guess that I want to say something critical now. Yes, I do. The point I want to address is this: How is a debate structured?

When we think about debates in philosophy, we obviously start out from what we perceive debates to be nowadays. As pointed out earlier, much philosophical exchange is based on criticising others. Therefore, it seems fair to assume that debates are structured by opposition. There is a question, and there are opposing answers to it. Indeed, many categories in philosophical historiography are ordered in oppositions, and it helps to understand one term by thinking of it in relation to its opposite. Just think of empiricism versus rationalism, realism versus nominalism etc. That’s all fine. But it only gets you so far. Understanding the content, motivation and addressees of a text as a response in an actual debate requires going far beyond such oppositions. Of course, we can place someone by saying he’s a climate change denier; but that doesn’t help us understand the motivations and contents of the text. It’s just a heuristic device to get started.

Today I had the pleasure of listening in on a meeting of Andrea Sangiacomo’s ERC project team working on a large database to study trends in early modern natural philosophy.* It’s a very exciting project, not least because they are trying to analyse the social and semantic networks in which some of the teaching took place. Not being well versed in digital humanities myself, I was mainly in awe of the meticulous attention to detail in working with the data. But then it struck me: They are tracking teaching practices, and yet they were taking their first steps by tracing opposing views (on occasionalism). Why would you look for oppositions, I wondered half aloud. Of course, it is a heuristic way of structuring the field. It was then that I began to wonder how we should analyse debates, going beyond oppositions.

Now you might ask why one should go beyond them. My answer is that debates, even though the term might suggest critical opposition over a question, are only coarsely structured by opposition. The actual moves that explain what’s going on in a text at a more detailed level, that is, from one passage or even one sentence to the next, are far more fine-grained. Again, as in the case of straightforward opposition, these moves should be thought of as (implicit) responses to other texts.** Here is an ad hoc list of such moves:

  • reformulating a claim
  • quoting a claim (with or without acknowledgement)
  • paraphrasing a claim
  • translating a claim (into a different language, terminology)
  • formalising a claim
  • simplifying a claim
  • embedding a claim into a more complex one
  • ascribing a claim (to someone)
  • (intentionally) misascribing a claim
  • making up a claim (as a view of someone)
  • commenting on a claim
  • elaborating or developing an idea
  • locating a view in a context
  • deriving (someone’s) claim from another claim
  • deriving (someone’s) claim from the Bible
  • asserting that a claim, actually, is another claim
  • asserting that a claim is ambiguous
  • asserting that a claim is self-evident
  • asserting that a claim is true, false, paradoxical, contradictory, opposing another one, an axiom, demonstrable, not demonstrable
  • asserting that a claim is confirmed by experience
  • asserting that a claim is intuitive, plausible, implausible, unbelievable
  • raising (new) questions
  • answering a question raised by a claim
  • doubting and questioning a result
  • revising a claim
  • revising one’s own claim in view of another claim
  • understanding a view
  • failing to understand a view
  • misrepresenting a view
  • distorting a view
  • evaluating a view
  • dismissing a view
  • re-interpreting a (well-known) view
  • undermining a claim, one’s own claim
  • exposing assumptions
  • explaining an idea in view of its premises or implications
  • illustrating a view
  • finding (further) evidence for or against a view
  • transforming or applying a concept or view to a new issue, in philosophy or elsewhere
  • recontextualising a view
  • repairing a view or argument
  • popularising a view
  • trying to conserve a view
  • trying to advance a view
  • juxtaposing views
  • comparing views
  • resolving a tension between views
  • highlighting a tension between views
  • associating a view with another one
  • appropriating a view
  • pretending to merely repeat a traditional view, while presenting a bold re-interpretation of it [yes, what Ockham does to Aristotle]
  • explicitly accepting a view
  • pretending to accept a view
  • accepting a view, while condemning the proponent
  • rejecting a view, while praising the proponent
  • pretending to reject a view, while actually appropriating (part of) it [yes, I’m thinking of Reid]
  • pretending to accept a view, while rejecting its premises
  • highlighting relations between views (analogies etc.)
  • ridiculing a view
  • belittling a view
  • shunning a view
  • showing societal consequences of a view
  • suppressing or hiding a claim
  • disavowing a claim
  • retracting a claim
  • putting a view in euphemistic terms
  • showing that a claim is outrageous, heretical, controversial, complacent
  • polemicising against a view
  • etc.

This list is certainly not exhaustive. And “view” or “claim” might concern the whole or a part, an argument, a term or a concept. Even if some of these responses are more positive or negative in form, we have to see that all of them go beyond mere opposition, counterargument or criticism. Sometimes the listed moves are made explicitly; sometimes a move in a text might be explicable as the result of such a move. What is perhaps most salient is that they often say as much about the commitments of the respondent as they are intended to say about the text being responded to. While mere criticism of an opponent does not require us to expose our commitments, much of what we find in (historical) texts is owing to commitments. (In other words, adversarial communication in current professional settings, such as the Q&A after talks, might often be taken as people merely showing off their chops, without invoking their own commitments and vulnerabilities. But this is not what we should expect to find in historical texts.)*** So if we look at Spinoza as criticising Descartes, for instance, we should not overlook that the agreements between the commitments and interests of these authors are just as important as the tensions and explicit disagreements. Looking again at the issue of climate change, it is clear that most moves probably consist in understanding claims and their implications, establishing agreement and noting tensions, corroborating ideas, assessing consequences, providing evidence, trying to confirm results etc. So the focus on opposition might be said to give us a wrong idea of the real moves within a historical debate, and of the moves that stabilise a debate or make it stick.

Anyway, the main idea of beginning such a list is to see the variety of moves we might find in a text responding to someone else. Analysing a text merely as an opposing move with pertinent counterarguments, or as presenting a contrary theory, makes us overlook the richness of philosophical interactions.

____

* Here is a recent blog post by Raluca Tanasescu, Andrea Sangiacomo, Silvia Donker, and Hugo Hogenbirk on their work. I’m only beginning to learn about the methods and considerations in digital humanities. But I have to say that this field strikes me as holding a lot of (methodological) inspiration (for history of philosophy and science etc.) even if you continue to work mostly in more traditional ways.

** Besides texts by different authors, this might of course also concern other texts by the same author, or parts or temporal stages (drafts) of the same text.

*** I’m grateful to Laura Georgescu for pointing out this difference between criticism in current professional settings as opposed to many historical texts.

Ugly ducklings and progress in philosophy

Agnes Callard recently gave an entertaining interview at 3:16 AM. Besides her lovely list of views that should count as much less controversial than they do, she made an intriguing remark about her book:

“I had this talk on weakness of will that people kept refuting, and I was torn between recognizing the correctness of their counter-arguments (especially one by Kate Manne, then a grad student at MIT), and the feeling my theory was right. I realized: it was a bad theory of weakness of will, but a good theory of another thing. That other thing was aspiration. So the topic came last in the order of discovery.”

Changing the framing or framework of an idea might resolve seemingly persistent problems and make it shine in a new and favourable light. Reminded of Andersen’s fairy tale in which a duckling is considered ugly until it turns out that the poor animal is actually a swan, I’d like to call this the ugly duckling effect. In what follows, I’d like to suggest that this might be a good, if underrated, way of making progress in philosophy.

Callard’s description stirred a number of memories. You write and refine a piece, but something feels decidedly off. Then you change the title or topic or tweak the context ever so slightly and, at last, everything falls into place. It might happen in a conversation or during a run, but you’re lucky if it happens at all. I know all too well that I abandoned many an idea before I eventually and accidentally stumbled on a change of framework that restored (the reputation of) the idea. As I argued in my last post, all too often criticism in professional settings provides incentives to tone down or give up on an idea. Perhaps unsurprisingly, many criticisms focus on the idea or argument itself, rather than on the framework in which the idea is to function. My hunch is that we should pay more attention to such frameworks. After all, people might stop complaining about the quality of your hammer if you tell them that it’s actually a screwdriver.

I doubt that there is a precise recipe for doing this. I guess what helps most are activities that help you tweak the context, topic or terminology. This might be achieved by playful conversations or even by diverting your attention to something else. Perhaps a good start is to think of precedents in which this happened. So let’s just look at some ugly duckling effects in history:

  • In my last post I already pointed to Wittgenstein’s picture theory of meaning. Recontextualising this as a theory of representation and connecting it to a use theory or a teleosemantic account restored the picture theory as a component that makes perfect sense.
  • Another precedent might be seen in the reinterpretations of Cartesian substance dualism. If you’re unhappy with the interaction problem, you might see the light when, following Spinoza, you reinterpret the dualism as a difference of aspects or perspectives rather than of substances. All of a sudden you can move from a dualist framework to monism but retain an intuitively plausible distinction.
  • A less well-known case is that of the reinterpretations of Ockham’s theory of mental language, which has been read as a theory of ideal language, a theory of logical deep structure, a theory of angelic speech etc.

I’m sure the list is endless, and I’d be curious to hear more examples. What’s perhaps important to note is that we can also reverse this effect and turn swans into ugly ducklings. This means that we can also use the strategy of recontextualisation when we want to debunk an idea or expose it as problematic:

  • An obvious example is Wilfrid Sellars’ myth of the given: Arguing that reference to sense data or other supposedly immediate elements of perception cannot serve as a foundation or justification of knowledge, Sellars dismissed a whole strand of epistemology.
  • Similarly, Quine’s myth of the museum serves to dismiss theories of meaning invoking the idea that words serve as labels for (mental) objects.
  • Another interesting move can be seen in Nicholas of Cusa’s coincidentia oppositorum, restricting the principle of non-contradiction to the domain of rationality and allowing for the claim that the intellect transcends this domain.

If we want to assess such dismissals in a balanced manner, it might help to look twice at the contexts in which the dismissed accounts used to make sense. I’m not saying that the possibility of recontextualisation restores or relativises all our ideas. Rather I think of this option as a tool for thinking about theories in a playful and constructive manner.

Nevertheless, it is crucial to see that the ugly duckling effect works both ways, to dismiss and to restore ideas. In any case, we should try to consider a framework in which the ideas in question make sense. And sometimes dismissal is the way to go.

At the end of the day, it could be helpful to see that the ugly duckling effect might not be owing to the duckling actually being a swan. Rather, we might be confronted with a duck-swan or a duck-rabbit.

Spotting mistakes and getting it right

“Know thyself” is probably a fairly well-known maxim among philosophers. But the maxim we live by seems rather to be one along the lines of “know the mistakes of others”. In calling this out I am of course no better. What prompts me to write about this now is a recent observation, not new but clearly refreshed with the beginning of the academic year: the obvious desire of students to “get it right”, right from the start. But what could be wrong with desiring to be right?

Philosophers these days don’t love wisdom but truth. Now spotting the mistakes of others is often presented as truth-conducive. If we refute and exclude the falsehoods of others, it seems, we’re making progress on our way to finding out the truth. This seems to be the reason why most papers in philosophy build their cases on refuting opposing claims, and why most talks are met with unwavering criticism of the view presented. Killing off all the wrongs must leave you with the truth, no? I think this exclusion principle has all sorts of effects, but I doubt that it helps us make the desired progress. Here is why.

A first set of reasons relates to the pragmatic aspects of academic exchange: I believe that the binary distinction between getting it right or wrong is misleading. More often than not, the views offered to us are neither right nor wrong. This is owing to the fact that we have to present views successively, by putting forward a claim and explaining and arguing for it. What such a process exposes is normally not the truth or falsity of the view, but a need for further elaboration: unpacking concepts and consequences, ruling out undesired implications, clarifying assumptions etc.

Now you might object that calling a view false is designed to prompt exactly that: clarification and exploration. But I doubt that this is the case. After all, much of academic exchange is driven by perceived reputation: More often than not, criticism makes the speaker revert to defensive moves, if it doesn’t paralyse them. Rather than exploring the criticised view, speakers will be tempted to immunise their paper against further criticism. If speakers don’t retract, they might at least reduce the scope of their claims and align themselves with more accepted tenets. This, I believe, blocks further exploration and creates an incentive for damage control and conformism. If you doubt this, just go and tell a student (or colleague) that they got it wrong and see what happens.

Still, you might object, such initial responses can be overcome. It might take time, but eventually the criticised speaker will think again and learn to argue for their view more thoroughly. – I wish I could share this optimism. (And I sometimes do.) But I guess the reason that this won’t happen, or not very often, is simply this: What counts in scholarly exchange is the publicly observable moment. Someone criticised by an opponent will see themselves as challenged not only as a representative of a view but as a member of the academic community. Maintaining or restoring our reputation will thus seem vital in contexts in which we consider ourselves judged and questioned: Even if we’re not actually being graded, under review or in a job talk, we will still anticipate or imagine such situations. What counts in these moments is not the truth of our accounts, but whether we convince others of the account and, in the process, of our competence. If you go home defeated, your account will be seen as defeated too, no matter whether you just didn’t muster the courage or concentration to make a more convincing move.

A second set of reasons is owing to the conviction that spotting falsehoods is just that: spotting falsehoods. As such, it’s not truth-conducive. Refuting claims does not (or at least not necessarily) lead to any truth. Why? Spotting a falsehood or problem does not automatically make any opposing claim true. Let me give an example: It is fairly common to call the so-called picture theory of meaning, as presented in Wittgenstein’s Tractatus, a failure. The perhaps intuitive plausibility that sentences function as pictures of states of affairs seems quickly refuted once we ask how such pictures can be said to be true or false of a supposed chunk of reality. What do you do? Step out of the picture and compare it with the proper chunk? Haha! – Refuting the picture theory, then, seems to bring us one step closer to an appropriate theory of meaning. But such a dismissal makes us overlook that the picture theory has enormous merits. Once you see it as a theory of representation and stop demanding that it also account for the truth and falsity of representations, you begin to realise that it can work very well when combined with a theory of use or a teleosemantic theory. (See e.g. Ruth Millikan’s recontextualisation.) The upshot is that our dismissals often result from overlooking crucial further assumptions that would reinstate the dismissed account.

Now you might object that an incomplete account is still a bad account. Pointing this out is not per se wrong and will eventually prompt a recontextualisation that works. In this sense, you might say, the criticism becomes part of the recontextualised account. – With this I agree. I also think that such dialogues can prompt more satisfying results. But bearing the pragmatic aspects of academic exchange in mind, I think that such results are more likely if we present our criticism for what it is: not as barking at falsehoods but as attempts to clarify, complete or complement ideas.

Now you might object that the difference between barking at falsehoods and attempting to clarify amounts to just a matter of style. – But why would you think that this is an objection? Style matters. Much more than is commonly acknowledged.

Diversifying scholarship. Or how the paper model kills history

Once upon a time a BA student handed in a proposal for a paper on Hume’s account of substance. The student proposed to show that Hume’s account was wrong and that Aristotle’s account was superior to Hume’s. If memory serves, I talked the student out of this idea and suggested that he build his paper around an analysis of a brief passage in Hume’s Treatise. – The proposal was problematic for several reasons. But what I want to write about is not the student or his proposal. Rather, I want to zoom in on our way of approaching historical texts (in philosophy). The anecdote about the proposal can help to show what the problem is. As I see it, the standard journal article has severe repercussions for the way we teach and practise scholarship in the history of philosophy. It narrows our way of reading texts and counters attempts at diversifying the canon. If we want to overcome these repercussions, it will help to reinstate other forms of writing, especially the commentary.

So what’s wrong with journal articles? Let me begin by saying that there is nothing wrong with articles themselves. The problem is that articles are the decisive and almost the only form of disseminating scholarship. The typical structure of a paper is governed by two elements: the claim, and arguments for that claim. So a historian typically articulates a claim about a text (or, more often, about claims in the secondary literature about a text) and provides arguments for embracing that claim. This way we produce a lot of fine scholarship and discussion. But if we make it the leading format, a number of things fall through the cracks.

An immediate consequence is that the historical text has the status of evidence for the claim. So the focus is not on the historical material but on the claim of the historian. If we teach students to write papers of this sort, we teach them to focus on their claims rather than on the material. You can see this in the student’s approach to Hume: the point was to evaluate Hume’s account. Rather than figuring out what was going on in Hume’s text and what it might be responding to, the focus is on making a claim about what the supposed doctrine is. The latter approach immediately abstracts away from the text and thus from the material of discussion. What’s wrong with that? Of course, such an abstract approach is fine if you’re already immersed in an ongoing discussion or perhaps even a tradition of discussions about the text. In that case you’re mainly engaging with the secondary literature. But this abstract approach does not work for beginners. Why? Arguably, the text itself sets constraints that have to be observed if the discussion is to make sense. What are these constraints? I’m not saying they are fixed once and for all. Quite the contrary! But they have to be established in relation to the text. So before you can say anything about substance in Hume, you have to see where and how the term is used and whether it makes sense to evaluate it in relation to Aristotle. (My hunch is that, in Treatise 1.1.6.1-2, Hume rejects the Aristotelian idea of substance altogether; thus saying that Aristotle’s notion is superior is like saying that apples are superior to bananas.) The upshot is: before you can digest the secondary literature, you have to understand how the textual constraints that guide the discussions in the secondary literature are established.

What we might forget, then, if we teach on the basis of secondary literature, is how these constraints were established in the long tradition of textual scholarship. When we open an edition of the Critique of Pure Reason, we see the text through the lens of thick layers of scholarship. When we say that certain passages are “dark”, “difficult” or “important”, we don’t just speak our mind. Rather, we echo many generations of diligent scholarship. We might hear that a certain passage is tricky before we even open the book. But rather than having students parrot that Kant writes “difficult prose”, we should teach them to find their way through that prose. That requires engagement with the text: line by line, word by word, translation by mistranslation. Let’s call this mode of reading linear reading, as opposed to abstract reading. It is one thing to say what “synthetic apperception” is. It’s quite another thing to figure out how Kant moves from one sentence to the next. The close and often despair-inducing attention to the details of the text is necessary for establishing an interpretation. Of course, it is fine to resort to guidance, but we have to see the often tenuous connection between the text and the interpretation, let alone the claim about a text. In other words, we have to see how abstract reading emerges from linear reading.

My point is not that we shouldn’t read (or teach what’s in the) secondary literature. My point is that secondary literature or abstract reading is based on a linear engagement with the text that is obscured by the paper model. The paper model suggests that you read a bit and then make a fairly abstract claim (about the text or, more often, about an interpretation of the text). But the paper model obscures hundreds of years, or at least decades, of linear reading. What students have to learn (and what perhaps even we, as teachers, need to remind ourselves of) is how one sentence leads to the next. Only then does the abstract reading presented in the secondary literature become visible for what it is: the outcome of a particular linear reading.

But how can we teach linear reading? My suggestion is quite simple: Rather than essay writing, students in the history of philosophy should begin by learning to write commentaries on texts. As I argued earlier, there is a fair number of philosophical genres beyond the paper model. At least part of our education should consist in being confronted with a piece of text (no more than half a page) and learning to comment on that piece: perhaps translating it first, going through it line by line, pointing out claims as well as obscurities, and raising questions that point to desirable explanations. This way, students will learn to approach the texts independently. While it might be easy to parrot that “Hegel is difficult to read”, it takes courage to say that a concrete piece of text is difficult to understand. In the latter case, the remark is not a judgment but the starting point of an analysis that might allow for a first tentative explanation (e.g. of why the difficulty arises).

Ultimately, my hope is that this approach, i.e. the linear commentary on concrete pieces of text, will lead (back) to a diversification of scholarship. Of course, it’s nice to read, for instance, the next paper on Hume claiming that he is an idealist or whatever. But it would help if that scholarship were (again) complemented by commentaries on the texts. Nota bene: such scholarship is available even today. But we don’t teach it very much.

Apart from teaching students how to read linearly and closely, such training is a precondition of what is often called the diversification of the canon. If we really want to expand the boundaries of the canon, the paper model will restrain us (too much) in what we find acceptable. Before we even open a page of Kant, our lens is shaped by layers of linear reading. But when we open the books of authors that are still fairly new to us, we have hardly any traditions of reading to fall back on. If we start writing the typical papers before establishing constraints through careful linear reading, we are prone to just carry over claims and habits from familiar scholarship. I’m not saying that this is bound to happen, but diligent textual commentaries would provide a firmer grasp of the texts on their own terms. In this sense, diversification of the canon requires diversification of scholarship.

Surprises. (The fancy title would have been: “On the phenomenology of writing”)

It’s all worked out in my head: the paper is there, the literature is reviewed, I’ve found a spot where my claim makes sense without merely repeating what has been said, my argument is fairly clear, the sequence of sections seems fine. But there is a problem: the darn thing still needs to be written! What’s keeping me? The deadline is approaching. Why can’t I just do it? Everyone else can do it! Just like that: sit down, open the file and write down what’s in your head.

I’ve had enough conversations and enough glances at “shit academics say” to know that I am not alone in this. But it’s both irritating and surprising that it keeps recurring. So why is it happening? I know there are many possible explanations. There is the shaming one: “You’re just lazy; stop procrastinating!” Then there is the empathetic one: “You’re overworked; take a break!” And there is probably a proper scientific one, too, but I’m too lazy to even google it. Without wanting to counter any of these possible explanations, it might be rewarding to reflect on personal experience. So what’s going on?

If I am really in the state described in the first paragraph (or at least close to it), I have some sort of awareness of what I would be doing were I to write. This experience has two intriguing features:

  • One aspect is that it is enormously fast: the actual thinking or the internal verbalisations of what I would (want to) write pass by quickly. They pass by much faster than the actual writing eventually will. (There are intriguing studies on this phenomenon; see for instance Charles Fernyhough’s The Voices Within.)* This pace of thought is often pleasing. I can rush through a great amount of stuff, often enlarged by association, in very little time. But at the same time this experience might be part of the reason why I don’t get to write. If everything seems (yes, seems!) worked out like that, it feels as if it were done. But why should I do work that is already done? The idea of merely writing down what I have worked out is somewhat boring. Of course, this doesn’t mean that the actual writing is boring. It just means that I take it to be boring. This brings me to the second aspect:
  • The actual writing often comes with surprises. Yes, it’s often worked out in my head, but when I try to phrase the first sentence, I notice two other things. Firstly, I’m less sure about the actual formulations on the page than I was when merely verbalising them internally. Secondly, seeing what I write triggers new associations and perhaps even new insights. It’s as if the actual writing were based on a different logic (or dynamic), pushing me somewhere else. Then it feels like I cannot write what I want to write, at least not before I write down what the actual writing pushes me to write first: Yes, I want to say p, but first I have to explain q. Priorities seem to change. Something like that. These surprises are at once frustrating and motivating or pleasing. They are frustrating because I realise I hadn’t worked it out properly. I just thought I was done. Now the real work begins. On the other hand, they are motivating because I feel like I’m actually learning or understanding something I had not seen before.

I don’t know whether this experience resonates with you, but I guess I need such surprises. I can’t sit down and merely write what I thought out beforehand. And I don’t know whether that is the case because I want to avoid the boredom of merely ‘writing it up’ or whether ‘merely writing it up’ is actually not doable. Not doable because it wouldn’t convince me or surprise me or perhaps even because it is impossible or whatever. The moral is that you should start writing early. Because if you’re a bit like me, part of the real work only starts during the actual writing. Then you face the real problems that aren’t visible when all the neat sentences are just gliding through your head. – Well, I guess I should get going.

___

* And this post just came out today.

On taking risks. With an afterthought on peer review

Jumping over a puddle is fun both to try and to watch. It’s a small risk to take, but some puddles are too large to cross… There are greater risks, but whatever the stakes, they create excitement. And in the face of possible failure, success feels quite different. If you play a difficult run on the piano, the listeners will likewise feel relief when you manage to land on the right note in time. The same goes for academic research and writing. If you start out with a provocative hypothesis, people will get excited about the way you mount the evidence. Although at least some grant agencies ask about the risks taken in proposals, risk taking is hardly ever addressed in philosophy or writing guides. Perhaps people think it’s not a serious issue, but I believe it might be one of the crucial elements.

In philosophy, every move worth our time probably involves a risk. Arguing that mistakes or successes depend on their later contextualisation, I already looked at the “fine line between mistake and innovation.” But how do we get onto that fine line? This, I think, involves taking a risk. Taking a risk in philosophy means saying or doing something that will likely be met with objections. That’s probably why criticising interlocutors is so widespread. But there are many ways of taking risks. Sitting in a seminar, you might already find it risky just to raise your voice and ask a question. You feel you might make a fool of yourself and lose the respect of your fellow students or instructor. But if you make the effort, you might also be met with admiration for going through with a seemingly trivial point. I guess it’s that oscillation between the possibility of failure and success that also moves the listeners or readers. It’s important to note that risk taking has a decidedly emotional dimension. Jumping across the puddle might land you in the puddle. But even if you don’t make it all the way, you’ll have moved more than yourself.

In designing papers or research projects, risk taking is rewarded most of the time, at least with initial attention. You can make an outrageous-sounding claim like “thinking is being” or “panpsychism is true”. You can present a non-canonical interpretation of a historical figure, like “Hume was a racist” or “Descartes was an Aristotelian”. You can edit or write on the work of a non-canonical figure or provide an uncommon translation of a technical term. This list is not exhaustive, and depending on the conventions of your audience all sorts of moves might be risky. Of course, then there is work to be done. You’ve got to make your case. But if you’re set to make a leap, people will often listen more diligently than when you merely promise to summarise the state of the art. In other words, taking a risk will be seen as original. That said, the leap has to be well prepared. It has to work from elements that are familiar to your audience. Otherwise the risk cannot be appreciated for what it is. On the other hand, mounting the evidence must be presented as feasible. Otherwise you’ll come across as merely ambitious.

Whatever you do, in taking a risk you’ll certainly antagonise some people. Some will be cheering and applauding your courage and originality. Others will shake their heads and call you weird or other endearing things. What to do? It might feel difficult to live with opposition. But if you have two opposed groups, one positive, one negative, you can be sure you’re onto something. Go for it! It’s important to trust your instincts and intuitions. You might make it across the puddle, even if half of your peers don’t believe it. If you fail, you’ve just attempted what everyone else should attempt, too. Unless it’s part of the job to stick to reinventing the wheel.

Now the fact that risks will be met with much opposition but might indicate innovation should give us pause when it comes to peer review. In view of the enormous competition, journals seem to encourage authors to comply with the demands of two reviewers. (Reviewer #2 is a haunting meme by now.) A paper that gets one wholly negative review will often be rejected. But if it’s true that risks, while indicative of originality, will incur strong opposition, should we not think that a paper is particularly promising when it is met with two opposing reviews? Compliance with every possible reviewer seems to encourage risk aversion. Conversely, looking out for opposing reviews would probably change a number of things in our current practice. I guess managing such a process wouldn’t be any easier. So it’s not surprising that things won’t change anytime soon. But such change, if considered desirable, is probably best incentivised bottom-up. And that would mean beginning with teaching.

The fact, then, that a claim or move provokes opposition or even refutation should not be seen as a negative trait. Rather, it indicates that something is at stake. It is important, I believe, to convey this message, especially to beginners, who should learn to enjoy taking risks and listening to others take them.

Philosophical genres. A response to Peter Adamson

Would you say that the novel is a more proper literary genre than poetry? Or would you say that the pop song is less of a musical genre than the sonata? To me these questions make no sense. Both poems and novels form literary genres; both pop songs and sonatas form musical genres. And while you might have a personal preference for one over the other, I can’t see a justification for privileging one over the other on principle. The same is of course true of philosophical genres: A commentary on a philosophical text is no less of a philosophical genre than the typical essay or paper.* Wait! What?

Looking at current trends that show up in publication lists, hiring practices, student assignments etc., articles (preferably in peer-reviewed journals) are the leading genre. While books still count as important contributions in various fields, my feeling is that the paper culture is beginning to dominate everything else. But what about commentaries on texts, annotated editions and translations, or reviews? Although people in the profession still recognise that these genres involve work and (increasingly rare) expertise, they usually don’t count as important contributions, even in the history of philosophy. I think this trend is highly problematic for various reasons. But most of all it really impoverishes the philosophical landscape. Not only will it lead to a monoculture in publishing; our teaching of philosophy also increasingly focuses on paper production. But what does this trend mean? Why don’t we hold other genres in at least equally high esteem?

What seemingly unites commentaries on texts, annotated editions and translations, and reviews is that they focus on the presentation of the ideas of others. Thus, my hunch is that we think more highly of people presenting their own ideas than of those presenting the ideas of others. In a recent blog post, Peter Adamson notes the following:

“Nowadays we respect the original, innovative thinker more than the careful interpreter. That is rather an anomaly, though. […]

[I]t was understood that commenting is itself a creative activity, which might involve giving improved arguments for a school’s positions, or subtle, previously overlooked readings of the text being commented upon.”

Looking at ancient, medieval and even early modern traditions, the obsession with what counts as originality is an anomaly indeed. I say “obsession” because this trend is quite harmful. Not only does it impoverish our philosophical knowledge and skills, it also destroys a necessary division of labour. Why on earth should every one of us toss out “original claims” by the minute? Why not think hard about what other people wrote for a change? Why not train your philosophical chops by doing a translation? Of course, the idea that originality consists in expressing one’s own ideas is fallacious anyway, since thinking is dialogical. If we stop trying to understand and uncover other texts, outside of our paper culture, our thinking will become more and more self-referential and turn into a freely spinning wheel… I’m exaggerating of course, but perhaps only a bit. We don’t even need the medieval commentary traditions to remind ourselves. Just remember that it was, amongst other things, Chomsky’s review of Skinner that changed the field of linguistics. Today, writing reviews or working on editions and translations doesn’t get you a grant, let alone a job. While we desperately need new editions, translations and materials for research and teaching, these works are regarded more as a pastime or retirement hobby.**

Of course, many if not most of us know that this monoculture is problematic. I just don’t know how we got here that quickly. When I began to study, work on editions and translations still seemed to flourish, at least in Germany. But it quickly died out, history of philosophy was abandoned or ‘integrated’ into positions in theoretical or practical philosophy, and many of the people who worked very hard on the texts that are now available in shiny editions are without a job.

If we go on like this, we’ll soon find that no one is able to read or work on past texts. We should then teach our students that real philosophy didn’t begin to evolve before 1970 anyway. Until it gets that bad, I would plead for reintroducing a sensible division of labour, both in research and teaching. When you plan your assignments next time, don’t just offer your students the option of writing an essay. Why not have them choose between an annotated translation, a careful commentary on a difficult passage, or a review? Oh, of course, they may write an essay, too. But it’s just one of many philosophical genres, many more than I listed here.

____

* In view of the teaching practice that follows from the focus on essay writing, I’d adjust the opening analogy as follows: Imagine the music performed by a jazz combo consisting solely of soloists and no rhythm section. And imagine that all music instruction were from now on geared towards soloing only… (Of course, this analogy would capture the skills rather than the genre.)

** See Eric Schliesser’s intriguing reply to this idea.

Against allusions

What is the worst feature of my writing? I can’t say what it is these days; you tell me, please! But looking back at what I worked hardest to overcome in writing, I’d say it’s using allusions. I would write things such as “in the wake of the debate on semantic externalism” or “given the disputes over divine omnipotence bla bla” without explaining what precise debate I actually meant or what kind of semantic externalism or notion of the divine I had in mind. This way, I would refer to a context without explicating it. I guess such allusions were supposed to do two things: on the one hand, I used them to abbreviate the reference to a certain context or theory etc.; on the other hand, I was hoping to display my knowledge of that context. To peers, they were meant to signal awareness of the appropriate references without actually getting too involved and, most importantly, without messing up. If you don’t explicate or explain, you can’t mess things up all that much. In short, I used allusions to make the right moves. So what’s wrong with making the right moves?

Let me begin by saying something general about allusions. Allusions, also known as “hand waving”, are meant to refer to something without explicitly stating it. Thus, they are good for remaining vague or ambiguous and can serve various ends in everyday conversation or in literature. Most importantly, their successful use presupposes sufficient knowledge on the part of the listener or reader, who has to have the means to disambiguate a word or phrase. Funnily enough, such presuppositions are often accompanied by phrases insinuating the contrary. Typical phrases are: “as we all know”, “as is well known”, “famously”, “obviously”, “clearly”, “it goes without saying” etc.

Such presuppositions flourish and work well among friends. Here, they form a code that often doesn’t require any of the listed phrases or other markers; they rather work like friendly nods or winks. But while they might be entertaining among friends, they often exclude other listeners in scholarly contexts. Now you might hasten to think that those excluded simply don’t ‘get it’ because they lack the required knowledge. But that’s not true. Disambiguation requires knowledge, yes, but it also and crucially requires confidence (since you always might make a fool of yourself after all) and an interest in the matter. To anyone unsure whether they’re really interested, allusions among scholars often sound like a couple of old blokes dominating a dinner party with insider jokes. Who wants to sound like that in writing?

Apart from sounding like a bad party guest, there is a deeper problem with allusions in scholarly contexts: they rely on the status quo of canonical knowledge. Since the presuppositions remain unspoken, listeners have to go by what they take to be a commonly acceptable disambiguation. Of course, we have to take some things as given, and we cannot explicate everything. But when it comes to important steps in our arguments or evidence, reliance on allusions is an appeal to the authority of the status quo rather than a signal of scholarly virtue.

I began to notice this particularly in essays that students write mainly for their professors. Assuming that professors know (almost) everything, nothing seems to need unpacking. But since almost all concepts in philosophy are essentially contested, such allusions often don’t work. As long as I don’t know which precise version of an idea I’m supposed to assume, I might be just as lost as if I didn’t know the first thing about it. Hence the common advice to write for beginners or fellow students. Explain and unpack at least all the things you’re committed to arguing for or using as evidence for a claim. Otherwise I, at least, often won’t get what’s going on.

The problem with that advice is that it leaves unclear how much explanation is actually appropriate. Of course, we can’t do without presuppositions. And we cannot and should not write only for beginners. If allusions are a vice, endless explanations might fare no better: aiming to avoid every possible misunderstanding can result in equally dull or unintelligible prose. So I guess we have to unpack some things and merely allude to others. But which ones do we explain in detail? It’s important to see that every paper or book has (or should have) a focus: the claim you ultimately want to argue for. At the same time, there will be many assumptions that you shouldn’t commit yourself to showing. I attempt to explain only those things that are part of the focus. That said, it sometimes really is tricky to figure out what that focus actually is. Unpacking allusions might help with finding it, though.

Kill your darlings! But how?

Why can’t you finish that paper? What’s keeping you? – There is something you still have to do. But where can you squeeze it in? Thinking about salient issues I want to address, I often begin to take the paper apart again, at least in my mind. – “Kill your darlings” is often offered as advice to writers in such situations. When writing or planning a paper, book or project, you might be prone to cling to tropes, phrases or even topics and issues that you had better abandon. While you might love them dearly, the paper would be better off without them. So you might have your paper ready but hesitate to send it off, because it still doesn’t address that very important issue. But does your paper really need to address it? – While I can’t give you a list of items to watch out for, I think it might help to approach this issue by looking at how it arises.

How do you pick the topic for your next project or paper? Advanced graduate students and researchers are often already immersed in their topics; at this level we often don’t realise how we got into these corners. Thus, I’d like to look at the situations BA students find themselves in when they think about paper or thesis topics. What I normally do is ask the student for their ideas. What I try to assess, then, are two things: does the idea work for a paper, and is the student in a position to pursue it? In the following, I’ll focus on the ideas, but let’s briefly look at the second issue. Sometimes ideas are very intriguing but rather ambitious. In such cases, one might be inclined to discourage students from going through with them. But some people can make it work and shouldn’t be discouraged. You’ll notice that they have at least an inkling of a good structure, i.e. a path that leads palpably from a problem to a sufficiently narrow claim. More often, however, people will say something like this: “I don’t yet know how to structure the argument, but I really love the topic.” At this point, the alarm bells should start ringing, and you should look very carefully at the proposed idea. So what’s wrong with darlings?

(1) Nothing: A first problem is that nothing might seem wrong with them. Liking or being interested in a topic isn’t wrong. And it would be weird to say that someone should stop pursuing something because they like it. Liking something is in fact a good starting point. You’ve probably ended up studying philosophy because you liked something about it. (And as Sara Uckelman pointed out, thinking about your interests outside philosophy and then asking how they relate to philosophy might provide a good way to find a dissertation topic.) At the same time, your liking something doesn’t necessarily track good paper topics. It’s a way into a field, but once you’re there, things other than your liking might decide whether something works. Compare: I really love the sound of saxophones; I listen to them a lot. Perhaps I should learn to play the saxophone. My love of the sound might get me somewhere. But should I start playing it live on stage now? Well …

(2) Missing tensions: What you like or love is likely to draw you in. That’s good. But it might draw you in in an explorative fashion. So you might think: “Oh, that’s interesting. I want to know all about it.” But that doesn’t give you something to work on. An explorative mood doesn’t get you a paper; you need to want to argue. Projects in philosophy and its history focus on tensions. If you want to write a paper, you’ve got to find something problematic that creates an urgent need for explanation: an apparent contradiction, say, or a text that doesn’t seem to add up. Your love of or interest in a topic doesn’t track tensions. If you want a workable idea, find a tension.

(3) Artificial tensions: Philosophy is full of tensions. When people want to “do what they love”, they often look for a tension in their field. Of course, there will be a lot of tensions discussed in the literature. But since people often believe they should be original, they will create a tension rather than pick up one already under discussion. This is where the problems really kick in. You might, for instance, begin a thesis supervision and be greeted with a tentative “I’m interested in love and I always liked speech act theory. I would like to write about them.” I have to admit that it’s this kind of suggestion I hear most often. So what’s happening here? – What we’re looking at is not a tension but a (difficult) task. The task is created by combining two areas, which poses the problem of applying the tools of one field to the issues of another. Don’t get me wrong: of course you can write intriguing stuff by applying speech act theory to the issue of love. But this usually requires some experience in both areas. Students often come up with such a combination because they like both topics or had some good exposure to them. There might even be a vague idea of how to actually combine the issues, but there is no genuine tension. All there is is a difficult task, created ad hoc out of the need to come up with a tension.

Summing up: focusing on your interests alone doesn’t really guide you towards good topics to work on. What do I take home from these considerations? Dealing with darlings is a tricky business. Looking at my own work, I know that a strong interest in linguistics and a deep curiosity about the unity of sentences got me into my MA and PhD topics. But while these interests got me in, I had to let go of them when pursuing my actual work. They shaped my approach, but they did not dictate the arguments. Motivationally, I could not have done without them. But once they had taken me where I was going, clinging to them would have led me astray.

Anyway, the moral is: let your darlings draw you in, but then let go of them. Why is that worth adhering to? Because your darlings are about you, and your work should not be about yourself, at least not primarily. The tensions you encounter will come out of existing discussions or texts, not out of tasks you create for yourself. How do you distinguish between the two? I’d advise looking for the actual point of contact that links all the issues figuring in your idea. This will most likely be a concrete piece of text or a phrase or claim – the text that is central to your argument. Now ask yourself whether that piece of text really requires an answer to the question you can’t let go of. Conversely, if you have an idea but can’t find a concrete piece of text to hang it onto, let go of the idea or keep it for another day.