I don’t know what I think. A plea for unclarity and prophecy

Would you begin a research project if there were just one more day left to work on it? I guess I wouldn’t. Why? Well, my assumption is that the point of a research project is to improve our understanding of a phenomenon. Improvement seems to be inherently future-directed, meaning that we understand x a bit better tomorrow than today. Therefore, I am inclined to think that we would not begin to do research, had we not the hope that it might lead to more knowledge of x in the future. I think this is true not only of research but of much thinking and writing in general. We wouldn’t think, talk or write certain things, had we not the hope that this leads to an improved understanding in the future. You might find this point trivial. But a while ago it began to dawn on me that the inherent future-directedness of (some) thinking and writing has a number of important consequences. One of them is that we are not the (sole) authors of our thoughts. If this is correct, it is time to rethink our ways of evaluating thoughts and their modes of expression. Let me explain.

So why am I not the (sole) author of my thoughts? Well, I hope you all know variations of the following situation: You try to express an idea. Your interlocutor frowns and points out that she doesn’t really understand what you’re saying. You try again. The frowning continues, but this time she offers a different formulation. “Exactly”, you shout, “this is exactly what I meant to say!” Now, who is the author of that thought? I guess it depends. Did she give a good paraphrase or did she also bring out an implication or a consequence? Did she use an illustration that highlights a new aspect? Did she perhaps even rephrase it in such a way that it circumvents a possible objection? And what about you? Did you mean just that? Or do you understand the idea even better than before? Perhaps you are now aware of an important implication. So whose idea is it now? Hers or yours? Perhaps you both should be seen as authors. In any case, the boundaries are not clear.

In this sense, many of my thoughts are not (solely) authored by me. We often try to acknowledge as much in forewords and footnotes. But some consequences of this fact might be more serious. Let me name three: (1) There is an obvious problem for the charge of anachronism in the history of philosophy (see my inaugural lecture). If future explications of thoughts can be seen as improvements of these very thoughts, then anachronistic interpretations should perhaps not merely be tolerated but encouraged. Are Descartes’ Meditations complete without the Objections and Replies? Can Aristotle be understood without the commentary traditions? Think about it! (2) Another issue concerns the identity of thoughts. If you are a semantic holist of sorts, you might assume that a thought is individuated by numerous inferential relations. Is your thought that p really what it is without its entailing q? Is your thought that p really intelligible without seeing that it entails q? You might think so, but the referees of your latest paper might think that p doesn’t merit publication without considering q. (3) This leads to the issue of acceptability. Whose inferences or paraphrases count? You might say that p, but perhaps p is not accepted in your own formulation, while the expression of p in your supervisor’s form of words is greeted with great enthusiasm. In a similar spirit, Tim Crane has recently called for a reconsideration of peer review. Even if some of these points are controversial, they should at least suggest that authorship has rather unclear boundaries.

Now the fact that thoughts are often future-directed and have multiple authors has, in turn, a number of further consequences. I’d like to highlight two of them by way of calling for some reconsiderations: a due reconsideration of unclarity and what Eric Schliesser calls “philosophic prophecy”.*

  • A plea for reconsidering unclarity. Philosophers in the analytic tradition pride themselves on clarity. But apart from the fact that the recognition of clarity is necessarily context-dependent, clarity ought to be seen as the result of a process rather than a feature of the thought or its present expression. Most texts that are considered original or important, not least in the analytic tradition, are hopelessly unclear when read without guidance. Try Russell’s “On Denoting” or Frege’s “On Sense and Reference” and you’ll know what I mean. Or try some other classics like Aristotle’s “De anima” or Hume’s “Treatise”. Oh, your own papers are exempt from this problem? Of course! Anyway, we all know this: we begin with a glimpse of an idea. And it’s the frowning of others that makes us either commit it to oblivion or attempt an improvement. But if this is remotely true, there is no principled reason to see unclarity as a downside. Rather it should be seen as a typical, if early, stage of an idea that wants to grow.
  • A plea for coining concepts or philosophic prophecy. Simplifying an idea by Eric Schliesser, we should see both philosophy and the history of philosophy as involved in the business of coining concepts that “disclose the near or distant past and create a shared horizon for our philosophical future.”* As is well known, some authors (such as Leibniz, Kant or Nietzsche) have sometimes written decidedly for future audiences rather than for present ones, trying to pave conceptual paths for future dialogues between religions, metaphysicians or Übermenschen. For historians of philosophy in particular this means that history is never just an antiquarian enterprise. By offering ‘translations’ and explanations we can introduce philosophical traditions to the future or silence them. In this sense, I’d like to stress, for example, that Ryle’s famous critique of Descartes, however flawed historically, should be seen as part of Descartes’ thought. In the same vein, Albert the Great or Hilary Putnam might be said to bring out certain versions of Aristotle. This doesn’t mean that they didn’t have any thoughts of their own. But their particular thoughts might not have been possible without Aristotle, who in turn might not be intelligible (to us) without the later developments. In this sense, much if not all philosophy is a prophetic enterprise.

If my thoughts are future-directed and multi-authored in such ways, this also means that I often couldn’t know at all what I actually think, were it not for your improvements or refinements. This is of course one of the lessons learned from Wittgenstein’s so-called private language argument. But it concerns not only the possibility of understanding and knowing. A fortiori it also concerns understanding our own public language and thought. As I said earlier, I take it to be a rationality constraint that I must agree to some degree with others in order to understand myself. This means that I need others to see the point I am trying to make. If this generalises, you cannot ‘know thyself’ without listening to others.

___

* See Eric Schliesser, “Philosophic Prophecy”, in Philosophy and Its History, 209.

Abstract cruelty. On dismissive attitudes

Do you know the story about the PhD student whose supervisor overslept and refused to come to the defence, saying he had no interest in such nonsense? – No? I don’t know it either, by which I mean: I don’t know exactly what happened. However, some recurrent rumours have it that on the day of the PhD student’s defence, the supervisor didn’t turn up and was called by the secretary. After admitting that he overslept, he must indeed have said that he didn’t want to come because he wasn’t convinced that the thesis was any good. Someone else took over the supervisor’s role in the defence, and the PhD was ultimately conferred. I don’t know the details of the story but I have a vivid imagination. There are many aspects to this story that deserve attention, but in the following I want to concentrate on the dismissive attitude of the supervisor.

Let’s face it, we all might oversleep. But what on earth brings someone to say that they are not coming to the event because the thesis isn’t any good? The case is certainly outrageous. And I keep wondering why an institution like a university lets a professor get away with such behaviour. As far as I know the supervisor was never reprimanded, while the candidate increasingly went to bars rather than the library. I guess many people can tell similar stories, and we all know about the notorious discussions around powerful people in philosophy. Many of those discussions focus on institutional and personal failures or power imbalances. But while such points are doubtlessly worth addressing, I would like to focus on something else: What is it that enables such dismissive attitudes?

Although such and other kinds of unprofessional behaviour are certainly sanctioned too rarely, we do in principle have measures against them. Oversleeping and refusing to fulfil one’s duties can be reprimanded effectively, but what can we do about the most damning part of it: the dismissive attitude according to which the thesis was just no good? Of course, using it as a reason to circumvent duties can be called out, but the problem is the attitude itself. I guess that all of us think every now and then that something is so bad that, at least in principle, it isn’t worth getting up for. What is more, there is in principle nothing wrong with finding something bad. Quite the contrary: we have every reason to be sincere interlocutors and call a spade a spade, and sometimes this involves severe criticism.

However, some cases do not merely constitute criticism but acts of cruelty. But how can we distinguish between the two? I have to admit that I am not entirely sure about this, but genuine criticism strikes me as an invitation to respond, while in the case under discussion the remark about the quality of the thesis was given as a reason to end the conversation.* Ending a conversation or dismissing a view like that is cruel. It leaves the recipient of the critique with no means to answer or account for their position. Of course, sometimes we might have good reasons for ending a conversation like that. I can imagine political contexts in which I see no other way than turning my back on people. But apart from the fact that a doctoral defence shouldn’t be such an occasion, I find it suspicious if philosophers end conversations like that. What is at stake here?

First of all, we should note that this kind of cruelty is much more common than meets the eye. Sure, we rarely witness a supervisor refusing to turn up for a defence. But anyone sitting in on seminars, faculty talks or lectures will have occasion to see that sometimes criticism is offered not as an invitation for response, but as a dismissal that is only thinly disguised as an objection. How can we recognise such a dismissal? The difference is that an opinion is not merely criticised but considered a waste of time. This and similar slogans effectively end a conversation. Rather than addressing what one might find wanting, the opponent’s view is belittled and portrayed as not worth taking seriously. As I see it, such speech acts are acts of cruelty because they are always (even if tacitly) ad hominem. The conjunction of critical remarks with ending the conversation shows that it is not merely the opinion that is rejected; there is no expectation that the argument could be improved by continuing the conversation. In this sense, ending a conversation betrays a severe lack of charity, ultimately dismissing the opponent as incapable or even irrational.

You would think that such behaviour gets called out quickly, at least among philosophers. But the problem is that this kind of intellectual bullying is actually rather widespread: whenever we say that an opinion isn’t worth listening to, when we say, for instance, that analytic or continental philosophy is just completely wrongheaded or something of the kind, we are at least in danger of engaging in it.** Often this goes unnoticed because we move within circles that legitimise such statements. Within such circles we enjoy privilege and status; outside them, our positions are belittled as a waste of time. And the transition from calling something bad to calling it a waste of time is rather smooth, if no one challenges such a speech act.

Having said as much, you might think I am rather pessimistic about the profession. But I am not. In fact I think there is a straightforward remedy. Decouple criticisms from ending conversations! But now you might respond that sometimes a conversation cannot continue because we really do not share standards of scholarship or argument. And we certainly shouldn’t give up our standards easily. – I totally agree, but I think that rather than being dismissive we might admit that we have a clash of intuitions. Generally speaking, we might distinguish between two kinds of critical opposition: disagreements and clashes of intuition. While disagreements are opposing views that can be plotted on a common ground, clashes of intuition mark the lack of relevant common ground. In other words, we might distinguish between internal and external criticism, the latter rejecting the entire way of framing an issue. I think that it is entirely legitimate to utter external criticism and signal such a clash. It is another way of saying that one doesn’t share sufficient philosophical ground. But it also signals that the opposing view might still deserve to be taken seriously, provided one accepts different premises or priorities.*** Rather than bluntly dismissing a view because one feels safeguarded by the standards of one’s own community, diagnosing a clash respects that the opponent might have good reasons and ultimately engages in the same kind of enterprise.

The behaviour of the supervisor who overslept is certainly beyond good and evil. Why do I find this anecdote so striking? Because it’s so easy to call out the obvious failure on the part of the supervisor. It’s much harder to see how we or certain groups are complicit in legitimising the dismissive attitude behind it. While we might be quick to call out such brutality, the damning dismissive attitude is more widespread than meets the eye. Yet it could be remedied by admitting to a clash of intuitions; that, however, requires some careful consideration of the nature of the clash, and perhaps the decency of getting out of bed on time.

_____

* This post by Regina Rini must have been at the back of my mind when I thought about conversation-enders; not entirely the same issue, but a great read anyway.

** A related instance is calling a contemporary or a historical view “weird”. See my post on relevance and othering.

*** Examples of rather respectable clashes are dualism vs. monism or representationalism vs. inferentialism. The point is that the debates run into a stalemate, and picking a side is a matter of decision rather than argument.

Should we stop talking about “minor figures”?

Every now and then, I hear someone mention that they work on “minor figures” in the history of philosophy. For reasons not entirely clear to me, the very term “minor figures” makes me cringe. Perhaps it is the brutally belittling way of picking out the authors in question. Let’s face it: when we speak of “minor figures”, we don’t necessarily mean “unduly underrated” or “neglected”. At the same time, the reasons are indeed not clear to me, since I know perfectly well that the very people who work on such figures do anything but belittle them. Nevertheless, the use of the term indicates that there is something wrong with our historiographical and linguistic practice. In what follows, I want to take a stab at what’s wrong, first with “minor”, then with “figures”.

Let me begin by saying that I deem most of the work done on “minor figures” very important and instructive. Projects such as Peter Adamson’s “History of Philosophy without any Gaps” or Lisa Shapiro and Karen Detlefsen’s “New Narratives” constantly challenge our canon by providing great resources. What’s wrong with the talk of “minor figures”, then? I guess the use of the term “minor” confirms the canonical figures in their role as “major figures” or even geniuses. Even if I shift the focus to some hardly known or even entirely anonymous person, the reference to them is mostly justified by their being an “interlocutor” of a “major” figure. Who begins to study Walter Chatton except by way of William Ockham, or Burthogge except by way of Locke? The context that these minors are supposed to provide is still built around an “absurdly narrow” set of canonical figures. But even if researchers might eventually study such figures “in their own right”, the gatekeeping practice among book and journal editors doesn’t seem likely to change anytime soon. In other words, attempts at diversifying or challenging the canon paradoxically stabilize it.

Now you might argue that there is good reason to focus on major figures. Presumably they are singled out because they indeed write the best texts, raise the most intriguing issues, present the best arguments or have the greatest impact on others. Although I don’t want to downplay the fact that most canonical authors are truly worth reading, we simply aren’t in a position to know. And you don’t even need to pick hardly known people such as Adam Wodeham or Giovanni Battista Giattini. Why not prefer Albert the Great over the notorious Aquinas? Why not read Burthogge or Zabarella in the first-year course? Really, there is nothing that justifies the relatively minor status of such figures apart from existing preferences.

But perhaps the central worry is not the talk of “minor”. What seems worse is the fact that we focus so much on figures rather than debates, questions or topics. Why not work on debates about intentionality or social justice rather than Plato or Sartre? Of course you might indeed have an interest in studying a figure, minor or major. But unless you have a particular biographical interest, you might, even as a dedicated historian of philosophy, be more likely to actually focus on a topic in a figure or on the debate that that person is participating in. I see two main reasons for shifting the focus from figures to debates. Firstly, philosophy does not really happen within people but between them. Secondly, the focus on a person suggests that we try to figure out the intention of an author, but unless you take such a way of speaking as a shorthand for textual analysis, your object of study is not easily available.

By the way, if we shift the focus from people to debates, we no longer need the distinction between minor and major. When I studied Locke, it became natural to study figures such as Burthogge. When I studied Ockham, it became natural to study figures such as Adam Wodeham or various anonymi. But perhaps, you might argue, our reason for focussing on figures is more human: we’re interested in what people think rather than in the arguments in texts alone. When we make assumptions, we think along with people and try to account for their ideas as well as their shortcomings and inconsistencies. But even if that is true, we shouldn’t forget that people are never really geniuses. Their thoughts mature in dialogue, not least in dialogue with minor figures such as ourselves.

Brave questions. A response to Sara Uckelman

Sara Uckelman has great advice for new students: be brave and ask questions! Even and especially those questions that you might find silly. Why should you? “Because I can guarantee you that every question you have, someone else in the class is going to have it too, and they’re not going to be brave enough to ask, and they will be so grateful to you that you were.”

Going by my own experience as a student and professor, this is quite true. The only thing I’d like to add is that this advice applies not only to beginners but perhaps especially to advanced practitioners. The reason is that there is no such thing as a question that is both genuine and silly. Why? Because, at least in philosophy, nothing is ever justified by itself.

Nevertheless, asking questions is difficult. As Sara Uckelman points out, it involves bravely embracing “your ignorance and confusion”. Moreover, questions are almost a textual genre unto themselves. (See Eric Schliesser’s advice on how to develop more elaborate questions.) Therefore, I think it’s worthwhile to actually practise asking questions. Here are a few ideas on how to get started:

(1) Write down your question! You don’t even need to ask it if you’re unsure. But writing it down will enable you to keep track of your concern as the discussion moves on. You can perhaps see how close your question is to other questions (which might be variants of your question). And you can still choose to leave it at that or ask it later or even after the talk or class.

(2) Figure out what kind of question you have! Back in the day, I often felt stupid because I couldn’t actually pin down what to ask for in the first place. Asking for the meaning of an unfamiliar term is fairly simple (and it’s always a good thing to ask, because terminology is often used in specific and different ways by different people). But more often than not, I just felt like saying “I don’t understand that passage at all.” If you feel like that, it might be a good start to figure out more clearly what exactly you don’t understand about it: a word, a certain argumentative move, the relation between two sentences etc. You can then begin by stating what you do understand and then move on to saying where exactly you lose track. It locates the problem, makes one feel less helpless, and will help your interlocutor.

(3) Structure your question! Sometimes you might just want to get it out and over with. But if you feel comfortable enough it might be helpful to raise a question in a more elaborate manner. I find the following parts useful:

  • target: say what the question is about
  • question: state the actual question itself
  • explanation: give a brief account of why the question arises
  • anticipation: perhaps sketch possible answers (at talks this is helpful for preparing follow-up questions)

Of course, it’s not necessary to do all of those things. But bearing such a structure in mind often helped me to prevent myself from losing track of where I actually am. Sometimes even the mere act of talking might seem difficult. In such cases, this structure might help you to say some things without having to think (which is difficult when you’re nervous). So you might begin by saying “I’d like to ask a question about this … (insert term or phrase)” or by saying “I have a question. Let me explain how it arises.” Uttering such (or other) words will perhaps make you feel more at home in the space you’re inhabiting.

What is philosophy? A response to Laura Georgescu

Some years ago, I began to make a habit of telling students what I think philosophy is. Not in order to tell them the one and only Truth, but in order to clarify what I look out for in papers and interactions. So I will say something like the following: “Philosophy is a huge and on-going conversation. Thus, philosophy is not directly about phenomena; rather it deals with claims about phenomena. So you won’t ask ‘What is thinking or justice?’ Rather you will deal with what other people or you say about thinking or justice etc.” This is normally followed up by some explanation of how to identify claims and arguments (or other kinds of evidence) in relation to such claims. Finally, I try to explain that evaluating a position does not mean saying that one is for or against it, but spelling out in what way and how convincingly the arguments support the claim.

Recently, I’ve grown a bit wary of saying such things. Why? Of course, I like to think about philosophy that way because it highlights the fact that it’s a conversational practice where certain rules of discourse apply. And sometimes it also stops people from doing their head in about the phenomena themselves. If you have to write a paper it’s easier to think about the things people say than to ask yourself what consciousness really is. But on the other hand it sends the message that philosophy is all about making claims. Now what’s wrong with that? In a way, not much. But then again it seems to have things the wrong way round. One might even think that we are mainly trained to question others rather than our own beliefs. But in fact a claim is something you might come to after a lot of thinking, talking and doubting yourself. A claim is not one’s starting point, or is it?

I couldn’t quite put my finger on it before reading Laura Georgescu’s recent blog post “Against Confidence in Opinions”. Go and read it! I’ll just give you the main take-home message it had for me: like much of the world, academic philosophy is infected with the idea that it’s a good thing to have confidence, especially in the rightness of one’s opinions. So it’s all about defending one’s claims. But what is the outcome of that? Something most of us would claim not to find desirable: dogmatism. So there’s a cultivation of confidence, and it leads to dogmatism. This nailed it for me. I realised that what I found problematic was that people were too often invested in defending their positions rather than questioning them. If you look at philosophers, you might want to distinguish two types: the one who is self-undermining and asking questions all the time, as opposed to the confident one, tossing out one claim after another. The latter type seems to abound. – But as much as I like to believe in such a distinction, I doubt that it holds. So what now?

I recently said that advising people to work on their confidence is cold comfort. Nor do I think that we can just ignore this culture. So let’s consider what precisely might be undesirable about it. When I remember my student days, I remember admiring many of my fellow students for their confidence. They were speaking up, eloquently so, while I was running through possible formulations of an idea and remained doubtful whether I had a point at all. That feeling remained for a very long time. After the PhD and Habilitation it got better, but whenever I went out of my scholarly comfort zone, I felt I had no points to make. There is a kind of confidence that depends on a feeling of legitimacy, and I often think getting a permanent job helps a lot with that feeling. – So now that I feel confident enough to write blog posts about what philosophy is, I should start preaching that confidence is a bad thing? That doesn’t sound convincing to me. So what precisely is wrong with it?

First of all, there is a lot right with it. It helps getting through the day in all sorts of ways. But as Laura Georgescu emphasises, it’s confidence in opinions that is troublesome. How then can we prevent dogmatism without giving up on being confident entirely?

I think it might help to come back to the idea of philosophy as a conversational practice and to distinguish two kinds of conversation: an internal conversation that one has with oneself (Plato called this “thinking”) and the external conversation that one has with others. When we see external conversations happening between people, we often hear someone asking a question and someone else responding with a claim. Further questions ensue and the claim gets defended. What we observe are two types: the questioner and the one defending claims. This is what we often witness in philosophy talks, and our paper structures imitate that practice, mostly with the author as the one making claims. The upshot is that, in papers and talks, we often play just one of two or more possible roles. That might be undesirable.

However, if we focus on internal conversations, we find that we in fact do both. The claims we pin down come after a lot of self-undermining back and forth. And the confidence we can muster might be the last resort covering the relentless doubting that goes on behind our foreheads. In our internal conversations, I guess, most of us are far from any kind of dogmatism.

I suppose, then, if we see reason to change the practice of coming across as dogmatic, a good start might be to bring some of that internal conversation to bear on external conversations. Rather than playing sceptic versus dogmatist, we might every now and then remember that, most of the time, we just do what Plato called thinking. Having a dialogue in which we take on all sorts of roles and propositional attitudes. Bring it on! But I guess it takes some confidence.

Is there a difference between offline and online discussions? A response to Amy Olberding

“My trouble is usually… that I don’t entirely know what I think. And not knowing what to think is itself sometimes cast as shameful.”

Thanks to Daily Nous, I recently came across these sentences in a moving blog post by Amy Olberding. The message is clear and there is perhaps nothing substantial I have to add. As Justin Weinberg aptly notes, there is an “irony to philosophers in particular—whose job description has long included undermining certainty and complicating the obvious …” I agree that questioning one’s beliefs is one of the main points of doing philosophy. Having opinions and especially “defending” them is seriously overrated. But if Amy Olberding’s observations are correct, we are mainly trained to question others rather than our own beliefs.

However, I wonder why she restricts her observations to the “online discourse”. It seems to me that aggressive assertiveness is encouraged in many places, not least among philosophers. Of course, there is a particularly worrying trend in anonymous comments on social media, but the attitude seems to be a (perhaps somewhat inflated) reflection of our common modes of offline interaction.

This makes me wonder about the general distinction between online and offline discourse. It is now common to distinguish between the social media of the internet and our real life. But although online interaction requires technological aids and enables, among other things, anonymity, I don’t see a principled difference between the two. If I insult you, that is impertinent behaviour, no matter whether I do it online or offline. Yes, I have more options to hide or pretend when online, but that does not alter the moral dimension of the interaction. Online interaction can be good or bad, because we behave well or badly. And despite all the hate, there are good interactions online. They just don’t receive as much attention.

So the one thought I would like to add, after all, is that there might be good reasons to deflate the distinction between online and offline interactions. It’s not as if we were angels who happen to turn into moral monsters (only) when online. This is also why I have mixed feelings about the idea of “leaving” online discussions and “returning” to real life. Our lives and interactions are real wherever we are. Leaving an online discussion does not just mean switching off a machine. It means ceasing to interact with certain people (whom one can only reach online).*

___

* That said, I don’t question Amy Olberding’s reasons for leaving the discussions from which she resigned. I just think such a resignation is not all that different from leaving a discussion in a room full of people.

Voices inside my head. On constructive criticism

Most of the time when writing or just thinking, I hear voices uttering what I (want to) write. Often this is a version of my own voice, but there are also numerous other voices. These are the voices of friends, colleagues and students. Sometimes I hear them because I remember what they said during a discussion. But more often I imagine them saying things in the way they would phrase an objection or a refinement of what I wanted to say. Yet, although it is me who imagines them, it’s their convictions and style that determine the content and phrasing. If I succeed in my attempts to write in a dialogical fashion, it is thanks to others leaving their traces in my memory and eventually in my texts. It is this kind of experience that makes writing fun. But what I want to claim now is that this is also a good way of integrating criticism. In this way, the strengths of others can become strengths of your own writing.

Why is this important? Philosophy is often taken to thrive on criticism. Some would even claim that criticism lies at the heart of intellectual exchange. Only if we take into account the critique of others can we expect to have considered an issue sufficiently. I agree. Assuming that reason is social, philosophers need to expose themselves to others and see what they have to say. However, it’s not clear that the social nature of reason requires criticism as the primary mode of exchange. There are various styles of thinking; understanding and engaging with ideas can happen in many different ways.

Some people will ask to be “destroyed” by their interlocutors, while others might think that any question amounts to an impertinent transgression. Might there be a middle ground between these extremes? What is telling is that the metaphors around philosophical argumentation mostly intimate opposition or even war. (See, for instance, the intriguing discussion in and of Catarina Dutilh Novaes’ great piece on metaphors for argumentation.) In view of this practice, I think it’s crucial to remember that spotting mistakes does not turn anything into a good idea. The fact that you know how to find flaws does not mean that you’re able to improve an idea. (See Maarten Steenhagen’s excellent clip on this point, and make sure to turn up the sound.) In any case, it’s not surprising that there is an ongoing debate, and a bit of a clash of intuitions, between philosophers who like and who dislike an adversarial style of conversation. Some think criticism fosters progress, while others think criticism blocks progress.

How can we move on? I think it’s crucial to consider the precise nature of the criticism in question. The point is not whether people are nice to one another; the point is whether criticism is genuine. But what is genuine criticism? I think genuine criticism takes a paper or talk on its own terms. Now what does that mean? Here, it helps to rely on a distinction between internal and external criticism. Internal criticism takes the premises of a contribution seriously and focuses on issues within an argument or view. A good example of a whole genre of internal criticism is the medieval commentary tradition. A commentary exposes the premises and aims at clarification and refinement without undermining the proposed idea. By contrast, external criticism often starts from the assumption that the whole way of framing an issue is mistaken. A good example of such external criticism is the debate between hylomorphists and mechanists in early modern philosophy.*

I think that only internal criticism is genuine. That doesn’t mean that external criticism is useless, but it is not an engagement with the opposing position; at least not in such a way that it attempts to leave the main claim intact. It is the view that the opponent’s position is not acceptable.** I think it is important to see that these two kinds of criticism are entirely different moves in the game, or even wholly different games. Internal criticism happens on common ground; external criticism is the denial of common ground. Both forms are legitimate. But I think that many discussions would go better if those involved were clear about whether their criticism is internal or external. Ideally, both kinds of criticism are presented along with an indication of what an answer to the challenge would actually look like.

How can we apply this to our writing? I think it is vital to include both kinds of criticism. But it helps me to keep the distinction in mind. If someone tells me that my argument for taking Locke as a social externalist about semantics might need more textual support, or a refined exposition of how the evidence supports my claim, I will see their point as supporting my idea. (Of course, if I have no such evidence, this criticism would be fatal, however genuine.) If someone tells me that Locke’s semantics isn’t worth studying in the first place, their point is clearly external. That doesn’t mean that I don’t need to reply to the challenge of external criticism. But the point is that the latter targets my endeavour in a different way. External criticism questions the very point of doing what I do and cannot be addressed by amending this or that argument. Responding to internal criticism happens within a shared set of ideas. Responding to a clash of intuitions means deciding for or against a whole way of framing an issue. Only internal criticism is constructive, but we need to respond to external criticism in order to see why it is constructive. So when you work on your argument, don’t bother with external criticism. When you write the introduction or conclusion, by contrast, reach out to those who question your project entirely.

How then should we deal with such criticisms in practice? It’s sometimes difficult to deal with either. This is why I ultimately like the approach of internalising all kinds of criticism into the choir of voices inside my head. Once they are in my head, it feels like I can control them. They become part of my thinking and my chops, as it were. I can turn up the volume of any voice, and in writing it’s me who is in charge of the volume.*** Thus, I’d suggest we should appropriate both kinds of criticism. It’s just crucial to recognise them for what they are. Appropriating all the voices gives you some control over each of them.

To be sure, at the end of the day it’s important to see that we’re all in this together. We’re doing philosophy. And even if people don’t agree with our projects, they endorse that larger project called philosophy. So even in external criticism there must be some sort of common ground. Most of the time, I can’t see what the precise features of this common ground are, but being lost in that way makes me feel at home.

______

* Of course, there are tipping points at which internal criticism can turn into external criticism, and vice versa.

** This doesn’t mean that external criticism is hostile or not genuine in other ways. One can criticise externally out of genuine concern, assuming perhaps that an idea requires a different kind of framework or that work on a seriously flawed position might prove a waste of time for the addressee.

*** Reaching a state of control or even balance is not easy, though. It is often the most critical voices that are loudest. In such cases, it might be best to follow Hume’s advice and seek good company.