What Is an Error? Wittgenstein’s Voluntarism*

Imagine that you welcome your old friend Fred into your study. Pointing at the door, he asks you whether he should shut the window. You’re confused. Did Fred just call the door a window? He’s getting old, but surely not that old. You assume that Fred has made a simple mistake. But what kind of mistake was it? Did he make a linguistic mistake by mixing up the words? Or did he make a cognitive mistake by misrepresenting the facts and taking the door to be a window? “Fred, you meant to say ‘door’, didn’t you?” If he nods agreement, everything is fine. If he doesn’t, you will probably begin to worry about Fred’s cognitive system or conceptual scheme. You might wonder whether his vision is impaired or something worse has happened, unless it turns out that you, in turn, misread Fred’s gesture, and he did indeed mean the window opposite the door.

This example can be considered in various ways.** We usually take such mistakes to lie in an erroneous use of words rather than in a misrepresentation on the part of the cognitive system, such as a hallucination. The latter case seems far more drastic. But are the cases of linguistic and cognitive mistakes related? Is one prior to the other? In what follows, I’d like to consider them through the lens of Wittgenstein’s later philosophy of mind and suggest that his account has roots in theological voluntarism.

Let’s begin by looking at the accounts of error that suggest themselves. What kind of distinction is at work here? It seems that there are at least two possible ways of locating error:

  • linguistic errors occur on the level of behavioural interaction between language users: in this case an error is a deviation from a social practice;
  • cognitive errors occur on the level of (mental) representation: in this case an error is a mismatch between a representation and a represented object.

The distinction between interaction and representation intimates two ways of thinking about minds. Representational models locate correctness and error in the relation between (mental) sign and object. Interactionist or social models locate correctness and error in the relation between (epistemic) agents. On the face of it, the representational model is the more traditional one, going back at least to Aristotle and the scholastics, before being famously reintroduced and radicalised by Descartes. By contrast, the interactionist model is taken to be relatively young, inspired by the later Wittgenstein, who attacked his own earlier representationalism and the whole tradition along with it. This historical picture is of course a bit of a caricature. But rather than adding the necessary refinements, I think we should reject it entirely. Besides misconstruing much of the history of thinking about minds, it obscures commonalities that might actually help us understand Wittgenstein’s move towards the interactionist model.

What, then, might have inspired Wittgenstein’s later model? I think that Wittgenstein’s later philosophy of mind is driven, amongst other things, by two ideas, namely (a) that all kinds of mental activities (such as thinking and erring) are part of a shared practice and (b) that the rules constituting this practice have no further explanation or foundation. For illustration, think again of the linguistic error. Ad (a): Of course, calling a door a window is a case of mislabelling. But what turns this into an error is not any representational mismatch. What is amiss is not the fit between utterance and object but Fred’s violation of your expectation (that he would ask, if anything, to close the door, not the window). Ad (b): This expectation is not grounded in anything beyond the experienced practice itself. If you learn that people call a door a door, people should call a door a door. You begin to wonder if they don’t. There is no further explanation as to why that should be so. Taken together, these two ideas give priority to interaction over representation. Accordingly, Wittgensteinians will see error and correctness as grounded in linguistic practice, not in representation.

But where does this idea come from? Although Wittgenstein’s later thought is sometimes likened to that of authors in early modern or medieval times, I haven’t seen his ideas placed in a larger tradition. Perhaps, then, straightforward philosophies of language and mind are not the best place to look. But what should we turn to? If we look for historical antecedents of the two ideas sketched above, we should watch out for theories that construe mental events on the model of action rather than representation. But if you think that such theorising begins only with what is commonly called ‘pragmatism’, you miss out on a lot. Let’s focus on (a) first. We should begin by giving up on the assumption that the representational model of the mind is the traditional one. Of course, representation looms large, but it is not always the crucial explanans of correct vs. erroneous thinking or speaking. Good places to start are discussions that tie error to acts of will. Why not try Descartes’ famous explanation of human error? In the Fourth Meditation, Descartes claims that error does not arise from misrepresentation as such. Rather, I can err because my will reaches farther than my intellect. So my will might extend to the unknown, deviating from the true and good. And thus I am said to err and sin. Bringing together error and sin, Descartes appeals to a longstanding tradition that places error on the level of voluntary judgment and action. Accordingly, there is no sharp distinction between moral and epistemic errors. I can fail to act in the right way or I can fail to think in the right way. The source of my error is, then, not that I misrepresent objects but rather that I deviate from the way that God ordained. This is the way in which even perfect cognitive agents such as fallen angels and demons can err.

What is significant for the question at hand is that God is taken as presenting us with a standard that we can conform to or deviate from when representing objects. Thus, error is explained through deviation from the divine standard, not through a representational model. Of course, you might object that divine standards are a far cry from social standards and linguistic rules.*** But the following three points might have served as a crucial inspiration: putting mental acts on a par with action, explaining error and correctness through a non-representational standard, and having a non-individualistic standard, for it is the relation of humans to God that enforces the standard on us. In this sense, error cannot be ascribed to a single individual that misrepresents an object; it can only be ascribed to a mind that is related to the standards set by God.

If we accept this historical comparison at least as a suggestion, we might say that divine standards play a theoretical role similar to that of social practice in Wittgenstein. However, divine standards come in different guises. Not all philosophers who discuss error in relation to deviant acts of will are automatically committed to the thesis that the divine standards have no further foundation. Theological rationalists assume that divine standards can be justified, such that God wills the Good because it is good. By contrast, voluntarists assume that something is good because God wills it. Thus, rationalistic conceptions could allow for an account of error that is not ultimately explained by reference to the divine standard. In this sense, rationalism would clash with Wittgenstein’s anti-foundationalism, called (b) above, according to which rules have no further foundation over and above the practice. As Wittgenstein puts it in Philosophical Investigations, § 206: “Following a rule is analogous to obeying an order.”

How, then, does Wittgenstein see the traditional theological distinction? Given his numerous discussions of the will even in his early writings, it is clear that his work is informed by such considerations. Most striking is his remark on voluntarism reported in Waismann’s “Notes on Talks with Wittgenstein” (Philosophical Review 74 [1965]): “I think that the first conception is the deeper one: Good is what God orders. For this cuts off the path to any and every explanation ‘why’ it is good …” Here, Wittgenstein clearly sides with the voluntarists.**** Indeed, the idea of rule-following as obedience can be seen as perfectly in line with the assumption that erring consists in violating a shared practice, just as the voluntarist tradition that Descartes belongs to deems erring a deviation from divine standards.

If these suggestions point in a fruitful direction, they could open a path to relocating Wittgenstein’s thought within the long tradition of voluntarism. This might downplay his claims to originality, but at the same time it might render both his work and the tradition more accessible.

___

* Originally posted on the blog of the Groningen Centre for Medieval and Early Modern Thought

** This is my variant of Davidson’s ketch-yawl example in his “On the Very Idea of a Conceptual Scheme”. I’d like to thank Laura Georgescu, Lodi Nauta and Tamer Nawar, who kindly heard me out when I introduced them to the ideas suggested here.

*** Thanks to Martin Kusch, who raised this objection in an earlier discussion on Facebook.

**** See David Bloor, Wittgenstein, Rules and Institutions, Routledge 2002, 126-133, who also discusses Wittgenstein’s voluntarism.

 

Abstract cruelty. On dismissive attitudes

Do you know the story about the PhD student whose supervisor overslept and refused to come to the defence, saying he had no interest in such nonsense? – No? I don’t know it either, by which I mean: I don’t know exactly what happened. However, some recurrent rumours have it that on the day of the PhD student’s defence, the supervisor didn’t turn up and was called by the secretary. After admitting that he had overslept, he apparently said that he didn’t want to come because he wasn’t convinced that the thesis was any good. Someone else took over the supervisor’s role in the defence, and the PhD was ultimately conferred. I don’t know the details of the story, but I have a vivid imagination. There are many aspects of this story that deserve attention, but in the following I want to concentrate on the dismissive attitude of the supervisor.

Let’s face it, we all might oversleep. But what on earth brings someone to say that they are not coming to the event because the thesis isn’t any good? The case is certainly outrageous. And I keep wondering why an institution like a university lets a professor get away with such behaviour. As far as I know, the supervisor was never reprimanded, while the candidate increasingly went to bars rather than the library. I guess many people can tell similar stories, and we all know about the notorious discussions around powerful people in philosophy. Many of those discussions focus on institutional and personal failures or power imbalances. But while such points are doubtlessly worth addressing, I would like to focus on something else: What is it that enables such dismissive attitudes?

Although such and other kinds of unprofessional behaviour are certainly sanctioned too rarely, we have measures against them in principle. Oversleeping and refusing to fulfil one’s duties can be reprimanded effectively, but what can we do about the most damning part of it: the dismissive attitude according to which the thesis was just no good? Of course, using it as a reason to circumvent duties can be called out, but the problem is the attitude itself. I guess that all of us think every now and then that something is so bad that, at least in principle, it isn’t worth getting up for. What is more, there is in principle nothing wrong with finding something bad. Quite the contrary: we have every reason to be sincere interlocutors and call a spade a spade, and sometimes this involves severe criticism.

However, some cases constitute not merely criticism but acts of cruelty. But how can we distinguish between the two? I have to admit that I am not entirely sure about this, but genuine criticism strikes me as an invitation to respond, while in the case under discussion the remark about the quality of the thesis was given as a reason to end the conversation.* Ending a conversation or dismissing a view like that is cruel. It leaves the recipient of the critique with no means to answer or account for their position. Of course, sometimes we might have good reasons for ending a conversation like that. I can imagine political contexts in which I see no other way than turning my back on people. But apart from the fact that a doctoral defence shouldn’t be such an occasion, I find it suspicious when philosophers end conversations like that. What is at stake here?

First of all, we should note that this kind of cruelty is much more common than meets the eye. Sure, we rarely witness a supervisor refusing to turn up for a defence. But anyone sitting in on seminars, faculty talks or lectures will have occasion to see that sometimes criticism is offered not as an invitation for response, but as a dismissal that is only thinly disguised as an objection. How can we recognise such a dismissal? The difference is that an opinion is not merely criticised but considered “a waste of time”. This and similar slogans effectively end a conversation. Rather than addressing what one might find wanting, the opponent’s view is belittled and portrayed as not worth taking seriously. As I see it, such speech acts are acts of cruelty because they are always (even if tacitly) ad hominem. The conjunction of critical remarks and the ending of a conversation shows that it is not merely the opinion that is rejected but that there is no expectation that the argument could be improved by continuing the conversation. In this sense, ending a conversation like this betrays a severe lack of charity, ultimately dismissing the opponent as incapable or even irrational.

You would think that such behaviour gets called out quickly, at least among philosophers. But the problem is that this kind of intellectual bullying is actually rather widespread: whenever we say that an opinion isn’t worth listening to, when we say, for instance, that analytical or continental philosophy is just completely wrongheaded or something of the kind, we are at least in danger of engaging in it.** Often this goes unnoticed because we move within circles that legitimise such statements. Within such circles we enjoy privilege and status; outside them, our positions are belittled as a waste of time. And the transition from calling something bad to calling something a waste of time is rather smooth if no one challenges such a speech act.

Given all this, you might think I am rather pessimistic about the profession. But I am not. In fact, I think there is a straightforward remedy: decouple criticism from ending conversations! But now you might respond that sometimes a conversation cannot continue because we really do not share standards of scholarship or argument. And we certainly shouldn’t give up our standards easily. – I totally agree, but I think that rather than being dismissive we might admit that we have a clash of intuitions. Generally speaking, we might distinguish between two kinds of critical opposition: disagreements and clashes of intuition. While disagreements are opposing views that can be plotted on common ground, clashes of intuition mark the lack of relevant common ground. In other words, we might distinguish between internal and external criticism, the latter rejecting the entire way of framing an issue. I think that it is entirely legitimate to utter external criticism and signal such a clash. It is another way of saying that one doesn’t share sufficient philosophical ground. But it also signals that the opposing view might still deserve to be taken seriously, provided one accepts different premises or priorities.*** Rather than bluntly dismissing a view because one feels safeguarded by the standards of one’s own community, diagnosing a clash respects that the opponent might have good reasons and is ultimately engaged in the same kind of enterprise.

The behaviour of the supervisor who overslept is certainly beyond good and evil. Why do I find this anecdote so striking? Because it’s so easy to call out the obvious failure on the part of the supervisor. It’s much harder to see how we or certain groups are complicit in legitimising the dismissive attitude behind it. While we might be quick to call out such brutality, the damning dismissive attitude is more widespread than meets the eye. Yet it could be remedied by admitting to a clash of intuitions, though that requires some careful consideration of the nature of the clash and perhaps the decency of getting out of bed on time.

_____

* This post by Regina Rini must have been at the back of my mind when I thought about conversation-enders; not entirely the same issue but a great read anyway.

** A related instance is calling a contemporary or historical view “weird”. See my post on relevance and othering.

*** Examples of rather respectable clashes are dualism vs. monism or representationalism vs. inferentialism. The point is that the debates run into a stalemate, and picking a side is a matter of decision rather than argument.

History without narratives? A response to Alex Rosenberg

Recently, Martin Kusch gave an intriguing keynote lecture on the development of the sociology of knowledge. I was particularly interested in the role of Steinthal, whose name I recognised from my studies in linguistics and its history. What was striking was that the lecture combined several levels of explanation. In addition to reconstructing philosophical arguments, Martin Kusch gave detailed insights into the institutional and political events that shaped the development. In other words, the lecture provided a nuanced combination of what is sometimes called historical and rational reconstruction. During the discussion I asked whether he thought that there was one particular level that decided the course of events. “Where do you think the real action took place, in politics or philosophy?” The answer was a succinct lesson in historical methodology: the quest for one decisive level of explanation is deceptive in itself. It suggests mono-causality. In fact, all the different factors have to be seen in conjunction. Real action takes place at every level. (By the way, I think this line of argument offers one of the best reasons why philosophy is inseparable from history.) A few days ago, I was reminded of this idea when reading an interview with Alex Rosenberg, who thinks that certain levels of explanation should be discarded and argues for a history without narratives, because “narrative history is always, always wrong.”

According to Rosenberg, narratives are ways of making sense of events by referring to people’s beliefs and desires. “Had she not wanted x, she would not have done y. Erroneously, she believed that y would help her in getting x.” We engage in this sort of reasoning all the time. It presupposes a certain amount of folk psychology: ascribing beliefs and desires seems to require that these items really figure in a proper chain of events. But do they even exist, one might ask. – Now, we also help ourselves to such explanations in history. Stuff happens. Explaining it sometimes requires us to assume minds, especially when humans are involved. Let’s call this approach folk history. (Note that Rosenberg is targeting “theory of mind” approaches in particular, but for the application to history the specifics of these approaches don’t matter.) In the interview, Rosenberg details why we should do away with folk history:

“The problem is, these historical narratives seduce you into thinking you really understand what’s going on and why things happened, but most of it is guessing people’s motives and their inner thoughts. […] [P]eople use narratives because of their tremendous emotional impact to drive human actions, movements, political parties, religions, ideologies. And many movements, like nationalism and intolerant religions, are driven by narrative and are harmful and dangerous for humanity. […] If narrative history gets things wrong because it relies on projection and things we can’t know for sure, how should we be trying to understand history? – There are a lot of powerful explanations in history and social sciences that don’t involve narrative. They involve models and hypotheses that are familiar in structure to the kind that convey explanation in the natural sciences. For example, take Guns, Germs, and Steel, which gives you an explanation of a huge chunk of human history, and that explanation does not rely on theory of mind at all.”

Alex Rosenberg makes a number of good points: (1) relying on inner states is guesswork; (2) we use such narratives to feed (bad) ideologies; (3) there are other means of writing history, not involving folk history; (4) given the choice, we should confine ourselves to the latter approach. Let’s call this latter approach naturalistic history. I think there is a lot that speaks in favour of such an approach. If you read some Spinoza, Hume, Nietzsche or Freud, you’ll find similar ideas. We assume our thinking follows all these noble patterns of inference when in fact we are driven by motives and associations unknown to us. That said, the way Alex Rosenberg presents this naturalistic approach raises a number of concerns, two of which I would like to address now.

  • The first worry concerns (4), i.e. the conclusion that folk history and naturalistic history should be played off against one another. Just as we need the “intentional stance” in the philosophy of mind, we also need it in history. But that’s not the whole story. Our reference to beliefs and desires does not only figure in historical explanations. It is also the very stuff we are interested in qua being human amongst other humans, and thus it shapes the events we want to explain. I take part in causing events because I ascribe mental states to others: I don’t sing in the library because I assume that it would annoy my fellow readers. Of course, you can explain many of my actions by reference to biological and other factors. But at some point such explanations would have to invoke my ascriptions. Doing away with that level would mean doing away with a crucial part of the explanans. Playing off these levels against one another is like thinking that there is ultimately just one relevant explanatory level.
  • The second worry concerns (2), i.e. the tenet that narratives are the stuff of ideologies (and thus erroneous and to be avoided). While it is true that ideologies are fed by certain narratives, I know of no way to refer to (historical) data without a narrative. The naturalistic approach is not avoiding narratives tout court; it merely avoids a certain kind of narrative. It replaces the folk historical approach with a naturalistic narrative. Pretending that this is tantamount to avoiding all narratives is to suggest that the raw data of history are just there, waiting to be picked up by the disenchanted historian. In other words, I think that Rosenberg’s suggestion falls prey to a variant of the myth of the given. To say that narratives are “always wrong”, then, seems to be a category mistake. As I see it, narratives as such are neither right nor wrong. Rather, they provide frameworks that enable us to call individual statements right or wrong.

Since I have not read the book advertised in the interview, I don’t yet know whether this is the whole story. But then, who am I to try and tell that story by referring to beliefs and other mental states expressed in that book by Alex Rosenberg?

Should we stop talking about “minor figures”?

Every now and then, I hear someone mention that they work on “minor figures” in the history of philosophy. For reasons not entirely clear to me, the very term “minor figures” makes me cringe. Perhaps it is the brutally belittling way of picking out the authors in question. Let’s face it: when we speak of “minor figures”, we don’t necessarily mean “unduly underrated” or “neglected”. At the same time, the reasons really are unclear to me, since I know perfectly well that the people who work on such figures do anything but belittle them. Nevertheless, the use of the term indicates that there is something wrong with our historiographical and linguistic practice. In what follows, I want to have a stab at what’s wrong, first with “minor”, then with “figures”.

Let me begin by saying that I deem most of the work done on “minor figures” very important and instructive. Projects such as Peter Adamson’s “History of Philosophy without any Gaps” or Lisa Shapiro’s and Karen Detlefsen’s “New Narratives” constantly challenge our canon by providing great resources. What’s wrong with the talk of “minor figures”, then? I guess the use of the term “minor” confirms the canonical figures in their role as “major figures” or even geniuses. Even if I shift the focus to some hardly known or even entirely anonymous person, the reference to them is mostly justified by their being an “interlocutor” of a “major” figure. Who begins to study Walter Chatton for reasons other than William Ockham, or Burthogge for reasons other than Locke? The context that these minors are supposed to provide is still built around an “absurdly narrow” set of canonical figures. But even if researchers might eventually study such figures “in their own right”, the gatekeeping practice among book and journal editors doesn’t seem likely to change anytime soon. In other words, attempts at diversifying or challenging the canon paradoxically stabilise it.

Now you might argue that there is good reason to focus on major figures. Presumably, they are singled out because they indeed write the best texts, raise the most intriguing issues, present the best arguments or have the greatest impact on others. Although I don’t want to downplay the fact that most canonical authors are truly worth reading, we simply aren’t in a position to know this, since that would require comparing them with all the authors we never read. And you don’t even need to pick hardly known people such as Adam Wodeham or Giovanni Battista Giattini. Why not prefer Albert the Great over the notorious Aquinas? Why not read Burthogge or Zabarella in the first-year course? Really, there is nothing that would justify the relatively minor status of these authors irrespective of existing preferences.

But perhaps the central worry is not the talk of “minor”. What seems worse is the fact that we focus so much on figures rather than debates, questions or topics. Why not work on debates about intentionality or social justice rather than on Plato or Sartre? Of course, you might indeed have an interest in studying a figure, minor or major. But unless you have a particular biographical interest, you might, even as a dedicated historian of philosophy, be more likely to actually focus on a topic in a figure’s work or on the debate that that person participated in. I see two main reasons for shifting the focus from figures to debates. Firstly, philosophy does not really happen within people but between them. Secondly, the focus on a person suggests that we are trying to figure out the intention of an author; but unless you take such a way of speaking as a shorthand for textual analysis, your object of study is not easily available.

By the way, if we shift the focus from people to debates, we no longer need the distinction between minor and major. When I studied Locke, it became natural to study figures such as Burthogge. When I studied Ockham, it became natural to study figures such as Adam Wodeham or various anonymi. But perhaps, you might argue, our reason for focussing on figures is more human: we’re interested in what people think rather than in the arguments in texts alone. When we make assumptions, we think along with people and try to account for their ideas as well as their shortcomings and inconsistencies. But even if that is true, we shouldn’t forget that no one is ever really a genius. Thoughts mature in dialogue, not least in dialogue with minor figures such as ourselves.

Brave questions. A response to Sara Uckelman

Sara Uckelman has great advice for new students: be brave and ask questions! Even and especially those questions that you might find silly. Why should you? “Because I can guarantee you that every question you have, someone else in the class is going to have it too, and they’re not going to be brave enough to ask, and they will be so grateful to you that you were.”

Going by my own experience as a student and professor, this is quite true. The only thing I’d like to add is that this advice applies not only to beginners but perhaps especially to advanced practitioners. The reason is that there is no such thing as a question that is both genuine and silly. Why? Because, at least in philosophy, nothing is ever justified by itself.

Nevertheless, asking questions is difficult. As Sara Uckelman points out, it involves bravely embracing “your ignorance and confusion”. Moreover, questions are almost a textual genre unto themselves. (See Eric Schliesser’s advice on how to develop more elaborate questions.) Therefore, I think it’s worthwhile to actually practise asking questions. Here are a few ideas on how to get started:

(1) Write down your question! You don’t even need to ask it if you’re unsure. But writing it down will enable you to keep track of your concern as the discussion moves on. You can perhaps see how close your question is to other questions (which might turn out to be variants of yours). And you can still choose to leave it at that, or to ask it later or even after the talk or class.

(2) Figure out what kind of question you have! Back in the day, I often felt stupid because I couldn’t actually pin down what to ask for in the first place. Asking for the meaning of an unfamiliar term is fairly simple (and it’s always a good thing to ask, because terminology is often used in specific and different ways by different people). But more often than not, I just felt like saying “I don’t understand that passage at all.” If you feel like that, it might be a good start to figure out more clearly what exactly you don’t understand about it: a word, a certain argumentative move, the relation between two sentences, etc. You can then begin by stating what you do understand and then move on to saying where exactly you lose track. This locates the problem, makes you feel less helpless, and helps your interlocutor.

(3) Structure your question! Sometimes you might just want to get it out and over with. But if you feel comfortable enough it might be helpful to raise a question in a more elaborate manner. I find the following parts useful:

  • target: say what the question is about
  • question: state the actual question
  • explanation: give a brief account of why the question arises
  • anticipation: perhaps provide a brief sketch of possible answers (at talks this is helpful for preparing follow-up questions)

Of course, it’s not necessary to do all of these things. But bearing such a structure in mind has often kept me from losing track of where I actually am. Sometimes even the mere act of talking might seem difficult. In such cases, this structure might help you say some things without having to think (which is difficult when you’re nervous). So you might begin by saying “I’d like to ask a question about this … (insert term or phrase)” or by saying “I have a question. Let me explain how it arises.” Uttering such (or other) words will perhaps make you feel more at home in the space you’re inhabiting.

On relevance and othering

Do you remember talking about music during your school days? There was always someone declaring that they would only listen to the latest hits. Talking to philosophers, I occasionally feel transported back to these days: especially when someone tells me that they have no time for history and will only read the latest papers on a topic. “What do I care what Brentano said about intentionality! I’m interested in current discussions.” Let’s call this view “currentism”. I sometimes experience versions of this currentist attitude in exams. A student might present an intriguing reconstruction of a medieval theory of matter only to be met with the question: “Why would anyone care about that today?” I have to admit that I sometimes find this attitude genuinely puzzling. In what follows I’d like to explain my puzzlement and raise a few worries.

Why only “sometimes”? I say “sometimes”, because there is a version of this attitude that I fully understand. Roughly speaking, there is a descriptive and a normative version of that sentiment. I have no worries about the descriptive version: Some people just mean to say what they focus on or indicate a preference. They are immersed in a current debate. Given the constraints of time, they can’t read or write much else. That’s fine and wholly understandable. In that case, the question of why one would care might well be genuine and certainly deserves an answer. – The normative version is different: People endorsing the normative attitude mean to say that history of philosophy is a waste of time and should be abolished, unless perhaps in first-year survey courses. Now you might say: “Why are you puzzled? Some people are just more enthusiastic in promoting their preferences.” To this I reply that the puzzlement and worries are genuine because I find the normative attitude (1) unintelligible and (2) politically harmful. Here is why:

(1) My first set of worries concerns the intelligibility of this attitude. Why would anyone think that the best philosophy is being produced during our particular time slice? I guess that the main reason for (normatively) restricting the temporal scope of philosophy to the last twenty or fifty years is the idea that the most recent work is indeed the best philosophy. Now why would anyone think that? I see two possible reasons. One might think so because one believes that philosophy is tied to science and that the latest science is the best science. Well, that might be, but progress in science does not automatically carry over to philosophy. The fact that I write in the presence of good science doesn’t make me a good philosopher.

So if there is something to that idea, people will ultimately endorse it for another reason: because there might be progress in philosophy itself. Now, whether there really is progress in philosophy is of course hotly debated. I certainly don’t want to deny that there have been improvements, and I continue to hope for more of them. But especially if we assume that progress is an argument in favour of doing contemporary philosophy (and what else should we do, even if we do history!), how can someone not informed about history assess this progress? If I have no clue about the history of a certain issue, how would I know that real advancements have been made? In other words, the very notion of progress is inherently historical and requires at least some version of (whig) history. So unless someone simply holds the belief that recent developments are always better, one needs historical knowledge to make that point.

Irrespective of questions concerning progress, one might still prefer current over historical philosophy because it is relevant to current concerns. So yes, why bother with medieval theories of justice when we can have theories that address current issues? Well, I don’t doubt that we should have philosophers focussing on current issues. But I wonder whether current issues are intelligible without reference to the past. Firstly, there is the fact that our current understanding of justice or whatever is not a mere given. Rather, it is the latest stage of a development over time. Arguably, understanding that development is part of understanding the current issues. Now you might object that we should then confine ourselves to writing genealogies of stuff that is relevant today but not of remote issues (such as medieval theories of, say, matter). To this I reply that we cannot decide what does and doesn’t pertain to a certain genealogy in advance of historical studies. A priori exclusion is impossible, at least in history. Moreover, we cannot know that what we find irrelevant today will still be irrelevant tomorrow. In other words, our judgments concerning relevance are subject to change and cannot be used to exclude possible fields of interest. To sum up, ideas of progress and relevance are inherently historical and require historical study.

(2) However, the historicity of relevance doesn’t prevent it from being abused in polemical and political ways. Besides worries about intelligibility, then, I want to raise political and moral worries against the normative attitude of currentism. Short of sound arguments from progress or relevance, the anti-historical stance reduces to a form of othering. Just as some people suffer exclusion and are labelled “weird” on the basis of stereotypes of race or gender, others are excluded for reasons of historical difference. But we should think twice before calling a historically remote discussion less rational or relevant or whatever. Of course, there is a use of “weird” that is simply a shorthand for “I don’t understand the view”. That’s fine. What I find problematic is the unreflective dismissal of views that don’t fit into one’s preferences. The fact that someone holds a view that does not coincide with today’s ideas about relevance deserves study rather than name-calling. As I see it, we have moral reasons to refrain from such forms of abuse.

If we don’t have reasons showing that a historical view has disadvantages over a current one, why do we call it “weird” or “irrelevant”? Here is my hunch: it’s a simple fight over resources. Divide et impera! But in the long run, it’s a lose-lose situation for all of us. Yet if you’re a politician and you manage to play off different sub-disciplines in philosophy or the humanities against one another, you can simply stand by until they’ve delegitimised each other so much that you can call all camps a farce and close down their departments.

What does it mean to be ‘actively’ researching a paper?

Reading one of Martin’s earlier posts on turning a half-baked idea into a paper got me thinking about the writing project I’m currently engaged in, which one might describe as turning a fully baked idea into a paper – one that I left in the oven for too long. This might not be the best metaphor: I don’t think the idea I express in the paper is burnt or has gotten worse over time (at least I hope not). But the paper is one that was nearly finished and that I started to refine into a publishable article nearly a year ago, only to put it down to work on other projects with more pressing deadlines. It’s now at the point where I’m returning to the paper and finally bringing it to completion. This situation has certain disadvantages that I thought would be worthwhile to discuss and caution against, and it also got me thinking about what it means to be ‘actively’ working on or researching a paper.

Obviously, this kind of situation (where one has to return to and finish an already started and possibly almost finished project) is less than ideal. In an ideal world, one starts a paper and continues to work on it regularly and without significant interruption until it’s finished (for my purposes, let’s say a ‘finished’ paper is one that has been submitted for publication). I say without significant interruption because it is rarely the case that an academic is able or even wants to work on only one paper until it’s finished, and on nothing else. At the very least there will be other things one is working on – other research projects, grant proposals, administrative work, teaching, etc. – and these will necessarily interrupt progress on a project to some degree. The situation I have in mind, rather, is one where work on a project gets interrupted to the extent that one has to put it down for some period of time, such that returning to it is not an easy task. In this scenario, returning to the project requires re-familiarizing oneself with the project itself (what you are trying to argue in the paper) and perhaps also with the secondary literature one is engaging with. Not only this, but depending on how long the project has been sitting, one might also need to make sure that no new research has been published in the meantime that one ought to consider. There are, therefore, significant disadvantages to ‘leaving a paper in the oven’ or, perhaps better, leaving it in a folder to collect (virtual) dust for too long. Given that one shouldn’t forget about a paper for too long, and that it is unrealistic that anyone is able to work on a single project from beginning to end in a short time frame, what sort of scenario should we aim for that both avoids the disadvantages mentioned and counts as ‘actively’ researching a paper? What does it mean to be ‘actively’ researching a paper anyway?

I won’t try to answer all of these questions here; I’ll focus on the last one. To start, I should clarify that, although related, I’m not here interested in what it means to be an ‘active researcher’ for any institutional purposes. However, given that institutions have definitions of this, it might be interesting to look at one. Let’s take the first option that Google offered me, namely the definition adopted by Dundalk Institute of Technology in Ireland: they define a ‘Research Active’ individual as “someone who conducts research on an ongoing basis and ensures it is a significant focus of their academic activity”. This definition is shared by institutions like Macquarie University as well (see here).[1] This basic definition is a good one to work with, and it emphasizes what I think are the key factors when it comes to avoiding the disadvantages associated with leaving a paper in a drawer, a folder, or the oven for too long: when working on a paper we should aim to conduct research on an ongoing basis, and it should be a significant focus of ours.

The idea of working on a paper on an ongoing basis stresses that we never let a project sit or forget about it, that we think about it almost every day, and that we keep up to date on the secondary literature. This ‘ongoing’ work should prevent one from having to re-familiarize oneself with one’s argument in the paper, and also with the intellectual conversation one is contributing to with the publication. I suppose these features are also stressed by the idea that working on such a project should be a ‘significant’ focus of ours, but this second feature might emphasize that the time and energy we devote to a project should never fall below a certain threshold, i.e. that we are always giving the project the amount of attention needed to bring it to completion. I’m not sure what this threshold is, and it might even vary from individual to individual (we each have our limits when it comes to how much multi-tasking we’re capable of). But if we want to be actively researching something, we should never take on too much given our limits, and if we want to finish a project it should always be a significant or main focus of ours.

I’m sure much more can be said about what I’ve discussed here. To conclude, it’s worth highlighting that I’ve assumed that the end goal of working on a project is its eventual publication. This may not be one’s goal, but academics working at institutions will likely have this goal, and it is at least mine for the project I have in mind. It would be interesting to think about how things might change if our end goal is different, but I leave that for a different occasion.

[1] For another definition of ‘research active’ that is obviously tied to more institutional concerns, and thus that I’m not interested in here, the (now non-existent) Higher Education Funding Council for England defined someone as ‘research-active’ for contractual purposes as someone who is carrying out “research that would be appropriately assessed by the criteria used by the REF.” (see here)

 

P.S. Seeing as it’s my first post, let me take the opportunity to thank Martin for having me on the blog! I’m really looking forward to taking part.