Against leaving academia

For quite some years, newspapers and the academic blogosphere have been packed with advice for those considering leaving academia. There are practical tips on how to enter the non-academic world, pleas against the stigma that one might see in “giving up”, and so on. Many pieces of such advice are very helpful indeed and imparted with the best intentions. However, I am troubled to see that there is also an ever-growing number of pieces that advise leaving academia or at least imply that it is the best thing one can do. The set of reasons for this is always the same: academia is bad, bad, bad. It is toxic, full of competition, a threat to one’s health, and exploitative. On a famous philosophy blog I even read that it is “unethical” to encourage students to stay in academia. In what follows, I’d like to take issue with such claims and present three reasons against leaving academia.

Given my own academic biography, I’d be the last person to underestimate the downsides of academia. Surviving, let alone “making it”, is down to sheer luck. All your merits go nowhere unless you’re in the right place at the right time. However, that does not mean (1) that we don’t need academics, (2) that academia is worse than any other place or (3) that work in academia can’t be fun. Let’s look at these points in turn.

(1) We need academics. – Believe it or not, even though politicians of certain brands, taxpayers and even one’s parents might ceaselessly claim that most academic work and the humanities in particular are useless, the contrary is true. Discourse and reflection are an integral part of democracies. Academia is designed to enable just that: research and higher education are not just some niches; they are the beating heart of democratic cultures across the globe. Of course, our actual daily practice might often look somewhat different. But there is more than one response to the fact that the nobler ends of our work are often under threat, from inside and outside. The alternative to leaving is attempting to improve academia. That might be quite difficult. But if masses of good people keep leaving academia, the result will be increasing corrosion that undermines our democracies. To be sure, ultimately anyone’s personal reasons are good enough, but I find the general advice in favour of leaving slightly (if often unintentionally) anti-democratic.

(2) Academia is part of the rest of the world. – Academia is often called bad names: we supposedly live in an ivory tower, and some philosophers never even leave their armchairs. I often talk to students who have been advised to pursue their “plan b” before they really got started with their studies. They unanimously seem to be told that “the world outside” or the “normal world” is better. It seems that academics have a lot of special problems that don’t exist outside, or at least not in such numbers. Again, I do not wish to downplay our problems, far from it. I truly believe that there are a number of issues that need urgent attention. But then again I wonder why leaving should help with that. Many problems in academia are owing to (bad) working conditions and policies. But why would anyone think that these very same problems do not exist in the rest of the world? Plan b won’t lead to some sort of paradise. The same conditions apply to the workforce inside and outside of ivory towers. In fact, I know quite a number of people who have non-academic jobs. By and large, their conditions don’t strike me as much different. Competition, (mental) health issues, exploitation, harassment, misogyny, bullying, you name it – all of these things abound elsewhere, too. So if you want to leave, look around first: you might find the same old same old in disguise.

(3) Academic work can be fun. – We’re often told that our kind of work causes a lot of suffering (not solely in our recipients). Again, I don’t want to downplay the fact that a lot of things we are asked to do might feel quite torturous. But when I listen to myself and other people describing what it actually is that makes the work so troublesome, it is often not the work itself. Writing might be hard, for instance, but the unpleasant feelings are owing not to the writing but to the idea of its being uncharitably received. Similarly, interacting with fellow students or fielding questions in the Q&A after a talk might be stressful, but as I see it, the stress is often created by (the fear of) unpleasant standards of aggressive interaction. Talking through the same material with an attentive friend would not, I guess, trigger the same responses. Again, my advice would not be leaving but working towards improving the standards of interaction.

You might still say that all of these considerations are cold comfort in view of the real suffering going on. I won’t deny that this is a possibility. In fact, academia can be full of hidden or overt cruelties and people might have very good reasons indeed to leave academia. I don’t see doing so as a failure or as wrong. What I find problematic is the current trend of advising such measures on a general basis. But of course, for some this advice might still be helpful to embrace a good decision or an inevitable step. What ultimately encouraged me to write this post today are my students, two of whom came to me this week to tell me that, contrary to their previous expectations, they found their fellow students ever so supportive, charitable and encouraging. Where they were warned to fear competition, they were actually met with the friendliest cooperation. I don’t hear this all too often, but who would want to let this hopeful generation down?

What Is an Error? Wittgenstein’s Voluntarism*

Imagine that you welcome your old friend Fred in your study. Pointing at the door, he asks you whether he should shut the window. You’re confused. Did Fred just call the door a window? He’s getting old, but surely not that old. You assume that Fred has made a simple mistake. But what kind of mistake was it? Did he make a linguistic mistake by mixing up the words? Or did he make a cognitive mistake by misrepresenting the facts and taking the door to be a window? “Fred, you meant to say ‘door’, didn’t you?” If he nods agreement, everything is fine. If he doesn’t, you will probably begin to worry about Fred’s cognitive system or conceptual scheme. You might wonder whether his vision is impaired or something worse has happened, unless it turns out that you, in turn, misread Fred’s gesture, while he did indeed mean the window opposite the door.

This example can be considered in various ways.** We usually take such mistakes to lie in an erroneous use of words rather than in a misrepresentation on the part of the cognitive system, such as a hallucination. The latter case seems far more drastic. But are the cases of linguistic and cognitive mistakes related? Is one prior to the other? In what follows, I’d like to consider them through the lens of Wittgenstein’s later philosophy of mind and suggest that his account has roots in theological voluntarism.

Let’s begin by looking at the accounts of error that suggest themselves. What kind of distinction is at work here? It seems that there are at least two possible ways of locating error:

  • linguistic errors occur on the level of behavioural interaction between language users: in this case an error is a deviation from a social practice;
  • cognitive errors occur on the level of (mental) representation: in this case an error is a mismatch between a representation and a represented object.

The distinction between interaction and representation intimates two ways of thinking about minds. Representational models construe correctness and error in terms of the relation between (mental) sign and object. Interactionist or social models construe correctness and error in terms of the relation between (epistemic) agents. On the face of it, the representational model is the more traditional one, going back at least to Aristotle and the scholastics, before being famously reintroduced and radicalised by Descartes. By contrast, the interactionist model is taken to be relatively young, inspired by the later Wittgenstein, who attacked his own earlier representationalism and the whole tradition along with it. This historical picture is of course a bit of a caricature. But rather than adding necessary refinements, I think we should reject it entirely. Besides misconstruing much of the history of thinking about minds, it obscures commonalities that might actually help us understand Wittgenstein’s move towards the interactionist model.

What, then, might have inspired Wittgenstein’s later model? I think that Wittgenstein’s later philosophy of mind is driven, amongst other things, by two ideas, namely (a) that all kinds of mental activities (such as thinking and erring) are part of a shared practice and (b) that the rules constituting this practice have no further explanation or foundation. For illustration, think again of the linguistic error. Ad (a): Of course, calling a door a window is a case of mislabelling. But what turns this into an error is not any representational mismatch. What is amiss is not a match between utterance and object but Fred’s violation of your expectation (that he would ask, if anything, to close the door rather than the window). Ad (b): This expectation is not grounded in anything beyond the experienced practice itself. If you learn that people call a door a door, people should call a door a door. You begin to wonder if they don’t. There is no further explanation as to why that should be so. Taken together, these two ideas give priority to interaction over representation. Accordingly, Wittgensteinians will see error and correctness as matters of linguistic practice, not as grounded in representation.

But where does this idea come from? Although Wittgenstein’s later thought is sometimes likened to that of earlier authors in early modern or medieval times, I haven’t seen his ideas placed in a larger tradition. Perhaps, then, straightforward philosophies of language and mind are not the best place to look. But what should we turn to? If we look for historical cues for the two ideas sketched above, we should watch out for theories that construe mental events on the model of action rather than representation. But if you think that such theorising begins only with what is commonly called ‘pragmatism’, you miss out on a lot. Let’s focus on (a) first. We should begin by giving up on the assumption that the representational model of the mind is the traditional one. Of course, representation looms large, but it is not always the crucial explanans of correct vs. erroneous thinking or speaking. Good places to start are discussions that tie error to acts of will. Why not try Descartes’ famous explanation of human error? In the Fourth Meditation, Descartes claims that error does not arise from misrepresentation as such. Rather, I can err because my will reaches farther than my intellect. So my will might extend to the unknown, deviating from the true and good. And thus I am said to err and sin. Bringing together error and sin, Descartes appeals to a longstanding tradition that places error on the level of voluntary judgment and action. Accordingly, there is no sharp distinction between moral and epistemic errors. I can fail to act in the right way or I can fail to think in the right way. The source of my error is, then, not that I misrepresent objects but rather that I deviate from the way that God ordained. This is the way in which even perfect cognitive agents such as fallen angels and demons can err.

What is significant for the question at hand is that God is taken as presenting us with a standard that we can conform to or deviate from when representing objects. Thus, error is explained through deviation from the divine standard, not through a representational model. Of course, you might object that divine standards are a far cry from social standards and linguistic rules.*** But the following three points might have served as a crucial inspiration: putting mental acts on a par with actions, explaining error and correctness through a non-representational standard, and having a non-individualistic standard, for it is the relation of humans to God that enforces the standard on us. In this sense, error cannot be ascribed to a single individual that misrepresents an object; it must be ascribed to a mind that is related to the standards set by God.

If we accept this historical comparison at least as a suggestion, we might say that divine standards play a theoretical role similar to that of the social practice in Wittgenstein. However, divine standards come in different guises. Not all philosophers who discuss error in relation to deviant acts of will are automatically committed to the thesis that divine standards have no further foundation. Theological rationalists assume that divine standards can be justified, such that God wills the Good because it is good. By contrast, voluntarists assume that something is good because God wills it. Thus, rationalistic conceptions could allow for an account of error that is not ultimately explained by reference to the divine standard. In this sense, rationalism would clash with Wittgenstein’s anti-foundationalism, called (b) above, according to which rules have no further foundation over and above the practice. As Wittgenstein puts it in Philosophical Investigations, § 206: “Following a rule is analogous to obeying an order.”

How, then, does Wittgenstein see the traditional theological distinction? Given his numerous discussions of the will even in his early writings, it is clear that his work is informed by such considerations. Most striking is his remark on voluntarism reported in Waismann’s “Notes on Talks with Wittgenstein” (Philosophical Review 74 [1965]): “I think that the first conception is the deeper one: Good is what God orders. For this cuts off the path to any and every explanation ‘why’ it is good …” Here, Wittgenstein clearly sides with the voluntarists.**** Indeed, the idea of rule-following as obedience can be seen as perfectly in line with the assumption that erring consists in violating a shared practice, just as the voluntarist tradition that Descartes belongs to deems erring a deviation from divine standards.

If these suggestions are pointing in a fruitful direction, they could open a path to relocating Wittgenstein’s thought in the context of the long tradition of voluntarism. They might downplay his claims to originality, but at the same time they might render both his work and the tradition more accessible.

___

* Originally posted on the blog of the Groningen Centre for Medieval and Early Modern Thought

** This is my variant of Davidson’s ketch-yawl example in his “On the very idea of a conceptual scheme”. I’d like to thank Laura Georgescu, Lodi Nauta and Tamer Nawar, who kindly heard me out when I introduced them to the ideas suggested here.

*** Thanks to Martin Kusch, who raised this objection in an earlier discussion on Facebook.

**** See David Bloor, Wittgenstein, Rules and Institutions, Routledge 2002, 126-133, who also discusses Wittgenstein’s voluntarism.

 

Abstract cruelty. On dismissive attitudes

Do you know the story about the PhD student whose supervisor overslept and refused to come to the defence, saying he had no interest in such nonsense? – No? I don’t know it either, by which I mean: I don’t know exactly what happened. However, some recurrent rumours have it that on the day of the PhD student’s defence, the supervisor didn’t turn up and was called by the secretary. After admitting that he overslept, he must indeed have said that he didn’t want to come because he wasn’t convinced that the thesis was any good. Someone else took over the supervisor’s role in the defence, and the PhD was ultimately conferred. I don’t know the details of the story but I have a vivid imagination. There are many aspects to this story that deserve attention, but in the following I want to concentrate on the dismissive attitude of the supervisor.

Let’s face it, we all might oversleep. But what on earth brings someone to say that they are not coming to the event because the thesis isn’t any good? The case is certainly outrageous. And I keep wondering why an institution like a university lets a professor get away with such behaviour. As far as I know the supervisor was never reprimanded, while the candidate increasingly went to bars rather than the library. I guess many people can tell similar stories, and we all know about the notorious discussions around powerful people in philosophy. Many of those discussions focus on institutional and personal failures or power imbalances. But while such points are doubtlessly worth addressing, I would like to focus on something else: What is it that enables such dismissive attitudes?

Although such and other kinds of unprofessional behaviour are certainly sanctioned too rarely, we have measures against them in principle. Oversleeping and refusing to fulfil one’s duties can be reprimanded effectively, but what can we do about the most damning part of it: the dismissive attitude according to which the thesis was just no good? Of course, using that attitude as a reason to circumvent duties can be called out, but the problem is the attitude itself. I guess that all of us think every now and then that something is so bad that, at least in principle, it isn’t worth getting up for. What is more, there is in principle nothing wrong with finding something bad. Quite the contrary, we have every reason to be sincere interlocutors and call a spade a spade, and sometimes this involves severe criticism.

However, some cases do not merely constitute criticism but acts of cruelty. But how can we distinguish between the two? I have to admit that I am not entirely sure about this, but genuine criticism strikes me as an invitation to respond, while in the case under discussion the remark about the quality of the thesis was given as a reason to end the conversation.* Ending a conversation or dismissing a view like that is cruel. It leaves the recipient of the critique with no means to answer or account for their position. Of course, sometimes we might have good reasons for ending a conversation like that. I can imagine political contexts in which I see no other way than turning my back on people. But apart from the fact that a doctoral defence shouldn’t be such an occasion, I find it suspicious if philosophers end conversations like that. What is at stake here?

First of all, we should note that this kind of cruelty is much more common than meets the eye. Sure, we rarely witness a supervisor refusing to turn up for a defence. But anyone sitting in on seminars, faculty talks or lectures will have occasion to see that sometimes criticism is offered not as an invitation to respond, but as a dismissal that is only thinly disguised as an objection. How can we recognise such a dismissal? The difference is that an opinion is not merely criticised but declared a waste of time. This and similar slogans effectively end a conversation. Rather than addressing what one might find wanting, the opponent’s view is belittled and portrayed as not worth taking seriously. As I see it, such speech acts are acts of cruelty because they are always (even if tacitly) ad hominem. The conjunction of critical remarks with the ending of a conversation shows that it is not merely the opinion that is rejected; there is no expectation that the argument could be improved by continuing the conversation. In this sense, ending the conversation is owing to a severe lack of charity, ultimately dismissing the opponent as incapable or even irrational.

You would think that such behaviour gets called out quickly, at least among philosophers. But the problem is that this kind of intellectual bullying is actually rather widespread: whenever we say that an opinion isn’t worth listening to, when we say, for instance, that analytical or continental philosophy is just completely wrongheaded or something of the kind, we are at least in danger of engaging in it.** Often this goes unnoticed because we move within circles that legitimise such statements. Within such circles we enjoy privilege and status; outside them, our positions are belittled as a waste of time. And the transition from calling something bad to calling it a waste of time is rather smooth, if no one challenges such a speech act.

Having said as much, you might think I am rather pessimistic about the profession. But I am not. In fact I think there is a straightforward remedy. Decouple criticisms from ending conversations! But now you might respond that sometimes a conversation cannot continue because we really do not share standards of scholarship or argument. And we certainly shouldn’t give up our standards easily. – I totally agree, but I think that rather than being dismissive we might admit that we have a clash of intuitions. Generally speaking, we might distinguish between two kinds of critical opposition: disagreements and clashes of intuition. While disagreements are opposing views that can be plotted on a common ground, clashes of intuition mark the lack of relevant common ground. In other words, we might distinguish between internal and external criticism, the latter rejecting the entire way of framing an issue. I think that it is entirely legitimate to utter external criticism and signal such a clash. It is another way of saying that one doesn’t share sufficient philosophical ground. But it also signals that the opposing view might still deserve to be taken seriously, provided one accepts different premises or priorities.*** Rather than bluntly dismissing a view because one feels safeguarded by the standards of one’s own community, diagnosing a clash respects that the opponent might have good reasons and ultimately engages in the same kind of enterprise.

The behaviour of the supervisor who overslept is certainly beyond good and evil. Why do I find this anecdote so striking? Because it’s so easy to call out the obvious failure on the part of the supervisor. It’s much harder to see how we or certain groups are complicit in legitimising the dismissive attitude behind it. While we might be quick to call out such brutality, the damning dismissive attitude is more widespread than meets the eye. Yet it could be remedied by admitting to a clash of intuitions, but that requires some careful consideration of the nature of the clash and perhaps the decency of getting out of bed on time.

_____

* This post by Regina Rini must have been at the back of my mind when I thought about conversation-enders; not entirely the same issue, but a great read anyway.

** A related instance is calling a contemporary or a historical view “weird”. See my post on relevance and othering.

*** Examples of rather respectable clashes are dualism vs. monism or representationalism vs. inferentialism. The point is that the debates run into a stalemate, and picking a side is a matter of decision rather than argument.

History without narratives? A response to Alex Rosenberg

Recently, Martin Kusch gave an intriguing keynote lecture on the development of the sociology of knowledge.* I was particularly interested in the role of Steinthal, whose name I recognised from my studies in linguistics and its history. But what was striking was that the lecture combined several levels of explanation. In addition to reconstructing philosophical arguments, Martin Kusch gave detailed insights into the institutional and political events that shaped the development. In other words, the lecture provided a nuanced combination of what is sometimes called historical and rational reconstruction. During the discussion I asked whether he thought that there was one particular level which decided the course of events. “Where do you think the real action took place, in politics or philosophy?” The answer was a succinct lesson in historical methodology: the quest for one decisive level of explanation is deceptive in itself. It suggests mono-causality. In fact, all the different factors have to be seen in conjunction. Real action takes place at every level. (By the way, I think this line of argument offers one of the best reasons why philosophy is inseparable from history.) A few days ago, I was reminded of this idea when reading an interview with Alex Rosenberg, who thinks that certain levels of explanation should be discarded and argues for a history without narratives, because “narrative history is always, always wrong.”

According to Rosenberg, narratives are ways of making sense of events by referring to people’s beliefs and desires. “Had she not wanted x, she would not have done y. Erroneously, she believed that y would help her in getting x.” We engage in this sort of reasoning all the time. It presupposes a certain amount of folk psychology: ascribing beliefs and desires seems to require that these items really figure in a proper chain of events. But do they even exist, one might ask. – Now we also help ourselves to such explanations in history. Stuff happens. Explaining it sometimes requires us to assume minds, especially when humans are involved. Let’s call this approach folk history. (Note that Rosenberg is targeting “theory of mind” approaches in particular, but for the application to history the specifics of these approaches don’t matter.) Now Rosenberg gave an interview detailing why we should do away with folk history:

“The problem is, these historical narratives seduce you into thinking you really understand what’s going on and why things happened, but most of it is guessing people’s motives and their inner thoughts. […] [P]eople use narratives because of their tremendous emotional impact to drive human actions, movements, political parties, religions, ideologies. And many movements, like nationalism and intolerant religions, are driven by narrative and are harmful and dangerous for humanity. […] If narrative history gets things wrong because it relies on projection and things we can’t know for sure, how should we be trying to understand history? – There are a lot of powerful explanations in history and social sciences that don’t involve narrative. They involve models and hypotheses that are familiar in structure to the kind that convey explanation in the natural sciences. For example, take Guns, Germs, and Steel, which gives you an explanation of a huge chunk of human history, and that explanation does not rely on theory of mind at all.”

Alex Rosenberg makes a number of good points: (1) Relying on inner states is guesswork. (2) We use such narratives to feed (bad) ideologies. (3) There are other means of writing history, not involving folk history. (4) Given the choice, we should confine ourselves to the latter approach. Let’s call this latter approach naturalistic history. I think there is a lot that speaks in favour of such an approach. If you read some Spinoza, Hume, Nietzsche or Freud, you’ll find similar ideas. We assume our thinking follows all these noble patterns of inference when in fact we are driven by motives and associations unknown to us. That said, the way Alex Rosenberg presents this naturalistic approach raises a number of concerns, two of which I would like to address now.

  • The first worry concerns (4), i.e. the conclusion that folk history and naturalistic history should be played off against one another. Just as we need the “intentional stance” in the philosophy of mind, we also need it in history. But that’s not the whole story. Our reference to beliefs and desires does not only figure in historical explanations. It is also the very stuff we are interested in qua being human amongst other humans, and thus it shapes the events we want to explain. I contribute to causing events because I ascribe mental states to others: I don’t sing in the library because I assume that it will annoy my fellow readers. Of course, you can explain many of my actions by reference to biological and other factors. But at some point such explanations would have to invoke my ascriptions. Doing away with that level would mean doing away with a crucial part of the explanans. Playing off these levels against one another is like thinking that there is ultimately just one relevant explanatory level.
  • The second worry concerns (2), i.e. the tenet that narratives are the stuff of ideologies (and thus erroneous and to be avoided). While it is true that ideologies are fed by certain narratives, I know of no way to refer to (historical) data without a narrative. The naturalistic approach is not avoiding narratives tout court; it merely avoids a certain kind of narrative. It replaces the folk historical approach with a naturalistic narrative. Pretending that this is tantamount to avoiding all narratives is to suggest that the raw data of history are just there, waiting to be picked up by the disenchanted historian. In other words, I think that Rosenberg’s suggestion falls prey to a variant of the myth of the given. To say that narratives are “always wrong”, then, seems to be a category mistake. As I see it, narratives as such are neither right nor wrong. Rather, they provide frameworks that enable us to call individual statements right or wrong.

But since I have not read the book that is advertised in the interview, I don’t yet know whether this is the whole story. But who am I to try and tell this story by referring to beliefs and other mental states expressed in that book by Alex Rosenberg?

____

* A preprint of Martin Kusch’s paper is now available.

Should we stop talking about “minor figures”?

Every now and then, I hear someone mentioning that they work on “minor figures” in the history of philosophy. For reasons not entirely clear to me, the very term “minor figures” makes me cringe. Perhaps it is the brutally belittling way of picking out the authors in question. Let’s face it, when we’re speaking of “minor figures” we don’t necessarily mean “unduly underrated” or “neglected”. At the same time, the reasons are indeed not entirely clear to me, since I know perfectly well that the people who work on such figures do anything but belittle them. Nevertheless, the use of the term indicates that there is something wrong with our historiographical and linguistic practice. In what follows, I want to have a stab at what’s wrong, first with “minor”, then with “figures”.

Let me begin by saying that I deem most of the work done on “minor figures” very important and instructive. Projects such as Peter Adamson’s “History of Philosophy without any Gaps” or Lisa Shapiro’s and Karen Detlefsen’s “New Narratives” constantly challenge our canon by providing great resources. What’s wrong with the talk of “minor figures” then? I guess the use of the term “minor” confirms the canonical figures in their role as “major figures” or even geniuses. Even if I shift the focus to some hardly known or even entirely anonymous person, the reference to them is mostly justified by their being an “interlocutor” of a “major” figure. Who begins to study Walter Chatton for reasons other than William Ockham, or Burthogge for reasons other than Locke? The context that these minors are supposed to provide is still built around an “absurdly narrow” set of canonical figures. But even if researchers might eventually study such figures “in their own right”, the gatekeeping practice among book and journal editors doesn’t seem likely to change anytime soon. In other words, attempts at diversifying or challenging the canon paradoxically stabilise it.

Now you might argue that there is good reason to focus on major figures. Presumably they are singled out because they indeed write the best texts, raise the most intriguing issues, present the best arguments or have the greatest impact on others. Although I don’t want to downplay the fact that most canonical authors are truly worth reading, we simply aren’t in a position to know this. And you don’t even need to pick hardly known people such as Adam Wodeham or Giovanni Battista Giattini. Why not prefer Albert the Great over the notorious Aquinas? Why not read Burthogge or Zabarella in the first-year course? Really, there is nothing that would justify the relatively minor status of such figures irrespective of existing preferences.

But perhaps the central worry is not the talk of “minor”. What seems worse is the fact that we focus so much on figures rather than debates, questions or topics. Why not work on debates about intentionality or social justice rather than on Plato or Sartre? Of course you might indeed have an interest in studying a figure, minor or major. But unless you have a particular biographical interest, you might, even as a dedicated historian of philosophy, be more likely to actually focus on a topic in a figure’s work or on the debate that person is participating in. I see two main reasons for shifting the focus from figures to debates. Firstly, philosophy does not really happen within people but between them. Secondly, the focus on a person suggests that we try to figure out the intention of an author, but unless you take such a way of speaking as a shorthand for textual analysis, your object of study is not easily available.

By the way, if we shift the focus from people to debates, we don’t need the distinction between minor and major any longer. When I studied Locke, it became natural to study figures such as Burthogge. When I studied Ockham, it became natural to study figures such as Adam Wodeham or various anonymi. But perhaps, you might argue, our reason for focussing on figures is more human: we’re interested in what people think rather than in the arguments in texts alone. When we make such assumptions, we think along with people and try to account for their ideas as well as their shortcomings and inconsistencies. But even if that is true, we shouldn’t forget that people are never really geniuses. Their thoughts mature in dialogue, not least in dialogue with minor figures such as ourselves.

Brave questions. A response to Sara Uckelman

Sara Uckelman has great advice for new students: be brave and ask questions! Even and especially those questions that you might find silly. Why should you? “Because I can guarantee you that every question you have, someone else in the class is going to have it too, and they’re not going to be brave enough to ask, and they will be so grateful to you that you were.”

Judging from my own experience as a student and professor, this is quite true. The only thing I’d like to add is that this advice applies not only to beginners but perhaps especially to advanced practitioners. The reason is that there is no such thing as a question that is both genuine and silly. Why? Because, at least in philosophy, nothing is ever justified by itself.

Nevertheless, asking questions is difficult. As Sara Uckelman points out, it involves bravely embracing “your ignorance and confusion”. Moreover, questions are almost a textual genre unto themselves. (See Eric Schliesser’s advice on how to develop more elaborate questions.) Therefore, I think it’s worthwhile to actually practise asking questions. Here are a few ideas on how to get started:

(1) Write down your question! You don’t even need to ask it if you’re unsure. But writing it down will enable you to keep track of your concern as the discussion moves on. You can perhaps see how close your question is to other questions (which might be variants of your question). And you can still choose to leave it at that or ask it later or even after the talk or class.

(2) Figure out what kind of question you have! Back in the day, I often felt stupid because I couldn’t actually pin down what to ask for in the first place. Asking for the meaning of an unfamiliar term is fairly simple (and it’s always a good thing to ask, because terminology is often used in specific and different ways by different people). But more often than not, I just felt like saying “I don’t understand that passage at all.” If you feel like that, it might be a good start to figure out more clearly what exactly you don’t understand about it: a word, a certain argumentative move, the relation between two sentences etc. You can then begin by stating what you do understand and then move on to saying where exactly you lose track. Doing so locates the problem, makes you feel less helpless, and helps your interlocutor.

(3) Structure your question! Sometimes you might just want to get it out and over with. But if you feel comfortable enough it might be helpful to raise a question in a more elaborate manner. I find the following parts useful:

  • target: say what the question is about
  • state the actual question
  • give a brief explanation why the question arises
  • perhaps provide a brief anticipation of possible answers (at talks this is helpful to prepare follow-up questions)

Of course, it’s not necessary to do all of those things. But bearing such a structure in mind has often helped me keep track of where I actually am. Sometimes even the mere act of talking might seem difficult. In such cases, this structure might help you say some things without having to think (which is difficult when you’re nervous). So you might begin by saying “I’d like to ask a question about this … (insert term or phrase)” or by saying “I have a question. Let me explain how it arises.” Uttering such (or other) words will perhaps make you feel more at home in the space you’re inhabiting.

On relevance and othering

Do you remember talking about music during your school days? There was always someone declaring that they would only listen to the latest hits. Talking to philosophers, I occasionally feel transported back to these days: especially when someone tells me that they have no time for history and will only read the latest papers on a topic. “What do I care what Brentano said about intentionality! I’m interested in current discussions.” Let’s call this view “currentism”. I sometimes experience versions of this currentist attitude in exams. A student might present an intriguing reconstruction of a medieval theory of matter only to be met with the question: “Why would anyone care about that today?” I have to admit that I sometimes find this attitude genuinely puzzling. In what follows I’d like to explain my puzzlement and raise a few worries.

Why only “sometimes”? I say “sometimes” because there is a version of this attitude that I fully understand. Roughly speaking, there is a descriptive and a normative version of the sentiment. I have no worries about the descriptive version: some people just mean to say what they focus on or indicate a preference. They are immersed in a current debate. Given the constraints of time, they can’t read or write much else. That’s fine and wholly understandable. In that case, the question of why one would care might well be genuine and certainly deserves an answer. – The normative version is different: people endorsing the normative attitude mean to say that the history of philosophy is a waste of time and should be abolished, except perhaps in first-year survey courses. Now you might say: “Why are you puzzled? Some people are just more enthusiastic in promoting their preferences.” To this I reply that the puzzlement and worries are genuine because I find the normative attitude (1) unintelligible and (2) politically harmful. Here is why:

(1) My first set of worries concerns the intelligibility of this attitude. Why would anyone think that the best philosophy is being produced during our particular time slice? I guess that the main reason for (normatively) restricting the temporal scope of philosophy to the last twenty or fifty years is the idea that the most recent work is indeed the best philosophy. Now why would anyone think that? I see two possible reasons. One might think so because one believes that philosophy is tied to science and that the latest science is the best science. Well, that might be, but progress in science does not automatically carry over to philosophy. The fact that I write in the presence of good science doesn’t make me a good philosopher.

So if there is something to that idea, people will ultimately endorse it for another reason: because there might be progress in philosophy itself. Now the question whether there really is progress in philosophy is of course hotly debated. I certainly don’t want to deny that there have been improvements, and I continue to hope for more of them. But especially if we assume that progress is an argument in favour of doing contemporary philosophy (and what else should we do, even if we do history!), how can someone not informed about history assess this progress? If I have no clue about the history of a certain issue, how would I know that real advancements have been made? In other words, the very notion of progress is inherently historical and requires at least some version of (whig) history. So unless someone simply holds the belief that recent developments are always better, I think one needs historical knowledge to make that point.

Irrespective of questions concerning progress, one might still endorse current over historical philosophy because it is relevant to current concerns. So yes, why bother with medieval theories of justice when we can have theories that address current issues? Well, I don’t doubt that we should have philosophers focussing on current issues. But I wonder whether current issues are intelligible without reference to the past. Firstly, there is the fact that our current understanding of justice or whatever else is not a mere given. Rather, it is the latest stage of a development over time. Arguably, understanding that development is part of understanding the current issues. Now you might object that we should then confine ourselves to writing genealogies of stuff that is relevant today but not of remote issues (such as medieval theories of, say, matter). To this I reply that we cannot decide what does and doesn’t pertain to a certain genealogy in advance of historical studies. A priori exclusion is impossible, at least in history. Moreover, we cannot know that what we find irrelevant today will still be irrelevant tomorrow. In other words, our judgments concerning relevance are subject to change and cannot be used to exclude possible fields of interest. To sum up, ideas of progress and relevance are inherently historical and require historical study.

(2) However, the historicity of relevance doesn’t prevent it from being abused in polemical and political ways. Besides worries about intelligibility, then, I want to raise political and moral worries about the normative attitude of currentism. Short of sound arguments from progress or relevance, the anti-historical stance reduces to a form of othering. Just as some people suffer exclusion and are labelled “weird” for reasons rooted in stereotypes of race or gender, people are excluded for reasons of historical difference. But we should think twice before calling a historically remote discussion less rational or relevant or whatever. Of course, there is a use of “weird” that is simply shorthand for “I don’t understand the view”. That’s fine. What I find problematic is the unreflective dismissal of views that don’t fit into one’s preferences. But the fact that someone holds a view that does not coincide with today’s ideas about relevance deserves study rather than name-calling. As I see it, we have moral reasons to refrain from such forms of abuse.

If we don’t have reasons showing that a historical view has disadvantages over a current one, why do we call it “weird” or “irrelevant”? Here is my hunch: it’s a simple fight over resources. Divide et impera! But in the long run, it’s a lose-lose situation for all of us. Yet if you’re a politician and you manage to play off different sub-disciplines in philosophy or the humanities against one another, you can simply stand by until they’ve delegitimised each other so much that you can call all camps a farce and close down their departments.

What is philosophy? A response to Laura Georgescu

Some years ago, I began to make a habit of telling students what I think philosophy is. Not in order to tell them the one and only Truth, but in order to clarify what I look out for in papers and interactions. So I will say something like the following: “Philosophy is a huge and on-going conversation. Thus, philosophy is not directly about phenomena; rather it deals with claims about phenomena. So you won’t ask “What is thinking or justice?” Rather you will deal with what other people or you say about thinking or justice etc.” This is normally followed up by some explanation of how to identify claims and arguments (or other kinds of evidence) in relation to such claims. Finally, I try to explain that the evaluation of a position does not mean to say that one is for or against it, but to spell out in what way and how convincingly the arguments support the claim.

Recently, I’ve grown a bit wary of saying such things. Why? Of course, I like to think about philosophy that way because it highlights the fact that it’s a conversational practice where certain rules of discourse apply. And sometimes it also stops people from doing their head in about the phenomena themselves. If you have to write a paper it’s easier to think about the things people say than to ask yourself what consciousness really is. But on the other hand it sends the message that philosophy is all about making claims. Now what’s wrong with that? In a way, not much. But then again it seems to have things the wrong way round. One might even think that we are mainly trained to question others rather than our own beliefs. But in fact a claim is something you might come to after a lot of thinking, talking and doubting yourself. A claim is not one’s starting point, or is it?

I couldn’t quite put my finger on it before reading Laura Georgescu’s recent blog post “Against Confidence in Opinions”. Go and read it! I’ll just give you the main take-home message it had for me: like much of the world, academic philosophy is infected with the idea that it’s a good thing to have confidence, especially in the rightness of one’s opinions. So it’s all about defending one’s claims. But what is the outcome of that? Something most of us would claim not to find desirable: dogmatism. So there’s a cultivation of confidence, and it leads to dogmatism. This nailed it for me. I realised that what I found problematic was that people were too often invested in defending their positions rather than questioning them. If you look at philosophers, you might want to distinguish two types: the one who is self-undermining and asking questions all the time as opposed to the confident one, tossing out one claim after another. The latter type seems to abound. – But as much as I like to believe in such a distinction, I doubt that it holds. So what now?

I recently said that advising someone to work on their confidence is cold comfort. Neither do I think that we can just ignore this culture. So let’s think about what precisely might be undesirable about it. When I remember my student days, I remember admiring many of my fellow students for their confidence. They were speaking up, eloquently so, while I was running through possible formulations of an idea and remained doubtful whether I had a point at all. That feeling remained for a very long time. After the PhD and Habilitation it got better, but whenever I went out of my scholarly comfort zone, I felt I had no points to make. There is a kind of confidence that depends on having a feeling of legitimacy, and I often think getting a permanent job helps a lot with that feeling. – So now that I feel confident enough to write blog posts about what philosophy is, should I start preaching that confidence is a bad thing? That doesn’t sound convincing to me. So what precisely is wrong with it?

First of all, there is a lot right with it. It helps getting through the day in all sorts of ways. But as Laura Georgescu emphasises, it’s confidence in opinions that is troublesome. How then can we prevent dogmatism without giving up on being confident entirely?

I think it might help to come back to the idea of philosophy as a conversational practice and to distinguish two kinds of conversation: an internal conversation that one has with oneself (Plato called this “thinking”) and the external conversation that one has with others. When we see external conversations happening between people, we often hear someone asking a question and someone else responding with a claim. Further questions ensue and the claim gets defended. What we observe are two types: the questioner and the one defending claims. This is what we often witness in philosophy talks, and our paper structures imitate that practice, mostly with the author as the one making claims. The upshot is that, in papers and talks, we often play just one of two or more possible roles. That might be undesirable.

However, if we focus on internal conversations we find that we do in fact both. The claims we pin down come after a lot of self-undermining back and forth. And the confidence we can muster might be the last resort to cover the relentless doubting that goes on behind our foreheads. In our internal conversations, I guess most of us are far from any kind of dogmatism.

I suppose, then, if we see reason to change the practice of coming across as dogmatic, a good start might be to bring some of that internal conversation to bear on external conversations. Rather than playing sceptic versus dogmatist, we might every now and then remember that, most of the time, we just do what Plato called thinking. Having a dialogue in which we take on all sorts of roles and propositional attitudes. Bring it on! But I guess it takes some confidence.

All interpretations of ideas in Locke are mistaken – really? A response to Kenny Pearce

I’m exaggerating, but only a bit. Earlier this year, Kenny Pearce* wrote a fine post on “Locke’s Experimental Philosophy of Ideas”, highlighting what is often forgotten: that Locke’s Essay ties in with Baconian natural history. He then goes on to argue that we should also see Locke’s account of ideas as part of that project and concludes:

“This line of interpretation has consequences for how we must understand Locke’s account of ideas. If Locke is following this kind of Baconian methodology then, although he does at various points seek to explain various phenomena, his ‘ideas’ cannot be understood as theoretical posits aiming to explain how we perceive external objects.”

If this is correct, almost all interpretations of Locke’s theory of ideas are mistaken. Locke’s account amounts to nothing more than an unsystematic catalogue of the “ideas of which we are aware”. Indeed, the whole Essay is to be seen as an “intentionally unsystematic work”. Or so Kenny Pearce claims.

I think this is a challenging approach that certainly deserves more attention. At this point, however, I would like to address just one issue, i.e. the claim that ideas are to be seen in a “natural historical” sense. Given the evidence, I think this is correct and has been overlooked too often in attempts at making sense of book II of the Essay. But I would like to add two observations that might put a wholly different spin on Locke’s account.

(1) Natural history is not simply an account of what we “are aware” of. Locke sees his natural history of ideas as one that proceeds from simple ideas to the more complex. Starting from the simple ingredients, however, is not meant to imply that we are aware of simple ideas as givens. Locke doesn’t think that our awareness starts with simple ideas. Rather, Locke starts with simple ideas for two reasons: firstly, he wants to account for the origin of ideas; secondly, for what one might call didactic reasons: “Because observing the faculties of the Mind, how they operate about simple Ideas …, we may the better examine them and learn how the Mind abstracts, denominates, compares, and exercises its other Operations, about those which are complex …” (II, xi, 14)

(2) Perhaps more importantly, Locke explicitly finishes this natural historical account early on and begins an entirely new discussion of ideas: here, he is interested in relations between different kinds of ideas and in what I’d call their epistemic content: “Though in the foregoing part, I have often mentioned simple Ideas, which are truly the Materials of all our knowledge; yet having treated them there, rather in the way that they come into the Mind, than as distinguished from others more compounded, it will not be perhaps amiss to take a view of some of them again under this Consideration …” (II, xiii, 1) Thus, a great part of book II does not stem from the natural historical perspective.

The upshot is that Locke introduces two different perspectives on ideas: the natural historical one, accounting for the origin, and the epistemic one, accounting for representational content. As I elaborate in a paper of mine, I think that the former perspective focuses on the causal history of ideas, while the latter is intended as a consideration of the different kinds of representational content in our episodes of thought. In other words, the former explains how ideas originate in experience, while the latter explains how we end up taking things as something, e.g. as substances, modes or relations.

If this is correct, we should indeed acknowledge Locke’s reliance on Baconian natural history. But we should also carefully consider where Locke introduces different ways of treating ideas. After all, in conjunction with the considerations on language, Locke took his account of ideas as something that would “afford us another sort of Logick and Critick, than what we have been hitherto acquainted with.” (IV, xxi, 4)

___

* Kenny Pearce regularly blogs on early modern philosophy.

 

Talking texts. Conditions of a good interpretation

It’s difficult to determine what the claim of a (philosophical) text is. And thinking about today’s topic, I feel like I haven’t even mentioned the crucial difficulty. I don’t know about you, but for me things start moving once I begin to look at relations between texts. It’s like listening to a conversation. Once you listen to different voices, each of them is more distinguishable. It’s the relation to other texts that makes the aims, claims, and arguments visible in the first place. I’d even say that figuring out the claim of a text is impossible unless we understand what the claim is responding to.

Why is that? I suppose that it has to do with a very simple fact about sincere conversations: no one will just start out by making a claim. I won’t get up in the morning and start a conversation by saying: “By the way, I think, therefore I am.” Claims are responses. They might be responses to questions, refinements or corrections of other claims. And this is why texts don’t make much sense unless we see them in relation to other texts. To put the point in a more technical fashion, claims make sense if you consider them in inferential relations, not if you solely consider them in relation to phenomena or facts. So if someone talks, say, about consciousness, you won’t be able to say much beyond that if you only think about the relation between the claim and the phenomenon (of consciousness). Only when you begin to see how it relates to a specific question, to other tenets or to a competing claim will you be able to assess it.

Now you might want to raise the following objection: Surely, you will reply to me, surely you can assess a claim in relation to a question or a different claim. But why should it not be possible to see a claim in relation to phenomena? At this point, I can only hint at an answer: this relation will leave the claim underdetermined. The reason is that the phenomenon is not ‘on the same level’ as the text. It’s like making a pointing gesture into the middle distance without at least attempting a description of the kind of thing you want to point at. Of course, you are able to consider an extralinguistic phenomenon or state of affairs. Think of a red elephant! Now there are a thousand things you can say about that elephant: you can talk about anything in relation to the elephant. Only in response to a specific question can you make a claim that stands in an inferential relation. If someone asks you: “What does the elephant look like?” or “What colour is it?”, you can claim that it is red. Only such inferential relations make claims in texts determinable.

If you read my earlier piece, you might now hold this against me: But you, Martin, listed various interpretations of Ockham’s “mental propositions”; and these were not primarily standing in relation to other texts but to the phenomenon that was assumed to be picked out by the term “mental propositions”. Sure, the interpretations might have been shaky, but they were intended to get at the extralinguistic facts that Ockham wanted to explain! – Well, although that might seem to be the case, it’s not really true. Even if these interpretations were not formed in explicit relation to other texts of Ockham’s time, they were still formed in relation to contemporary texts. Such texts might remain unmentioned as tacit presuppositions. But if I say that Ockham is or isn’t like Fodor, I compare the Summa logicae to Fodor’s Language of Thought. There is always another text. But if we want to provide accessible interpretations, it’s better to say what these texts (or presuppositions) are.

Now there are of course many possible texts that I can relate any given text to. How do I pick them? – That depends. Of course, there will be your personal associations to begin with: other texts that a given text makes you think of. “This sounds like that”, you might think without ever writing it down. Although you perhaps won’t admit what you initially thought of, keep it in mind. It might be important one day. The next question to ask is what kind of interpretation you want to give. If you are interested in current philosophical topics, think about pertinent texts. If you want to provide a historical analysis of the claim, it will be good to figure out what a text is actually responding to. Now you enter the field in which you can make true and false assertions about the text. But don’t worry. It’s so hard to assess such assertions that any false claim is better than remaining silent. (I mean that.)

But how do you go about determining the claim now? No matter whether you want to give a more philosophical or a historical interpretation, it’s important to look for a point of contact. Such a point of contact is a more or less explicit way of relating to another text, either by paraphrase or direct quotation. It might be a term, a phrase or even a paragraph. A point of contact is evidence for a historian: another text has been responded to. But it is far more than that. In finding such a point of contact you make sure that two texts (and you) share a common ground: something that is agreed on or disagreed about. There are several ways of establishing a point of contact, but it seems sensible to begin by distinguishing at least three approaches:

(1) If you analyse a text historically, you might begin by looking for quotations or references to other texts. This gives you a first idea of what an author relates to or disagrees with. If you’re lucky you’ve now found something that the claim is a refinement of or an opposition to. So here you can begin to figure out what is being claimed.

(2) If you read secondary literature, you’ll often find that it disagrees about certain points of contact. Figure them out. If there is no clear point of contact, people might be talking past one another.

(3) If you’re more interested in the topic than the historical ties of the text, you can establish a point of contact by relating it to any text you find pertinent. Here, you might follow your initial associations and wonder why you thought of them.

In any case, by establishing a clear point of contact, you’ll provide your reader or interlocutor with an accessible piece of evidence that a discussion can focus on. Texts talk to other texts. In this sense, establishing such a focus between texts, shifting it, or placing a new emphasis within one already under discussion is a good way to enter a debate or to begin looking at it.