Ugly ducklings and progress in philosophy

Agnes Callard recently gave an entertaining interview at 3:16 AM. Besides her lovely list of views that should count as much less controversial than they do, she made an intriguing remark about her book:

“I had this talk on weakness of will that people kept refuting, and I was torn between recognizing the correctness of their counter-arguments (especially one by Kate Manne, then a grad student at MIT), and the feeling my theory was right. I realized: it was a bad theory of weakness of will, but a good theory of another thing. That other thing was aspiration. So the topic came last in the order of discovery.”

Changing the framing or framework of an idea might resolve seemingly persistent problems and make it shine in a new and favourable light. Reminded of Andersen’s fairy tale in which a duckling is considered ugly until it turns out that the poor animal is actually a swan, I’d like to call this the ugly duckling effect. In what follows, I’d like to suggest that this might be a good, if underrated, way of making progress in philosophy.

Callard’s description stirred a number of memories. You write and refine a piece, but something feels decidedly off. Then you change the title or topic or tweak the context ever so slightly and, at last, everything falls into place. It might happen in a conversation or during a run, but you’re lucky if it happens at all. I know all too well that I abandoned many an idea before I eventually and accidentally stumbled on a change of framework that restored (the reputation of) the idea. As I argued in my last post, all too often criticism in professional settings provides incentives to tone down or give up on an idea. Perhaps unsurprisingly, many criticisms focus on the idea or argument itself, rather than on the framework in which the idea is to function. My hunch is that we should pay more attention to such frameworks. After all, people might stop complaining about the quality of your hammer if you tell them that it’s actually a screwdriver.

I doubt that there is a precise recipe for doing this. I guess what helps most are activities that help you tweak the context, topic or terminology. This might be achieved by playful conversations or even by diverting your attention to something else. Perhaps a good start is to think of precedents in which this happened. So let’s just look at some ugly duckling effects in history:

  • In my last post I already pointed to Wittgenstein’s picture theory of meaning. Recontextualising this as a theory of representation and connecting it to a use theory or a teleosemantic account restored the picture theory as a component that makes perfect sense.
  • Another precedent might be seen in the reinterpretations of Cartesian substance dualism. If you’re unhappy with the interaction problem, you might see the light when, following Spinoza, you reinterpret the dualism as a difference of aspects or perspectives rather than of substances. All of a sudden you can move from a dualist framework to monism but retain an intuitively plausible distinction.
  • A less well-known case is that of the reinterpretations of Ockham’s theory of mental language, which was seen as a theory of ideal language, a theory of logical deep structure, a theory of angelic speech etc.

I’m sure the list is endless, and I’d be curious to hear more examples. What’s perhaps important to note is that we can also reverse this effect and turn swans into ugly ducklings. That is, we can use the strategy of recontextualisation also when we want to debunk an idea or expose it as problematic:

  • An obvious example is Wilfrid Sellars’ myth of the given: Arguing that reference to sense data or other supposedly immediate elements of perception cannot serve as a foundation or justification of knowledge, Sellars dismissed a whole strand of epistemology.
  • Similarly, Quine’s myth of the museum serves to dismiss theories of meaning invoking the idea that words serve as labels for (mental) objects.
  • Another interesting move can be seen in Nicholas of Cusa’s coincidentia oppositorum, restricting the principle of non-contradiction to the domain of rationality and allowing for the claim that the intellect transcends this domain.

If we want to assess such dismissals in a balanced manner, it might help to look twice at the contexts in which the dismissed accounts used to make sense. I’m not saying that the possibility of recontextualisation restores or relativises all our ideas. Rather I think of this option as a tool for thinking about theories in a playful and constructive manner.

Nevertheless, it is crucial to see that the ugly duckling effect works both ways, to dismiss and to restore ideas. In any case, we should try to consider a framework in which the ideas in question make sense. And sometimes dismissal is the way to go.

At the end of the day, it could be helpful to see that the ugly duckling effect might not be owing to the duck actually being a swan. Rather, we might be confronted with a duck-swan or a duck-rabbit.

Spotting mistakes and getting it right

“Know thyself” is probably a fairly well known maxim among philosophers. But the maxim we live by rather seems to be one along the lines of “know the mistakes of others”. In calling this out I am of course no better. What prompts me to write about this now is a recent observation, not new but clearly refreshed with the beginning of the academic year: it is the obvious desire of students to “get it right”, right from the start. But what could be wrong with desiring to be right?

Philosophers these days don’t love wisdom but truth. Now spotting the mistakes of others is often presented as truth-conducive. If we refute and exclude the falsehoods of others, it seems, we’re making progress on our way to finding out the truth. This seems to be the reason why most papers in philosophy build their cases on refuting opposing claims and why most talks are met with unwavering criticism of the view presented. Killing off all the wrongs must leave you with the truth, no? I think this exclusion principle has all sorts of effects, but I doubt that it helps us in making the desired progress. Here is why.

A first set of reasons relates to the pragmatic aspects of academic exchange: I believe that the binary distinction between getting it right or wrong is misleading. More often than not the views offered to us are neither right nor wrong. This is owing to the fact that we have to present views successively, by putting forward a claim and explaining and arguing for it. What such a process exposes is normally not the truth or falsity of the view, but a need for further elaboration: by unpacking concepts and consequences, ruling out undesired implications, clarifying assumptions etc.

Now you might object that calling a view false is designed to prompt exactly that: clarification and exploration. But I doubt that this is the case. After all, much of academic exchange is driven by perceived reputation. More often than not, criticism makes speakers revert to defensive moves, if it doesn’t paralyse them: rather than exploring the criticised view, speakers will be tempted to use strategies of immunising their paper against further criticism. If speakers don’t retract, they might at least reduce the scope of their claims and align themselves with more accepted tenets. This, I believe, blocks further exploration and sets an incentive for damage control and conformism. If you doubt this, just go and tell a student (or colleague) that they got it wrong and see what happens.

Still, you might object, such initial responses can be overcome. It might take time, but eventually the criticised speaker will think again and learn to argue for their view more thoroughly. – I wish I could share this optimism. (And I sometimes do.) But I guess the reason that this won’t happen, or not very often, is simply this: What counts in scholarly exchange is the publicly observable moment. Someone criticised by an opponent will see themselves as challenged not only as a representative of a view but as a member of the academic community. Maintaining or restoring our reputation will thus seem vital in contexts in which we consider ourselves judged and questioned: even if we’re not actually being graded, under review or in a job talk, we will still anticipate such situations or compare ourselves to them. What counts in these moments is not the truth of our accounts, but whether we convince others of the account and, in the process, of our competence. If you go home defeated, your account will be seen as defeated too, no matter whether you just didn’t muster the courage or concentration to make a more convincing move.

A second set of reasons is owing to the conviction that spotting falsehoods is just that: spotting falsehoods. As such, it’s not truth-conducive. Refuting claims does not (or at least not necessarily) lead to any truth. Why? Spotting a falsehood or problem does not automatically make any opposing claim true. Let me give an example: It is fairly common to call the so-called picture theory of meaning, as presented in Wittgenstein’s Tractatus, a failure. The perhaps intuitive plausibility that sentences function as pictures of states of affairs seems quickly refuted when asking how such pictures can be said to be true or false of a supposed chunk of reality. What do you do? Step out of the picture and compare it with the proper chunk? Haha! – Refuting the picture theory, then, seems to bring us one step closer to an appropriate theory of meaning. But such a dismissal makes us overlook that the picture theory has enormous merits. Once you see it as a theory of representation and stop demanding that it also account for the truth and falsity of representations, you begin to realise that it can work very well when combined with a theory of use or a teleosemantic theory. (See e.g. Ruth Millikan’s recontextualisation.) The upshot is that our dismissals often result from overlooking crucial further assumptions that would reinstate the dismissed account.

Now you might object that an incomplete account is still a bad account. Pointing this out is not per se wrong but will eventually prompt a recontextualisation that works. In this sense, you might say, the criticism becomes part of the recontextualised account. – With this I agree. I also think that such dialogues can prompt more satisfying results. But bearing the pragmatic aspects of academic exchange in mind, I think that such results are more likely if we present our criticism for what it is: not as barking at falsehoods but as attempts to clarify, complete or complement ideas.

Now you might object that the difference between barking at falsehoods and attempts to clarify can be seen as amounting just to a matter of style. – But why would you think that this is an objection? Style matters. Much more than is commonly acknowledged.

Addiction. A note on the debate about the climate crisis

It’s about five years ago that I quit smoking. There were at least two things that helped greatly in kicking the habit. Firstly, the common view had changed: smoking was no longer seen as a personal choice but as an addiction. This had repercussions on my own view and made smoking seem less attractive, to put it mildly. Secondly, the infrastructure had changed considerably: in most places smoking was no longer permitted by then. When I grew up, I wouldn’t have anticipated these changes. Smoking, it seemed, had become part of my identity: at parties and other events I was one of the people who smoked. That was fine. It no longer is. – I’d like to suggest that our discussion of the climate crisis might benefit from a comparison with smoking habits: Like smoking, the climate crisis is connected to a number of harmful practices. Many societies have successfully banned smoking. So perhaps such a comparison can help us in steering towards a state in which we successfully overcome at least some of these harmful practices. Let me focus on two points:

(1) Moral problems: Smoking can be seen in relation to a number of moral problems. It obviously harms others as well as the smoker. But there is another problem that was often ignored: In the public debate, smokers were often attacked for choosing to smoke. But if we consider smoking an addiction, something is wrong with that accusation. An addict doesn’t simply choose between two options. If things are really bad, the smoker is compelled to smoke. What is the moral problem in that? Well, holding someone responsible who doesn’t have much of a choice might be the wrong way of addressing the issue. – It’s at this point that I see a crucial analogy to the habits related to the climate crisis. Many things we do are so deeply ingrained that it makes sense to see them as addictions: If we treat gambling, smartphone use, drugs etc. as addictive, it might make sense to treat driving cars, eating meat and dairy products and many other habits as at least quasi-addictive. They might be said to involve rewards and to be compulsive (to some degree) rather than plainly chosen. In any case, following the discussions on the climate crisis triggers many memories of the discussions about the smoking ban. In admitting to the addictive character of such habits, the public discussion could move from the current practice of blaming each other (and looking for the greatest hypocrite) to ways of thinking about overcoming the addictions involved.

(2) Motivational problems: This brings me to my second issue. Wondering whether to quit smoking, I benefitted greatly from the amended infrastructure. It’s hard to see smoking as part of your identity if it’s banned everywhere. At the same time, the changed moral perception helped. I couldn’t frame myself as a youthful outcast who gets morally antagonised by the mainstream for making bad choices. Rather, I could view myself as someone who needs help. In this sense, the legal and social infrastructure was a great motivational factor: with many of the social rewards gone, it was much easier to realistically project a future self without a packet of cigarettes. – The same goes, of course, for habits related to the climate crisis. Once it becomes increasingly unacceptable and impractical to drive a car or eat meat, all the social rewards dwindle.

The upshot is that I think we should stop treating people who indulge in certain climate-related habits as if they were failing personally. So long as our society and infrastructure reward such habits, it makes more sense to see them as quasi-addictions.

Currently, we often distinguish between personal and political failures in the climate crisis. I’m not convinced that this is a good distinction. So-called personal failures are often driven by our social, cultural and technological infrastructure. If we want change, we need to stop placing blame on individuals, who will only feel encouraged to look for hypocrisies. What we need is help to amend both our addictions and our infrastructure. In this regard, we might benefit from looking at the successful aspects of the smoking ban.

Naturalism as a bedfellow of capitalism? A note on the reception of early modern natural philosophy

Facing the consequences of anthropogenic climate change and pollution, the idea that a certain form of scientific naturalism goes hand in hand with an exploitative form of capitalism might (or might not) have an intuitive plausibility. But does the supposed relation between naturalism and capitalism have something like a historical origin? A set of conditions that tightened it? And that can be traced back to a set of sources? In what follows, I’d like to present a few musings on this kind of question.

What does it take to write or think about a history of certain ideas? Obviously, what you try to do is to combine certain events and think something like: “This was triggered by that or this thought relies on that assumption.” You might even be more daring and say: “Had it not been for X, Y would (probably) never have occurred.” Such claims are special in that they bind events or ideas together into a narrative, often designed to explain how it was possible that some event or an idea occurred. – The philosopher Akeel Bilgrami makes such a claim when he suggests that naturalism, taken as a certain way of treating nature scientifically and instrumentally, is tied to capitalism. In his “The wider significance of naturalism” (2010), Bilgrami writes:

“[D]issenters argued that it is only because one takes matter to be “brute” and “stupid,” to use Newton’s own term, that one would find it appropriate to conquer it with nothing but profit and material wealth as ends, and thereby destroy it both as a natural and a human environment for one’s habitation.
[…] Newton and Boyle’s metaphysical view of the new science won out over the freethinkers’ and became official only because it was sold to the Anglican establishment and, in an alliance with that establishment, to the powerful mercantile and incipient industrial interests of the period in thoroughly predatory terms that stressed that nature may now be transformed in our conception of it into the kind of thing that is indefinitely available for our economic gain…”

Bilgrami’s overall story is a genealogy of naturalism or rather scientism.* The paper itself makes some intriguing observations regarding narratives and historiography. But let’s look at his claim more closely. By appealing to Newton and the victory of his kind of naturalism, it is designed to explain why we got to scientism and a certain understanding of nature. In doing so, it binds a number of highly complex events and ideas together: There is (1) a debate between “dissenters” and what he calls “naturalists”, whose ideas (2) became official, (3) “only because” they were “sold” to the Anglicans and to industrial stakeholders. Although this kind of claim is problematic for several reasons, it is quite interesting. One could now discuss why ideas about necessary connections between facts (“only because”) presuppose a questionable understanding of history tout court or seem to ignore viable alternatives. But for the time being I would like to focus on what I find interesting. For me, two aspects stand out in particular.

Firstly, Bilgrami’s thesis, and especially (3), seems to suggest a counterfactual causal claim: Had the metaphysical view not been sold to the said stakeholders, it would not have become official. In other words, the scientific revolution or Newton’s success is owing to the rise of capitalism. Both cohere in that they seem to propagate a notion of nature that is value-free, allowing nature to be exploited and manipulated. Even if that notion of nature might not be Newton’s, it is interesting because it seems to gain new ground today: The widespread indifference to climate change and pollution for capitalist reasons suggests such a conjunction. Thus, a genealogy that traces the origin of that notion seems to ask at least an interesting question: Which historical factors correlate with the rise of the currently fashionable notion of nature?

Secondly, the narrative Bilgrami appeals to has itself a history and is highly contested. But Bilgrami neither argues for the facts he binds together, nor does he appeal to any particular sources. This is striking, for although he is not alone in holding this thesis, people are not exactly buying into this narrative. If you read Steven Pinker, you’ll rather get a great success story about how science has liberated us. And even proper historians readily dismiss the relation between the rise of capitalism and science as “inadequate”. This raises another interesting question: Why do we accept certain narratives (rather than others)?

This latter question seems to suggest a simple answer: We do or should accept only those narratives that are correct. As I see it, this is problematic. Narratives are plausible or implausible. But the complexity of the tenets they bind together makes it impossible to prove or refute them on ordinary grounds of evidence. Just try to figure out what sort of evidence you need to show that the Newtonian view “won” or was “sold”! You might see who argued against whom; you might have evidence that some merchants expressed certain convictions, but the correlations suggested by these words can be pulled and evidenced in all sorts of ways. Believing a narrative means to believe that certain correlations (between facts) are more relevant than others. It means to believe, for instance, that capitalism was a driving force for scientists to favour certain projects over others. But unless you show that certain supposed events did not occur or certain beliefs were not asserted, it’s very hard to counter the supposed facts, let alone the belief in their correlation.

So I doubt that we simply choose to believe in certain narratives because we have grounds for believing they are true. My hunch is that they gain or lose plausibility along with larger ideologies or belief systems that we adhere to. In this regard it is striking that Bilgrami goes for his thesis without much argument. While he doesn’t give clear sources, Bilgrami’s assumption bears striking resemblance to the claims of Boris Hessen, who wrote (in 1931):

“The Royal Society brought together the leading and most eminent scientists in England, and in opposition to university scholasticism adopted as its motto ‘Nullius in verba’ (verify nothing on the basis of words). Robert Boyle, Brouncker, Brewster, Wren, Halley, and Robert Hooke played an active part in the society. One of its most outstanding members was Newton. We see that the rising bourgeoisie brought natural science into its service, into the service of developing productive forces. … And since … the basic problems were mechanical ones, this encyclopedic survey of the physical problems amounted to creating a consistent structure of theoretical mechanics which would supply general methods for solving the problems of celestial and terrestrial mechanics.”

The claim that “the rising bourgeoisie brought natural science into its service” is indeed similar to what Bilgrami seems to have in mind. As a new special issue on Boris Hessen’s work makes clear, these claims were widely disseminated.** At the same time, an encyclopedia from 2001 characterises Hessen’s view as “crude and dogmatically Marxist”.

Thus, the reception of Hessen’s claim is itself tied to larger ideological convictions. This might not be surprising, but it puts pressure on the reasons we give for favouring one narrative over another. While believing in certain narratives means believing that certain correlations (between facts) are more relevant than others, our choice and rejection of narratives might be driven by wider ideologies or belief systems. If this is correct, then the dismissal of Hessen’s insights might not be owing to flaws in his scholarship but rather to his supposed Marxism. So the question is: are cold-war convictions still alive, driving the choice of narratives? Or is the renewed interest in Marxism already a reason for a renewed interest in Hessen’s work? In any case, in the history of interpreting Newtonian naturalism, Akeel Bilgrami’s paper is striking because it bears witness to this reception without directly acknowledging it.*** Might this be because there are new reasons for being interested in the (history of the) relation between scientific naturalism and capitalism?

____

* It’s important to note that Bilgrami uses the term naturalism in a restricted sense: “I am using the term “naturalism” in a rather restricted way, limiting the term to a scientistic form of the philosophical position. So, the naturalism of Wittgenstein or John McDowell or even P. F. Strawson falls outside of this usage. In fact all three of these philosophers are explicitly opposed to naturalism in the sense that I am using the term. Perhaps “scientism” would be the better word for the philosophical position that is the center of the dispute I want to discuss.” – This problematically restricted use of naturalism is probably owing to Margaret Jacob’s distinction between a “moderate” and a “radical enlightenment”. The former movement is associated with writers like Newton and Boyle; the latter with the pantheist “dissenters” for whom nature is inseparable from the divine.

** I am very grateful to Sean Winkler, who not only edited the special issue on Hessen but kindly sent me a number of passages from his writings. I’m also grateful to all the kind people who patiently discussed some questions on Facebook (here with regard to Bilgrami; here with regard to Hessen).

*** The lines of reception are of course much more complex and, in Bilgrami’s case, perhaps more indirect than I have suggested. Bilgrami explicitly references Weber’s recourse to “disenchantment” and also acknowledges the importance of Marx for his view. Given these references, Bilgrami’s personal reception might be owing more to Weber than to Hessen. That said, Merton (following Weber) clearly acknowledges his debt to Hessen. A further (unacknowledged but possible) source for this thesis is Edgar Zilsel. For more details on the intricate pathways of reception see Gerardo Ienna’s and Giulia Rispoli’s paper in the special issue referenced above.

Clarity as a political concept

“With which of the characters do you identify?” For God’s sake, with whom does the author identify? With the adverbs, obviously. Umberto Eco, Postscript to “The Name of the Rose”

Philosophers, especially those working in the analytic tradition, clearly pride themselves on clarity. In such contexts, “clarity” is often paired with “rigour” or “precision”. If you present your work amongst professional philosophers, it will not only be assessed on whether it’s original or competently argued, but also on whether it is written or presented clearly. But while it is sometimes helpful to wonder whether something can be said or presented differently, the notion of clarity as used by philosophers has a somewhat haunting nimbus. Of course, clarification can be a worthy philosophical project in itself. And it is highly laudable if authors define their terms, use terms consistently, and generally attempt to make their work readable and accessible. But often the wish to achieve clarity makes people fret over their work forever, as if (near) perfection could eventually be reached. In what follows, I’d like to suggest that there is no such thing as clarity, at least not in an objective sense. You can objectively state how many words a sentence contains, but not whether it’s clear. Rather, clarity is a political term, often used to police the boundaries of what some people consider canonical.

The notion of clarity thrives on a contentious distinction between content and form or style of writing. According to a fairly widespread view, content and form can come apart in that the same content can be expressed in different ways. You can say that (1) Peter eats a piece of cake and that (2) a piece of cake gets eaten by Peter. Arguably, the active and passive voices express the same content. Now my word processor regularly suggests that I change passive to active voice. The background assumption seems to be that the active voice is clearer in that it is easier to parse. (The same often goes for negations.) If we use this assumption to justify changes to or criticisms of a text, it is problematic for two reasons:

Firstly, we have to assume that one formulation really is clearer in the sense of being easier to parse or understand. Is the active voice really clearer? This will depend on what is supposed to be emphasized. Perhaps I want to emphasize “cake” rather than “Peter”. In this case, the passive voice might be the construction of choice. Although I’m not up to date in cognitive linguistics, I’d guess that semantic and pragmatic features figure greatly in this question. My hunch is that, in this sense, clarity depends on conformity with expectations of the recipients.*

Secondly, we have to assume the identity of content across different formulations. But how do you tell whether the content of two expressions is the same? Leaving worries about analyticity aside, the Peter-Cake example seems fairly easy. But how on earth are we going to tell whether Ryle presented a clearer version of what Wittgenstein or even Heidegger talked about in some of their works?! In any case, an identity claim will amount to stipulation and thus be open to criticism and revision. Again, the question whether the stipulation goes through will depend on whether it conforms to the expectations of the recipients.**

If clarity depends on the conformity with expectations, then the question is: whose expectations matter? If you write a paper for a course, you’ll have an answer to that question. If you write a paper for a journal, you’ll probably look at work that got published there. In this sense, clarity is an inherently political notion.*** Unless you conform to certain stylistic expectations, your work will be called unclear. On a brighter note, if you’re unhappy with some of the current stylistic fashions, it is helpful to bear in mind that all styles are subject to historical change.

The upshot is that stylistic moves are to be seen as political choices. That said, the fact that clarity is a political notion does not discredit it. But the idea that style is just a matter of placing ornaments on a given content is yet another way of falling prey to the notorious myth of the given, often invoked to obscure the normative dimensions.

____

* On FB, Eric Schliesser raises the objection that “conformity to expectations” is a problematic qualification in that some position might be stated clearly but lead to entirely novel insights. – I agree and would reply that conformity to expectations does not rule out surprises or novelty. Still, I would argue that the novelties ought to be presented in a manner acceptable by a certain community. – Clearly, clarity cannot merely equal “conformity to expectations”, since in this case it would be at once too permissive (in that it would include grammatically acceptable formulations whose content might remain unclear) and too narrow (in that it would exclude novelty).

** Eric Schliesser makes this point succinctly with regard to ‘formal philosophy’ when saying that “it can be easily seen that if the only species of clarity that is permitted is the clarity that is a property of formal systems, then emphasizing clarity simply becomes a means to purge alternative forms of philosophy.”

*** This is convincingly argued at length over at the Vim Blog. Go and read the whole piece! Here is an excerpt: “[The concept of clarity] creates, enforces, and perpetuates community boundaries and certain power relations within a community. … [T]here is no pragmatic distinction between the descriptive and evaluative senses of clarity. Not only is an ascription of clarity a claim about quality, but it is seemingly a claim that references objective features of the bit of philosophy. So far we have been attempting to analyze the concept of clarity by first drawing out the descriptive senses and standards—i.e. by understanding the evaluative in light of the descriptive. The better approach is the opposite. What does the word do? I propose focusing first on the impact that the word has in discourse. The assumption that clarity begins with descriptive features leads to an array of problems partly because such an approach “runs right over the knower.” Instead, first, certain bits of philosophy are called clear or unclear as a feature and consequence of the power relations of the group and world more broadly. And then second, what gets called clear or unclear becomes subject to philosophical analysis.

… There is a powerful rhetorical consequence. The ascription of clarity marks those who would stop and question it as outsiders. Those in lower positions of power will not dare to question what has been laid down as clear. It is always possible that the clarity of a putatively clear bit of philosophy can indeed be justified from shared evidence. In that case, the person who dared to speak up is revealed as someone who does not grasp the shared evidence or has not reasoned through the justification, unlike everyone who let the bit of philosophy go unchallenged. They appear unintelligent and uninformed and, in effect, deserving of their lower position of power. So, insofar as power is desirable, there is an inclination to let claims to clarity go unchallenged, thereby signaling understanding through silent consent. The immediate impulse is to assume that one is behind or uninformed.”

Why is early modern philosophy such a great success? A response to Christia Mercer

In 2008, when I was about to hand in the 580 pages of my professorial dissertation (Habilitation) on Locke’s philosophy of language, Robert Brandom came to visit Berlin for a workshop on his views on the history of philosophy. A paper (by Markus Wild) that I was particularly excited about portrayed Hume as an inferentialist, and thus countered Brandom’s more traditional reading of Hume. In the heated discussion that followed, Brandom dropped what was for me nothing short of a bomb. Faced with refined exegetical evidence, he ultimately ended the conversation by saying something like “I don’t care about these texts. My Hume is an atomist.” (I’m quoting from memory.) – I was shocked, not just because of the dismissive attitude towards the efforts of the speaker; rather, Brandom seemed to have dismissed an entire methodological approach that unifies a great number of scholars. This approach could be described as a nuanced combination of rational reconstruction and contextualism. Adherents of this fairly widespread way of doing history care about both historical details and the plausibility of the arguments. By dismissing any interest in historical accuracy, Brandom had just committed my 580 pages to the bin. Or so I felt.

According to an intriguing paper by Christia Mercer, Brandom’s attitude is now itself a thing of the past. The attitude in question is “rational reconstructionism”, endorsed by people who mine history for interesting arguments without caring whether their reconstructions of the arguments would be approved by the original authors.* Mercer claims that, at least among English-speaking early modernists, rational reconstructionism has been replaced by contextualism. In the light of this methodological victory, contextualism seems to have been an “obvious success”, both with regard to scholarly achievements and in putting the history of early modern philosophy on the map. If my anecdata are a good indication of reality, then early modern philosophy is a lot better off than, say, medieval philosophy: there seem to be a lot more jobs, editions, and translations coming up and out these days. If Mercer is right, then this success is owing to contextualism, too. Mercer’s paper is a crisp reconstruction of the methodological debate, and I advise you to read it along with the astute responses by Eric Schliesser and Charlie Huenemann. In what follows, I would like to focus on just one question: Why is early modern philosophy such a success? Is it really owing to contextualism? My hunch is that the opposite might be true: If any methodological approach is involved in its institutional success, it’s rational reconstructionism.

Why do I think so? Christia Mercer claims that rational reconstructionists and contextualists started out as opposed camps, but all ended up as contextualists for the reason that even rational reconstructionists started caring about historical accuracy. In other words, the early Jonathan Bennett is a rational reconstructionist, but the later Bennett is a contextualist insofar as he cares about historical accuracy. While this might be true, I worry that Mercer’s portrait of the disagreement is flawed in one respect. Mercer reconstructs the disagreement between rational reconstructionists and contextualists as a debate among historians of philosophy. As I see it, the debate is at least initially one between philosophers and historians of philosophy. Arguably, authors like Brandom and Bennett started their careers as philosophers and used history somewhat instrumentally. In fact, there is an ongoing debate as to what extent history is even part of philosophy.** Now, whatever you think about this debate, the simple fact remains that there are more philosophers and jobs for philosophers than for historians of philosophy. Thus, I am inclined to believe that the success of early modern philosophy is owing to philosophers being interested in early modern authors. Some famous philosophers advertise their historical heroes and, before you know it, scholarship follows suit. Spinoza is now “relevant” because a number of famous philosophers find him interesting, not because someone discovers an unknown manuscript of the Ethica in an archive.***

A related worry about Mercer’s reconstruction is that she starts out by treating rational reconstructionism and contextualism as extreme positions. While some proponents of the respective methods might be somewhat radical, most historians of philosophy seem to be working somewhere in the middle of the road where, as I said earlier, both context and the plausibility of arguments matter. Inside and outside early modern studies, these positions have been related to one another for decades. Perhaps such studies have not always been published in places as prestigious as JHP, but they have informed scholarship for a long time. So again, what might seem like a revolution rather strikes me as a continuation, where research and teaching agendas get increasingly refined once people are prepared to dedicate some money and journal space to historical scholarship.

While I couldn’t agree more with the methodological pluralism that Mercer advocates, I fear its success is not a result of contextualism. Mercer rightly praises the growing number of works on non-canonical authors, translations and editorial work alongside the common interpretative efforts. But I will only begin to believe in a revolution once philosophy departments start hiring people whose area of specialisation is translating or editing historical texts of non-canonical figures.

___

* Following Rorty’s famous categorisation, I’d think of Brandom as being invested in Geistesgeschichte rather than mere rational reconstruction.

** See for instance the papers in Philosophy and the Historical Perspective, ed. by Marcel van Ackeren with Lee Klein, OUP 2018.

*** Addendum (5 August): In a similar vein, Jessica Gordon-Roth’s and Nancy Kendrick’s paper on “Recovering early modern women writers” exposes an important problem for the rejection of rational reconstructionism, as advanced by Christia Mercer. In some contexts, such a rejection might nourish the suspicion that there is nothing rational to be reconstructed. They write: “What is impeding our progress in eradicating the myth that there are no women in the history of philosophy? […] What we argue is that so often we treat early modern women philosophers’ texts in ways that are different from, or inconsistent with, basic commitments of analytic philosophy and our practices as historians of philosophy working in the analytic tradition. Moreover, this is the case even when we consider the practices of those who take a more historiographical approach. In so doing, we may be triggering our audiences to reject these women as philosophers, and their texts as philosophical. Moreover, this is the case despite our intention to achieve precisely the opposite effect.”

Let’s get rid of “medieval” philosophy!

“Your views are medieval.” Let’s face it: we often use the term “medieval” in a pejorative sense; and calling a line of thought “medieval” might be a good way of chasing away students who would otherwise have been interested in that line of thought. In what follows, I’d like to suggest that, in order to keep what we call medieval philosophy, we should stop talking about “medieval” philosophy altogether.

While no way of slicing up history into periods is entirely arbitrary, they all come with problems, as this blog post by Laura Sangha makes clear. So I don’t think that there will ever be a coherently or neatly justified periodisation of history, let alone of the history of philosophy. But while other names of periods are equally problematic, none of them is as degrading. Outside academia, the term “medieval” is mainly used to describe exceptionally cruel actions or backward policies. Often named the “dark ages”, the years from, roughly, 500 to 1500 count as a period of religious indoctrination. This usage also shapes the perception in academic philosophy. Arguably, medieval philosophical thought is still seen as subordinate to theology. Historical surveys of philosophy often jump from ancient to early modern, and even specialists in history often make it sound as if the sole philosopher in these thousand years had been Thomas Aquinas. This deplorable status has real-life consequences. Exceptions aside, there are very few jobs in medieval philosophy and a decreasing number of students interested in studying it.

You will rightly object that the problems described are not only owing to the name “medieval” and its cognates. I agree. First of all, the field of history of philosophy has not exactly been pampered in recent decades. Often, people working on contemporary issues are asked to do a bit of history on the side, or the study programmes are catered for in other fields of the humanities (history, theology, languages). Secondly, and perhaps more importantly, the dominant research traditions in medieval philosophy often continue to represent the field in an esoteric manner. As a student, the first thing you are likely to hear is that it is almost impossible to study medieval thought unless you read Latin (at least!), learn to read illegible manuscripts, understand outlandish theological questions (angels on a pinhead, anyone?), and know Aristotle by heart. Thirdly, most historical narratives depict medieval thought as a backward counterpoint to what is taken to be the later rise of science, enlightenment and secularisation. While the first of these three problems is beyond the control of medievalists alone, the second and third issues are to some degree in our own hands.

Therefore, we can and should present our field as more accessible. A great part of this will consist in strengthening continuities with other periods. Thus, medieval philosophy should always be seen as continuous with what is called ancient or modern or even contemporary thought. This way, we can rid ourselves not only of this embarrassment of a name (“Middle Ages”) but also of trying to indicate what is typically medieval. I’m inclined to think that, whenever we find something “typical” of that period, it will also be typical of other periods. In other words, there is nothing specifically medieval in medieval philosophy.

While there are already a number of laudable attempts to renew approaches in teaching (see e.g. Robert Pasnau’s survey of surveys), my worry is that the more esoteric strands in our field, both in terms of method and content, will be evoked whenever we talk about “medieval” philosophy. The term “medieval” is a sticky one and won’t go away, but in combination with “philosophy” it will continue to sound like an oxymoron. What shall we say instead, though? I’d suggest that we talk about what we really do: most of us study a handful of themes or topics in certain periods of time. So why not say that you study the eleventh and twelfth centuries (in the Latin West or wherever) or the history of thought from the thirteenth to the sixteenth century? If a more philosophical specification is needed, you might say that you study the history of, say, psychology, especially from the thirteenth to the seventeenth century. If you believe in the progress narrative, you might even use “pre-modern”. Or why not “post-ancient”?

By the way, if you are what is called a medievalist and you work on a certain topic, most of your work will be continuous with ancient or (early) modern philosophy. If there are jobs advertised in these areas, it’s not unlikely that they will be in your field. That might become more obvious if you call yourself a specialist in, say, the history of metaphysics from 400 to 500 AD or the history of ethics from 1300 to 1800. If this is the case, it would not seem illegitimate to apply for positions in such areas, too. – “Oh”, you might say, “won’t these periods sound outrageously long?” Then just remind people that the medieval period comprises at least a thousand years.

____

PS. I started this blog on 26 July 2018. So the blog is now over a year old. Let me take the opportunity to thank you all for reading, writing, and thinking along.