Who’s afraid of relativism?

In recent years, relativism has had a particularly bad press. Often chided along with what some call postmodernism, relativism is held responsible for certain politicians’ complacent ignorance or bullshitting. While I’m not alone in thinking that this scapegoating is due to a severe misunderstanding of relativism, even those who should know better join the choir of condemnation:

“The advance of relativism – the notion that truth is relative to each individual’s standpoint – reached what might be seen as a new low with the recent claim by Donald Trump’s senior adviser Kellyanne Conway that there are such things as “alternative facts”. (She went so far as to cite a non-existent “Bowling Green massacre” to justify Trump’s refugee travel ban, something she later described as a “misspeak”.)” Joe Humphreys paraphrasing Timothy Williamson in the Irish Times, 5.7.2017

If this is what Williamson thinks, he confuses relativism with extreme subjectivism. But I don’t want to dismiss this view too easily. The worry behind this accusation is real. If people do think that truth is relative to each individual’s standpoint, then “anything goes”. You can claim anything and there are no grounds for me to correct you. If that is what truth amounts to, there is no truth; the word becomes a meaningless appeal. However, I don’t think that the politicians in question believe in anything as sophisticated as relativism. Following up on some intriguing discussions about the notion of “alternative facts”, I believe that the strategy is (1) to lie by (2) appealing to an (invented) set of states of affairs that supposedly has been ignored. Conway did not assume that she was in possession of her own subjective truth; quite the contrary. Everyone would have seen what she claimed to be the truth, had they cared to look at the right time in the right way. If I am right, her strategy depends on a shared notion of truth. In other words, I guess that Williamson and Conway roughly start out from the same understanding of truth. Bringing in relativism or postmodernism is not helpful when trying to understand the strategy of such politicians.

By introducing the term “alternative facts”, Conway reminds us of the fact (!) that we pick out truths relative to our interests. I think we are right to be afraid of certain politicians. But why are we afraid of relativism? We have to accept that truth, knowledge and morality are relative to a standard. Relativism is the view that there is more than one such standard.* This makes perfect sense. That 2 plus 2 equals 4 is not true absolutely. Arguably, this truth requires agreement on a certain arithmetical system. I think that arithmetic and other standards evolve relative to certain interests. Of course, we might disagree about the details of how to spell out such an understanding of relativism. But it is hard to see what makes us so afraid of it.

Perhaps an answer can be given by looking at how relativism evolved historically. If you look at early modern or medieval discussions of truth, knowledge and morality, there is often a distinction between divine and human concepts. Divine knowledge is perfect; human knowledge is partial and fallible. Divine knowledge sets an absolute standard against which human failure is measured. If you look at discussions in and around Locke, for instance, especially his agnosticism about real essences and divine natural law, divine knowledge is still assumed, but it loses the status of a standard for us. What we’re left with is human knowledge, in all its mediocrity and fallibility. Hume goes further and no longer even appeals to the divine as a remote standard. Our claims to knowledge are seen as rooted in custom. Now if the divine no longer serves as an absolute measure, human claims to knowledge, truth and morality are merely one possible standard. There is no absolute standard available. Nominal essences or customs are relative to the human condition: our biological make-up and our interests. The focus on human capacities, irrespective of the divine, becomes an ever more prominent theme, and it goes hand in hand with an idea of relativism. The “loss” of the absolute is thus owing to a different understanding of theological claims about divine standards. Human knowledge is relative in that it is no longer measured against divine knowledge. If this is correct, relativism emerged (also) as a result of a dissociation of divine and human standards. Why would we be afraid of that?

____

* I’m following Martin Kusch’s definition in his proposal for the ERC project on the Emergence of Relativism: “It is not easy to give a neutral definition of “relativism”: defenders and critics disagree over the question of what the relativist is committed to. Roughly put, the relativist regarding a given domain (e.g. epistemology) insists that judgments or beliefs in this domain are true or false, justified or unjustified, only relative to systems of standards. For the relativist there is more than one such system, and there is no neutral way of adjudicating between them. Some relativists go further and claim that all such systems are equally valid.”

Why would we want to call people “great thinkers” and cite harassers? A response to Julian Baggini

If you have ever been at a rock or pop concert, you might recognise the following phenomenon: The band on the stage begins playing an intro. Pulsing synths and roaring drums build up to a yet unrecognisable tune. Then the band breaks into the well-known chorus of their greatest hit and the audience applauds frenetically. People become enthusiastic when they recognise something. Thus, part of the “greatness” is owing to the act of recognising it. There is nothing wrong with that. It’s just that people celebrate their own recognition at least as much as the tune performed. I think much the same is true of our talk of “great thinkers”. We applaud recognised patterns. But only applauding the right kinds of patterns and thinkers secures our belonging to the ingroup. Since academic applause signals and regulates who belongs to a group, such applause has a moral dimension, especially in educational institutions. Yes, you guessed right: I want to argue that we need to rethink whom and what we call great.

When we admire someone’s smartness or argument, an enormous part of our admiration is owing to our recognition of preferred patterns. This is why calling someone a “great thinker” is to a large extent self-congratulatory. It signals and reinforces canonical status. What’s important is that this works in three directions: it affirms the status of the figure, it affirms it for me, and it signals this affirmation to others. Thus, it signals where I (want to) belong and demonstrates which nuances of style and content are of the right sort. The more power I have, the more I might be able to reinforce such status. People speaking with the backing of an educational institution can help build canonical continuity. Now the word “great” is conveniently vague. But should we applaud bigots?

“Admiring the great thinkers of the past has become morally hazardous.” Thus opens Julian Baggini’s piece on “Why sexist and racist philosophers might still be admirable”. Baggini’s essay is quite thoughtful and I advise you to read it. That said, I fear it contains a rather problematic inconsistency. Arguing in favour of excusing Hume for his racism, Baggini makes an important point: “Our thinking is shaped by our environment in profound ways that we often aren’t even aware of. Those who refuse to accept that they are as much limited by these forces as anyone else have delusions of intellectual grandeur.” – I agree that our thinking is indeed very much shaped by our (social) surroundings. But while Baggini makes this point to exculpate Hume,* he clearly forgets all about it when he returns to calling Hume one of the “greatest minds”. If Hume’s racism can be excused by his embeddedness in a racist social environment, then surely much of his philosophical “genius” cannot be exempt from being explained through this embeddedness either. In other words, if Hume is not (wholly) responsible for his racism, then he cannot be (wholly) responsible for his philosophy either. So why call only him the “great mind”?

Now Baggini has a second argument for leaving Hume’s grandeur untouched. Moral outrage is wasted on the dead because, unlike the living, they can neither “face justice” nor “show remorse”. While it’s true that the dead cannot face justice, it doesn’t automatically follow that we should not “blame individuals for things they did in less enlightened times using the standards of today”. I guess we do the latter all the time. Even some court systems punish past crimes. Past Nazi crimes are still put on trial, even if the system under which they were committed had different standards and is a thing of the past (or so we hope). Moreover, even if the dead cannot face justice themselves, it does make a difference how we remember and relate to the dead. Let me make two observations that I find crucial in this respect:

(1) Sometimes we uncover “unduly neglected” figures. Thomas Hobbes, for instance, was long pushed to the side as an atheist. Margaret Cavendish is another case of a thinker whose work has been unduly neglected. When we start reading such figures again and begin to affirm their status, we declare that we see them as part of our ingroup and ancestry. Accordingly, we try to amend an intellectual injustice. Someone has been wronged by not having been recognised. And although we cannot literally change the past, in reclaiming such figures we change our intellectual past, insofar as we change the patterns that our ingroup is willing to recognise. Now if we can decide to help change our past in that way, moral concerns apply. It seems we have a duty to recognise figures that have been unduly shunned, at least by our standards.**

(2) Conversely, if we do not acknowledge what we find wrong in past thinkers, we are in danger of becoming complicit in endorsing and amplifying the impact of certain wrongs or ideologies. But we have the choice of changing our past in these cases, too. This becomes even more pressing in cases where there is an institutional continuity between us and the bigots of the past. As Markus Wild points out in his post, Heidegger’s influence continues to haunt us when those exposing his Nazism are attacked. Leaving this unacknowledged in the context of university teaching might mean becoming complicit in amplifying the pertinent ideology. That said, the fact that we do research on such figures or discuss their doctrines does not automatically mean that we endorse their views. As Charlotte Knowles makes clear, what matters is how we relate to or appropriate the doctrines of others. It’s one thing to appropriate someone’s ideas; it’s another thing to call that person “great” or a “genius”.

Now, how do these considerations fare with regard to current authors? Should we adjust, for instance, our citation practices in the light of cases of harassment or crimes? – I find this question rather difficult and think we should be open to all sorts of considerations.*** However, I want to make two points:

Firstly, if someone’s work has shaped a certain field, it would be wrong, both as a matter of scholarship and as a matter of morality, to lie about this fact. But the crucial question, in this case, is not whether we should shun someone’s work. The question we have to ask is rather why our community recurrently endorses people who abuse their power. If Baggini has a point, then the moral wrongs that are committed in our academic culture are most likely not just the wrongs of individual scapegoats who happen to be found out. So if we want to change that, it’s not sufficient to change our citation practice. I guess the place to start is to stop endowing individuals with the status of “great thinkers” and begin to acknowledge that thinking is embedded in social practices and requires many kinds of recognition.

Secondly, trying to take the perspective of a victim, I would feel betrayed if representatives of educational institutions simply continued to endorse such voices and thus enlarged the impact of perpetrators who have harmed others in that institution. And victimhood doesn’t just mean “victim of overt harassment”. As I said earlier, there are intellectual victims of trends or systems that shun voices for various reasons, voices that are only slowly recovered by later generations who wish to amend the canon and change their past accordingly.

So the question to ask is not only whether we should change our citation practices. Rather we should wonder how many thinkers have not yet been heard because our ingroup keeps applauding one and the same “great mind”.

___

* Please note, however, that Hume’s racism was already criticised by Adam Smith and James Beattie, as Eric Schliesser notes in his intriguing discussion of Baggini’s historicism (from 26 November 2018).

** Barnaby Hutchins provides a more elaborate discussion of this issue: “The point is that a neutral approach to doing history of philosophy doesn’t seem to be a possibility, at least not if we care about, e.g., historical accuracy or innovation. Our approaches need to be responsive to the structural biases that pervade our practices; they need to be responsive to the constant threat of falling into this chauvinism. So it’s risky, at best, to take an indiscriminately positive approach towards canonical and non-canonical alike. We have an ethical duty (broadly construed) to apply a corrective generosity to the interpretation of non-canonical figures. And we also have an ethical duty to apply a corrective scepticism to the canon. Precisely because the structures of philosophy are always implicitly pulling us in favour of canonical philosophers, we need to be, at least to some extent, deliberately antagonistic towards them.”

In the light of these considerations, I now doubt my earlier conclusion that “attempts at diversifying our teaching should not be supported by arguments from supposedly different moral status”.

*** See Peter Furlong’s post for some recent discussion.

Why do we share the vulgar view? Hume on the medical norms of belief*

We tend to think that beliefs are opinions that we form in the light of certain evidence. But perhaps most beliefs are not like that. Perhaps most beliefs are like contagious diseases that we catch. – When philosophers talk like that, it’s easy to think that they are speaking metaphorically. Looking at debates around Hume and other philosophers, I’ve begun to doubt that. There is good reason to see references to physiology and medical models as a genuine mode of philosophical explanation. As I now hope to suggest, Hume’s account of beliefs arising from sympathy is a case in point.

Seeing the table in front of me, I believe that there is a table. Discerning the table’s colour, I believe that the table is brown. It is my philosophical education that made me wonder whether what I actually perceive might not be the table and a colour but mental representations of such things. Taking things to be as they appear to us, without wondering about cognitive intermediaries, is what is often called the vulgar view or naïve realism. Now you might be inclined to think that this view is more or less self-evident or natural, but if you look more carefully, you’ll quickly see that it does need explaining.

As far as I know, there is no historical study of the vulgar view, but I have found various synonyms for this view or its adherents: Ockham, for instance, speaks of the “layperson” (laicus), Bacon, Berkeley and Hume of the “vulgar view” or “system”, Reid and Moore of “common sense”. When it is highlighted, it is often spelled out in opposition to a “philosophical view” such as representationalism, the “way of ideas” or idealism. Today, I’d like to briefly sketch what I take to be Hume’s account of this view. Not only because I like Hume, but because I think his account is both interesting and largely unknown. As I see it, Hume thinks that we adhere to the vulgar view because others around us hold it. But why, you might ask, would other people’s views affect our attitudes so strongly? If I am right, Hume holds that deviating from this view – for instance by taking a sceptical stance – will be seen as not normal and make us outsiders. Intriguingly, this normality is mediated by our physiological dispositions. Deviation from the vulgar view means deviation from the common balance of humours and, for instance, suffering from melancholy.** In this sense, the vulgar view we share is governed by medical norms, or so I argue.

The vulgar view is often explicitly discussed because it raises problems. If we want to explain false beliefs or hallucinations, it seems that we need to have recourse to representations: seeing a bent stick in water can’t mean seeing the real (straight) stick, but rather some sort of representation or idea. Why? Because reference to the (straight) stick cannot explain why we see it as bent. Since the vulgar view doesn’t posit cognitive representations, it cannot account for erroneous perceptions. What is less often addressed, however, is that the vulgar view or realism is not at all plain or empirical in nature. The vulgar view is not a view that is confirmed empirically; rather, it is a view about the nature of empirical experience. It is not as if we experience that objects are as they appear. So the source of the vulgar view cannot be given in experience or any empirical beliefs. Now if this is correct, we have to ask what it is that makes us hold this view. There is nothing natural or evident about it. But if this view is not self-evident, why do we hold it and why is it so widespread?

Enter Hume: According to Hume, most of the beliefs, sentiments and emotions we have are owing to our social environment. Hume explains this by referring to the mechanism of sympathy: “So close and intimate is the correspondence of human souls, that no sooner any person approaches me, than he diffuses on me all his opinions, and draws along my judgment in a greater or lesser degree.” (Treatise 3.3.2.1) Many of the beliefs we hold, then, are not (merely) owing to exposure to similar experiences, but to the exposure to others. Being with others affords a shared mentality. In his essay Of National Characters, Hume writes: “If we run over the globe, or revolve the annals of history, we shall discover every where signs of a sympathy or contagion of manners, none of the influence of air or climate.” What is at stake here? Arguing that sympathy and contagion explain the sociological and historical facts better, Hume dismisses the traditional climate theory in favour of his account of sympathy. Our mentalities are not owing to the conditions of the place we live in but to the people that surround us.***

Now how exactly is the “contagion” of manners and opinions explained? Of course, a large part of our education is governed by linguistic and behavioural conventions. But at bottom, there is a physiological kind of explanation that Hume could appeal to. Corresponding to our mental states are physiological dispositions, the temperature of the blood and so on, the effects of which are mediated through the air via vapours which, in turn, affect the imagination of the recipient. Just as the material properties of things affect our sense organs, the states of other bodies can affect our organs and yield pertinent effects. When Hume speaks of the “contagion” of opinion, it is not unlikely that he has something like Malebranche’s account in mind. According to this account, opinions and emotions can be contagious and spread just like diseases.

In the Search after Truth, Malebranche writes: “To understand what this contagion is, and how it is transmitted from one person to another, it is necessary to know that men need one another, and that they were created that they might form several bodies, all of whose parts have a mutual correspondence. … These natural ties we share with beasts consist in a certain disposition of the brain all men have to imitate those with whom they converse, to form the same judgments they make, and to share the same passions by which they are moved.” (SAT 161) The physiological model of sympathetic contagion, then, allows for the transmission of mental states alluded to above. This is why Hume can claim that a crucial effect of sympathy lies in the “uniformity of humours and turn of thinking”. In this sense, a certain temperament and set of beliefs might count as belonging to a view shared by a group.

Of course, this mostly goes unnoticed. It only becomes an issue if we begin to deviate from a common view, be it out of madness or a sceptical attitude: “We may observe the same effect of poetry in a lesser degree; and this is common both to poetry and madness, that the vivacity they bestow on the ideas is not deriv’d from the particular situations or connexions of the objects of these ideas, but from the present temper and disposition of the person.” (T 1.3.10.10)

The point is that the source of a certain view might not be the object perceived but the physiological dispositions, which, in turn, are substantially affected by our social environment. If this is correct, Hume’s account of sympathy is ultimately rooted in a medical model. The fact that we share the vulgar view and other attitudes can be explained by appealing to physiological interactions between humans.

As I see it, this yields a medical understanding of the normality we attribute to a view. Accordingly, Hume’s ultimate cure for scepticism is not afforded by argument but by joining the crowd and playing a game of backgammon. The supposed normality of common sense, then, is not owing to the content of the view but to the fact that it is widespread.

____

* This is a brief sketch of my Hume interpretation defended in my book on Socialising Minds: Intersubjectivity in Early Modern Philosophy, the manuscript of which I’m currently finalising. – Together with Evelina Miteva, I also co-organise a conference on “Medicine and Philosophy”. The CFP is still open (till December 15, 2018): please apply if you’re interested.

** Donald Ainslie makes a nice case for this in his Hume’s True Scepticism, but claims that Hume’s appeal to humoral theory might have to be seen as metaphorical. — I realise that proper acknowledgements to Humeans would take more than one blog post in itself:) Stefanie Rocknak’s work has been particularly important for getting to grips with Hume’s understanding of the vulgar view. – Here, I’m mainly concerned with the medical model in the background. Marina Frasca-Spada’s work has helped with that greatly. But what we’d need to understand better still is the medical part in relation to the notion of imagination, as spelled out in Malebranche, for instance. Doina Rusu and Koen Vermeir have done some great work on transmission via vapours, but the picture we end up with is still somewhat coarse-grained, to put it mildly.

*** I am grateful to Evelina Miteva for sharing a preliminary version of her paper on Climata et temperamenta, which provides a succinct account of the medieval discussion. Hume should thus be seen as taking sides in an ongoing debate about whether traits and mentalities arise from climate or from sympathy.