Ugly ducklings and progress in philosophy

Agnes Callard recently gave an entertaining interview at 3:16 AM. Besides her lovely list of views that should count as much less controversial than they do, she made an intriguing remark about her book:

“I had this talk on weakness of will that people kept refuting, and I was torn between recognizing the correctness of their counter-arguments (especially one by Kate Manne, then a grad student at MIT), and the feeling my theory was right. I realized: it was a bad theory of weakness of will, but a good theory of another thing. That other thing was aspiration. So the topic came last in the order of discovery.”

Changing the framing or framework of an idea might resolve seemingly persistent problems and make it shine in a new and favourable light. Reminded of Andersen’s fairy tale, in which a duckling is considered ugly until it turns out that the poor animal is actually a swan, I’d like to call this the ugly duckling effect. In what follows, I’d like to suggest that this might be a good, if underrated, way of making progress in philosophy.

Callard’s description stirred a number of memories. You write and refine a piece, but something feels decidedly off. Then you change the title or topic or tweak the context ever so slightly and, at last, everything falls into place. It might happen in a conversation or during a run, but you’re lucky if it happens at all. I know all too well that I abandoned many ideas before I eventually and accidentally stumbled on a change of framework that restored (the reputation of) the idea. As I argued in my last post, all too often criticism in professional settings provides incentives to tone down or give up on an idea. Perhaps unsurprisingly, many criticisms focus on the idea or argument itself, rather than on the framework in which the idea is to function. My hunch is that we should pay more attention to such frameworks. After all, people might stop complaining about the quality of your hammer if you tell them that it’s actually a screwdriver.

I doubt that there is a precise recipe for this. I guess what helps most are activities that help you tweak the context, topic or terminology. This might be achieved by playful conversations or even by diverting your attention to something else. Perhaps a good start is to think of precedents in which this happened. So let’s just look at some ugly duckling effects in history:

  • In my last post I already pointed to Wittgenstein’s picture theory of meaning. Recontextualising this as a theory of representation and connecting it to a use theory or a teleosemantic account restored the picture theory as a component that makes perfect sense.
  • Another precedent might be seen in the reinterpretations of Cartesian substance dualism. If you’re unhappy with the interaction problem, you might see the light when, following Spinoza, you reinterpret the dualism as a difference of aspects or perspectives rather than of substances. All of a sudden you can move from a dualist framework to monism while retaining an intuitively plausible distinction.
  • A less well-known case is the series of reinterpretations of Ockham’s theory of mental language, which has been seen as a theory of ideal language, a theory of logical deep structure, a theory of angelic speech etc.

I’m sure the list is endless and I’d be curious to hear more examples. What’s perhaps important to note is that we can also reverse this effect and turn swans into ugly ducklings. This means that we can also use the strategy of recontextualisation when we want to debunk an idea or expose it as problematic:

  • An obvious example is Wilfrid Sellars’ myth of the given: Arguing that reference to sense data or other supposedly immediate elements of perception cannot serve as a foundation or justification of knowledge, Sellars dismissed a whole strand of epistemology.
  • Similarly, Quine’s myth of the museum serves to dismiss theories of meaning invoking the idea that words serve as labels for (mental) objects.
  • Another interesting move can be seen in Nicholas of Cusa’s coincidentia oppositorum, restricting the principle of non-contradiction to the domain of rationality and allowing for the claim that the intellect transcends this domain.

If we want to assess such dismissals in a balanced manner, it might help to look twice at the contexts in which the dismissed accounts used to make sense. I’m not saying that the possibility of recontextualisation restores or relativises all our ideas. Rather I think of this option as a tool for thinking about theories in a playful and constructive manner.

Nevertheless, it is crucial to see that the ugly duckling effect works both ways, dismissing as well as restoring ideas. In any case, we should try to consider a framework in which the ideas in question make sense. And sometimes dismissal is the way to go.

At the end of the day, it could be helpful to see that the ugly duckling effect might not be owing to the duckling actually being a swan. Rather, we might be confronted with a duck-swan or a duck-rabbit.

Spotting mistakes and getting it right

“Know thyself” is probably a fairly well known maxim among philosophers. But the maxim we live by rather seems to be one along the lines of “know the mistakes of others”. In calling this out I am of course no better. What prompts me to write about this now is a recent observation, not new but clearly refreshed with the beginning of the academic year: it is the obvious desire of students to “get it right”, right from the start. But what could be wrong with desiring to be right?

Philosophers these days don’t love wisdom but truth. Now spotting the mistakes of others is often presented as truth-conducive. If we refute and exclude the falsehoods of others, it seems, we’re making progress on our way to finding out the truth. This seems to be the reason why most papers in philosophy build their cases on refuting opposing claims and why most talks are met with unwavering criticism of the view presented. Killing off all the wrongs must leave you with the truth, no? I think this exclusion principle has all sorts of effects, but I doubt that it helps us in making the desired progress. Here is why.

A first set of reasons relates to the pragmatic aspects of academic exchange: I believe that the binary distinction between getting it right or wrong is misleading. More often than not the views offered to us are neither right nor wrong. This is owing to the fact that we have to present views successively, by putting forward a claim and explaining and arguing for it. What such a process exposes is normally not the truth or falsity of the view, but a need for further elaboration: by unpacking concepts and consequences, ruling out undesired implications, clarifying assumptions etc.

Now you might object that calling a view false is designed to prompt exactly that: clarification and exploration. But I doubt that this is the case. After all, much of academic exchange is driven by perceived reputation: More often than not criticism makes the speaker revert to defensive moves, if it doesn’t paralyse them: Rather than exploring the criticised view, speakers will be tempted to use strategies of immunising their paper against further criticism. If speakers don’t retract, they might at least reduce the scope of their claims and align themselves with more accepted tenets. This, I believe, blocks further exploration and sets an incentive for damage control and conformism. If you doubt this, just go and tell a student (or colleague) that they got it wrong and see what happens.

Still, you might object, such initial responses can be overcome. It might take time, but eventually the criticised speaker will think again and learn to argue for their view more thoroughly. – I wish I could share this optimism. (And I sometimes do.) But I guess the reason that this won’t happen, or not very often, is simply this: what counts in scholarly exchange is the publicly observable moment. Someone criticised by an opponent will see themselves as challenged not only as a representative of a view but as a member of the academic community. Maintaining or restoring our reputation will thus seem vital in contexts in which we consider ourselves judged and questioned: if we’re not actually being graded, under review or in a job talk, we will still anticipate or recall such situations. What counts in these moments is not the truth of our accounts, but whether we convince others of the account and, in the process, of our competence. If you go home defeated, your account will be seen as defeated too, no matter whether you simply didn’t muster the courage or concentration to make a more convincing move.

A second set of reasons is owing to the conviction that spotting falsehoods is just that: spotting falsehoods. As such, it’s not truth-conducive. Refuting claims does not (or at least not necessarily) lead to any truth. Why? Spotting a falsehood or problem does not automatically make any opposing claim true. Let me give an example: It is fairly common to call the so-called picture theory of meaning, as presented in Wittgenstein’s Tractatus, a failure. The perhaps intuitive plausibility that sentences function as pictures of states of affairs seems quickly refuted when we ask how such pictures can be said to be true or false of a supposed chunk of reality. What do you do? Step out of the picture and compare it with the proper chunk? Haha! – Refuting the picture theory, then, seems to bring us one step closer to an appropriate theory of meaning. But such a dismissal makes us overlook that the picture theory has enormous merits. Once you see it as a theory of representation and stop demanding that it also accounts for the truth and falsity of representations, you begin to realise that it can work very well when combined with a theory of use or a teleosemantic theory. (See e.g. Ruth Millikan’s recontextualisation.) The upshot is that our dismissals often result from overlooking crucial further assumptions that would reinstate the dismissed account.

Now you might object that an incomplete account is still a bad account. Pointing this out is not per se wrong but will eventually prompt a recontextualisation that works. In this sense, you might say, the criticism becomes part of the recontextualised account. – I agree. I also think that such dialogues can prompt more satisfying results. But bearing the pragmatic aspects of academic exchange in mind, I think that such results are more likely if we present our criticism for what it is: not as barking at falsehoods but as attempts to clarify, complete or complement ideas.

Now you might object that the difference between barking at falsehoods and attempts to clarify can be seen as amounting just to a matter of style. – But why would you think that this is an objection? Style matters. Much more than is commonly acknowledged.

How to respond to a global warming skeptic?

Have you ever discussed global warming with someone who’s skeptical about it? It’s hard, isn’t it? It doesn’t really matter how many studies, graphs or papers you show them; they will have no effect. So, what can we do when facts don’t seem to matter? Here’s a proposal.

Some people in the scientific community see this problem as a no-brainer. They just assume that it can be resolved by facts: “if there is a GW skeptic, show him some graphs and figures. If he’s not convinced, then he’s just an idiot or he’s lying to you.” But that is not the case: more data won’t do anything, since many skeptics are acting like Greek skeptics: they are not only dubious about the facts presented, they are dubious about the structure of knowledge itself[1].

Greek skeptics developed many arguments to prove that, since knowledge is not possible, we need to suspend judgement about everything we claim to know. One of my favourite arguments is Agrippa’s Trilemma (rebranded later as the Münchhausen Trilemma). This argument claims that we can’t know anything at all, since everything we claim to know needs a justification. Therefore, if I claim P, the skeptic will ask: how do you know P? To which I must respond with Q. Then he will ask: how do you know Q? To which I must respond with R. And the skeptic will keep going. In the end, I have only three options:

  1. Justify ad infinitum: P because Q because R because S, and so on.
  2. Stop at an unjustified premise: P because Q because R.
  3. Reason in circles: P because Q because R because P. [2]
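For the structurally minded, the three horns can be sketched as a toy program. This is purely my own illustration (the claim names and the `classify` function are invented for the example, not part of the skeptic's argument): each claim points to the claim that justifies it, or to nothing if it is left unjustified.

```python
def classify(justifies, start, limit=100):
    """Follow the justification chain from `start` and report which
    horn of the trilemma it lands on."""
    seen = set()
    claim = start
    for _ in range(limit):
        if claim in seen:
            return "circular reasoning"   # P because Q because ... because P
        seen.add(claim)
        nxt = justifies.get(claim)
        if nxt is None:
            return "unjustified premise"  # the chain simply stops
        claim = nxt
    return "infinite regress"             # never bottoms out (within our patience)


class Regress(dict):
    """A belief system in which every claim gets a fresh justifier."""
    def get(self, claim):
        return claim + "'"


print(classify({"P": "Q", "Q": "R", "R": None}, "P"))  # unjustified premise
print(classify({"P": "Q", "Q": "R", "R": "P"}, "P"))   # circular reasoning
print(classify(Regress(), "P"))                        # infinite regress
```

The point of the sketch is only that the three outcomes are exhaustive: any finite walk along “because” links must either stop, loop, or keep going past any limit we set.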

Now, this is exactly the kind of argument the GW skeptic uses. Imagine the following dialogue between a global warming believer (GWB) and a global warming skeptic (GWS):

GWB: CO2-driven global warming is happening.

GWS: How do you know?

GWB: Because I read it in the IPCC report.

GWS: How do you know that is true?

GWB: Because it shows a consensus of the leading scientists in the field.

GWS: How do you know that consensus is real and not fabricated?

GWB: Because there are many scientific practices, journals and institutions behind it.

GWS: How do you know those institutions aren’t corrupt?

Etc…

As we can see, the dialogue can keep going forever. The skeptic can always ask for a new justification, and the believer will fall into one of the three outcomes predicted by Agrippa. The GW skeptic will go home with the idea that he has defeated the believer, and this will reinforce his skepticism.

So, what can the believer do? The traditional epistemological answers to this problem have been two: foundationalism (option 2 of the trilemma) and coherentism (option 3). I won’t try either of these solutions, since I believe that this is not an epistemological problem but, rather, a practical and argumentative one[3]. The right answer, then, is to use a presumption.

What is a presumption? That is indeed a good question. “Presumption” is a term borrowed from the legal field, so while its meaning is clear there, it is less clear outside of it. The Cambridge Dictionary defines it as “the act of believing that something is true without having any proof”. In the legal field it is absolutely necessary to believe certain things without proof. For instance: everyone is presumed innocent. That means that nobody needs to prove his or her innocence in any way; it is the accuser who has to prove guilt.

Outside the legal field, argumentation theory has also used the concept of presumption (see Walton 1996). The reason is to resolve a fatal flaw in assertions. An assertion is any statement I put forward whose truth I believe. If I say “the door is open”, “God exists” or “global warming is happening”, those are assertions as long as I believe them to be true.

The flaw of assertions is the following: whenever I make an assertion, I take on the burden of proving it. So if I say “global warming is happening”, I’m saying something like “I’m justified in believing that global warming is happening.” My interlocutor therefore has the right to ask: “how do you know that?”. And the only way I can answer is with a new assertion, which gives me, again, the burden of proof. So the interlocutor will ask again: “how do you know that?”, and so on.

The conclusion is simple: if it is true that any party making an assertion takes on the burden of proof, then the interlocutor can always ask: “how do you know that?”. We get Agrippa’s trilemma all over again.

But here a presumption comes to save the day. Presumptions shift the burden of proof. So, if a presumption is in place, the one who has to provide proof is not the one who makes an assertion, but the one who doubts it. The relevant question here is the following: is there a presumption in favour of someone asserting that global warming is real? I say there is, at least in most cases: there’s an authority presumption in place.

People often confuse fallacies with legitimate ways of reasoning. One such case is the use of arguments from authority. Arguments from authority are perfectly valid, as long as the authority cited is actually an authority in the field. If not, it is the fallacy called “ad verecundiam”.

Compare these cases:

  1. I believe in global warming because the IPCC says so.
  2. I believe in global warming because my mother says so.

While (1) is a perfectly valid argument from authority, (2) is a fallacy ad verecundiam, since my mother is no authority on climate science[4].

In conclusion, my claim is the following: when I make an assertion like (1) I’m not only using a valid argument, I’m also relying on a presumption: since the IPCC is an authority in climate science and I have no expertise to doubt its findings, we can presume that what it says is true.

That doesn’t mean that the conclusion is undoubtedly true, nor that (1) cannot be defeated. It only means that, as long as the interlocutor is not also an authority in the field, we should believe what the authority says. And since the burden of proof has shifted, it is not the one who makes assertion (1) who must prove it; it is the counterpart who must provide grounds for criticism.
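The mechanics of this shift can be sketched in the same toy style as before. Everything here is my own illustration (the function name, the claim strings, and the round limit are invented for the example): the believer walks down their chain of justifications while the skeptic keeps asking “how do you know?”; without a presumption the regress never settles, but a presumption covering one of the claims hands the burden to the skeptic at that point.

```python
def rounds_until_settled(chain, presumptions, limit=10):
    """Walk the believer's chain of justifications. Return the round at
    which a presumption shifts the burden onto the skeptic, or None if
    the exchange never settles (Agrippa's regress)."""
    for i, claim in enumerate(chain[:limit], start=1):
        if claim in presumptions:
            return i  # the skeptic must now give grounds for doubting
    return None       # every assertion just invited another "how do you know?"


chain = ["Global warming is happening",
         "The IPCC report says so",
         "There is a scientific consensus"]

print(rounds_until_settled(chain, presumptions=set()))                        # None
print(rounds_until_settled(chain, presumptions={"The IPCC report says so"}))  # 2
```

On the first call no presumption is in place, so the chain runs out and the regress wins; on the second, the authority presumption covering the IPCC claim ends the exchange at round two.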

With this in place, the dialogue between the skeptic and the believer would look like this:

GWB: CO2-driven global warming is happening.

GWS: How do you know?

GWB: Because I read it in the IPCC report.

GWS: How do you know that is true?

GWB: They are an authority in the field; what grounds do you have to doubt them? Are you a climate scientist?

 

REFERENCES

Klein, Peter (2008). “Contemporary Responses to Agrippa’s Trilemma”. In John Greco (ed.), The Oxford Handbook of Skepticism. Oxford: Oxford University Press.

Walton, D. (1996). Argumentation Schemes for Presumptive Reasoning (Studies in Argumentation Theory). London: Routledge.


[1] The SEP has a nice introduction to Greek scepticism: https://plato.stanford.edu/entries/skepticism-ancient/

[2] See Klein 2008 for contemporary answers to this problem.

[3] The bigger picture is the following intuition: “you can’t defeat a skeptic with theoretical arguments, only with practical ones”.

[4] (1) could also be fallacious if the one asserting it is herself a climate scientist. In that case she should read the original papers, not blindly trust a source.