Surprises. (The fancy title would have been: “On the phenomenology of writing”)

It’s all worked out in my head: the paper is there, the literature is reviewed, I’ve found a spot where my claim makes sense without merely repeating what has been said, my argument is fairly clear, the sequence of sections seems fine. But there is a problem: the darn thing still needs to be written! What’s keeping me? The deadline is approaching. Why can’t I just do it? Everyone else can do it! Just like that: sit down, open the file and write down what’s in your head.

I’ve had enough conversations, and seen enough of “shit academics say”, to know that I am not even alone in this. But it’s both irritating and surprising that it keeps recurring. So why is it happening? I know there are many possible explanations. There is the shaming one: “You’re just lazy; stop procrastinating!” Then there is the empathetic one: “You’re overworked; take a break!” And there is probably a proper scientific one, too, but I’m too lazy to even google it. Without wanting to counter any of these possible explanations, it might be rewarding to reflect on personal experience. So what’s going on?

If I am really in the state described in the first paragraph (or at least close to it), I have some sort of awareness of what I would be doing were I to write. This experience has two intriguing features:

  • One aspect is that it is enormously fast: the actual thinking or the internal verbalisations of what I would (want to) write pass by quickly. They pass much faster than the actual writing would take, or eventually will take. (There are intriguing studies on this phenomenon; see for instance Charles Fernyhough’s The Voices Within.)* This pace of the thoughts is often pleasing. I can rush through a great amount of material, often enlarged by association, in very little time. But at the same time, this experience might be part of the reason why I don’t get to write. If everything seems (yes, seems!) worked out like that, it feels as if it were done. But why should I do work that is already done? The idea of merely writing down what I have worked out is somewhat boring. Of course, this doesn’t mean that the actual writing is boring. It just means that I take it to be boring. This brings me to the second aspect:
  • The actual writing often comes with surprises. Yes, it’s often worked out in my head, but when I try to phrase the first sentence, I notice two other things. Firstly, I’m less sure about the actual formulations on the page than I was when merely verbalising them internally. Secondly, seeing what I write triggers new associations and perhaps even new insights. It’s as if the actual writing were based on a different logic (or dynamic), pushing me somewhere else. Then it feels like I cannot write what I want to write, at least not before I write down what the actual writing pushes me to write first: Yes, I want to say p, but I first have to explain q. Priorities seem to change. Something like that. These surprises are at once frustrating and motivating or pleasing. They are frustrating because I realise I hadn’t worked it out properly. I just thought I was done. Now the real work begins. On the other hand, they are motivating because I feel like I’m actually learning or understanding something I had not seen before.

I don’t know whether this experience resonates with you, but I guess I need such surprises. I can’t sit down and merely write what I thought out beforehand. And I don’t know whether that is the case because I want to avoid the boredom of merely ‘writing it up’ or whether ‘merely writing it up’ is actually not doable. Not doable because it wouldn’t convince me or surprise me or perhaps even because it is impossible or whatever. The moral is that you should start writing early. Because if you’re a bit like me, part of the real work only starts during the actual writing. Then you face the real problems that aren’t visible when all the neat sentences are just gliding through your head. – Well, I guess I should get going.

___

* And this post just came out today.

Too much overview, too much leadership

Imagine you’re part of a group of about twenty people that has the task of ordering a number of pictures into a sequence within fifteen minutes. Each participant gets a picture and may look at it but not show it to anyone else. Instead of showing it, you can describe your picture to others. While you’re told that the task is called “zoom”, you are not given any clue about the aspects by means of which the pictures connect. So it may be the colour, the figure or shape or some other motif that makes your picture connect to another one. Now what do you do?

This task was part of a course on philosophy and leadership that I had the pleasure of co-teaching a few weeks ago.* The students participating in this course were truly excellent. As is often the case with good students, I think I learn a lot more than I teach. This little task was particularly rewarding in that it taught me a lot about group dynamics in a setting that is crucial for philosophy: We might have goals but no clear idea what the goal states would look like. We want to get on, but we don’t really know where we stand or where we are going. Put in Wittgensteinian terms: “I don’t know my way about.”** What can we do in such situations? – What happened in the task set in the course can be told quickly: At the beginning, one student took the lead and advised all of us to form small groups, compare brief descriptions of our pictures and rearrange the groups such that those with fitting items would find each other. We quickly formed about three groups. Most of the remaining time was then devoted to figuring out how the loose ends of each group could connect. At the end, we did form one sequence. But it was clear that the sequence only made sense as a chain of overlaps or family resemblances. There was no overarching structure or idea of the goal state. Looking back on my participation in the task, two things stand out in particular: Firstly, I realised (yet again) that we never see the whole picture. Secondly, it became glaringly clear that attempts at creating an overview lead nowhere. What does that mean for interaction in groups?

  • Every perspective matters: You have a task, but you have no idea of the eventual goal state. I think this is a situation you encounter fairly often in life. But how do you deal with that in a group? Since you don’t know what kind of information is relevant for achieving the goal, everyone’s perspective matters equally. You must listen carefully, and to everyone. It’s fine to jump to conclusions and move on, but only if you’re ready to revise your conclusions in the light of new descriptions. The situation reminded me of being a mode in a Spinozist world: no one has a picture of the whole, and those who pretend otherwise will fail. So we constantly depend on each other’s perspectives in order to form a larger and more apt understanding of the whole. No one’s knowledge can be replaced or overruled by anyone else’s. Most interestingly, the group accepted no ideas or instructions that went against these insights. People who tried to collect an overview and instruct others from a seemingly general idea were simply ignored, no matter how much they raised their voices.
  • Overviews are hardly ever helpful: Of course, over time the group accumulates a joint understanding of the situation. Everyone knows more about how their pictures could relate to others. The groups become more stable in their sequence and thus patterns emerge, now knowable by others. At this point, some people tried to take the lead and collect a list of features on the blackboard. As noted, they were ignored relentlessly. But it also became clear that those overviews would not have been helpful: partly because they would not be as informative as the direct interactions between participants; partly because it was never clear whether any individual, or the group as a whole, already knew the features that mattered for the whole sequence. Accordingly, the very attempt to lead the group was not helpful. The only sensible kind of leadership consisted in facilitating the interaction of the group members.

That said, this task does of course have a dimension of clarity that is absent from most situations in our lives. There is a hidden leader, i.e. the person setting the task. And there is a beginning and a correct result at the end. In any case, given that most situations are unclear, it felt important to be reminded of these insights: every perspective matters, because we never know what will matter eventually; and the most important aspect of leadership, if any, is the facilitation of exchanges in the group and between sub-groups. Anything else is too much.

___

* I’d like to express my gratitude again to my co-instructor Maren Drewes of beispielsweise. She introduced the task in which I took part. Here is a course description.

** See Wittgenstein, Philosophical Investigations, § 123. The German original reads: “Ich kenne mich nicht aus.”

 

Do rejections of our claims presuppose that we are abnormal?

Discussions about meaning and truth are often taken as merely theoretical issues in semantics. But as soon as you consider them in relation to interactions between interlocutors, it’s clear that they are closely related to our psychology. In what follows, I’d like to suggest that people questioning our claims might in fact be questioning whether we are normal people. Sounds odd? Please hear me out. Let’s begin with a well-known issue in semantics:

Imagine you’re a linguist, studying a foreign language of a completely unknown people. You’re with one of the speakers of that language when a white rabbit runs past. The speaker says “gavagai”. Now what does “gavagai” mean?

According to Quine, who introduced the gavagai example, the expression could mean anything. It might mean: “Look, there’s a rabbit” or “Lovely lunch” or “That’s very white” or “Rabbithood instantiated”. The problem is that you cannot determine what “gavagai” means. Our ontology is relative to the target language we’re translating into, and we cannot be sure that the source language carves up the world in the same way ours does. Now it is crucial to see that this is not just an issue of translation. The problem of indeterminacy starts at home: meaning is indeterminate. And this means that the problems of translation also figure in the interaction between speakers and hearers of the same language.

Now Davidson famously turns the issue upside down: we don’t begin with meaning but with truth. We don’t start out by asking what “gavagai” means. If we assume that the speaker is sincere, we’ll just translate the sentence in such a way that it matches what we take to be the truth. So we start by thinking: “gavagai” means something like “Look, there’s a rabbit”, because that’s the belief we form in the presence of the rabbit. We thus start out by ascribing the same belief to the speaker of the foreign language and translate accordingly. That we start out this way is not optional. We’d never get anywhere if we were to start out by wondering what “gavagai” might or might not mean. Rather, we cannot but start out from what we take to be true.

Although Davidson makes an intriguing point, I don’t think he makes a compelling case against relativism. When he claims that we translate the utterances of others into what we take to be true, I think he is stating a psychological fact. If we take someone else to be a fellow human being and think that she or he is sincere, then translating her or his utterances in a way that makes them come out true is what we count as normal behaviour. Conversely, to start from the assumption that our interlocutor is wrong and to translate the other’s utterances as something alien or blatantly false would amount to abnormal behaviour on our part (unless we have reason to think that our interlocutor is seriously impaired). The point I want to make is that sincerity and confirmation of what we take to be true will correlate with normality.

If this last point is correct, it has a rather problematic consequence: If you tell me that I’m wrong after I have sincerely spoken what I take to be the truth, this will render either me or you abnormal. Unless we think that something is wrong with ourselves, we will be inclined to think that people who listen to us but reject our claims are abnormal. This is obvious when you imagine someone stating that there is no rabbit while you clearly take yourself to be seeing a rabbit. When the “evidence” for a claim is more abstract, in philosophical debates for instance, we are of course more charitable, at least so long as we can’t be sure that we both have considered the same evidence. Alternatively, we might think the disagreement is only verbal. But what if we think that we both have considered the relevant evidence and still disagree? Would a rejection not amount to a rejection of the normality of our interlocutor?

What’s behind the veil of perception? A response to Han Thomas Adriaenssen

Imagine you’re wearing glasses: would you think that your grasp of reality is somehow indirect? I guess not. We assume that glasses aid our vision rather than distort or hinder it. The fact that our vision is mediated by glasses does not make it less direct than the fact that it is mediated through our eyes. Now imagine your perception is mediated by what early modern philosophers call “ideas”. Does it follow that our grasp of reality is indirect? Many philosophers think that it does. By contrast, I would like to suggest that this is misleading. Ideas make our perceptions no less direct than glasses do.

Both early modern and contemporary critics often take the “way of ideas” as a route to scepticism. The assumption seems to be that the mediation of perception through ideas makes our thoughts about the ideas rather than about reality itself. Han Thomas Adriaenssen’s recent book is original in that it tells the story of this and related debates from Aquinas to the early modern period. In celebration of receiving the JHP book prize, Han Thomas gave a brief interview that aptly summarises the common line of criticism against ideas or the assumption of indirect perception related to them:

“Okay. So you explore the philosophical problem of ‘perception and representation’ from Aquinas to Descartes; what exactly is the problem?

HTA: ‘Right. So it goes like this: what is going on in your mind when you see something in your environment? Take this chair, for instance. When you see this chair, what’s happening in your mind? One answer is that you form a kind of pictorial representation of the chair. You start drawing a mental image for yourself of the thing in front of you, and you label it: ‘chair’. … But then there is a worry: if this is how it works – if this is how we access the environment cognitively – then that means there is a sort of interface between us and reality. A veil of perceptions, if you will. So what we’re thinking about is not the chair itself, but our picture of the chair.– But that can’t be right!”

Besides summarising the historical criticisms, Han Thomas seems to go along with their doubts. He suggests that metaphors trick us into such problematic beliefs: the “mental image metaphor” comes “naturally”, but brings about “major problems”.

While I have nothing but admiration for the historical analysis presented, I would like to respond to this criticism on behalf of those assuming ideas or other kinds of representational media. Let’s look at the chided metaphor again. Yes, the talk of the mental image suggests that what is depicted is remote and behind the image. But what about the act of drawing the image? Something, presumably our sense organs, is exposed to the things and does some ‘drawing’. So the drawing is not done behind a veil. Rather, the act of drawing serves as a transformation of what is drawn into something that is accessible to other parts of our mind.* Thus, we should imagine a series of transformations until our minds end up with ideas. But if you think of it in those terms, the transformation is not akin to putting something behind a veil. Rather, it is a way of making sensory input available. The same goes for glasses or indeed our eyes. They do not put something behind a veil but make it available in an accessible form. My point is that the metaphor needs to be unpacked more thoroughly. We don’t only have the image; we have the drawing, too.

Following Ruth Millikan’s account of perception,** I would like to argue that the whole opposition of direct vs indirect perception is unhelpful. It has steered both early modern and 20th-century debates in epistemology in fruitless directions. Sense perception is direct (as long as it does not involve inferences through which we explicitly reason that the presence of an idea means the presence of a represented thing). At the same time, sense perception is indirect in that it requires means of transformation that make things available to different kinds of receptors. Thus, the kind of indirectness involved in certain cognitive vehicles does not lead to scepticism any more than the fact that we use eyes to see does.

What early modern philosophers call ideas are just cognitive vehicles, resulting from transformations that make things available to us. If an analogy is called for, I’d suggest relating them not to veils, but to glasses. If we unpack the metaphor more thoroughly, what is behind the veil is not only the world, but our very own sense organs, making the world available by processing it through media accessible to our mind. If that evokes sceptical doubts, such doubts might equally be raised whenever you put your glasses on or indeed open your eyes to see.

___

* As Han Thomas himself notes (in the book, not the interview), many medieval authors do not think that representationalism leads to scepticism, and endorse an “epistemic optimism”. I guess these authors could be reconstructed as agreeing with my reply. After all, some stress that species (which could be seen as functionally equivalent to ideas) ought to be seen as a medium quo rather than that which is ultimately cognised.

** Ruth Millikan even claims that language is a means of direct perception: “The picture I want to leave you with, then, is that coming to believe, say, that Johnny has come in by seeing that he has come in, by hearing by his voice that he has come in, and by hearing someone say “Johnny has come in,” are normally equivalent in directness of psychological processing. There is no reason to suppose that any of these ways of gaining the information that Johnny has come in requires that one perform inferences. On the other hand, in all these cases it is likely that at least some prior dedicated representations must be formed. Translations from more primitive representations and combinations of these will be involved. If one insists on treating all translation as a form of inference, then all these require inference equally. In either event, there is no significant difference in directness among them. ”

On taking risks. With an afterthought on peer review

Jumping over a puddle is fun both to try and to watch. It’s a small risk to take, but some puddles are too large to cross… There are greater risks, but whatever the stakes, they create excitement. And in the face of possible failure, success feels quite different. If you play a difficult run on the piano, the listeners, too, will feel relief when you manage to land on the right note in time. The same goes for academic research and writing. If you start out with a provocative hypothesis, people will get excited about the way you mount the evidence. Although at least some grant agencies ask for risk taking in proposals, risk taking is hardly ever addressed in philosophy or writing guides. Perhaps people think it’s not a serious issue, but I believe it might be one of the crucial elements.

In philosophy, every move worth our time probably involves a risk. Arguing that mistakes or successes depend on their later contextualisation, I have already looked at the “fine line between mistake and innovation.” But how do we get onto that fine line? This, I think, involves taking a risk. Taking a risk in philosophy means saying or doing something that will likely be met with objections. That’s probably why criticising interlocutors is so widespread. But there are many ways of taking risks. Sitting in a seminar, it might already feel risky just to raise your voice and ask a question. You feel you might make a fool of yourself and lose the respect of your fellow students or instructor. But if you make the effort, you might also be met with admiration for following through on a seemingly trivial point. I guess it’s that oscillation between the possibility of failure and success that also moves the listeners or readers. It’s important to note that risk taking has a decidedly emotional dimension. Jumping across the puddle might land you in the puddle. But even if you don’t make it all the way, you’ll have moved more than yourself.

In designing papers or research projects, risk taking is rewarded most of the time, at least with initial attention. You can make an outrageous-sounding claim like “thinking is being” or “panpsychism is true”. You can present a non-canonical interpretation of a historical figure, arguing for instance that “Hume was a racist” or “Descartes was an Aristotelian”. You can edit or write on the work of a non-canonical figure or provide an uncommon translation of a technical term. This list is not exhaustive, and depending on the conventions of your audience all sorts of moves might be risky. Of course, then there is work to be done. You’ve got to make your case. But if you’re set to make a leap, people will often listen more diligently than when you merely promise to summarise the state of the art. In other words, taking a risk will be seen as original. That said, the leap has to be well prepared. It has to work from elements that are familiar to your audience. Otherwise the risk cannot be appreciated for what it is. On the other hand, mounting the evidence must be presented as feasible. Otherwise you’ll come across as merely ambitious.

Whatever you do, in taking a risk you’ll certainly antagonise some people. Some will be cheering and applauding your courage and originality. Others will shake their heads and call you weird or other endearing things. What to do? It might feel difficult to live with opposition. But if you have two opposed groups, one positive, one negative, you can be sure you’re onto something. Go for it! It’s important to trust your instincts and intuitions. You might make it across the puddle, even if half of your peers don’t believe it. If you fail, you’ve just attempted what everyone else should attempt, too. Unless it’s part of the job to stick to reinventing the wheel.

Now the fact that risks will be met with much opposition but might indicate innovation should give us pause when it comes to peer review. In view of the enormous competition, journals seem to expect authors to comply with the demands of two reviewers. (Reviewer #2 is a haunting meme by now.) A paper that gets one wholly negative review will often be rejected. But if it’s true that risks, while indicative of originality, will incur strong opposition, should we not think that a paper is particularly promising when met with two opposing reviews? Compliance with every possible reviewer seems to encourage risk aversion. Conversely, looking out for opposing reviews would probably change a number of things in our current practice. I guess managing such a process wouldn’t be any easier. So it’s not surprising if things don’t change anytime soon. But such change, if considered desirable, is probably best incentivised bottom-up. And that would mean beginning with teaching.

The fact, then, that a claim or move provokes opposition or even refutation should not be seen as a negative trait. Rather, it indicates that something is at stake. It is important, I believe, to convey this message, especially to beginners, who should learn to enjoy taking risks and watching others take them.

The idea of “leaving” social media might be a category mistake. A response to Justin E. H. Smith

I guess you all know at least variants of this situation: You go to a talk; you really enjoy it and look forward to the discussion. But then there is that facepuller again, lining up to be the first to ask a question. And not only is he (yes, it’s invariably a “he”) going on forever, dulling the mood and offending the speaker, he also makes his question sound so decisively threatening that everyone after him will come back to his destructive point. That’s that. There were great ideas in the talk, but this bully managed to make it all about himself again. This is just annoying, but imagine this guy is around you for a whole conference or you even work with him. Or he is your teacher or supervisor. Whatever the situation, that guy is a pain, and he manages to make everything about himself, spoiling most of the fun for everyone else involved. But sure enough, the greatest disappointment is yet to come: when it’s promotion time, that guy won’t be sanctioned. No, he gets the top job. – Social media are a bit like that. We all could have a nice time, but then that guy joined and everything turned nasty. And then it turns out that Facebook (or some such company) are not banning him; they even pay this bully. – In what follows I want to suggest that it doesn’t make sense to leave social media, especially if you are interested in ameliorating the situation or countering such behaviour.

Let me begin with a plea: Don’t leave me alone with that guy! But sadly the number of people who are leaving social media, especially Facebook, seems to be increasing. I think they all have good reasons, just like they all have good reasons to find the world a nasty place: social media are full of bullshitting bullies, the people who run Facebook or Twitter are no saints either, while users are turned into hapless addicts withering away in their echo chambers. In other words, ordinary people are involved. And that guy is around all the time. My point is: the reasons for leaving social media apply to the world outside social media as well. But this is not because we would be dealing with two realms (inside and outside), but because the technological patterns that pervade social media pervade our lives anyway.

So what are social media? I guess a cynically inclined person might say that they are a by-product of accumulating data. In an intriguing blog post, Justin Smith spells out the dystopian idea of the “tech companies’ transformation of individuals into data sets”. One conclusion he draws strikes me as particularly important: that this transformation destroys “human subjecthood”. The point he makes is that you’re not treated as an individual, but as an item fitting certain patterns of predictions along the lines of: “customers who liked philosophical blog posts were also interested in Martin …”. The result is that, as a social media participant, you might be incentivised to present yourself in line with certain predictions. So should you sell yourself as the wholesome lefty package or is it better to add on some grumpy edges?

As I was thinking back and forth about Justin Smith’s post, something hit me like a hammer: If his observations about the transformation into data sets are apt, then the very idea of “leaving” social media might rest on a category mistake. In Gilbert Ryle’s illustration, a guest at Oxford University walks around all the colleges and the library, and then asks: “But where is the university?” The visitor mistakenly assumes that the university is one of the university buildings. People wanting to leave social media might be doing something similar. They stop using Facebook or Twitter and perhaps also switch on their computers or smartphones less frequently. Thus, they assume that social media are one of the various media or technological items on offer. Leaving Facebook, then, would mean giving up that particular medium and choosing a different one: you might return to writing postcards or letters, or just go for a walk and talk to the people who come your way. But this is not possible. The technology at work in Facebook is not something you can choose to abandon; that technology organises a great part of our lives. It’s spread across every transaction that involves data accumulation. The technology at work is not like one of the colleges; it’s the university!

The technology of data accumulation is pervasive: it is not only ingrained in the way we shop with Amazon, but also the result of a long-standing practice of economising our interactions. Of course you can stop using Facebook or the internet altogether. But if I understand the people who want to leave correctly, they have moral or more personal reasons for doing so. This means they don’t just dislike Facebook; they reject the way of interaction incentivised by the pertinent technology. On the whole, they want to avoid the ills going along with it. But now compare cars: Of course you can stop using a car, but you cannot “leave” or “stop using” the car industry.

You all know that there is an obvious objection. Someone’s got to take the first step. And if we all stop using Facebook, then … Then what? I tell you what: then we’ll all start using Bumbook or something else instead. (But Bumbook will be driven by the same technological patterns, not by our good intentions.) Or we just give up on the benefits of using such media altogether. I’m not convinced by either option. – The technology of data accumulation is systemic; like public transport or education, it runs through and affects society as a whole. If it is not working properly or is subject to constant (political) abuse, it requires a collective effort to ameliorate the situation. To echo an idea from Leopold Hess: social media are too important to be privatised.

So if we want to minimise the influence of that guy, we shouldn’t tolerate his behaviour. Leaving the room and thus leaving the floor to the bullies won’t help. All the more because you cannot leave social media in the same way that you can leave a room. Of course, sometimes leaving the room is all you want, and it might cure your headache. But it won’t do much else. Countering the ills of social media is a collective political task. Not leaving but getting involved even more might help. What are the most important skills in this? Listening and reading carefully.