You don’t get what you deserve. Part II: diversity versus meritocracy?

“I’m all for diversity. That said, I don’t want to lower the bar.” – If you have been part of a hiring committee, you will probably have heard some version of that phrase. The first sentence expresses a commitment to diversity. The second sentence qualifies it: diversity shouldn’t get in the way of merit. Interestingly, the same phrase can be heard in opposing ways. A staunch defender of meritocracy will find the second sentence (about not lowering the bar) disingenuous. He will argue that, if you’re committed to diversity, you might be disinclined to hire the “best candidate”. By contrast, a defender of diversity will find the first sentence disingenuous. If you’re going in for meritocratic principles, you will just follow your biases and ultimately take the properties of “white” and “male” as a proxy for merit. – This kind of discussion often runs into a stalemate. As I see it, the problem lies in treating diversity and meritocracy as opposites. I will suggest that this kind of discussion can be more fruitful if we recognise that diversity is not a property of job candidates but of teams, and thus not something to be set against meritocratic principles.

Let’s begin with a clarification. I assume that it’s false and harmful to believe that we live in a meritocracy. But that doesn’t mean that meritocratic ideas themselves are bad. If meritocracy is simply taken as the idea that one gets a job based on one’s pertinent qualifications, then I am all for meritocratic principles. However, a great problem in applying such principles is that, arguably, the structure of hiring processes makes it difficult to discern qualifications. Why? Because qualifications are often taken to be indicated by other factors, such as prestige. But prestige, in turn, might be said to correlate with race, gender or class rather than with qualifications. At the end of the day, an adherent of diversity can accuse adherents of meritocracy of the same vices that she finds herself accused of. So when merit and diversity are taken as being in opposition, we tend to end up in the following tangle:

  • Adherents of diversity think that meritocracy is ultimately non-meritocratic, racist, sexist, classist etc.
  • Adherents of meritocracy think that diversity is non-meritocratic, racist, sexist, classist etc.*

What can we do in such a stalemate? How can the discussion be decided? Something that typically gets pointed out is homogeneity. The adherent of diversity will point to the homogeneity of people. Most departments in my profession, for instance, are populated with white men. This homogeneity points to a lack of diversity. Whether it corresponds to a homogeneity of merit is certainly questionable. Therefore, the next step in the discussion is typically an epistemological one: How can we know whether the candidates are qualified? More importantly, can we discern quality independently of features such as race, gender or class? – In this situation, adherents of diversity typically refer to studies that reveal implicit biases. Identical CVs, for instance, have been shown to be treated more or less favourably depending on the name on the CV. Adherents of meritocracy, by contrast, will typically insist that they can discern quality objectively or correct for biases. Again, both sides seem to have a point. We might be subject to biases, but if we leave decisions not to individuals but to, say, committees, then we can perhaps correct for them. At least if these committees are sufficiently diverse, one might add. – However, I think the stalemate will simply be passed on to ever new levels as long as we treat merit and diversity as an opposition. So how can we move forward?

We try to correct for biases, for instance, by making a committee diverse. While this is a helpful step, it also reveals a crucial feature of diversity that is typically ignored in such discussions. Diversity is a feature of a team or group, not of an individual. The merit or qualification of a candidate is something pertaining to that candidate. If we look for a Latinist, for instance, knowledge of Latin will be a meritorious qualification. Diversity, by contrast, is not a feature to be found in the candidate. Rather, it is a feature of the group that the candidate will be part of. Adding a woman to an all-male team will make the team more diverse, but that is not a feature of the candidate. Therefore, accusing adherents of diversity of sexism or racism is fallacious. Trying to build a more diverse team, rather than favouring one category of candidate, strikes me as a means to counter such phenomena.

Now if we accept that there is such a thing as qualification (or merit), it makes sense to say that in choosing a candidate for a job we will take qualifications into account as a necessary condition. But one rarely hires merely a candidate; one builds a team, and thus further considerations apply. One might end up with a number of highly qualified candidates. Then one has to consider other questions, and it seems apt to consider the composition of the team one is trying to build. But that does not mean that merit and diversity are opposed to one another.

Nevertheless, prioritising considerations about the team over considerations about the candidates is often met with suspicion. “She only got the job because …” Such an allegation is indeed sexist, because it construes a diversity consideration applicable to a team as the reason for hiring, as if it were a qualification of the individual. But no matter how suspicious one is, qualification and diversity are not on a par, nor can they be opposing features.

Compare: A singer might complain that the choir hired a soprano rather than him, a tenor. But the choir wasn’t merely looking for a singer but for a soprano. Now that doesn’t make the soprano a better singer than the tenor, nor does it make the tenor better than the soprano. Hiring a soprano is relevant to the quality of the group; it doesn’t reflect the quality of the individual.

____

* However, in making such a claim, an adherent of meritocracy will probably rely on the assumption that there is such a thing as “reverse racism or sexism”. In the light of our historical situation, this strikes me as very difficult to argue, at least with regard to institutional structures. It seems like saying that certain doctrines and practices of the Catholic Church are not sexist simply because there are movements aiming at reform.

Fit. A Note on Aristotle’s Presence in Academia

Since the so-called Scientific Revolution and the birth of modern science, the Western approach to the world has become quantitative. The previously dominant qualitative Aristotelian worldview of the Scholastics was replaced by a quantitative one: everything around us was supposed to be quantifiable and quantified. This, of course, seems to affect today’s academia, too. We often hear “do this, it will be one more line in your CV!”

Many will reply: “This is not true, quality matters just as much!” Yes, it (sometimes) matters in which journal one publishes; it has to be a good journal; one needs to make sure that the quality of the article is good. And how do we know whether the journal is good? Because of its ranking. So if you thought I would argue that this is Aristotle’s presence in academia… you were wrong. The criterion is still quantitative. Of course, we trust more readily that an article in a respectable (i.e., highly ranked) journal is a good one, but we all know this is not always the case.

Bringing the qualitative/quantitative distinction into the discussion is crucial for assessing job applications and the ensuing hiring process. While it used to be easier for those in a position of power to hire whomever they wanted, it has become a bit more difficult. Imagine you really want to hire someone because he (I will use this pronoun for certain reasons) is brilliant. But his brilliance is not reflected in his publications, presentations, teaching evaluations, or grants (the latter because he did not get any)… You cannot even say he is a promising scholar, since that should be visible in something. At the same time, there are a lot of competing applicants with impressive records. So what can one do? Make use of the category ‘better fit’: ‘better fit’ for the position, ‘better fit’ for the department.[1] But when is someone a ‘better fit’, given that the job description did not mention anything to this effect? When their research is in line with the department’s? No, too much overlap! When it complements the existing areas of research? No, way too different!

And here is where Aristotle comes into the picture. It is not the research that has to fit, but the person. And we know from Aristotle and his followers that gender, race and nationality are the result of the (four elemental) qualities. Who can be more fit for a department mostly composed of men from Western Europe than another man from Western Europe? As a woman coming from Eastern Europe, I have no chance. And Eastern Europe is not even the worst place to come from in this respect. 

There is a caveat though. When several people who fit the department apply, the committee seeks refuge in positing some ‘occult qualities’ to choose the ‘right’ person. ‘Occult’ in the Scholastic sense means that the quality is not manifest in any way in the person’s profile.[2]

How different is this from the days when positions were simply given away on the basis of personal preference? The difference lies in the charade.[3] The difference is that nowadays a bunch of other people, devoid of occult qualities though with an impressive array of qualities manifest in their CVs and international recognition, spend time and energy preparing an application, get frustrated, maybe even get sick, just so that the person with the ‘better fit’ can have the impression that he is much better than all the rest who applied.

So when are we going to give up the Aristotelian-Scholastic elemental and occult qualities and opt for a different set of more inclusive qualities?


[1] Aristotle probably put it in his Categories, but it got lost.

[2] I am rather unfair with this term, because occult qualities did make themselves manifest through certain effects.

[3] The Oxford dictionary indeed defines charade as “an absurd pretence intended to create a pleasant or respectable appearance.”

On being a first-generation student

First off: the following is not to be taken as a tale of woe. I am grateful for whatever life has had on offer for me so far, and I am indebted to my teachers – from primary school to university and beyond – in many ways. But I felt that, given that Martin invited me to do so, I should probably provide some context to my comment on his recent post on meritocracy, in which I claimed that my being a first-generation student has had a “profound influence on how I conceive of academia”. So here goes.

I am a first-generation student from a lower-middle-class family. My grandparents on the maternal side owned and operated a small farm, my grandfather on the paternal side worked in a foundry, and his wife – my father’s mother – did off-the-books work as a cleaning woman in order to make ends meet.

When I got my first job as a lecturer in philosophy, my monthly income already exceeded that of my mother, who has worked a full-time job in a hospital for more than thirty years. My father, a bricklayer by training, is by now divorced from my mother and declutters homes for a living. Sometimes he calls me to tell me about a particularly good bargain he struck at the flea market.

My parents did not save money for my education. As an undergraduate I was lucky to receive close to the maximum amount of financial assistance afforded by the German Federal Law on Support in Education (BAföG) – still, I had to work in order to be able to fully support myself (tuition fees, which had just been introduced when I began my studies, did not help). At the worst time, I juggled three jobs on the side. I have work experience as a call center agent (bad), cleaning woman (not as bad), fitness club receptionist (strange), private tutor (okay), and teaching assistant (by far the nicest experience).

Not every middle-class family is the same, of course. Nor is every family in which both parents are non-academics. Here is one way in which the latter differ: There are those parents who encourage – or, sometimes, push – their children to do better than themselves, who emphasize the value of higher education, who make sure their children acquire certain skills that are tied to a particular habitus (like playing the piano), who provide age-appropriate books and art experiences. My parents were not like that. “Doing well”, especially for my father, meant having a secure and “down-to-earth” job, ideally for a lifetime. For a boy, this would have been a craft. Girls, ostensibly being less well-suited for handiwork, should strive for a desk job – or aim “to be provided for”. My father had strong reservations about my going to grammar school, even though I did well in primary school and despite my teacher’s unambiguous recommendation. I think it never occurred to him that I could want to attend university – academia was a world too far removed from his own to even consider that possibility.

I think that my upbringing has shaped – and shapes – my experience of academia in many ways. Some of these I consider good, others I have considered stifling at times. And some might even be loosely related to Martin’s blogpost about meritocracy. Let me mention a few points (much of what follows is not news, and has been put more eloquently by others):

  • Estrangement. An awareness of the ways in which the experiences of my childhood and youth, my interests and preferences, my habits and skills differ from what I consider a prototypical academic philosopher – and I concede that my picture of said prototype might be somewhat exaggerated – has often made me feel “not quite at home” in academia. At the same time, my “professional advancement” has been accompanied by a growing estrangement from my family. This is something that, to my knowledge, many first-generation students testify to, and which can be painful at times. My day-to-day life does not have much in common with my parents’ life, my struggles (Will this or that paper ever get published?) must seem alien, if not ridiculous, to them. They have no clear idea of what it is that I do, other than that it consists of a lot of desk-sitting, reading, and typing. And I think it is hard for them to understand why anyone would even want to do something like this. One thing I am pretty sure of is that academia is, indeed, or in one sense at least, a comparatively cozy bubble. And while I deem it admirable to think of ways of how to engage more with the public, I am often unsure about how much of what we actually do can be made intelligible to “the folk”, or justified in the face of crushing real-world problems.
  • Empathy. One reason why I am grateful for my experiences is that they help me empathize with my students, especially those who seem to be afflicted by some kind of hardship – or so I think. I believe that I am a reasonably good and well-liked teacher, and I think that part of what makes my teaching good is precisely this: empathy. Also, I believe that my experiences are responsible for a heightened sensibility to mechanisms of inclusion and exclusion, and privilege. I know that – being white, having grown up in a relatively secure small town, being blessed with resilience and a certain kind of stubbornness, and so on – I am still very well-off. And I do not want to pretend that I know what it is like to come from real poverty, or how it feels to be a victim of racism or constant harassment. But I hope that I am reasonably open to others’ stories about these kinds of things.
  • Authority. In my family of origin, the prevailing attitude towards intellectuals was a strange mixture of contempt and reverence. Both sentiments were probably due to a sense of disparity: intellectuals seemed to belong to a kind of people quite different from ourselves. This attitude has, I believe, shaped how I perceived my teachers when I was a philosophy student. I noticed that our lecturers invited us – me – to engage with them “on equal terms”, but I could not bring myself to do so. I had a clear sense of hierarchy; to me, my teachers were authorities. I did eventually manage to speak up in class, but I often felt at a loss for words outside of the classroom setting with its relatively fixed and easily discernible rules. I also struggled with finding my voice in class papers, with taking up and defending a certain position. I realize that this struggle is probably not unique to first-generation students, or to students from lower-class or lower-middle-class backgrounds, or to students whose parents are immigrants, et cetera – but I believe that the struggle is often aggravated by backgrounds like these. As teachers, I think, we should pay close attention to the different needs our students might have regarding how we engage with them. It should go without saying, but if someone seems shy or reserved, don’t try to push them into a friendly and casual conversation about the model of femininity and its relation to sexuality in the novel you recently read.
  • Merit. Now, how does all this relate to the idea of meritocracy? I think there is a lot to say about meritocracy, much more than can possibly be covered in a (somewhat rambling) blogpost. But let me try to point out at least one aspect. Martin loosely characterizes the belief in meritocracy as the belief that “if you’re good or trying hard enough, you’ll get where you want”. But what does “being good enough” or “trying hard enough” amount to in the first place? Are two students who write equally good term papers working equally hard? What if one of them has two children to care for while the other one still lives with and is supported by her parents? What if one struggles with depression while the other does not? What if one comes equipped with “cultural capital” and a sense of entitlement, while the other feels undeserving and stupid? I am not sure about how to answer these questions. But one thing that has always bothered me is talk of students being “smart” or “not so smart”. Much has been written about this already. And yet, some people still talk that way. Many of the students I teach struggle with writing scientific prose, many of them struggle with understanding the assigned readings, many of them struggle with the task of “making up their own minds” or “finding their voice”. And while I agree that those who do not struggle, or who do not struggle as much, should, of course, be encouraged and supported – I sometimes think that the struggling students might be the ones who benefit the most from our teaching philosophy, and for whom our dedication and encouragement might really make a much-needed difference. It certainly did so for me.

You don’t get what you deserve. Part I: Meritocracy in education vs employment relations

The belief that we live in a meritocracy is the idea that people get what they deserve. At school you don’t get a good grade because of your skin colour or because you have a nice smile, but because you demonstrate the required skills. When I was young, the idea helped me to gain a lot of confidence. Being what is now called a first-generation student, I thought I owed my opportunity to study to a meritocratic society. I had this wonderfully confident belief that, if you’re good or trying hard enough, you’ll get where you want. Today, I think that there is so much wrong with this idea that I don’t really know where to start. Meritocratic beliefs are mostly false and harmful. In the light of our sociological knowledge, still believing that people get what they deserve strikes me as on a par with being a flat earther or a climate change denialist. At the same time, beliefs in meritocratic principles are enormously widespread and deep-rooted, even among those who should and do know better. In what follows, I attempt nothing more than a first look at this pernicious idea and why it has so much currency.

Perhaps one of the greatest problems of meritocratic ideas is that they create a normative link between possibly unrelated things: There is no intrinsic relation between displaying certain qualities, on the one hand, and getting a job, on the other. Of course, they might be related; in fact, displaying certain qualities might be one of the necessary conditions for getting the job. But the justification structure suggested by meritocratic beliefs clearly obscures countless other factors, such as being in the right place at the right time. Here are two variants of how this plays out:

  • “I’m not good enough.” – This is a common conclusion drawn by most people – that is, by those who don’t get the job or grant or promotion they have applied for. If there is one job and a hundred applicants, you can guess that a large number of people will think they were not good enough. Of course, that’s nonsense for many reasons. But if the belief is that people get what they deserve, then those not getting anything might conclude that they are undeserving. A recent piece by a lecturer leaving academia, for instance, contends that part of the problem is that one always has to show that one is “better than the rest”, insinuating that people showing just that might get the job in the end. But apart from the fact that the imbalance between available jobs and applicants pushes such demands to absurd heights, the question arises whether any employer could be sufficiently good to recognise the enormously refined qualities of the applicants.
  • “My qualities are not recognised.” –  The more confident applicants among us might thus draw quite another conclusion, namely that they are good enough, but that their qualities are simply not seen. The counterfactual behind this reasoning seems to be the following: Had my prospective employer seen how good I am, she would have hired me. As I see it, both kinds of reasoning are fallacious in that they construe the relation between performance and getting the job / grant etc. too tightly. Of course, most people know that. But this knowledge does not prevent one from going along with the fallacious reasoning. Why is that? Well, my hunch is that meritocratic beliefs are deeply ingrained in our educational system and spill over to other contexts, such as employment relations. Let me explain.

Most education systems hold a simple promise: If you work hard enough, you’ll get a good grade. While this is a problematic belief in itself, it is a feasible idea in principle. The real problem begins with the transition from education to employment relations in academia. If you have a well-performing course, you can give all of your thirty students a high grade. But you can’t give thirty applicants for the same position the job you’ve advertised, even if all the applicants are equally brilliant. Now the problem in higher education is that the transition from educational rewards to employment rewards is often rather subtle. Accordingly, someone not getting a job might draw the same conclusion as someone not getting a good grade.

It is here that we are prone to fallacious reasoning, and it is here that academic employers in particular need to behave more responsibly: Telling people that “the best candidate” will get the job might too easily come across as telling your first-year students that the best people will get a top grade. But the job market is a zero-sum game, while studying is not. (It might be that there is more than just one best candidate, or it might be impossible for the employer to determine who the best candidate is.) So a competition among students is of a completely different kind than a competition between job candidates. But this fact is often obscured. An obvious indicator of this is that for PhD candidates it is often unclear whether they are employees or students. Yet, it strikes me as a category mistake to speak about (not) “deserving” a job in the same way as about deserving a certain grade or diploma. So while, at least in an ideal world, a bad grade is a reflection of the work you’ve done, not getting a job is not a reflection of the work you’ve done. There is no intrinsic relation between the latter two things. Now that doesn’t mean that (the prospect of doing) good work is not a condition for getting a job; it just means that there is no relation of being deserving or undeserving.

Or to put the same point somewhat differently, while not every performance deserves a good grade, everyone deserves a job.

Notes on the ethics of contagion. A reply to Martin Lenz

In his previous post about the ethics of contagion, Martin Lenz treats the issue of responsibility in the current pandemic. Given how hyperconnected the world we live in is, anyone might infect an indefinite number of other people and thus turn into a superspreader. Now more than ever we are seeing that individual actions truly make a difference, and so we all need to act as if we were potentially harmful to everyone else in the world.

This situation demands that we take collective responsibility. Accordingly, we must comply with the rules and advise other people to do the same. Not only that, but we must help one another to take necessary precautions. In other words, we must create supportive environments, namely ones in which we “mutually enable each other in taking necessary precautions” and “in which we can comply without harming ourselves”.

Of course, to comply with any preventive norm or social rule, we need what we have called a ‘supportive environment’. But cooperation among individuals is possible in a social group only when rationality is present.1 While full collective compliance would be highly desirable, it carries the risk of conformity, which might have negative outcomes for individual agency. In fact, if a social group drifts away from rational patterns, it is likely that forms of herd behaviour emerge among its members. For instance, when someone does not take sufficient precautions, people blame him or her for deviating from the current norms of his or her country. Collective blaming, shaming and other moral judgements are forms of herd behaviour too, and may have serious consequences for individuals and social life. They are already a sign that a social group is drifting away from rational patterns of behaviour.

One way to avoid cognitive biases or falling into other traps of conformity is to doubt and hesitate. In this time, doubting our immediate beliefs and hesitating before judging others are perhaps the first steps each of us can take to create a supportive environment. Thus, asking ourselves ‘Was that person able to comply with the rules?’ before calling out noncompliance might prevent us from undertaking a course of action whose effects are largely unpredictable and possibly very unpleasant for third parties. (Even if some effects lie beyond our intentions, we are at least partly responsible for them insofar as they depend directly on our actions.) In this way, we can still keep a reasonable attitude, which is also healthy for social life in general.

I am sympathetic to this view and I do agree with it. However, is it enough to account for an ethics of contagion? I think Lenz’s position is lacking something in its characterization of moral responsibility, for it focuses only on what individual people ought to do. In my opinion, an ethical perspective should pay attention not only to individual agency, but also to the factors that, although independent of the will, are nevertheless decisive for individual decision-making. The aim is to see whether people are always fully responsible for whatever they do, and, if not, whether we can attribute part of the responsibility to the social setting they belong to. To this end, I will borrow some notions from social ontology and use them as a key tool for widening the concept of responsibility.

As Lenz himself rightly puts it, “it is vital that universities and indeed other institutions follow policies that enable individuals to act in compliance with preventive measures”. Why is it vital? Because social environments are not always constituted only by relationships among individuals, like Facebook groups or other meet-up phenomena, which emerge merely from random interactions. Rather, social environments may be more complex. For the sake of simplicity, I shall call those environments complex social environments (CSE). Examples of CSE are corporations, social, religious or political institutions, and the Modern State. Thus, by a CSE I mean a social environment that has a structure not reducible to the sum of the atomic behaviour of its members or of the relations among them (like an aggregation of parts), such that the environment can be considered a unity and identified as a single entity or an individual. Another feature of CSE is that they are heterogeneous, namely they include agents with different powers and interests (some individuals have the power to act on the structure, in virtue of their role or function in the CSE).

This leads to two preliminary points. First, if something in the structure of a CSE does not allow for mutual support, its members will mostly fail in cooperative tasks. That would be the case simply because the CSE under consideration is intrinsically not conducive to cooperation among individuals. (We can intuitively understand the structure of a CSE as what sets the limits and conditions for individual agency and personal freedom within the CSE itself.) Second, if CSE are single entities or individuals, we can attribute to them responsibility for the collective conduct undertaken by their members. Speaking from a juridical point of view, CSE are personae, like human beings. (Corollary: CSE do not interact only with their members but – as individuals – also with other CSE.)

Going back to our notion of a supportive environment, under which conditions may we then deem a CSE supportive? Some conditions are mental and primarily related to individual agency. For instance, acting cooperatively presupposes that people already perceive themselves as a unity or as belonging to the same community. In other words, people must recognize themselves as members of the same CSE. It also implies that people look at others sharing the same environment as the person next to them, not as a third man. There must be some degree of sympathy among CSE members.

Other conditions are related to what I have so far called the structure of a CSE, and it is exactly here that rationality plays the most important part. Given that it is the case most relevant to our discussion, in the list of structural conditions below I will consider only the Modern State as a CSE:

a) Fair information. Politicians, scientists, intellectuals, media and public figures in general must communicate truthfully and honestly, being informative without aiming to trigger emotive reactions in the audience. In that sense, the conversational maxims (Grice 1975) still seem valid to me.

b) Unity of decision. There must be a certain amount of coordination among the different political actors at play. In a situation of prolonged emergency and uncertainty, it is generally advisable that local administrations follow the central government.

c) Rationality of law. Social norms and regulations introduced during a pandemic must be scientifically grounded and clear, avoiding ambiguities and grey zones.

Italy failed to meet the conditions for becoming a supportive environment. In what follows I will try to explain why this is the case, treating Italy purely as an example of a CSE. This might sound like an attack on Italy, but that is not my intention. There are other countries facing similar (if not worse) problems – think of the current situation in China, Hungary, Brazil or the US. I am talking about Italy only because it is the social environment I know best.

a) From the beginning of the global health crisis, there has been an increasing amount of misinformation in Italy, with leading politicians superspreading fake news on Covid19. Furthermore, the way the Italian media have reported Covid19-related facts rapidly spread fear and panic among the population, as a nocebo effect [https://non.copyriot.com/pandemie-kriegstagebuecher-neurosenlehre/].

b) Many Italian politicians displayed quite spectacular ways of making people comply with restrictions. Most importantly, arbitrary decisions led to vertical political conflicts between the government and local administrations, as well as horizontal ones among local administrations themselves.

c) In a country that had already been badly affected by the last economic crisis, the harsh lockdown had devastating psychological effects on the population. It is also still a matter of debate whether some restrictions2 were necessary and whether other decisions would have been preferable.

Compared to other European countries, it seems that Italy has been seriously hit by a wave of terror and irrationality. People’s favourite scapegoats have been runners, those practicing sports, or simply whoever was taking a walk or was seen outside on the streets. In such a hostile and repressive environment, the decision to hire a corps of 60,000 volunteers, patrolling public spaces and reporting noncompliance to the authorities, sounded threatening even to me. Luckily, this truly Orwellian scenario seems to have been reconsidered, and the volunteers will only be employed to prevent the formation of crowds and for public utility purposes. Nevertheless, there are plenty of cases of irrational and herd behaviour that confirm the overall negative impact that the lockdown and unfair Covid19 information have had in Italy. Not much data has been gathered and not enough research has been carried out yet, but to give more evidence for this point, I shall give some examples and divide them into two groups.

Aggressions:

  • One of my Facebook contacts was stopped by a couple in an SUV while he was going for a run in the evening. The couple threatened to beat him if they saw him hanging around outside again.
  • A runner destroyed his neighbours’ car with a baseball bat, because every time he went out for a run, they would yell insults at him, film him, and threaten to report his “illegal and irresponsible” behaviour to the police.
  • Someone threw a bucketful of water on a woman riding her bike, not knowing she was simply a pharmacist on her way home from work.

Herd behaviour:

  • People started spontaneously organizing in chats and social media groups to share information about infected people in their village or neighbourhood. Their aim was to avoid allegedly infected people and, eventually, report their deviant behaviour to fellow citizens and the authorities. Unfortunately, I had first-hand experience of this, for it has been the case in my hometown and in some other towns in the surrounding area as well. Even worse, in Vasto, a town in the region of Abruzzo, someone wrote and spread a list with the personal information of many members of the Roma community; people labelled the Roma superspreaders and in turn attacked the mayor because he condemned this reprehensible action3.
  • The anti-establishment and Covid19-denial movement “orange gilets” organized demonstrations in several Italian cities (the two main ones in Milan on May 30th and in Rome on June 2nd). In a few days, thousands of people gathered without keeping any social distance or wearing face masks. The orange gilets claim that the Covid19 virus was created to weaken the Italian economy and allow foreign countries to take control over Italy. Thus, they demand at the same time the resignation of the government, the creation of a new constituent assembly, and an Italexit. In particular, during the Rome demonstration one of their main activists stated that the Italian Prime Minister Giuseppe Conte together with Bill Gates wanted to turn us all into “small robots”: by obliging everyone to be vaccinated against Covid19, they would inject mercury into our veins and thus connect our bodies directly to 5G; in this way, they would be able to control us remotely and, if they wanted, even to kill us just by raising our body temperature.
  • On June 2nd, Matteo Salvini and the other leaders of the opposition organized a public demonstration in Rome, to protest against the government and celebrate the anniversary of the Italian republic together. On this occasion too, thousands of people gathered in disregard of the most basic safety rules, while politicians cared only about selfies with their supporters.

In this post I have explored some conditions under which an environment might be called supportive. Indeed, in complex social environments those conditions are structural and do not substantially depend on individual agency. Quite the contrary: the outcomes of individual agency are largely dependent upon these conditions (or the lack thereof). The structure of a social environment explains the collective conduct of its members. This means that, if structural conditions make the social environment hostile and repressive, its members will not tend to act cooperatively, and forms of herd behaviour will emerge instead. Therefore, part of the responsibility for the collective conduct can be attributed to the environment itself, insofar as its structural conditions are a matter of human decisions anyway. (In a Modern State, politicians formulate and apply restrictions on different levels.) The case of Italy, for example, shows that the lack of those conditions does not stop compliance itself; rather, it turns compliance into conformity instead of cooperation, and creates a hostile and repressive environment as opposed to a supportive one. Concerning an ethics of contagion, good information, politics and administration are the fundamental building blocks of a properly supportive environment, one that would allow compliance with the rules without fostering herd behaviour, and that would encourage both cooperation and practices of mutual help.

_______________________________________________________

1 One may object that the human being is a social animal. But even then, the fact that human beings are social animals means only that they tend to live in groups with members of the same species. It does not entail per se that human beings are also cooperative by nature.
In the context of an ethics of contagion, by rationality I understand the capacity to deliberate on solid epistemic grounds. By rational (patterns of) behaviour I understand those relying on self-determination and awareness, unaffected by biases and external constraints of that sort. As I have argued above, during a pandemic a rational pattern of behaviour also consists in being able to doubt our immediate beliefs and to hesitate before making moral judgements.

2 Here one may think of the prohibition of sports activities, the obligation to stay within 200m of one’s house, or the obligation to always wear a mask outside of one’s house, whatever the place and the occasion (even if you are alone lying on a beach or sitting in a park on your own). And the list could go on.

3 This very episode sadly recalls the accusations addressed to Jews of being the superspreaders of both the leprosy and Black Death epidemics in France during the 13th and 14th centuries. Remarkably, in both cases Jews were accused of spreading the disease in conspiracy with the Sultan and Muslims (Ginzburg 1991: 33-86). More broadly, as Nicolas Guilhot rightly argues, pandemics are the perfect environment for rumours, fake news and conspiracy theories to spread.

PS. This post is inspired by a previous Facebook discussion on the ethics of contagion and by The Metaphysics of Online Groups. Herd Behavior and Polarization, a research side-project in social ontology I am running. I am grateful to Martin Lenz for the former (as well as for the invitation to contribute to the debate). For the latter, I should thank Tommaso Ostillio and Giulio Sciacca. Last but not least, I am indebted to Anouk Hogers for important suggestions.

The impotence of hierarchy

Want to know a secret? There is a recurrent fear that many people in leadership positions have told me about: “Now that I am in this position, I fear that people around me won’t speak their mind anymore, that they won’t dare criticise me. For all I know, they might think I am a fool but never tell me.” I think the first time it was my PhD supervisor who told me this, and he added that it was also the worst fear of his supervisor. So there is a chain of fear passed down the line. If I ask my students to be frank, I could also add that my supervisor … It’s a bit of a sad story, because we know how it goes. Back in the day, I wasn’t very open with my supervisor, and the times I tried, I often regretted it. – These fears are woven into the fabric of our hierarchies. Understandable as they might be, they are dangerous. They can preclude open discussion and correction. Given that I’m spending much of my time in universities, I am struck by how often I encounter this. In what follows, I’d like to look at a few instances and ask whether there are any remedies.

Before walking through some examples, let’s begin by looking at the phenomenon. Power imbalance is often portrayed as a unidirectional state. The boss or supervisor has power; the employees or students dependent on the boss fear the boss. But as I see it, the fear has a reciprocal structure: You are afraid to criticise your boss because he or she might reproach you for doing so. Knowing your fear, the boss is afraid that you will hide your criticisms. This might spiral into a number of refined and uncomfortable assumptions. “I’d rather tell him something nice about himself.” – “She only said that because she wants to divert attention from her criticism.” – “He doesn’t take me seriously.” – “She doesn’t take me seriously.” Mutual mistrust might follow.* If this kind of setting is large enough, the mistrust might translate into full-blown conspiracy theories. But I think the problem, at root, is not the hierarchy itself. The problem is that we often treat a hierarchical position as a personal rather than an institutional feature. But your boss is not your boss because he or she is a better whatever, but because the design of our institutions requires this function.** In this sense, hierarchy is owing to ways of dividing labour. However, while some contexts might require a hierarchical division of labour, certain processes cannot function in the presence of hierarchy. Collective deliberation, for instance, is not possible if someone in the collective intervenes qua greater power. If my thoughts are taken to carry more weight because I’m a professor rather than a student, then we do not need any discussion. Or do we? Let’s look at some instances then:

  • Deliberation in science. – It’s often noted that the current corona crisis makes our shortcomings obvious. So let’s begin with policy design in the corona crisis. Given the complexity of the problems in this crisis, you would expect decision makers to listen to a number of voices. But in the Netherlands, for instance, the opposite seems to be true: “There is no discussion … Because there is a crisis, it is not allowed to have a discussion.” These are the words of the microbiologist Alex Friedrich. Rather than following the guidelines of the government, he caused quite a stir by speaking up against the Dutch strategy and partly changed the course of action by demanding more testing in the north. His point is that scientific advice is too hierarchical and focused on too few voices. Instead, it should be run like a “jam session” where everyone speaks up. I guess you don’t have to be a jazz lover to appreciate the fact that you are more likely to hit on crucial facts when you listen to as many people and disciplines as possible. But the example shows that collective deliberation is still obstructed rather than enhanced (see also here).
  • Transitions in the university. – Borrowing a quote from a British colleague, the president of our university recently noted that implementing change in universities is like ‘reorganising a graveyard: “You don’t get much support from the inside”.’ The idea seems to be that changes like the current transition to online teaching require an “external shock”. While a shock might indeed be an occasion for change, I think that the comparison to the graveyard has clear limitations. I doubt that a successful transition works without calling on the employees who actually do the implementing. So when we plan the details of this transition, I am sure our success will depend on whether we listen carefully to the experiences and insights “the inside” has to offer. Indeed, the digital infrastructure that we now rely on increasingly provides great tools to implement change with the necessary transparency and participation of everyone involved. Sometimes this comes with comic relief: At the mercy of advanced technology, hierarchies of seniority are quickly turned upside down.
  • Hierarchy in teaching. – As I have noted earlier, my status as a professor should not enhance the status of what I have to say. And yet we all know that when I enter the lecture hall, the institutional powers put me in a special position, whether we like it or not. The fact that I am grading the very students who might criticise me seems to settle the intellectual hierarchy, too. Can this be evaded? Should it be? As I see it, the hierarchical power of professors over students is limited to the educational tasks they set within the institutional setting they share. I can give you a task and then tell you to what extent you solved it well, whether you drew on all relevant resources, and so on. But an educational task, to be dealt with in an exam or essay, is different from the complex problems that confront us. Once we go beyond the confines of exercises, students are fellow philosophers and citizens, and hierarchy should no longer apply. For the reasons noted above, the hierarchy might still be effective. But it is to our detriment if we allow this to happen.

Hierarchy, taken as a personal trait, then, obstructs true deliberation, diversity and learning. In an ideal setting, archangels could openly learn from the devil’s criticism. That said, it’s hard to figure out how we can evade the traps and fears that hierarchies foster. But we should be wary whenever discussion is closed off with reference to hierarchical position. It harms all sides, those with more and those with less power. But of course it’s hard to bypass something so deeply ingrained in our system. Yet, if someone politely asks you to shut up and listen, it might be best to go along and listen. In the same vein, those with more power should seek, not shun, advice from everyone. Acquiring knowledge and finding solutions, even if governed by good methods, is an accidental and collective process. You might have no idea what you’re missing. So keep asking around and encourage others. It’s always an institution, not you, that grants power over people. The more power you exercise over them, the more likely it is that people will refrain from telling you uncomfortable truths.

____

* A perfect illustration is Paul Watzlawick’s “Story of the Hammer”.

** However, one might doubt whether hierarchies really obtain because of a functional division of labour. The economist Stephen Marglin famously argues that “the capitalist organization of work came into existence not because of superior efficiency but in consequence of the rent-seeking activities of the capitalist.” (I wish to thank Daniel Moure for pointing me to the work of Marglin, especially to the seminal paper “What do bosses do?”)

What’s it like to be (with) a superspreader? A note on the ethics of contagion

We’re used to the trope that our personal actions don’t make much of a difference. Arguably, in tackling climate change it’s not my individual choice to take a flight that makes things better or worse. In the current pandemic, however, nothing could be further from the truth. If I happen to be infectious, taking a flight these days might turn me into a superspreader, setting off a chain of infections that might harm a great number of people. While we normally have to adapt to the world, the potential of spreading a virus like that has the uncanny effect that the (social) world, suffering infection, ‘adapts to us’, the ones spreading it. Of course, there are good reasons to avoid labelling individual people as superspreaders, but the fact remains that my individual behaviour might contribute to large-scale infections. The possibility of spreading the virus makes a number of very common habits doubtful and raises a number of moral questions. If I am contagious, then I should take precautions so as not to harm others. It is therefore not surprising that we find ourselves confronted with the recurrent advice to wash our hands and stay at home. However, even if the precautions to be taken are individual actions, they require a supportive social setting and compliance. If my employer, for instance, coerces me to work without taking precautions, the blame should be placed accordingly. Thus, new kinds of responsibilities emerge. In what follows, I’d like to consider some aspects of such responsibilities.

Being harmful. – In spreading the coronavirus, we cause harm. The idea of being alerted to the fact that one was responsible for such a spreading is enormously unpleasant, to say the least. While we might not want to attribute moral responsibility to a spreader, we will deem it epidemiologically important to track such a patient. So while such a spreading might not count as a (voluntary) action because it is not intended, it requires us to see ourselves as a cause of harm. That said, being involved in such an event might count as a case of (bad) moral luck and can hardly be dissociated from moral considerations.* Now the assumption that we are merely involuntary causes in such events no longer holds once we know that we are in a pandemic. In this case, I ought to take precautions. A failure to do so would strike me as morally blameworthy. So if I neglect hygiene measures (and end up spreading the virus), I am behaving irresponsibly and blameworthily. However, and this is the point I want to highlight, my fellow citizens and those responsible for living and working conditions in particular also have moral duties. Collectively, we might be said to have the duty to mutually enable each other to take necessary precautions. Now what does this amount to?

The moral status of spreading and spreading advice. – Advice such as “wash your hands!” and “stay at home (whenever possible)” is certainly helpful and ought to be followed in our current pandemic. Yet, it is a double-edged sword. On the one hand, it promotes risk aversion. If people comply, they might indeed prevent spreading and thus create a safe environment. On the other hand, it can be stigmatising. Given that staying at home, at least, comes at quite a price for some, we must bear in mind that calling out noncompliance might stigmatise and harm others, too. With the lifting of the lockdowns, we not only see people prematurely hastening ‘back to normal’, we also see a growing divide between those complying and those not complying with the restrictions. This divide is not helpful for either side. As I see it, people can only comply successfully with restrictions in a supportive environment. While it is true that individual actions can make a lot of difference, individuals must have a chance to balance their compliance against the costs that arise. For a tenured professor like me, for instance, it’s easy to stay at home. But that is worlds apart from asking compliance of a shop assistant, who might be sacked if she fails to expose herself to hordes of potentially infectious customers, frowning at her for not wearing her mask correctly. So while everyone needs to consider themselves as a potential cause of harm to others, we need to create an environment in which we enable such considerations. Calling out others will more likely provide an incentive to shift the blame.

What is a supportive environment? – No matter what strategy (if any) your government is following, we need to comply with certain restrictions if we want to avoid harming people by spreading the coronavirus. A supportive environment is one in which we can comply without harming ourselves. If we ignore the needs of children for a moment, one of the greatest factors making compliance with lockdown restrictions difficult is the fact that we have to work and rely on other people’s work (to provide for our needs). Thus, we need to make sure that our working conditions allow for compliance. It is here that I see a great number of difficulties. Let’s briefly highlight two issues:

  • The right to protect yourself from harm. – Compliance can only be demanded insofar as people can comply without harming themselves, be it economically or with regard to their health. Now from the very beginning of the corona crisis it was obvious that a number of people seem to have no effective means to protect themselves. If it is true that being indoors with other people is one of the greatest risks of infection, then care workers are exposed to an enormous danger of infection. Indeed, recent data shows that most COVID-19 deaths are occurring in this profession. If confined spaces are problematic, then what about schools, shops, public transport and the like? (While it might be OK to go shopping, it’s quite another question whether shop assistants are really well protected.) Perhaps with the exception of the medical sector, this seems to be largely in keeping with ongoing class issues. As I see it, our societies, and the respective employers in particular, are often not providing a safe working environment. But how can we expect compliance if we allow for such an amount of disregard within social settings, firms and institutions?
  • The duty to prevent spreading. – Is there a moral duty to prevent spreading? I guess there is, if we can do it without harming ourselves. This is why many people (myself included) consider the lifting of many lockdown measures premature, especially in that they seem to incentivise the outright blameworthy behaviour of crowds gathering to protest for individual liberties or whatever. But besides such aberrations there are more subtle cases. In mid-March, for instance, a colleague returning from a high-risk area was reproached for potentially causing “panic” among students by asking them to keep a safe distance. While I think that causing panic would be bad, I also think that universities ought to prioritise providing a safe environment. Thus, it is vital that universities and indeed other institutions follow policies that enable individuals to act in compliance with preventive measures. While most universities have now moved their education online, they are partly pushing to move back on campus asap. While eventually returning to in-person education seems sensible and desirable, such moves might also fuel an unhealthy competition, incentivising premature steps.

So in many cases we might witness (or at least have witnessed) that people are not complying with preventive measures. However, when judging the morality of harmful behaviour, we must ask whether people are acting under conditions that allow for compliance in the first place. As much as I am upset to see people behaving in ways that might harm others or themselves, when passing moral judgement on their individual actions, we must bear in mind that the responsibility for enabling preventive behaviour is a collective sort of responsibility. Although I could cause enormous harm by spreading the virus, my actions to prevent such harm have to rely on collective support. In a nutshell, it’s probably more appropriate to blame certain groups, firms and institutions than individuals. Both taking precautions and easing restrictions should be implemented such that these actions allow for mutual support.

––––

* Which reminds me of an intriguing discussion of Adam Smith on the so-called piacular by Eric Schliesser.

PS. Many thanks to Justin Weinberg for suggesting an important revision in my phrasing.

Cavendish’s Triumvirate and the Writing Process

I’m working through Margaret Cavendish’s Observations upon Experimental Philosophy (1666) at the moment. It’s not the first time (in fact, I taught a course on it after Christmas), but her writing is dense and is neither as systematic as someone like Descartes nor as succinct as someone like Berkeley. But the pay-off is a philosophy rich in insights that genuinely does seem to be, if not ahead of its time (I don’t want to be accused of anachronism), then idiosyncratic to its immediate historical context in some striking ways. For example, I’m reading Cavendish alongside Keith Allen’s A Naïve Realist Theory of Colour (OUP, 2016), and there are clear signs that she had thought deeply about phenomena such as colour constancy (whereby we take objects to have remained the same colour even though a differently coloured light is shining on them) and metamerism (objects with different microphysical qualities that appear to be the same colour) that are central to contemporary perception debates (Colin Chamberlain has written a great article on Cavendish’s atypical philosophy of colour). As far as I am aware, these aren’t issues that the canonical early moderns (Hobbes, Descartes, Berkeley, et al.) were much preoccupied with. And while reading and working through Cavendish’s philosophy is a bit like trying to untangle a charger cable that’s been kept in a box in a drawer too long – each time you think you’ve untangled all the knots another one appears – it tends to be rewarding, even if it is near impossible to pin down exactly what she thinks about any given issue ‘X’.

Perhaps because of the inevitable struggle that comes with defending an interpretation of Cavendish’s philosophy, I’m also thinking a lot about the trials and tribulations of the writing process (it may also be because I have literally nothing else to do). For a long time, I’ve thought that one of the best pieces of writing advice came from Daniel Dennett who, on various platforms (including a keynote he gave here in Dublin last September), has encouraged writers to ‘blurt something out, and then you have something to work with’. I’ve regurgitated this advice to students several times, and it chimes with me because I find it much easier to shape and mould a pre-existing block of text than to face the task of squeezing something out of the ether (or my brain – wherever it comes from) and onto the page. Like Leibniz, I prefer a block to chip away at rather than a Lockean blank page. With that in mind, I’ve started to wonder whether a particular aspect of Cavendish’s metaphysics might provide us with a nice model for the writing process.

Perhaps one of the most interesting, and remarkable, aspects of Cavendish’s system of nature is her claim that all parts of nature contain what she calls a “triumvirate” of matter (note: Cavendish is a materialist: even the mind is composed of material substance in her system). She claims that each and every part of nature is made up of three kinds of matter: (1) rational matter, (2) sensitive matter, and (3) inanimate matter. Even if you could pick out an atomistic unit (although she rejects atomism herself), she thinks, you would find varying degrees of all three kinds of matter. Inanimate matter is matter as we would ordinarily think of it: bulky stuff that weighs the other kinds of matter down and does the important job of filling up space (a job I’ve gotten very good at myself during lockdown). Cavendish compares inanimate matter to the bricks and mortar used to build a house. Continuing this analogy, she suggests that sensitive matter plays the role of the team of builders, moving inanimate matter around and getting it to take up particular shapes and forms. The variety of ways that inanimate matter is put together, she thinks, explains the variety of things in the natural world around us. What’s more, if there were no sensitive matter to move inanimate matter around, she claims, the world would be entirely homogeneous. Finally, she compares rational matter to the architect responsible for it all. For the sensitive matter wouldn’t know what to do with all the inanimate matter if it weren’t told by someone with a plan. In the section of the Observations entitled ‘An Argumental Discourse’ (one of the strangest philosophical dialogues out there, between two ‘halves’ of her own mind who are ‘at war’), she sums up the triumvirate of matter like so:

as in the exstruction of a house there is first required an architect or surveyor, who orders and designs the building, and puts the labourers to work; next the labourers or workmen themselves; and lastly the materials of which the house is built: so the rational part… in the framing of natural effects, is, as it were, the surveyor or architect; the sensitive, the labouring or working part; and the inanimate, the materials: and all these degrees are necessarily required in every composed action of nature.

Observations upon Experimental Philosophy (Cambridge Texts edition, ed. Eileen O’Neill, 2001), p. 24

This is, then, a top-down approach to understanding both the orderliness and the variety of things in nature. It’s all possible, Cavendish thinks, because there’s an ‘architect’ (the rational part of a thing in nature) that devises a plan and decides what to do with the bulky mass of inanimate matter. (Another note: Cavendish is a vitalist materialist or what we might retrospectively call a panpsychist: she thinks that every part of nature, from grains of sand to plants, animals, and people, has life and knowledge of things in the world around it.)

Right, so how does all this relate to the writing process? I don’t quite know whether to offer this as a helpful normative suggestion or just a descriptive claim, but I suggest that Cavendish’s triumvirate might provide a model for thinking about how writing works. In this case, the role of bulky, cumbersome inanimate matter is played by the words on the page you’ve managed to ‘blurt out’, to use Dennett’s technical terminology. Or perhaps it’s the thoughts and ideas you’ve still got in your head. Either way, it’s a mass of sentences, propositions, textual references, and so on, that you’ve got to do something with (another tangled charger cable, if you will). What options have you got? Well, structure and presentation are important – and while these are facilitated by your word processor (for example), they constitute a kind of medium between your thought and the words on the page. So I’d suggest that presentation, structure, and perhaps even the phrasing of individual sentences play the role of sensitive matter: Cavendish’s labourers or workmen.

Finally, there’s the role of rational matter: the architect or surveyor whose plan the sensitive matter is just waiting to carry out. I actually think this may be the hardest comparison to draw. It would be easy to simply say ‘you’ are the architect of your writing, but once you’ve taken away the words and ideas, as well as the way they are presented or structured, it’s hard to know exactly what’s doing the work or what’s left (just ask Hume). Last year, I saw Anna Burns, author of the brilliant Milkman, give a talk where she was asked about her writing process. Her answer, which in the mouth of another could have sounded pompous or pretentious, was honest and revealing: she had literally nothing to say. She couldn’t explain what the real source of her writing was and, even more remarkably, she wasn’t particularly interested. In any case, there’s something that’s grouping together, or paying selective attention to, some ideas or notions and advocating that they should become a piece of writing. Whatever that is, I suggest it plays the role of rational matter: Cavendish’s architect.

How might this be helpful to writers? I’m not sure it can help in any practical way, but I find it helpful when I hit upon a nice description of something I’ve grappled with, or when it seems that someone is describing my own experiences (it’s one of the reasons I like reading both philosophy and fiction). Perhaps Cavendish’s triumvirate model can be useful in this way. It may also – and I have begun to think in these terms myself – provide you with a measure of where you are in the writing process. Am I still sourcing the bricks and mortar? Are the labourers at work? Or are they waiting for instructions from the architect? Sometimes it’s helpful to know where you are, because it lets you take stock of what there is still to do – and, in keeping with Cavendish’s analogy, who’s going to do it.

Precarity and Privilege. And why you should join a union, today

Reflecting on the personal impact of the corona crisis, a close friend remarked that things didn’t change all that much; rather, they became obvious. I then began to hear variations of that idea repeatedly. If you are in a complicated relationship, that might very well show right now. If you have made difficult decisions, their consequences might be more palpable now. If you live in a precarious situation, you will feel that clearly now. On the other hand, there might be good stuff that you perhaps hardly noticed, but if it’s there, it will carry you now. On social media, I sense a lot of (positive) nostalgia. People remember things, showing what mattered then and now. Things become obvious. The crisis works like a magnifying glass.

This effect also shows how well we are prepared. As an adolescent, I used to smile at my parents for storing lots of canned food in their basement. Of course, most of us also laugh at people rushing to hoard toilet paper, but how well prepared are we for what is coming? Perhaps you think that if we’re lacking things and certain habits now, this is owing to individual failure or laziness. But if we experience precariousness, hardly any of it is an individual fault. Habits need collective stabilisation and consolidation to persist. That said, I’m not going to focus on the state of your basement or on hygiene measures. Rather, I’m worried about how well we are politically prepared. Many people around me are facing really dire situations. And our political preparation (or lack thereof) leaves us with very few means to address them properly. So what can be done? I’ll begin with some general considerations and finish with some practical advice.

If we look around, we see that a lot can be done. Slowing down the economy like this without immediate chaos ensuing is a huge success. But very soon, people will start telling each other that certain things “cannot” be done because they are “too difficult”, “too expensive” or “against the rules”. While a lot of good is happening, the bargaining and gaslighting have already begun. Being a highly competitive culture, academia has a notorious problem with collective action (role models in the UK, who have been on strike for considerable stretches of time, notwithstanding). But this crisis requires collective measures, both in terms of hygiene and in terms of politics.

What’s the problem? Precarious employment (not only) in academia has been growing for a long time. As I see it, this jeopardises not only political but also academic goals, because it leads to an unwelcome dissociation of teaching and research. But at the present moment, this precarity might turn into something much worse. We already see furloughs and dismissals, especially of people on fixed-term contracts, and the flimsy justifications are rolling in on a fairly large scale. At the same time, we witness what we have already seen in the medical sector: we lack transnational policies, and thus people are treated very differently depending on where they happen to work and what sort of contract they have. Add to this that many ad hoc measures, such as online teaching, are now used as a pretext to introduce lasting changes that may be detrimental to both employment conditions and educational standards. So precarity might worsen and educational standards decline to a tipping point at which education becomes largely disposable. Indeed, mass education is of course disposable already, unless you have democratic tendencies.

What can be done? The first thing I find striking is that, while people continuously talk about online teaching and other means of continuing work, hardly anyone addresses the question of precarious employment. Given the current firings and hiring freezes, we know that the job market is going to be brutal. If you are, say, an international postdoc or teaching fellow whose contract runs out after the summer, it will be very difficult to find or even seek employment. While I see people readily exchanging advice on zooming, I’ve seen hardly anything so far on how to address this problem. The exceptions to this rule are labour unions and some employee organisations, some of which are currently collecting data and pushing for measures. (You know of more exceptions? Please spread the news widely!)* Now let me ask you: Are you a member of a union? No? You’re no exception. In the various places I worked during and after my PhD, I was never encouraged to join a union. It’s almost as if there were no awareness that there is such a thing as the representation of employees’ interests. In fact, I guess it’s worse, and it’s something I’ve noticed not only in academia but also in much of the growing freelance and start-up culture. Judging from my own experience, I’d say that people always have been and still are (more or less subtly) discouraged from joining such organisations. So when employees encounter difficulties in their employment, they will typically be portrayed as not being tough enough for the job. You are overworked? Well, if you don’t blame yourself already, you’ll likely be shamed into avoiding publicity. Being overworked is still portrayed as a personal lack of stamina, to be addressed not by collective industrial action but by courses on time management or mindfulness. This way, failing to secure (permanent) employment can still be blamed on the individual rather than on the way higher education is run.

The individualisation of such problems not only affects people’s (mental) health, it also keeps them from engaging in collective action. In turn, this means that unions and similar bodies will remain weak because they can easily be portrayed as not representing anyone. If people keep blaming themselves, the unions don’t have a case for building an argument in favour of better employment conditions. I see this as one of the main reasons why we are politically ill prepared for addressing economic problems in this crisis. So what should we do now?

Trying to collect ideas, I have written to a number of friends and colleagues who kindly provided me with suggestions. Let me briefly summarise some crucial points:

  • Generally, permanent/tenured people should take it upon themselves to make a move. We should be aware that people on fixed-term contracts are vulnerable and should not be expected to lobby for their interests alone.
  • Try to see who is in trouble, or likely to get into it, and talk about the situation. Bring it up with your colleagues and managers whenever the opportunity arises. If you have department meetings or exchanges with funding agencies such as the ERC, ask what can be or is being done to ameliorate the situation.
  • Join a union and encourage others to do so, too. In the Netherlands, the unions are taking it upon themselves to make a case for employees in precarious positions.
  • As I see it, it would be good for universities to invest in staff rather than reduce numbers. Wherever possible, contracts should be extended, as is in fact done by various funding bodies.
  • If there are no financial resources for staff, measures should be taken to reallocate budgets, especially travel and overhead funding, towards the extension of contracts or new hires.
  • Universities in Austria and Switzerland have created hardship funds for employees facing precarious situations. This should be done proactively, as people in vulnerable positions might feel discouraged from coming forward.

These are just some ideas. I’d be grateful to hear more. But to my mind, the most important point is that we need to pursue such steps in a collective effort. Right now, these steps should be taken because we are in an emergency. Ensuring stability is what is required for providing a safe working environment.

Ultimately, taking measures of solidarity is also about helping academia to survive beyond this crisis. Whenever recession hits, education is often considered disposable. If we were to allow for the reduction of staff without resistance, it would just signal that academia could do with even fewer people and resources. Dictatorships would certainly welcome this. The way we treat our colleagues and students will contribute to determining the political system that we’ll find ourselves in after the crisis.

____

* Of course, there have been initiatives addressing the adjunct crisis. But I haven’t noticed that precarity has been an issue of great public concern in this crisis, even less so among tenured academics, as a recent piece by Emma Pettit notes:

“While tenured professors have typically stood by silently as their nontenured colleagues advocated for themselves on the national stage, they have watched their own kind dwindle. Positions are remaining unfilled. Tenure lines are getting pruned. There’s still the same service work to do, but fewer people to do it, and those who remain shoulder the burden.

And today, as a global pandemic has devastated budgets and led college leaders to freeze hiring and furlough even tenured professors, the cause seems especially urgent.

The structural changes that preceded the pandemic helped set the stage for those austerity measures, and manufactured a growing — if uneven, slow, some would say glacial — recognition among the tenured that relying on contingent labor hurts everyone, activists and higher-education researchers say. …

How much tenured professors have cared, historically, about their contingent colleagues, is difficult to measure. Everyone knows the caricature: the older, typically white, typically male full professor whose non-tenure-track colleagues escape his vision, who still believes merit rises to the top and those who fail to land tenure-track jobs lack work ethic, intelligence, or both. …
Even if tenured professors might not pay attention to the adjuncts who walked their hallways, they couldn’t help but notice the fates of their graduate students, who were being sent into a bottlenecked academic-jobs market to compete for slimmer pickings. They started to connect the dots.”

Will the future be like the past? Making sense of experiences in and of the corona crisis

The world is a different place now. But what does that mean? In keeping with my previous posts, I want to think about the way we experience this situation. Binge-scrolling through expert advice, curves and numbers is important for assessing the situation and deliberating about forms of collective action. But at the same time, it is essential to understand one another and ourselves within this situation, to return from third-person talk to the second- and first-person perspective. Thus, a crucial part of our thinking should be devoted to the various meanings of our experience. I speak of “meanings” in the plural for two reasons.

On the one hand, I think our experiences of the situation vary quite a lot, such that the events we undergo mean different things for different people. Your social and economic situation, for instance, matters greatly to how you feel and how your expectations take shape. Would I feel as balanced as I do if I worked, say, in a bar? Or as a postdoc whose contract is running out soonish? Even if we’re likely facing an enormous global recession, the current stability still shapes my experience. On the other hand, and this is perhaps surprising, I have noticed that my very own experiences have different meanings even to me. Let me explain: I have now been staying mostly inside (with family) for a bit more than three weeks. Given that I often suffer from anxieties, I would have expected the growing corona crisis to make me feel bad. But while I have clearly lost a sense of normality, this doesn’t exactly trouble me. I feel ok, perhaps even slightly more balanced than in the months before. For a while, I found this quite surprising. But then I realised that it is true of a number of people. In fact, this morning I read an article according to which some psychologists report that a significant number of patients with depression or anxiety disorders find, paradoxically, that their situation has improved.

How can we make sense of such experiences? Is there a way of explaining the eerily positive attitude some of us have in this crisis? I’m no psychologist. But as a historian of philosophy I know something about the ways in which we relate to our histories and biographies. My hunch is that this kind of experience is partly determined by our beliefs about how much the future will resemble the past. In explaining this hunch a bit more, I’ll say how it might help in assessing conflicts between people with different ways of experiencing the crisis. Will the future resemble the past, then? As we will see, this is not a question of (future) facts but of values.

Speaking to various people about the corona crisis, it seems that most conversation partners fall into one of two categories: (1) those who believe that we’ll be “going back to normal” at some point and (2) those who believe that the future will be fairly different from the past. Let’s call them continuists and discontinuists respectively. Continuists think that the future resembles the past, even after this crisis. Accordingly, they will try to prepare for the time after the crisis in much the same way they have pursued their goals before. By contrast, discontinuists assume that the future is not only uncertain but likely different from the status quo of the past. Accordingly, they cannot prepare by pursuing the same goals by the same means. They will expect to have to adjust their means or even their goals.

The question whether historical events are continuous with past events or mark a disruptive change is hotly debated, because whether you see continuity or change depends on what criteria you focus on. But for now I’m less interested in the theoretical issue. Rather, I’m wondering how our pertinent beliefs affect our experience. A wise friend of mine once said that our beliefs about the future shape the present, for instance, in that such beliefs guide our current actions. If that’s correct, then continuists and discontinuists will be preparing for different future scenarios. Of course, the question which future scenario is more likely is a rather pressing one. What (else) will this virus do to us? Will the economy break down completely? Will we see civil unrest, wars over resources? Like you, I’m interested in these things, but lacking relevant knowledge I have nothing to say about them. What I want to address here is how being a continuist or discontinuist relates to your experience of the current situation.

Now how does having one or the other attitude affect your experience? As a continuist who retains your goals, you will likely want to stick to your strategies and go back to normal if possible. The current restrictions (contact restrictions or lockdowns) will probably feel rather disruptive. By contrast, a discontinuist might welcome the disruption as a way of preparing for an uncertain future. So my guess is that there is a correlation between being a discontinuist and having a more positive attitude towards the disruptive measures. Let’s illustrate this idea with an example. A controversial issue that arises for many people around me is productivity. While some people readily give tips on how to remain productive when working from home and quickly switch to things like online teaching, others see these outbursts of productivity as a problematic distraction from more pressing issues. They worry, for instance, that the switch to online teaching will worsen the standing of academic teaching or the exploitation on the job market.

My idea is that we can pair up the conflicting approaches towards productivity with attitudes about (dis)continuity. While a continuist will remain productive, a discontinuist will be suspicious of such productivity, as it seems likely to be jeopardised by the changes ahead. This doesn’t mean that the discontinuist will stop being productive tout court. It just means that the discontinuist will likely want to prepare for adjusting the means or even the goals, rather than keep going as before.

As this example shows, there is not only a difference but also a conflict between continuists and discontinuists. If you currently google the keywords “coronavirus” and “productivity” and look at the headlines, you’re clearly listening in on a fierce dispute. Should you work on improving your productivity? Or should you redirect your focus to different priorities? Continuists often seem to experience the restrictions as if their lives had been put on hold. The crisis might be very disruptive, but by and large their goals remain intact. This might also be mirrored in the different attitudes of students: If you are an ambitious student and a continuist, your priority might still be to pass your exams well and quickly. If your university cancels the regular classes and exams (rather than running them online), you will likely be annoyed or worried. By contrast, discontinuists seem to experience the restrictions as the onset of a new situation; they will likely try to adjust their goals in line with hopes or guesses about the outcome. If you are an ambitious student and a discontinuist, your priority might be to understand and prepare for the new situation. Your focus or interests might change, and you might appreciate a pertinent adjustment of teaching rather than the pursuit of former goals.

As I see it, this kind of conflict is often misrepresented. It tends to be presented as a quest for the right way of responding to the crisis. Thus, depending on the predominant attitude around you, you might see your own response as a failure. Surrounded by continuists, the discontinuist will feel insufficiently productive. Surrounded by discontinuists, the continuist will feel insufficiently adapted to the new situation that will arise. However, as I see it, the conflict between these two stances is not about the facts of the crisis or the predictable future but about values. Let me explain.

As I see it, the question whether there is continuity after the crisis is not one that could be settled by looking at current or estimated future facts. It would be fallacious to think that there is a definite cut-off point that distinguishes continuity from discontinuity. In other words, whether a crisis like this allows for going “back to normal” or constitutes a pervasive disruption is not an empirical question. If the crisis has very dire consequences, you can still claim that we’re going back to a “very impoverished normal”. If the crisis is not too disruptive, you can still claim the world is altered, if mainly by the prospect of the crisis returning. So it is the other way round: first you claim that there is continuity or discontinuity, and then you cite empirical facts in support.

If this is correct, what is it then that makes the difference between continuists and discontinuists? As I said, it’s a question of values. If you largely accept the norms of the status quo before the crisis, you will evaluate the predicted situation as a deviation from these norms and find points of impoverished continuity. The discontinuist, however, will see the norms of the former status quo as undermined. In fact, this is what allows for seeing discontinuity. So the future scenarios discontinuists envisage are ones in which new norms are established. They will be what we often call a “new normal”, for better or worse. Such a new normal might include, for instance, the restrictions that we anticipated in view of anthropogenic climate change and the Paris Agreement. Seen in this light, the current measures taken against the corona crisis might appear to be in line with new norms yet to be consolidated.

What does this mean for the eerily positive attitude that some of us experience? Once you recognise that the belief in discontinuity is a matter of value, it’s plausible to assume that what empowers (some) people is the change of norms necessitated by the lockdown. So while it might be right that the positive attitude correlates with former states of anxiety or depression, it would be dangerous to treat this as a merely psychological question about individuals. We shouldn’t overlook the societal values that go hand in hand with such empowerment. Seen in light of these values, the disruption of the status quo is not merely destructive. It holds the possibility of establishing norms more in line with what many of us might desire in light of the challenges we face, for instance, with regard to climate change. This doesn’t mean that this possibility will be realised. But as long as we’re not hit by total disaster, there is hope.