When reading texts full of general remarks and little attention to detail, I often wonder whether they were produced by ChatGPT or some other LLM. I don’t like this kind of suspicion, especially in the context of teaching and evaluating. Not least because it primarily targets the author rather than the text: Has the author used an LLM and thus tried to cheat? So rather than assessing the text, I am incentivised to make a moral judgment. This readjusts my attitude as a reader in a crucial way. Rather than trying to enjoy the flow of the text or get into the argument, I wonder about the honesty and sincerity of the writer. While there is currently much discussion about cheating with LLMs, the unease that this suspicion causes me brings quite another worry to the fore: my own classism. Am I really worried about being cheated, or that the poor souls relying on AI are not learning to think for themselves? Or am I mainly worried that the bullshitting texts produced by AI will soon be indistinguishable from the products of my own authentic intellectual labour? Let me explain.
Tacitly cultivating classism. – Being what is called a first-gen academic, one might say I’ve earned my cultural capital the hard way. I still remember how I mind-numbingly practised philosophical terminology at the age of thirteen, enjoying the cluelessness of my parents when I put it to use. Looking back, I think of myself as impertinent and cruel. Intellectualism doesn’t come across as thuggish as brute anti-intellectualism does. But I would say that, wouldn’t I? More to the point, my intellectualism paved a way that now seems threatened by the fact that text production can be outsourced just like other kinds of labour. Intellectual work of certain kinds is indistinguishable from work outsourced to LLMs. Being annoyed by people’s use of LLMs, I don’t feel consciously threatened. But I do wonder whether it’s this class aspect that creeps into my judgment of those users.
What kind of work do we actually grade as instructors? – My hunch, then, is that my suspicion of certain writers who might have used LLMs owes something to a certain classism or class anxiety. If people can outsource intellectual work, at least to a certain degree, I might end up tacitly suspecting that they don’t belong where they claim to be. Now you might respond that part of this suspicion is fair in that it targets fraud. Yet I’m not sure it is fair. Of course, when dealing with straightforward cheating, our responses might be justified. But most cases are not that straightforward, or so I suppose. Just consider the teaching context: We might say we’re distinguishing students who “have done the work” from those who haven’t. But making such a distinction relies on the assumption that the work was actually done in relation to one’s class. What if we’re merely rewarding those students who learned the intellectual skills to produce great texts long before they set foot in our classes? In other words, we might be rewarding not the intellectual skills developed under our teaching but skills picked up long before. So what are you grading in such cases? The things people learned in your class or the things they bring along? If you’re not actually assessing people’s progress in your course, then the question arises what’s so salient about the distinction between someone well educated long before and someone making up for an earlier disadvantage by using tools like LLMs to improve their work.
AI use between shaming and rewarding. – My point in appealing to such classism is not to silence justified criticism of the naïve integration of AI into teaching contexts (here is a pertinent open letter I co-signed). But classism is a real thing, and “AI shaming” seems to be a new way of exercising the related kind of gatekeeping. That people are starting to notice the rise of AI shaming doesn’t mean that pointing it out is just part of an arsenal of arguments in favour of Tech Bros (as this thread insinuates). The stigma of using AI for one’s work is as real as the problem of cheating and related vices. But that doesn’t mean AI usage is exhausted by cheating. The world we live in will increasingly reward using AI. As an instructor, I’m primarily faced with downsides when students use it to cheat, but as soon as we step outside our own professional domain, we might become quite dependent on the benefits of AI. Just step outside your comfort zone and hand over to an LLM the task of reformulating a text from the pertinent professional perspective! Having drafted a couple of legal documents, for instance, I have found ChatGPT a helpful tool. Of course, I still need to check particular points, but the legalese produced by this device is of real help. Yet relying on such help will be shamed by the first expert in legal matters who comes along. And then it’s me who is at the receiving end of AI shaming.
From texts to their producers. – If we take the class perspective seriously, AI presents us not only with challenges but with contrary assessments, ranging from worries about fraud, on the one hand, to worries about inappropriate gatekeeping, on the other. So how can we respond to this situation? My hunch is that we first need to acknowledge that this technology changes our reading culture. For a very long time, at least since the critical philological work of the 19th century, we have learned to see texts as something objective, in that they can be read independently of their producers or authors (or of the layers of production of texts). As Daniel Martin Feige noted, digitalization involves a striking return of the author (see part three of his Kritik der Digitalisierung). With the constant possibility of text production through LLMs, we will again focus even more on the author and on marks of authenticity, whether we like it or not. But this doesn’t mean that we need to resign ourselves to constant suspicion.
Authentic versus bullshitting texts. – Turning to the texts themselves, the crucial question will be whether they are authentic and genuine expressions of an author or bullshitting texts. In educational contexts, we have known since long before the advent of LLMs that our grading systems incentivise bullshitting, with or without such tools. So I’d repeat that we educators need to focus on actually reading rather than going for quick judgments. This would mean not merely assessing whether someone is cheating but reflecting on what we expect and on whether our expectations mainly pertain to class markers, as seems to be the case in many instances. The bottom line seems to be this: Our worry should not be about the use of AI or AI-prompted texts, but about bullshitting texts. This might still mean that our current reading culture (in which we treat texts as something objective) comes to an end. But so be it.
The use of LLMs in scientific research and academic writing does indeed confront the academic community, in teaching and research alike, with problems. But every problem also harbours an OPPORTUNITY, namely the chance to emerge strengthened from solving or dealing with it. This could mean, for example, reactivating long-proven, older methods of evaluating and appreciating academic contributions (contextualising them against the author’s earlier contributions, intensifying contact between teachers and students), methods that were not ‘aufgehoben’ in the sense of no longer valid, but rather conserved, and that remain applicable and effective at any time.
It is simply not enough to merely ‘read’ a piece of work in order to assess its seriousness. Rather, the individual author and their scholarly career should always be taken into account. No established scholar dares to offer the community written results that they have not ‘thought through themselves’. Here, social regulation and the risk of losing one’s reputation loom large.
For students and doctoral candidates who cannot yet look back on a publication history, the opportunity of LLMs lies, in my view, in the revival and intensification of supervisory contact already hinted at above, that is, in intensively accompanying young academics and in continuous exchange.
A conscious use of AI provides students, young academics, and established researchers alike with gains in synergy and efficiency, for example in data and literature research, in the choice of methods, in systematic preparatory work, and also in ‘penetrating’ and understanding a subject matter. The time saved can then be ‘converted’ into the actual creative, intellectual, and practical advancement of scholarship. And that is what matters!
That AI can also be misused for purposes of deception is obvious. But charlatans have existed in every age, and as a rule it is they themselves who suffer most from their own charlatanry, even if this never becomes apparent. It therefore makes more sense to see in AI the opportunities for the further development of the sciences. The glass is always a little more full than it is empty!