Suppose you witness an accident and have to report it to the police. “Well, the red car drove at high speed towards me and then took a sharp left turn”, you exclaim. You try hard to find the right words to capture the precise sequence of events and the people you’ve noticed. Being a bit fussy by nature, you keep correcting yourself. “To begin with, it seemed as if the driver wanted to run me over” is the formulation you eventually settle on. – Now imagine you try to refine your impressions using ChatGPT. Obviously, there is always room for improving style and grammar. But can you expect ChatGPT (or any LLM or AI) to improve on the accuracy of your factual statements? No. But what does it mean if it is used to that end anyway?
Given the way ChatGPT works, it has no experience of the world in any sense. Rather, it generates sentences by predicting the most likely sequence of words based on the input it receives and the vast amount of text it was trained on. Thus, it cannot improve on the content of any statement relying on experience. While this is no surprise, the repercussions it has for teaching contexts deserve careful attention, because such contexts thrive on the correction of statements. This matters especially now that people not only use this device to hide their plagiarism, but also to “decide” all sorts of questions. I’ve been wondering repeatedly what precisely it is that goes wrong in teaching contexts with the use of AI, and I now begin to think that it comes down to a loss of corrigibility, a loss of understanding what corrigibility even means. Put in a nutshell: because this device improves the form of (written) texts to an amazing degree, it makes us blind to the fact that it impoverishes our relation to empirical content. In what follows, I’d like to explore this loss with regard to teaching philosophy.
What is corrigibility? Corrigibility means that a statement or text can be corrected. If I state that it’s raining, you can correct me by pointing out that it’s in fact not raining. We offer and receive corrections all the time. We improve the way we phrase something we’ve seen by finding more adequate phrases. We can differentiate between grammatical and stylistic corrections as corrections of form as opposed to content, but the two are often difficult to keep apart. The phrase “it’s raining” is formally correct when used among English speakers, but what makes it correct for these users is how it’s applied to a shared experience of the world (in which it happens to rain). If I ask you to refine your phrasing, suggesting for instance that it’s really pouring and not just raining, I can mean at once that you should pay attention to your experience and to the conventional way of expressing such an experience in the English language. When you think about your experience and modes of expression, you’ll likely draw on linguistic sources (your language conventions, literature, the sociolect your audience is supposedly expecting etc.) as well as non-linguistic sources (whatever you can gather from other sense modalities). Most importantly, you’ll involve relations of applying linguistic resources to non-linguistic experiences. In other words, we relate linguistic conventions to (non-linguistic) facts. ChatGPT, by contrast, doesn’t do that. Having no relation to the world, it is confined to linguistic resources; it has no other sense modalities and it has no way of relating linguistic to non-linguistic facts. In other words, while it can improve on formulations, it cannot be corrected. Put in Wittgensteinian terms, whatever seems correct to ChatGPT is correct – and that means that there is no sense in distinguishing between correct and incorrect. (There is an intriguing piece about learning this the hard way.) Thus, we shouldn’t even say that it’s “hallucinating” when it’s “making things up”. For this device, there is no meaningful distinction between hallucinating and getting things right in the first place.
Now I doubt that I’m spreading any news here. So why is this worth saying? Because both the language of ChatGPT and that of the merchants of this technology constantly suggest that this device is learning, being corrected and improved. Yes, it’s being improved at what it does already, but it’s not improved in any other sense. This lingo tricks many of us into thinking that the improvement is of the kind we are familiar with. Just as AI is now increasingly taken to be a meaningful, sexy or caring interlocutor, it tricks many of us into assuming that it could “learn” by being “corrected”. But learning, for humans, always involves a relation to the world. The great confusion about ChatGPT, then, is the assumption that it could be improved in the way we would try to improve our own ways of expressing ourselves.
How does this affect teaching (philosophy)? There are many pieces about the decline of the humanities in the face of ChatGPT and related devices. Given how this technology diffuses our sense of authorship and our reading culture, I’m inclined to think that our whole way of cherishing text production and reading will go out of fashion and become a nerdy niche. Just like long electric guitar or keyboard solos, seemingly ubiquitous in the 70s and 80s, are now a thing for a few nerds on YouTube. So as I see it, the problem is not that students are faking texts; the problem is that most texts are considered irrelevant, along with the skills and features that go into their production. Being able to write good texts is already irrelevant in a world where so-called leaders get by without even glancing at their briefings. But let’s stick to the current story. My hunch is that the loss of corrigibility ingrained in ChatGPT is the outcome of a larger trend that was clearly recognised in Harry Frankfurt’s On Bullshit as early as 1986: once you realise that you can convince without sticking to techniques of truth-evaluation, you can disregard truth altogether. After all, the question is not in what way ChatGPT is incorrigible. We can figure that out quickly. The question is why we are letting ourselves be corrected by a device that is incorrigible.
But that’s a question for nerds. Mastering long written texts, let alone writing them, then, doesn’t seem to hold much promise for anything now. This is not just because students have incentives to fake their work; it’s because there are hardly any incentives to produce such work in the first place. Why do you need to learn to play the piano if you have keyboards with automatic accompaniment? Of course, you might get sick of their sounds quickly. But who cares if that’s all that’s on offer?

So again: the problem is not cheating; it’s irrelevance. Writing this, I feel like a fossil decrying the loss of its natural habitat. And that’s probably what this is: an old man whining that no one recognises the beauties hidden in the art he cherishes. So what? So what indeed?
So what’s left for teachers? If you don’t worry too much about plagiarised texts, you might direct your energy towards getting people to think, not by analysing texts, but by coming up with good prompts for ChatGPT or by enhancing your techniques of video editing. In other words, while certain products (such as well-written essays) will simply be done by ChatGPT in the future, you can support students in improving “their” work by focussing on helping them use this device, and the AI devices to come, as a good tool. The remaining question, though, is what this tool is good for once we admit that writing texts is irrelevant.
