Dear friends of Daily Philosophy,
Things are getting a bit stressful around here, with exams still under way, grading deadlines looming, my knee replacement surgery right after Christmas, and the holidays themselves. This is why I cannot promise a more regular delivery of these messages until my life quiets down again, hopefully in the first week of the new year. I’m still trying to bring you a weekly post on average, so you’ll get another one this coming weekend, but the timing may be a bit unpredictable.
In the past week, I published two videos on topics that you might find interesting. One was a reply to a Johnny Harris video about Atlantis. I admire Johnny Harris very much, but sometimes he’s a bit too dogmatic about science. I am (was) a biologist myself, before I became a computer programmer and then a philosopher of sorts, so I’m very much in favour of science and scientific methods, but I have always found silly the belief that science is in possession of unquestionable and eternal truths. Science, like every other human endeavour, is a messy affair shaped by errors, fabrications, corrections, stubbornness, publications, retractions, personal favours, social pressures, legal restrictions and a myriad other factors, and only a (variable) part of that is the Real Scientific Truth (capitalised) as we like to imagine it. And even that gets revised all the time as new discoveries change our paradigms. All this will come as no surprise to philosophers of science, but the general population is often indoctrinated either to reject scientific truth completely or to embrace it like God’s word, and both attitudes are wrong. This is what I tried to argue in this video:
The second video that I’d like to recommend is my discussion of Hermann Hesse’s Buddhist enlightenment tale Siddhartha from the 1920s. Not very popular today, Hesse was very much “in” in the 1960s, when counter-establishment movements discovered his romantic tales of individuals in solitary search of their true identities. The first part of the series is about the “Romantic Spirit” as we find it in Hesse’s work; the second and third are about the book Siddhartha itself. This short series will end next week with a fourth and final part that concludes the tale. The whole video series is based on a series of posts here in this newsletter:
And here’s the latest video:
So, let’s get on with today’s post! As opposed to the carefully researched, worded and edited articles of our many contributors, this one is just a bunch of half-baked thoughts that I had yesterday when I read the news while cooking. See it as an invitation to comment, and to tell me what I got wrong, and how we can make better sense of the problem than I do here.
Studying in times of AI
You will have noticed in the past that I’m interested in AI. I’m not only teaching the philosophy of AI and AI ethics at my university, I’m also using generative AI to create the images in this newsletter and on the Daily Philosophy webpages. I’m trying to teach my students the difference between good and bad AI use and I’m using ChatGPT and other such systems regularly to get ideas for my own lectures, to find examples and tutorial questions for particular points, and to get fresh ideas for exam questions that I could ask. I believe that today’s AI can be a great tool if used responsibly, and catastrophic if used badly.
We all know that students are already massively using AI to write their term papers. There is nothing surprising about that. I have found a few very effective ways to combat this, so if you are interested, drop a comment below and I will write a follow-up to this article showing you a few ways you can force students to actually do their work and learn something, despite AI.
But yesterday, a reader forwarded an email to me. A university asked their instructors to be aware of the danger that students might use AI glasses to cheat in exams. AI glasses, in case you have not been following the latest trends, have become very fashionable lately, and are increasingly hard to identify.
Here is one example:
And here is another:
At present, these glasses have three main functions: they can project the contents of your computer screen into your eyes, letting you interact with a computer or phone screen without one being present in the real world; or you can use the built-in camera to film what’s happening around you, sometimes without the outside world being aware of it; or, finally, you can access an AI chat like ChatGPT, Gemini or Meta AI, and have them elaborate on what the glasses are seeing at any moment.
None of these are easy to use for cheating in exams at this time. Yes, students could overlay their phone screen onto their visual field, but they’d still need to operate the phone itself in order to make the screen display the right page of the book or the lecture notes. While it’s certainly possible to have a student quietly and stealthily handle a phone they are not looking at, it requires quite a bit of criminal energy, determination and nerves, and I’m not sure that many would be up to trying it in an official examination setting.
The camera function could document the exam questions for posterity (perhaps in order to make them available to others after the exam), but is of no use in cheating in a particular exam.
Finally, the AI integration could be useful, especially if the AI could read the exam questions and dictate the answers to the wearer; but this requires, as far as I’ve seen in reviews, that the wearer asks questions and tells the AI what to do, which again is difficult to pull off in an examination setting. So we’re safe for another year or so, until the technology improves again.
But when I thought about how we, as educators, relate to these technologies, I realised that we are going to lose that battle. Not because the AI does anything wrong, but because we have, over a long time, created the conditions under which a student can successfully obtain a university degree without actually having any relevant skills.
Let me explain.
Expertise
Education has many facets and meanings. “Whole-person education” is something different from memorising the Periodic Table, but both have some use in bringing a human from the state of being, at birth, a tabula rasa, to the state of becoming, in one’s thirties, an expert in ancient history, metaphysics, or analytical chemistry.
What an expert possesses is expertise, obviously. But what is expertise? What does it consist of?
In the 1980s, philosopher Hubert Dreyfus, then a prominent AI critic, asked precisely this question. He did so as part of an examination of what was then the AI flavour of the day, the so-called “expert systems.” An expert system is a computer program that contains human knowledge in the form of a database of rules and facts, together with a “logical inference engine” that allows it to derive conclusions from these rules and facts. A typical rule in an expert system would look something like this:
IF the infection is bacterial
AND the patient has symptoms
THEN assign anti-bacterial treatment
(Simplified from the expert system MYCIN)
The idea was that if we managed to exhaustively describe reality (or even just a narrow domain of knowledge) with thousands of such rules, eventually we would be able to build machines, “expert systems,” that could replace human experts in any particular domain. The same method could, in principle, be applied to the game of chess, to the rules describing the malfunctioning of a spaceship, or to the identification of the authorship of a medieval manuscript. Or that was the idea, anyway.
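To make this concrete, here is a minimal sketch of how such a system works, written in Python. The facts and rules are invented for illustration (a real system like MYCIN also attached certainty factors to its rules, which this toy version omits); the “inference engine” is just a loop that keeps applying rules until no new facts can be derived:

# A toy forward-chaining expert system: facts are plain strings,
# and each rule derives its conclusion once all its premises hold.
# The rules below are invented for illustration, not taken from MYCIN.

RULES = [
    ({"culture is positive", "organism is gram-negative"},
     "infection is bacterial"),
    ({"infection is bacterial", "patient has symptoms"},
     "assign anti-bacterial treatment"),
]

def infer(facts):
    """Apply the rules repeatedly until no new conclusions appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(infer({"culture is positive", "organism is gram-negative",
             "patient has symptoms"}))
# Prints a set that includes "assign anti-bacterial treatment".

The point to notice is that everything this program “knows” lives in its explicit list of rules; and it is exactly this picture of expertise that Dreyfus went on to question.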
Dreyfus disputed the basic assumption behind expert systems: that the human expert is just such a repository of facts and rules. He asked us to examine our own experiences of expertise. We are all experts in something. Let’s say, some of us play the piano. Others play chess. Most of us are experts in handwriting. Some are experts in typing on computer keyboards. Many can drive a car. Many can cook. And so on.
Now pick an area in which you are an expert. Any of the above, or anything of a similar nature, will do. Let’s say, driving a car. Now, how does one obtain expertise in this domain?
Nobody is born with the ability to drive cars.
Nobody is born with the ability to drive cars. We all have to learn the skill by taking driving lessons from a teacher, by studying books and regulations, and by actually driving around for a number of hours. In the beginning, we learn, like an expert system would, by memorising abstract facts and rules: to go forward, press this pedal. To brake, press the other one. To turn, turn the steering wheel in the desired direction. And so on.
As we progress, the list of rules and facts grows longer. We need to remember all the traffic signs, the rules of who goes first at a crossroads, the right speed at which to change into third gear, the distance to keep from the car in front depending on one’s own speed, and so on.
Dreyfus now observes that this is exactly how an expert system would learn to drive. But is this how human expertise really works? Is an experienced driver just a novice driver with more rules in his head?
It’s easy to see that this is not the case. Experts often don’t know the rules that they once learned, years ago, but they are still experts; indeed, more so than when they were following the rules. An expert driver does not look at the car’s speed in order to know when to switch gears. An expert chess player does not count points by adding up the probabilities of future moves of the two sides. An expert painter does not use complementary colour wheels. An expert piano player does not have a head full of the exact meaning of every symbol in music score notation. And if you are an expert keyboard typist, go ahead, right now, close your eyes, and tell me all the keys in the middle row of your computer keyboard, in the right order. See? You likely cannot do it.
I am right now typing these words at the speed of my thoughts, as fast as I could speak them, using ten fingers that fly over the keyboard, hitting every key at just the right moment to form exactly the word that I need. But if you ask me where the keys actually are on the keyboard, I have no idea. If I were following formal rules, like a computer program would, my expertise would consist, at least in part, in knowing exactly where the keys are, so that I could hit them by following these memorised rules and facts about the keyboard. But that’s not what I’m doing. My body knows how to type, but my brain does not contain any explicit rules about the placement of the keys that I could recall.
For an expert driver, the car becomes an extension of their body.
And this, Dreyfus says, is the point of all human expertise, and what makes us different from computers. We just know how to do expert stuff, but we don’t actually follow “expert rules.” Rules are for beginners. Before a person has any competence in an area, they need to learn the rules, so that they can get started with playing chess, driving, or typing. But after a while, their intuition takes over, their body memory, a skill that goes beyond rules and abstract knowledge. In fact, the rules are soon forgotten. For an expert driver, the car becomes an extension of their body. One does not need a rule system to know how to move one’s feet. In the same way, expert drivers and piano players don’t recite rules as they drive or play. Only beginners use rules. Experts exercise a kind of quasi-magical skill, partly body memory, partly very complex but instantaneous pattern recognition, that allows them to quickly reach for the right action in any particular situation: arguably something similar to what Aristotle might have had in mind with phronesis, the ability to employ one’s virtues in the right way in every particular situation. When you have to think about how to act, both Aristotle and Dreyfus would agree, you’re still a beginner. The phronimos, the practically wise agent, has developed a skill that allows them to act instantaneously in just the right way, whatever happens.
University skills?
Now we can go back to the initial question of this article: what’s happening to university education, and how can we deal with AI? Do we have to outlaw ChatGPT for homework? Smart glasses in exams? And how are we going to do that when AI becomes increasingly hard to detect and identify? Are we not tilting at windmills this way? And if so, what’s the alternative? Is there even one?
Look at education in the most general way. Some kinds of education equip the learner with a particular skill. For example, the skill to play the piano. The skill to type. The skill of handwriting. The skill of walking. But also, in more advanced cases, surgery skills. Piloting skills. Football-playing skills. Calculus problem-solving skills.
Other kinds of education transmit knowledge. Learning ancient Greek philosophers’ names and theories. Memorising the elements in the Periodic Table. Learning to recite the Confucian classics.
Often, these two areas overlap. Successful surgery needs both an abstract, memorised knowledge of where particular nerves and blood vessels are located and the skill to cut the skin, open up the body, and successfully manipulate the bits inside to achieve the intended therapeutic outcome. Piloting involves the abstract knowledge of particular communication frequencies, airport layouts, aeroplane characteristics and flying procedures; but it also needs the skill to actually handle the plane, to make it take a turn in the correct way, to aim for a runway and actually land on it, and so on. Chess requires the skill to actually play the game, but also the abstract knowledge of the best opening moves. Piano playing needs the skill to actually create music out of the interaction of one’s fingers with the piano’s mechanics, but also the knowledge of how to read sheet music. And so on.
Looking at these examples, you’ll notice something. The “skills” part is usually very time- and labour-intensive: it requires training on the job, using the real environment, tools and objects that the learner will later need to interact with, and the learning process needs to be directed and supervised by an expert teacher who themselves has all the required skills.
On the other hand, the “knowledge” part is cheap and easy to obtain: one just needs a rule-book and a few quiet hours in which to memorise it.
And this is the problem.
The economics of skill acquisition
Over the course of time, universities and other educational institutions have increasingly become conscious of the cost of education, a cost that they have to bear, but that brings them no immediate benefit. As opposed to training a worker on the job, inside the same factory in which this worker will later work and produce value, a university student will leave after graduating, never to be seen again. Whatever effort and cost a university invests in a student will never bring any return for that university itself. Yes, universities get compensated by state funds, but the incentive is there: the less money we spend on our students’ education, the more is left over for our salaries and other perks, the less we have to work, the easier our lives are, the richer our universities and departments become. Since the incoming funds are somewhat constant per student, reducing the expense of educating these students leaves more wealth in the hands of the educational institution.
From here it’s an easy conclusion: if skills are expensive, difficult and time-consuming for a student to acquire, and knowledge is cheap and easy; and if it does not matter to the university whether the student in the end is a true expert in his field or not; then why would it invest money and effort in making students into experts, rather than making them into the equivalent of expert systems, rule-following containers of abstract knowledge that can plausibly claim to have access to a defined amount of rules and facts about a domain?
AI cheating
And now let’s go back to AI in education. Why is it even possible for students to cheat in their exams using AI? It is because the only thing that universities give to their students, and the only thing they are testing for, is abstract knowledge, and this is exactly what AI, hidden notes, computer files and textbooks contain. The same is equally true of most other educational institutions, perhaps with the exception of kindergartens, which actually still seem to impart some measure of skills. And this is what makes cheating with AI so easy.
Think now of learning situations that focus on skills rather than knowledge. A test flight of a student pilot; a practical examination in the form of a surgical operation by a student surgeon; an evaluation of a football or chess player’s actual playing skill in a real match. In all these cases, access to ChatGPT or to AI glasses would not pose a problem. Indeed, we might encourage students in these fields to use AI to cover the knowledge part of their demonstration, so that they can focus on demonstrating their skills. A person without the relevant skills could never pass a test in these areas, even if they had access to AI. No smart glasses would enable me, for example, to fly a passenger jet, to replace a patient’s knee, to play well in a football game, or even to repair my leaking bathroom sink. These outcomes cannot be achieved by storing knowledge. They require the acquisition of complex and costly skills, of a true expertise that is different from the lookup of rules and facts.
If university studies and examinations were focused on skills rather than memorisation, we would not have to fear AI. If our students were judged on their ability to actually do something, to perform a skill, to achieve a result, then the use of AI would be welcome as a factor that supports them in that task, just like a checklist helps a pilot to fly or land a plane, or sheet music helps a piano player play a concert. Using the sheet music is not cheating. A sheet of music alone does not make me a piano player. Access to the music is not what distinguishes a good from a bad piano player. Skill is, and skill is located in the person, not in the external supports.
At the end of it all lies the depressing insight that we, the university teachers and administrators, have actually destroyed university learning. In the name of efficiency and cost-reduction, we and our academic ancestors have, over the decades (or perhaps centuries?), created a situation in which we neither impart nor judge our students’ skills. Instead of judging what a student can achieve, we have settled on the much cheaper and easier solution of evaluating what rules and facts are stored in the student’s brain, something that can now easily be faked by AI.
Even if our students weren’t cheating, even if AI did not exist, they still would not be learning anything of real value.
And this leads us to the other realisation, that even if our students weren’t cheating, even if AI did not exist, they still would not be learning anything of real value. It has always been the case that facts can be looked up, with more or less effort. AI is just an interactive textbook, a way of quickly looking up a bit of knowledge; but with a little more time, one could use old Google, or even a library of paper books, to achieve the same result: to look up a piece of factual information. Judging students by the amount of factual information stored inside them is not education. It is a declaration of surrender of all that education should be.
We have destroyed education, and AI is just making it obvious. The emperor has never had any clothes — but now AI is pointing this out for all to see.
The solution is not to outlaw AI, nor to prevent students from using it. In an ideal world, we would want our students to have access to as many tools as possible that support them in their work. You don’t train a plumber by taking away his toolbox and making him fix the sink using only a paper-clip. You should not evaluate a student by how well he or she does with no access to books or notes. Because what really is the point of that? These same students who now have to go through an examination without their books and without AI will spend all their future professional lives having access to exactly these tools. I have taught ancient Greek philosophy for decades now, but I still don’t remember the exact year when Plato was born. Why would I? It’s ten seconds away, on my phone, with a simple Google search. There is no point spending mental power on that. Explaining how Plato’s philosophy of love is motivated by his metaphysics of the Forms, or, even better, using Plato’s philosophy of love to explain what’s wrong with today’s dating apps: that is what my skill as a philosophy teacher is about.
Just as we train doctors or pilots, we must also return to training philosophers, historians, translators and biologists to be not only repositories of dead facts, but true experts: people who possess an actual skill, an ability to bring about a result in the world, honed by long practice under an expert’s instruction.
Then we will not need to fear AI.