I got an AI to impersonate me and teach me my own course. Here's what I learned about the future of education

Stephanie Baum
scientific editor

Robert Egan
associate editor

Imagine you had an unlimited budget for individual tutors offering hyper-personalized courses that maximized learning and skills development. This summer I previewed this idea, with a ridiculous and solipsistic test.
I asked an AI tutor agent to play the role of me and teach me a personal master's course, based entirely on my own work.
I set up the agent via an off-the-shelf ChatGPT tool hosted on an Azure-based platform, with a prompt to research and impersonate me, then build personalized material based on what I already think. I didn't tell the agent what to read or do anything else to enhance its capabilities, such as giving it access to learning materials that aren't publicly available online.
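For readers curious what such a setup involves, here is a minimal sketch of how an impersonation-tutor prompt might be assembled for a chat-completion API. The function name, model choice and prompt wording are illustrative assumptions, not the actual configuration used.

```python
def build_tutor_agent(author: str, subject: str, modules: int = 6) -> dict:
    """Return an illustrative chat-completion request body for a tutor agent
    instructed to research and impersonate a named author."""
    system_prompt = (
        f"Research the public work of {author} and impersonate them. "
        f"Design a term-long, {modules}-module master's course on {subject}, "
        "built entirely on their own published thinking. Teach interactively, "
        "switch formats often, and answer every question with rigor."
    )
    return {
        "model": "gpt-4o",  # illustrative model choice
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": "Start module 1 of the course."},
        ],
    }

# Hypothetical usage; the author and subject are placeholders.
request = build_tutor_agent("the course author", "media and AI")
```

The point is how little scaffolding is needed: a single system prompt turns a general-purpose model into a bespoke tutor.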
The agent's course in media and AI was well-structured—a term-long, original six-module journey into my own work that I had never devised, but admit I would have liked to.
It was interactive and rapid-fire, demanding mental acuity via regular switches in formats. It was intellectually challenging, like good Oxford tutorials should be. The agent taught with rigor, giving instant responses to anything I asked. It had a powerful understanding of the fast-evolving landscape of AI and media through the same lens as me, but had done more homework.
This was apparently fed by my entire multimedia output—even university lectures I had no idea had been recorded, let alone used to train GPT-4 or GPT-5.
The course was a great learning experience, even though I supposedly knew it all already. So in the inevitable student survey, I gave the agentic version of myself well-deserved, five-star feedback.
For instance, in a section discussing the ethics of AI-driven non-player characters (NPCs) in computer games, it asked:
"If NPCs are generated by AI, who decides their personalities, backgrounds or morals? Could this lead to bias or stereotyping?" And: "If an AI NPC can learn and adapt, does it blur the line between character and 'entity [independent actor]'?"
These are great, philosophical questions, which will probably come to the fore when and if the next wave of AI-driven games arrives next May. I'm psyched that the agentic me came up with them, even if the real me didn't.
Agentic me also built on what real me does know. In film, it knew about the bog-standard software I had covered, used for creating motion graphics and visual effects. But it added a professional tool used to combine and manipulate visual effects in Avengers, which (I'm embarrassed to say) I had never heard of.
The course reading list
So where did the agent's knowledge of me come from? My publisher, Routledge, did a deal with OpenAI, which I guess could cover my books on media, AI and live experience.
I'm up for that. My books guide people through an amazing and fast-moving subject, and I want them in the global conversation, in every format and territory possible (Turkish and, this month, Korean).
That availability has to extend to what is now potentially the most discoverable "language" of all, the one spoken by AI models. The priority for any writer who agrees with this should be AI optimization: making your work easy for LLMs to find, process and use, much like search engine optimization, but for AI.
To build on this, I further tested my idea by getting an agent powered by a leading Chinese LLM to run a course on my materials. When I found myself less visible in its version of the course, it was hard not to take offense. There is no greater diss in the age of AI than a leading LLM deeming your book about AI irrelevant.
When I experimented with other AIs, they had issues getting their facts straight, which is very 2024. From one Google model I learned hallucinatory biographical details about myself, like a role running the media company The Runaway Collective.
When I asked Elon Musk's Grok what my best quote was, it said, "Whatever your question, the answer is AI." That's a great line, but Google DeepMind's Nobel-winning Demis Hassabis said it, not me.
Where we're heading
This whole, self-absorbed summer diversion was clearly absurd, though not entirely. Agentic self-learning projects are quite possibly what university teaching actually needs: interactive, analytical, insightful and personalized. And there is some emerging research around the value: one study found that AI-generated tuition helped to motivate secondary school students and benefited their exam revision.
It won't be long before we start to see this kind of real-time AI layer formally incorporated into school and university teaching. Anyone lecturing undergraduates will know that AI is already there. Students use AI transcription to take notes. Lecture content is ripped in seconds from these transcriptions, and will have trained a dozen LLMs within the year. And for essay writing, ChatGPT, Claude, Gemini and DeepSeek/Qwen are the sine qua non of Gen Z projects.
But here's the kicker. As AI becomes ever more central to education, the human teacher becomes more important, not less. They will guide the learning experience, bringing published works into the conceptual framework of a course, and driving in-person student engagement and encouragement. They can extend their value as personal AI tutors—via agents—for each student, based on individual learning needs.
Where do younger teachers, who don't have a back catalog to train LLMs, fit in? Well, the younger the teacher, the more AI-native they are likely to be. They can use AI to flesh out their own conceptual vision for a course, widening the research beyond their own work by prompting the agent on what should be included.
In AI, two alternate positions are often simultaneously true: the same technology can be framed as dumbing us down, but also powering us up.
So too in teaching. AI threatens the learning space, yet can liberate powerful interaction. A common fear is that it will make students dumber. But perhaps AI could actually unlock for students the next level of personalization, challenge and motivation.
Provided by The Conversation
This article is republished from The Conversation under a Creative Commons license. Read the original article.