Across UK campuses, a quiet arms race is underway. As Generative AI becomes omnipresent, many academics are retrenching, frantically designing “AI-proof” assignments — either returning to handwritten exams or using obscure, hyper-local case studies. While the desire to preserve academic integrity is noble, the strategy is flawed. We cannot prepare students for an AI-saturated future by forcing them to work in a pre-digital past.

Instead of building walls, we need to build scaffolds.

We should look to the emerging philosophy of “AI-native” education, exemplified by Andrej Karpathy’s newly announced Eureka Labs. Karpathy, who earned his PhD at Stanford and previously worked at OpenAI and Tesla, envisions a curriculum where AI acts not as an oracle to be copied but as a symbiotic guide. In this model, the student remains the “CEO” of their learning journey, directing the AI rather than outsourcing their thinking to it.

So, what does this look like in practice? It requires shifting assessment from the final product to the intellectual process.

AI as a Cognitive Sparring Partner

We can scaffold this by treating AI as a cognitive sparring partner. Rather than asking a student to write a generic essay on leadership or strategy, we might give them a short case: a UK mid-market retailer considering dynamic pricing and AI-driven demand forecasting during a period of volatile input costs and reputational sensitivity.

The student prompts an LLM to produce two conflicting board briefs: one arguing for rapid adoption to protect margin and reduce waste, the other warning against fairness concerns, regulatory risk, and brand damage (a minimal sketch of this generation step follows the list below). The student’s task is then to:

  • Interrogate the AI’s reasoning
  • Surface weak assumptions and invented “facts”
  • Test the claims against credible evidence
  • Synthesise a third, decision-ready recommendation more rigorous than either machine-generated position
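To make the workflow concrete, here is a minimal sketch of the two-brief generation step in Python. It assumes the openai client library (v1+) and an API key in the environment; the model name, case wording, and stance instructions are all illustrative, and any comparable LLM interface would serve.

```python
# A minimal sketch of the two-brief generation step, assuming the
# openai Python client (v1+) and an OPENAI_API_KEY in the environment.
# The model name, case wording, and stance instructions are illustrative.
from openai import OpenAI

client = OpenAI()

CASE = (
    "A UK mid-market retailer is considering dynamic pricing and "
    "AI-driven demand forecasting during a period of volatile input "
    "costs and reputational sensitivity."
)

STANCES = {
    "for": "Argue for rapid adoption to protect margin and reduce waste.",
    "against": "Warn against adoption on fairness, regulatory, and brand-damage grounds.",
}

def generate_brief(stance: str) -> str:
    """Ask the model for a deliberately one-sided board brief on the case."""
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable chat model would do
        messages=[
            {"role": "system", "content": "You write concise, one-sided board briefs."},
            {"role": "user", "content": f"Case: {CASE}\n\n{stance}"},
        ],
    )
    return response.choices[0].message.content

briefs = {name: generate_brief(instruction) for name, instruction in STANCES.items()}
# The assessable work starts here: the student interrogates both briefs
# rather than copying either.
```

The machinery is deliberately trivial; everything we would actually mark happens after these briefs exist.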

The Metacognition Log

To keep student agency central, we need a way to verify the student’s own contribution. Here, we can look to research from Wharton’s Ethan Mollick and Stanford’s Dorottya Demszky regarding “transparency protocols.” We should require students to submit a Metacognition Log alongside their work: a tracked history of their prompts accompanied by reflective commentary. Why did you reject the AI’s suggested structure here? How did you verify this specific claim?
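What might such a log look like? Below is a minimal sketch of one possible entry structure in Python. The field names and example contents are illustrative assumptions rather than a format drawn from Mollick’s or Demszky’s work.

```python
# A minimal sketch of one possible log-entry structure. Field names and
# the example contents are illustrative assumptions, not a published format.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LogEntry:
    prompt: str            # the exact prompt sent to the AI
    response_summary: str  # what the AI returned, in brief
    decision: str          # "accepted", "rejected", or "modified"
    rationale: str         # the reflective commentary: why this decision?
    verification: str      # how any factual claim was checked
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

entry = LogEntry(
    prompt="Suggest an essay structure on the ethics of dynamic pricing.",
    response_summary="A five-part structure leading with regulatory risk.",
    decision="rejected",
    rationale="The structure buried the fairness argument; I led with it instead.",
    verification="Cross-checked the suggested CMA guidance on the regulator's own site.",
)
```

A spreadsheet or an annotated chat export would serve just as well; the structure matters less than the habit of recording decisions and the reasons behind them.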

This approach makes cheating harder than doing the work. It verifies that the student is doing the heavy cognitive lifting — evaluating, synthesising, and auditing — even if AI aids the drafting.

Stop Asking “Did a Human Write This?”

UK universities have a choice. We can play “whack-a-mole” with detection tools, or we can design assessments where transparent, critical human-AI collaboration is the baseline of competence.

We must stop asking “Did a human write this?” and start asking “Did a human lead this?”


Image: AI Classroom at Universal AI University — Wikimedia Commons, CC BY-SA 4.0