
Artificial intelligence is transforming industries worldwide, and higher education is no exception. At Stanford University, computer science professor Jure Leskovec recently made headlines for bringing back traditional written exams—a decision shaped not by nostalgia but by the realities of the AI era.
Leskovec, a leading researcher in machine learning and co-founder of the startup Kumo, has spent decades studying and teaching artificial intelligence. But the release of advanced tools like GPT-3 and GPT-4 forced him and his students to reconsider how knowledge should be tested.
A Student-Driven Shift
Interestingly, the change wasn’t Leskovec’s decision alone. His teaching assistants and students—future AI experts themselves—pushed for exams on paper. Previously, his courses relied on open-book, take-home tests where resources such as textbooks and the internet were allowed. But as AI tools became more powerful, questions of fairness and authenticity emerged.
To preserve academic integrity, students agreed that returning to in-person, written exams was the best way to measure individual understanding. For Leskovec, this meant more work—hand-grading hundreds of exams for classes so large they can feel like “rock concerts.” Still, he believes it’s worth it to ensure students are truly learning.
The Broader Debate in Education
Leskovec’s decision reflects a larger conversation across universities. Many institutions are rethinking assessments as AI complicates what it means to “do the work.” Some professors have banned AI tools, while others experiment with oral exams or AI-inclusive assignments. Leskovec’s approach is to treat AI like a calculator: a powerful tool that can be useful, but one that also requires students to demonstrate their own skills without it.
Human Skills vs. AI Skills
Beyond testing, Leskovec raises an important question: What counts as a human skill in the age of AI? He points out that while AI can assist with research and automation, human expertise and critical thinking remain essential. Employers are increasingly looking for candidates who can collaborate with AI while also bringing domain knowledge, judgment, and creativity.
This balance is also showing up in the labor market. Platforms like Upwork report a surge in demand for freelancers with “AI skills,” but equally high demand for roles that fact-check, interpret, and apply AI outputs. Companies are discovering that humans remain crucial to ensuring accuracy and trust.
The Future of Learning
For Leskovec, the key lies in reskilling and rethinking education. Universities, companies, and workers must adapt by focusing on domain expertise and teaching collaboration with AI. While the technology evolves quickly, his solution highlights a timeless principle: learning still requires effort, accountability, and human judgment.
In many ways, AI is pushing educators to rediscover the basics—and written exams are just one part of that ongoing transformation.