
Over the past academic year, the number of British university students caught using artificial intelligence to cheat has surged. According to data obtained through Freedom of Information requests, roughly 7,000 such cases were confirmed, or 5.1 incidents per 1,000 students, more than three times the previous year's rate of 1.6 per 1,000.
The statistics, compiled from 131 UK universities, show that traditional forms of academic dishonesty, such as copying and plagiarism, remain the most prevalent. Yet as AI tools have proliferated, cases of conventional plagiarism have fallen sharply and are projected to roughly halve again in the current academic year.
Notably, over a quarter of institutions admitted they do not maintain separate records for AI-related infractions. While other forms of misconduct remain steady, confirmed cases of AI-enabled cheating are on course to reach 7.5 per 1,000 students.
As educators grapple with how to manage AI’s presence in the classroom, tech companies are racing to embed themselves within the academic ecosystem. OpenAI, for example, offered students with “.edu” email addresses two months of complimentary access to ChatGPT. Microsoft has provided three months of free Copilot use and a 50% discount on subscriptions. Google has gone even further, granting students a full year’s access to Gemini 2.5 Pro and the Veo 2 video generator, along with 2 terabytes of cloud storage.
Meanwhile, Anthropic is actively promoting its Claude chatbot through a partnership with the London School of Economics. Perplexity last year extended free Pro-level access to students at 45 universities. Reclaim.ai is offering a 50% discount to students for a full year.
These gestures, however, are far from altruistic. The promotion of AI services among students is a calculated marketing strategy—one based on the premise that habits formed during university years are likely to persist into professional life.
The issue is by no means confined to the United Kingdom. According to the Pew Research Center, 26% of American teenagers aged 13 to 17 have used ChatGPT for schoolwork, double the figure from the previous year. Most respondents nonetheless said that using AI to write essays felt ethically wrong, or at least that is what they told researchers.
Some cases have even escalated to the courtroom. Last year, the parents of a Massachusetts high school student challenged their son’s expulsion from the National Honor Society over his use of AI. The court did not rule in their favor, but the student was ultimately reinstated.
Educators’ responses to these new technologies vary widely. Some regard AI as merely a “modern calculator.” Others impose strict penalties, even though the algorithms meant to detect machine-generated text remain unreliable.
For many, the answer has been a return to old-school methods: handwritten exams. At the University of California, Berkeley, sales of traditional blue books for in-class writing have risen 80% over the past two years. An increasing number of instructors now favor supervised paper-based assignments with no digital devices permitted.
China has taken an even stricter approach. During the nationwide gaokao entrance exam, which determines the fate of millions of students, companies such as ByteDance and DeepSeek temporarily disabled access to their AI services. Phones are already banned in exam halls, radio signals are jammed, and students are monitored by cameras powered by the very AI being restricted.
Estonia, by contrast, has chosen to embrace the tide. Starting in September 2025, some 20,000 high school students will gain access to ChatGPT and other AI tools during lessons as part of the AI Leap 2025 initiative, one of the most ambitious programs of its kind in the world.
Despite differing perspectives and pedagogical strategies, one trend is unmistakable: students are adopting AI tools rapidly. The debate is shifting from whether to ban or permit these tools to how best to teach responsible, critical engagement with them, a skill set poised to become vital in a future where recognizing AI's mistakes may matter as much as knowing how to use it.