
ACADEMIA IN THE AGE OF AI

In the final months of his economics degree at the University of Bristol, Oscar, who asked for his name to be changed to protect his identity, had stopped attending most lectures. Not out of laziness or disinterest, but because he no longer needed to. With a stable internet connection and a decent laptop, he could produce essay drafts, summarise academic papers or generate mock exam questions in seconds, all with the help of Artificial Intelligence (AI).

“By the end, I barely touched any of the readings,” Oscar tells me. “If I didn’t understand something, I’d just ask the AI to explain it like I was five or write me a draft answer to work from. I was still thinking and doing the work, just maybe not in the way my lecturers expected.”

It seems now that barely a week goes by without a headline warning that AI is reshaping higher education. At the Education World Forum in May, Education Secretary Bridget Phillipson called it “the most important challenge for global education in a generation”, and a survey by the Higher Education Policy Institute (HEPI) recently revealed that 92 per cent of undergraduates report using generative AI, up from 66 per cent in 2024.

Figures like these reveal that what began as a niche workaround is now a central part of how many students approach their education. AI tools are no longer confined to the tech-savvy corners of campus: many students use them to break down dense academic readings or rephrase complex texts into more accessible language. Some even copy full assignment briefs into large language models like ChatGPT and submit the generated output with only minor edits.

Universities across the UK are also embedding AI into their own systems, using it to track attendance patterns or flag disengaged students. Others are experimenting with machine learning to predict academic outcomes or personalise course content. 

“AI can be a real force for good in higher education, especially when it helps students engage more deeply with complex ideas,” says Steve Watson, Professor of Transdisciplinary Studies at the University of Cambridge. “As a meaning mediation tool, AI can support inclusion not just by simplifying content, but by offering new ways to explore, express and work with knowledge. This can be especially empowering for students from diverse backgrounds who may not always feel at ease in traditional academic settings.”

“When AI becomes a tool for mediating meaning, it doesn’t just support education, it changes it.” Professor Steve Watson, University of Cambridge

But Watson is clear that these gains come with caveats. “When AI becomes a tool for mediating meaning, it doesn’t just support education, it changes it,” he says. “The way students learn, the role of teachers, what counts as knowledge. These are all shaped by how AI is integrated. And when students and staff don’t understand how a tool works or why it’s being used, trust breaks down.”

Wendy Morrey, who works at AI for Education, a company that specialises in training faculty in the effective use of AI tools, echoes this sentiment. She believes higher education is at a critical inflection point, pointing to the growing divide between how students and staff engage with AI. 

According to Morrey, bridging that divide starts with changing perceptions about the role of AI in academic life. “Part of this mind shift is to recognise that AI isn’t replacing teachers; it is amplifying what we do best,” she says. “While we maintain our essential role in connecting with students and applying our pedagogical expertise, AI enables us to differentiate instruction, support diverse learning needs and provide individualised feedback at a scale that was previously impossible.” 

“AI isn’t replacing teachers; it’s amplifying what we do best” Wendy Morrey, AI for Education

Some institutions are already beginning to respond to this challenge by developing secure, in-house AI systems that reflect both the practical needs of students and the ethical concerns raised by staff. At the University of Edinburgh, students and staff can access advanced generative AI tools through a central platform called ELM (Edinburgh (access to) Language Models), which allows users to interact with the technology in a more secure and controlled environment.

“Our university uses ELM to provide in-house access to advanced AI models under a zero data retention agreement with OpenAI,” says Dr. Pavlos Andreadis, an informatics lecturer at the University of Edinburgh. “It’s a far safer option compared to the direct use of commercial platforms.” The university plans to move toward locally hosted models, he adds, so that user data never leaves institutional servers. “Just as personal journals extend one’s thinking, AI chatbots should ideally become personal artefacts, individually owned and controlled.”

Across the higher education sector, other examples of innovation are emerging. Professor Rose Luckin, a leading expert in learner-centred design at University College London and author of Machine Learning and Human Intelligence: The Future of Education for the 21st Century, points to a host of global institutions experimenting with AI in ways that balance impact and ethics. “On the positive side, we are witnessing remarkable advances in personalised learning at scale,” she says. “Universities like IU International have shown that AI tutoring systems can accelerate learning by 27 per cent, while institutions like the University of Sydney are using AI platforms like Cogniti to give educators unprecedented control over how students interact with course content.”

She also points to Southern New Hampshire University’s chatbot “Penny,” credited with boosting retention rates, especially among historically underrepresented students, and notes that institutions like Torrens University Australia and the University of Manchester have reported significant administrative time savings. “This can free up educators to focus on what humans do best: critical thinking, creativity and meaningful student engagement.”

But Luckin is another who suggests that these benefits may come at a cost. “We still don’t fully understand exactly how AI will behave. And the data privacy concerns are absolutely central to getting this right,” she says. “When [AI] systems analyse every click, every question, and every learning pattern, we are creating digital profiles that could follow students throughout their lives.”

Her warning is clear. When AI tools are poorly understood or deployed without transparency, they risk undermining trust and amplifying inequalities. “The key is ensuring that AI enhances human agency in learning rather than replacing it and that the incredible potential for personalisation doesn’t come at the cost of student privacy and autonomy.”

“The key is ensuring that AI enhances human agency in learning rather than replacing it.” Professor Rose Luckin, University College London

The result is an increasingly complex learning environment. Some universities are treating AI as an experimental opportunity while others view it as an existential threat to academic standards. Regardless of university rules, if students assume their peers are using AI to gain an advantage, they will feel pressure to do the same. Similarly, if staff believe student work may be machine-generated, they will likely approach marking with more suspicion and less care.

As more students adopt AI and institutions race to catch up, the shared assumptions that underpin higher education are being rethought. What remains unclear is whether universities will double down on enforcement or find new ways to incorporate AI into their teaching and assessments.

Oscar began using AI regularly during the second year of his economics degree at the University of Bristol. Initially, he says, AI was just a way to make readings more manageable. Academic papers were dense and he often found himself lost in the endless stream of jargon. When a friend introduced him to ChatGPT, it felt like a breakthrough. It could summarise complex texts in seconds and explain unfamiliar terms in plain English. By his third year, AI had become a routine part of his academic life. He would paste essay questions and lecture notes into AI tools, ask for draft responses and then rework the tone to sound more like his own voice.

Oscar recalls one particular assignment during his final year, a comparative essay on macroeconomic policy in the Eurozone and BRICS. “I remember feeding my notes and the marking rubric into the AI. It generated a structured outline in less than a minute,” he says. “From there, it was just editing and layering in references.” Though he admits the final submission bore little resemblance to something he might have written unaided, he received a high 2:1. “I always checked the final version,” he says. “But a lot of the structure and argument came from the AI. It helped me get things done faster and honestly, better.”

Oscar says he still had to understand the material. He had to write good prompts and correct mistakes, but much of the cognitive heavy lifting was taken care of. He graduated in 2023 and now works part-time at Citizens Advice while studying for a master’s in law, utilising AI for both purposes. In his job, it helps him write reports and summarise client cases and, in his studies, it assists with writing and background research. “It’s part of how I work now,” he says. “I’d be wasting time if I didn’t use it.”

Oscar believes universities are still too slow to recognise the potential of AI as a practical skill. “We should be getting actual training on how to use these tools properly. Not just being told not to cheat with them,” he says. “It’s like using a calculator. It doesn’t mean you don’t know how to do maths.” He’s also quick to point out the growing relevance of AI beyond the classroom. “It’s already vital in most postgrad jobs. Whether it’s law, marketing, consulting or whatever. If you can’t use AI tools well, you’re already behind.”

To him, the real problem isn’t that students are using AI, it’s that many institutions aren’t keeping pace. “Universities pretending it’s not happening, or banning it, just seems naive. All that does is send graduates into the real world less prepared.” 

Oscar says he doesn’t see himself as dishonest; he sees himself as efficient. While he admits some guilt about how much he relied on AI at university, he also believes the system gave him little reason not to. 

Despite his undeniably bullish perspective on AI use, when approached for interview, Oscar asked not to be identified, explaining that a stigma still lingers around students who rely on these tools. He says that, until AI is viewed as a natural part of both education and professional life, he feels uncomfortable being publicly associated with it.

The rise of AI in higher education has, however, further exposed longstanding inequalities. While these tools offer powerful support, not all students have equal access to them or the knowledge to use them effectively. Paid versions of tools like ChatGPT or other AI-powered research assistants offer more accurate results and access to advanced features. Many students cannot afford subscription costs, especially those already struggling with tuition and living expenses. Oscar, who recently began paying £20 a month for the premium version of ChatGPT, says the difference is significant, allowing him to complete more complex tasks more quickly and to a far higher quality. “It’s not something I would’ve been able to afford as a student,” he says. “Now that I’m working, though, it’s worth it for the amount of time it saves.”

Gaps in both digital and AI literacy also remain amongst many students. Those with tech-savvy backgrounds tend to adapt to AI tools more quickly, while others may use them clumsily or in ways that violate academic rules without fully realising it. A student who uses AI to paraphrase a reading may consider it harmless, while a professor might see it as crossing a line. These blurred boundaries contribute to a growing sense of ethical uncertainty.

International students, in particular, face heightened risks. For many, English is a second or third language. AI tools can serve as critical support, helping them write clearly, interpret assignment briefs, or avoid unintentional plagiarism. However, when these same tools are used without explicit approval, they can lead to formal misconduct hearings. 

These issues are exacerbated by many universities’ failure to speak with one voice. Some departments incorporate AI into their teaching and openly encourage its use for brainstorming and revision while others maintain strict bans. Students working across faculties may therefore find themselves subject to contradictory expectations: free to use AI in one class and penalised for it in another. In a recent Guardian column, Josh Freeman, a Policy Manager at the Higher Education Policy Institute, described university policy responses as “incoherent and underdeveloped”, warning that overreliance on detection tools risks undermining trust without addressing the bigger questions.  

“We’ve gone from hard denial, ‘we should ban AI,’ to providing AI tools to all students in the space of two years.” Dr. Andrew Rogoyski, Surrey Institute for People-Centred Artificial Intelligence

Dr. Andrew Rogoyski, of the Surrey Institute for People-Centred Artificial Intelligence, says these mixed messages are a symptom of universities trying to catch up with tools already in widespread use. “The challenge of developing an AI response at a university is that adoption is being driven by our students, at pace,” he says. “Universities generally struggle to adapt and adopt in such short timescales. We’ve gone from hard denial, ‘we should ban AI,’ to providing AI tools to all students in the space of two years. It’s difficult to say what impact that is going to have on the quality of our students’ education.”

While students like Oscar navigate AI-enhanced learning, many lecturers are still catching up. For some, the shift has prompted excitement about new pedagogical opportunities, whereas for others it has contributed to a sense of losing control over once-familiar academic terrain.

“I am slowly beginning to realise that academics will need active training to understand how to manage the reliance on AI,” says Dr. Caroline Blinder, an English Literature professor at Goldsmiths, University of London. “There’s no point trying to go ‘old school’ with closed-book exams. But we do need to rethink the purpose of examinations in light of widespread AI use.”

“There’s no point trying to go ‘old school’ with closed-book exams. But we do need to rethink the purpose of examinations in light of widespread AI use.” Dr. Caroline Blinder, Goldsmiths, University of London

Blinder is not alone in feeling that AI is forcing a fundamental reconsideration of what higher education is for. Some staff now see traditional assessments as inadequate or completely misaligned with how students are actually working. Rethinking assessments, however, requires time and resources, both of which are in short supply.

Dr. Aysem Diker Vanberg, a law professor also at Goldsmiths, University of London, who specialises in generative AI, says her faculty is exploring new assessment formats to respond to these shifts. “We’re using oral presentations and reflective portfolios to uphold academic integrity while also helping students engage with complex issues,” she says. “It’s not just about detection or restriction, but about redesign.”

For many educators, however, the challenge is in finding a balance between embracing technological advancement and preserving core academic values. “I’m concerned that AI could diminish students’ critical thinking skills if it’s used as a replacement for their own intellectual engagement,” says Katharine Adeney, a Professor of Comparative Politics at the University of Nottingham. “However, when integrated into an iterative process alongside independent research, it holds significant potential to enhance learning and students’ future employability. The same principle applies to academic work: while AI can streamline certain tasks and save time, it can never replace the depth and rigour of independent scholarly enquiry.”

“While AI can streamline certain tasks and save time, it can never replace the depth and rigour of independent scholarly enquiry.” Professor Katharine Adeney, University of Nottingham

Her colleague at the University of Nottingham, Andreas Bieler, a Professor of Political Economy, shares her concerns. “Many academics are worried that students are cheating when writing essays by drawing on AI. This may well be the case,” he says. “My concern is more, however, about students damaging themselves through the increasing use of AI, as they miss out on developing their own research and writing skills.”

For Bieler, the long-term risk isn’t just academic dishonesty, but the erosion of core intellectual habits. Researching, drafting and revising are not just means to an end, but processes that shape critical thinking. If students bypass these entirely, they may graduate with a credential but without the skills that give it meaning.

Their concerns are not merely speculative; they are supported by emerging empirical research. A study published by Microsoft Research earlier this year found that, when students and ‘knowledge workers’ place high confidence in generative AI, they are significantly less likely to engage in critical thinking. The study observed that, while AI tools can reduce the perceived effort involved in complex tasks, this often comes at the cost of deep cognitive engagement. In other words, overreliance on AI doesn’t just result in academic shortcuts, it may unintentionally discourage the very habits universities are designed to foster.

Last month, the Department for Education published its most detailed position yet on the use of artificial intelligence in education. The statement, part of the wider AI Opportunities Action Plan, outlines a vision where AI is a valuable tool that can enhance efficiency and support learning, provided it is used safely and appropriately. 

The policy places a strong emphasis on schools and colleges and outlines practical uses for AI in lesson planning, feedback and reducing teacher workload. It also addresses concerns about misinformation and data misuse. There is, however, little direct guidance for higher education institutions.

The policy also highlights the difference between teacher-facing and student-facing AI. The former, such as tools for generating lesson content or marking answers, is viewed as relatively low risk. Student use is treated more cautiously, especially in relation to data protection, safeguarding and academic integrity. The guidance advises institutions to implement clear rules and ensure that any AI use aligns with legal responsibilities. 

Unlike schools, universities are not bound by a single national curriculum, operating with more autonomy but also with less central coordination. As mentioned previously, without targeted guidance, different departments and faculties are setting conflicting expectations about what is acceptable.

The policy’s release confirms that AI is now central to government thinking about education, but it also reveals how far policy still has to go to catch up with students like Oscar, who are already working in a post-AI learning environment.

As universities navigate this new technological terrain, AI continues to raise almost as many questions as it answers. 

What’s clear is that AI is no longer a niche facet of higher education; it is reshaping it from within. What remains unclear is just how far this transformation will go. Will universities find ways to integrate these tools while protecting core academic values?

The answers won’t come overnight. Like the technology itself, the full impact of AI on higher education is still unfolding and what becomes of it will ultimately depend less on the tools themselves than on how we choose to use them.
