Opinion

Universities play a vital
role in the future of AI

Neil Lawrence and Jessica Montgomery

Universities can bridge the gap between those who develop AI systems and those who will use and be affected by them. We must step up to deliver this role, say Neil Lawrence and Jessica Montgomery.

For almost a decade, public dialogues have been highlighting what people want from AI: technologies that tackle challenges affecting our shared health and wellbeing; tools that strengthen our communities and personal interactions; and systems that support democratic governance. As these conversations continue, they reveal a growing public scepticism about AI's ability to deliver on these promises.

This scepticism is warranted. Despite impressive technical advances and intense policy activity over the last ten years, a significant gap has emerged between AI's capabilities in the lab and its ability to deliver meaningful benefits in the real world. This disconnect stems in part from a lack of understanding of real-world challenges.

We’ve seen the impact of this lack of understanding in previous attempts to drive technology adoption. In the UK, both the Horizon Post Office and Lorenzo NHS IT scandals demonstrated how large-scale IT projects can fail catastrophically when they are imposed without an understanding of the contexts in which they will operate.

These failures share common patterns that we must avoid repeating. Insufficient understanding of local needs led to systems being designed without considering how they would integrate into existing workflows. Lack of effective feedback mechanisms prevented early identification of problems and blocked adaptation to user experiences. Rigid implementation approaches imposed technology without allowing for local variation or iteration based on real-world testing. Together, these factors created systems that burdened rather than benefited their intended users.

As the government considers its ambitious agenda to drive wider rollout of AI across the public sector – in areas that directly affect people’s lives – we need to find different approaches to innovation that avoid these failures.

Achieving its potential

There is an alternative. The UK has strategic advantages in research and human capital that it can leverage to bridge this gap by building AI from the ground up.

Work across Cambridgeshire demonstrates this alternative approach in action. In local government, Greater Cambridge Shared Planning is collaborating with universities to develop AI tools that analyse public consultation responses. By combining planners' expertise with research capabilities, they're creating systems that could reduce staff time for analysis from over a year to just two months.

Similar collaborations are emerging in healthcare, where clinicians and researchers are leading the development of AI tools for cancer diagnosis. Their work shows how frontline staff can ensure AI enhances rather than replaces clinical judgment, while improving outcomes for patients.

We've already seen the value of this approach during COVID-19, when NHS England East collaborated with researchers to develop AI models that helped hospital leaders make critical decisions about resource allocation. This partnership demonstrated how AI can support operational decisions when developed with those who understand local needs.

This points toward what we call an ‘attention reinvestment cycle’. The key to scaling innovation comes when some of the time that professionals save by using AI is reinvested in sharing knowledge and mentoring colleagues, allowing solutions to spread organically through professional networks. Unlike top-down implementation, this approach builds momentum through peer-to-peer learning, with frontline workers becoming both beneficiaries and champions of the technology.

Too often in the past, universities have been distant from the difficulties that society faces. However, universities have access to the research and human capital that are vital for the next wave of AI innovation. Their position as neutral conveners allows the creation of spaces where people working to deploy AI in public services and industry can collaborate with diverse communities of expertise, from engineering to ethics.

This bottom-up, human-centred approach offers more effective and ethical AI implementation. It is a vital component of how government can successfully implement its national AI strategy and deliver on the promise of AI for all citizens.

We must step up to deliver this role. By fostering collaboration between those who develop AI systems and those who will use and be affected by them, universities can ensure that technological progress truly serves the public good.

Jessica Montgomery is Executive Director of ai@cam and Neil Lawrence is DeepMind Professor of Machine Learning and Chair of ai@cam, Cambridge's flagship mission to support innovative research that connects AI capabilities with societal challenges.

Published: 2 April 2025

The text in this work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License