Connecting AI to public benefit

Beyond the hype: AI that serves society

There’s no shortage of buzz around artificial intelligence (AI). From self-driving cars to the promise of revolutionising healthcare, AI is being hailed as the technology that will change the world around us. But what does this really mean for our everyday lives? And how can we ensure that AI is used to serve all of us across society, not just the interests of a few?
At the University of Cambridge, ai@cam is at the forefront of developing artificial intelligence with real purpose. Led by Professor Neil Lawrence and Jessica Montgomery, the University’s flagship mission on AI brings together expertise from across research domains with businesses, policy-makers and civil society, to ensure AI supports practical, community-driven solutions that benefit all of society.
One of the biggest misconceptions around AI is the hype surrounding Artificial General Intelligence (AGI): machines that could outperform humans in all tasks. Lawrence refers to this as “AGI vaporware”, promises of a technology that hasn’t yet been built but is marketed to attract investment. “There is no doubt that these technologies are utterly transformational, but we need to be careful,” he explains. “The term AGI is being used as a marketing tool, but marketing doesn’t solve the real-world problems we care about, like healthcare or education.”
The focus on this speculative future distracts attention from where AI could make an immediate difference. Nowhere is the challenge of translating that potential into practice more apparent than in our public services. “There’s often a sense that experts from industry or research can simply hand down a ‘one size fits all’ solution to public service professionals,” Montgomery points out. But the reality is far more collaborative. “If we’re going to make AI tools useful for the public sector, we need to start by understanding the issues that public servants and people using public services are grappling with,” she says. “AI won’t offer a quick fix. The real challenge is tackling the complex issues - whether it’s healthcare, education, or crime - by working through these problems together. We need to co-create solutions that truly serve the public, and that means involving people at the very beginning of AI design.”

Connecting to public voices

People have long expressed hopes about what AI should deliver. Recent public dialogues, led by ai@cam, have added to a growing body of evidence that people want AI to bring tangible benefits to their lives. From better healthcare and more responsive public services to practical solutions for frontline workers, their expectations are clear.
“Time and time again, we hear how people hope AI could relieve nurses and doctors of the burden of tedious admin tasks, freeing time to spend with patients. They imagine AI could support teachers by personalising learning while reducing their paperwork. Or help make their interactions with public services easier, and more ‘human’. These aren’t outside the realms of possibility, but to do this successfully, we need to place public benefit at the core of AI innovation,” explains Montgomery.
Yet behind these aspirations, a growing mistrust is forming. ai@cam’s recent public dialogue highlights an underlying scepticism about the current trajectory of AI development. “People are rightly worried about their data and privacy and the concentration of power in the hands of large technology companies. There is a sense that AI is now something that's being done to us, rather than something that is genuinely here for the benefit of all of us.”
Creating a meaningful dialogue with the public is fundamental to the work of ai@cam and is key to building the trust and accountability needed for successful AI integration. All too often, AI systems are designed without proper consideration of the people using them, or are deployed without the necessary feedback loops. This is made even more evident by the Post Office Horizon and NHS Lorenzo scandals, where flawed technology and disregard for frontline feedback led to serious failings.
At a crucial time in AI development, the stakes couldn’t be higher. “It's absolutely critical to listen to public voices and bring them into discussions around the future of our society. Our frontline workers need to be front and centre of how we design and implement AI systems in public services - it's not ludicrous to imagine a future in which we engage teachers or nurses directly in the process of software design, but it does require a whole new model of innovation, one in which they are in the driving seat,” emphasises Lawrence.
To do this, ai@cam is doing things differently. By facilitating new collaborations between academia and the public sector, focused on real-world problems, a new vision for AI can begin to take shape. With the announcement of the latest cohort of “AI-deas” challenges and local government engagements through its Policy Lab, change is already underway. “We want to remove traditional barriers so that those who are closest to the problems are the ones working with us to design the solutions,” says Lawrence. “This isn’t about creating technology for technology’s sake - it’s about creating lasting impact for the benefit of everyone.”

Ethically rooted AI

Dr Kwadwo Oti-Sarpong and his team are all too aware of the pressures facing local government. As one of five winners of ai@cam’s flagship AI-deas programme, he leads the Decision-making with AI in Connected Places and Cities project, developing practical guidance to help local authorities make ethically informed decisions about using AI in their digital transformation efforts.
“Our project is about giving local authorities the knowledge and confidence to make decisions around AI that are not only technically sound, but socially responsible,” says Dr Oti-Sarpong. “We want to ensure that as digitalisation accelerates, it does so in a way that prioritises ethics, and reflects the values of the communities it’s meant to serve.”
The AI-deas team are already putting those principles into practice. Through a collaboration with the Greater Cambridge Shared Planning Service (GCSP), they’re supporting the introduction of a new AI tool designed to streamline the analysis of the thousands of comments submitted by residents, community groups and other members of the public during consultations on planning policies. “The Local Plan sets the future for the area for the next 15 to 20 years. It covers everything - housing, transport, infrastructure, environmental policies. Every planning application will need to comply with these policies once the Plan is adopted,” explains Terry de Sousa, Planning Policy and Strategy Team Leader at GCSP.
But processing over 9,500 consultation comments is no small feat. De Sousa’s team calculated that it takes around 450 officer days to review, summarise and categorise the feedback. “Comments can vary from a single line to a 100-page technical document with multiple datasets,” he says. “Our whole team is dedicated to this for months. The Local Plan can shape people’s lives for decades, so every comment must be considered and accounted for.”
With a clear target of reducing the time taken for analysis, GCSP partnered with the University of Liverpool, Anglia Ruskin University and the University of Cambridge to create a bespoke AI tool that could be responsibly deployed to support their work. The model was trained on 55,000 comments gathered over 15 years of public consultations, a dataset that ensures the tool understands the planning context, terminology and terms unique to Greater Cambridge. “It’s not simply about efficiency - it’s about ensuring the model can grasp the complexity and nuance of the comments,” notes de Sousa.
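To make the idea concrete: the article does not describe how the GCSP tool works internally, but a minimal sketch of one common approach to the task it performs - sorting free-text consultation comments into policy topics using a model trained on historical, officer-labelled comments - might look like the following Python example. All data, topic names and design choices here are hypothetical stand-ins, not the project’s actual implementation.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical stand-ins for historical consultation comments and the
    # topic labels officers assigned to them (the real tool drew on some
    # 55,000 comments from 15 years of Greater Cambridge consultations).
    comments = [
        "More affordable housing is needed near the station.",
        "Bus services to the proposed development are inadequate.",
        "The plan should protect the green belt from new building.",
    ]
    topics = ["housing", "transport", "environment"]

    # TF-IDF captures planning-specific vocabulary; the classifier learns
    # to map it onto draft topic categories for an officer to review.
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                          LogisticRegression())
    model.fit(comments, topics)

    # A new consultation response is routed to a suggested category; the
    # sketch supports officer review, it does not replace it.
    print(model.predict(["Cycle lanes on the ring road are too narrow."]))

A production system would involve far richer models and careful evaluation against officer judgements; the point of the sketch is simply that domain-specific training data, like GCSP’s 15 years of comments, is what lets such a tool “speak planning”.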
To develop the tool successfully, it was crucial that clear ethical principles were considered from the start. “We’ve acted as ‘critical friends’ throughout the process,” Oti-Sarpong explains. “AI tools must be designed to minimise bias or exclusion. If these systems aren’t transparent and inclusive, there is a serious risk of reinforcing existing inequalities or further marginalising certain groups.”
Through ongoing consultation, the team has helped define ethical benchmarks for the project, keeping public value at the forefront. While the current work centres on a specific AI tool for urban planning, Oti-Sarpong’s team hopes to broaden the support available to other local authorities with help from the Local Government Association (LGA). “We want to learn from this case study and apply those lessons to other AI-based tools,” he says. “Local authorities don’t just deliver services - they shape the communities we live in. There are still many unknowns when it comes to AI in the public sector, but we want to support local councils to navigate these challenges in a way that builds public trust.”


Universities play a vital role in AI
Professor Neil Lawrence and Jessica Montgomery explain how universities can bridge the gap between those who develop AI systems and those who will use and be affected by them.

26 March 2025
The text in this work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
