Opinion
We must balance the risks and benefits of AI

AI will only be as good – or as bad – as the information fed into it, so we need to fix any bias that perpetuates inequality and marginalisation, says Michael Barrett.
The potential of AI to transform people’s lives in areas ranging from healthcare to better customer service is enormous. But as the technology advances, we must adopt policies to make sure the risks don’t overwhelm and stifle those benefits.
Importantly, we need to be on alert for algorithmic bias that could perpetuate inequality and marginalisation of communities around the world.
Algorithmic bias occurs when systems – often based on machine learning or AI – deliver biased outcomes or decisions because the data they have been given is incomplete, imbalanced or not fully representative.
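To make this concrete, here is a minimal sketch, with entirely hypothetical data, of how a decision rule learned from incomplete historical records can systematically exclude an under-represented group. The groups, scores and the "lowest previously hired score" rule are illustrative assumptions, not a description of any real system:

```python
# Illustrative sketch (hypothetical data): a naive hiring rule learned from
# historical records in which group B is under-represented and never hired.
# Each record is (group, score, hired).
historical = [
    ("A", 0.9, True), ("A", 0.8, True), ("A", 0.7, True),
    ("A", 0.6, False), ("A", 0.5, False),
    ("B", 0.8, False), ("B", 0.6, False),   # few, skewed records for group B
]

def learned_cutoff(group):
    """'Learn' a per-group cutoff: the lowest score ever hired.
    Group B has no past hires, so no B applicant can ever be selected."""
    hired_scores = [s for g, s, h in historical if g == group and h]
    return min(hired_scores) if hired_scores else float("inf")

applicants = [("A", 0.75), ("B", 0.75), ("B", 0.9)]
decisions = {f"{g}:{s}": s >= learned_cutoff(g) for g, s in applicants}
print(decisions)  # equally or better qualified B applicants are rejected
```

The point of the sketch is that the bias lives in the data, not in any malicious line of code: the rule faithfully reproduces the historical pattern it was given.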
Colleagues here in Cambridge and at Warwick Business School and I have proposed a new way of thinking about the issue, which we call a ‘relational risk perspective’. This approach looks not just at how AI is being used now, but at how it may be used in the future and across different geographies, avoiding what we call ‘the dark side of AI’. The goal is to safeguard the benefits of AI for everyone, while minimising the harm.
We look at the workplace as one example. AI is already having a huge impact on jobs, affecting routine and creative tasks alike, including activities that we’ve thought of as uniquely human – like creating art or writing film scripts.
As businesses use the technology more, and perhaps become over-dependent on it, we are at risk of undermining professional expertise and critical thinking, leaving workers de-motivated and expected to defer to machine-generated decisions.
This will impact not just tasks but also the social fabric of the workplace, by influencing how workers relate to each other and to organisations. If AI is used to make decisions about hiring or promotions, a lack of representation in its training data can reinforce existing inequalities.
We also explore how this billion-dollar industry is often underpinned by largely ‘invisible’ workers in the Global South who clean data and refine algorithms for a user-group predominantly in the Global North. This ‘data colonialism’ not only reflects global inequalities but also reinforces marginalisation: the people whose labour enables AI to thrive are the same people who are largely excluded from the benefits of that technology.
Healthcare data is in particular danger from such data-driven bias, so we need to ensure that the health-related information used to train the Large Language Models underpinning AI tools reflects a diverse population. Basing health policy on data from selected and perhaps more privileged communities can lead to a vicious cycle in which disparity is more deeply entrenched.
Achieving its potential
I believe that we can counter these threats, but time is of the essence as AI quickly becomes embedded into society. We should remember that generative AI is still an emerging technology, and note that it is progressing faster than the ethical and regulatory landscape can adapt.
Our relational risk perspective does not present AI as inherently good or bad. Rather, AI is seen as having potential for benefit and harm depending on how it is developed and experienced across different social contexts. We also recognise that the risks are not static, as they evolve with the changing relationships between technology, its users and broader societal structures.
Policymakers and technologists should anticipate, rather than react to, the ways in which AI can entrench or challenge existing inequities. They should also consider that some countries may develop AI maturity more quickly than others.
Finally, let’s draw on stakeholders far and wide in setting AI risk policy. A multidisciplinary approach will help avoid bias, while at the same time demonstrating to the public that AI policy really does reflect varied and diverse interests and communities.
Michael Barrett is Professor of Information Systems and Innovation Studies, Vice-Dean for Strategy and University Engagement at Cambridge Judge Business School, and a Fellow of Hughes Hall.
Published: 4 April 2025
The text in this work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
