From Homer to HAL: 3000 years of AI narratives
We have been writing about artificial intelligence for almost as long as stories have been written. Now, researchers want us to consider how the stories we tell ourselves about AI will shape all our futures.
Nearly 3,000 years ago, in the Iliad, Homer described Hephaestus, the god of fire, forging women made of gold to serve as his handmaidens – enabling the crippled deity to work and move around his forge underneath Mount Olympus.
Around 300 BCE, in his Greek epic poem Argonautica, Apollonius Rhodius imagined Talos, a giant bronze automaton who protected Europa on the island of Crete. And while the term ‘robot’ was only coined in the 20th century by Karel Čapek for his play R.U.R. (Rossum’s Universal Robots), in which artificial servants rise up against their masters, humans were imagining intelligent machines long before we had the technology capable of creating them.
Our fascination with, and appetite for, AI in the pages of our novels, in our movie theatres and on our television screens remain undimmed. Two of the best-received TV shows of recent years – HBO’s big-budget Westworld and Channel 4’s Humans – both imagine a world where AI replicants are on hand to satisfy every human need and desire – until they reject the ‘life’ of servitude they have been programmed to fulfil. Last autumn, Blade Runner 2049 took cinemagoers back into the world originally created by Philip K. Dick’s seminal Do Androids Dream of Electric Sheep?
But how do these old and new, polarised and often binary narratives about the dawn of the AI age affect, reflect and perhaps even infect our way of thinking about the benefits and dangers of AI in the 21st century? As the kind of mechanisation that existed solely in the minds of visionaries such as Mary Shelley, Fritz Lang or Arthur C. Clarke looms closer to reality, we are only just beginning to reflect upon and understand how such technologies arrive pre-loaded with meaning, sparking associations and media attention disproportionate to their capabilities.
To that end, Cambridge’s Leverhulme Centre for the Future of Intelligence (CFI) and the Royal Society have come together to form the AI Narratives research programme. It’s the first large-scale project of its kind to look at how AI has been, and is being, portrayed in popular culture – and what impact this has not only on readers and movie-goers, but also on AI researchers, military and government bodies, and the wider public.
Dr Sarah Dillon is one of the Project Leads of AI Narratives, and Programme Director of the AI Narratives and Justice programme – and a devotee of science fiction and AI storytelling in all its myriad forms. “All the questions being raised about AI today have already been explored in a very sophisticated fashion, for a very long time, in science fiction,” says Dillon.
“Science fiction literature and film provide a vast body of thought experiments or imaginative case studies about what might happen in the AI future. Such narratives ought not to be discarded or derided merely because they’re fiction, but rather thought of as an important dataset. What we want to do is convince everyone how powerful AI narratives are and highlight what effects they can have on our everyday lives. People outside of literary studies have tended not to know how to deal with this power.
“What sort of stories are told – and how they are told – really matters. Fiction has influenced science as much as science has influenced fiction, and will continue to do so. One stream of the project is looking directly at how we have talked about new technologies in the past – and how we can learn from the communication of other complex technologies when it comes to AI.”
Citing historical narratives around nuclear energy, genetic engineering and stem cells – often sensationalist, misinformed or even disingenuous – Dillon and her project colleagues Dr Beth Singler and Dr Kanta Dihal suggest that the stories told about emerging technologies can significantly influence how they are developed, regarded and regulated.
Exploring the rich array of themes associated with AI in history, myth, fiction and public dialogue, the team has not been surprised to find that many pivot around the notion of control: AI as a tool we are unable to master or a tool that will acquire agency of its own and turn against us.
“The big problem with AI in fiction is dystopia,” says Singler, whose award-winning short documentary film Pain in the Machine looked at whether robots should feel pain. “Dystopia can be fun, and people are fascinated by AI, but most of the narratives are written for and by young, white men – and that directly influences AI researchers and the research they do. We are not at the stage where AI matches human intelligence, but if we do get to a superior form of AI or agency, we will find that they too break laws like us. It’s what we do.”
“Isaac Asimov’s legendary Three Laws of Robotics, for example, have become so ubiquitous that they were referenced in a 100-page report by the US Navy, which is slightly terrifying,” says Dihal. “The Laws are a storytelling device. If Asimov’s Laws worked perfectly there would be no story!”
As well as identifying recurrent dichotomies in popular AI narratives (such as dominance vs subjugation), the CFI team is also considering the problems these endlessly recycled responses to AI create, and is developing recommendations to mitigate them in a way that creates space for more positive – and diverse – AI narratives to flourish.
To do so, CFI is establishing partnerships with the wider tech community, as well as engaging with the world’s leading AI thinkers from industry, academia, government and the media. In December 2017, CFI submitted written evidence to the House of Lords Select Committee on AI. The AI Narratives programme also examines what AI researchers read and how this influences their research (or not).
All this is an attempt by CFI to make sure that future narratives around AI aren’t bound by the prejudices and preconceptions that have constrained them to date.
Says Dillon: “Just consider Google’s photo app tagging the image of an African-American woman as a gorilla in 2015, or the racist and sexist tweets by Microsoft’s chatbot Tay in 2016. If AI continues to learn our prejudices then the future looks just as bleak as the past, with the repetition and consolidation of discrimination and inequality.
“Who is telling AI its narratives? Whose stories, and which stories, will inform how AI interacts with the world? Which novels are being chosen to ‘teach’ AI morality? What kind of writers are being enlisted to script AI–human interaction?
“If we can create more diverse literary and cinematic AI narratives, this can feed back into the research and into the language and data that feed into actual AI systems. Paying close attention to what stories are doing and how they are doing it doesn’t destroy the power they have – it helps us understand and appreciate that power even more.
“In exploring these AI narratives and their concerns, we will be able to bring new knowledge derived from literature and film to current AI debate and hopefully ensure that the more dystopian futures imagined in such narratives do not become our reality.”
Portrayals and perceptions of AI and why they matter by the Leverhulme Centre for the Future of Intelligence and the Royal Society is published today (11 December 2019).