Illustration representing potential online harms

From social media to AI, online technologies are changing too fast for the scientific infrastructure used to gauge their public health harms, say two leaders in the field.

Scientific research on the harms of digital technology is stuck in a “failing cycle” that moves too slowly to allow governments and society to hold tech companies to account, according to two leading researchers in a new report published in the journal Science.

Dr Amy Orben from the University of Cambridge and Dr J. Nathan Matias from Cornell University say the pace at which new technology is deployed to billions of people has put unbearable strain on the scientific systems trying to evaluate its effects.

They argue that big tech companies effectively outsource research on the safety of their products to independent scientists at universities and charities who work with a fraction of the resources – while firms also obstruct access to essential data and information. This is in contrast to other industries where safety testing is largely done “in house”.

Orben and Matias call for an overhaul of “evidence production” assessing the impact of technology on everything from mental health to discrimination.  

Their recommendations include accelerating the research process, so that policy interventions and safer designs are tested in parallel with initial evidence gathering, and creating registries of tech-related harms informed by the public.    

“Big technology companies increasingly act with perceived impunity, while trust in their regard for public safety is fading,” said Orben, of Cambridge’s MRC Cognition and Brain Sciences Unit. “Policymakers and the public are turning to independent scientists as arbiters of technology safety.”

“Scientists like ourselves are committed to the public good, but we are asked to hold to account a billion-dollar industry without appropriate support for our research or the basic tools to produce good quality evidence quickly.”

“We must urgently fix this science and policy ecosystem so we can better understand and manage the potential risks posed by our evolving digital society,” said Orben.

'Negative feedback cycle'

In the latest Science paper, the researchers point out that technology companies often follow policies of rapidly deploying products first and then looking to “debug” potential harms afterwards. This includes distributing generative AI products to millions before completing basic safety tests, for example.

When tasked with understanding potential harms of new technologies, researchers rely on “routine science” which – having driven societal progress for decades – now lags the rate of technological change to the extent that it is becoming at times “unusable”.  

With many citizens pressuring politicians to act on digital safety, Orben and Matias argue that technology companies use the slow pace of science and lack of hard evidence to resist policy interventions and “minimize their own responsibility”.

Even if research is appropriately resourced, they note, researchers will still be faced with understanding products that evolve at an unprecedented rate.

“Technology products change on a daily or weekly basis, and adapt to individuals. Even company staff may not fully understand the product at any one time, and scientific research can be out of date by the time it is completed, let alone published,” said Matias, who leads Cornell’s Citizens and Technology (CAT) Lab.

“At the same time, claims about the inadequacy of science can become a source of delay in technology safety when science plays the role of gatekeeper to policy interventions,” Matias said.

“Just as oil and chemical industries have leveraged the slow pace of science to deflect the evidence that informs responsibility, executives in technology companies have followed a similar pattern. Some have even allegedly refused to commit substantial resources to safety research without certain kinds of causal evidence, which they also decline to fund.” 

The researchers lay out the current “negative feedback cycle”:

- Tech companies do not adequately resource safety research, shifting the burden to independent scientists who lack data and funding.
- Without data and funding, high-quality causal evidence is not produced in the required timeframes.
- The lack of evidence weakens government's ability to regulate, which further disincentivises safety research, as companies are let off the hook.

Orben and Matias argue that this cycle must be redesigned, and offer ways to do it.

Reporting digital harms

To speed up the identification of harms caused by online technologies, policymakers or civil society could construct registries for incident reporting, and encourage the public to contribute evidence when they experience harms.

Similar methods are already used in fields such as environmental toxicology, where the public reports on polluted waterways, and in vehicle crash reporting programs that inform automotive safety.

“We gain nothing when people are told to mistrust their lived experience due to an absence of evidence when that evidence is not being compiled,” said Matias.

Existing registries, from mortality records to domestic violence databases, could also be augmented to include information on the involvement of digital technologies such as AI.

The paper’s authors also outline a “minimum viable evidence” system, in which policymakers and researchers adjust the “evidence threshold” required to show potential technological harms before starting to test interventions.

These evidence thresholds could be set by panels made up of affected communities, the public, or “science courts”: expert groups assembled to make rapid assessments.   

“Causal evidence of technological harms is often required before designers and scientists are allowed to test interventions to build a safer digital society,” said Orben. 

“Yet intervention testing can be used to scope ways to help individuals and society, and pinpoint potential harms in the process. We need to move from a sequential system to an agile, parallelised one.”

Under a minimum viable evidence system, if a company obstructs or fails to support independent research, and is not transparent about its own internal safety testing, the amount of evidence needed to start testing potential interventions would be decreased.

Orben and Matias also suggest learning from the success of “Green Chemistry”, in which an independent body maintains lists of chemical products ranked by potential for harm, to help incentivise markets to develop safer alternatives.

“The scientific methods and resources we have for evidence creation at the moment simply cannot deal with the pace of digital technology development,” Orben said.  

“Scientists and policymakers must acknowledge the failures of this system and help craft a better one before the age of AI further exposes society to the risks of unchecked technological change.”

Added Matias: “When science about the impacts of new technologies is too slow, everyone loses.”

