Police at the “front line” of difficult risk-based judgements are trialling an AI system trained by University of Cambridge criminologists to give guidance using the outcomes of five years of criminal histories.
The tool helps identify the few ‘needles in the haystack’ who pose a major danger to the community, and whose release should be subject to additional layers of review
Lawrence Sherman
"It’s 3am on Saturday morning. The man in front of you has been caught in possession of drugs. He has no weapons, and no record of any violent or serious crimes. Do you let the man out on police bail the next morning, or keep him locked up for two days to ensure he comes to court on Monday?”
The kind of scenario Dr Geoffrey Barnes is describing – whether to detain a suspect in police custody or release them on bail – occurs hundreds of thousands of times a year across the UK. The outcome of this decision has major consequences for the suspect, for public safety and for the police.
“The police officers who make these custody decisions are highly experienced,” explains Barnes. “But all their knowledge and policing skills can’t tell them the one thing they need to know most about the suspect – how likely is it that he or she is going to cause major harm if they are released? This is a job that really scares people – they are at the front line of risk-based decision-making.”
Barnes and Professor Lawrence Sherman, who leads the Jerry Lee Centre for Experimental Criminology in the University of Cambridge’s Institute of Criminology, have been working with police forces around the world to ask whether AI can help.
“Imagine a situation where the officer has the benefit of a hundred thousand, and more, real previous experiences of custody decisions?” says Sherman. “No one person can have that number of experiences, but a machine can.”
In mid-2016, with funding from the Monument Trust, the researchers installed in Durham Constabulary the world’s first AI tool for helping police make custodial decisions.
Called the Harm Assessment Risk Tool (HART), the AI-based technology uses 104,000 histories of people previously arrested and processed in Durham custody suites over the course of five years, with a two-year follow-up for each custody decision. Using a method called “random forests”, the model looks at vast numbers of combinations of ‘predictor values’, the majority of which focus on the suspect’s offending history, as well as age, gender and geographical area.
“These variables are combined in thousands of different ways before a final forecasted conclusion is reached,” explains Barnes. “Imagine a human holding this number of variables in their head, and making all of these connections before making a decision. Our minds simply can’t do it.”
The aim of HART is to forecast whether, over the next two years, an offender is high risk (highly likely to commit a new serious offence such as murder, aggravated violence, a sexual crime or robbery); moderate risk (likely to commit a non-serious offence); or low risk (unlikely to commit any offence).
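As a rough illustration of the technique Barnes describes – not the actual HART code or its real predictors – a random-forest risk model of this kind could be built as in the sketch below; the file name, column names and outcome labels are assumptions made purely for the example.

```python
# Illustrative sketch only: a random-forest risk model trained on historical
# custody records. The file name, column names and outcome labels below are
# assumptions for the example, not HART's real predictors.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical dataset: one row per past custody decision, with the outcome
# observed over a two-year follow-up ("high", "moderate" or "low").
data = pd.read_csv("custody_histories.csv")

feature_columns = [
    "age", "gender_code", "postcode_area_code",    # demographic and geographic predictors
    "prior_arrests", "prior_violent_offences",     # offending-history predictors
    "years_since_first_offence",
]
X = data[feature_columns]
y = data["two_year_risk_outcome"]                  # "high" / "moderate" / "low"

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# A forest of decision trees, each grown on a random subset of cases and
# predictors; the forecast is the aggregate vote across all the trees.
model = RandomForestClassifier(n_estimators=500, random_state=0)
model.fit(X_train, y_train)

print("Held-out accuracy:", model.score(X_test, y_test))
```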
“The need for good prediction is not just about identifying the dangerous people,” explains Sherman. “It’s also about identifying people who definitely are not dangerous. For every case of a suspect on bail who kills someone, there are tens of thousands of non-violent suspects who are locked up longer than necessary.”
Durham Constabulary want to identify the ‘moderate-risk’ group – who account for just under half of all suspects according to the statistics generated by HART. These individuals might benefit from their Checkpoint programme, which aims to tackle the root causes of offending and offer an alternative to prosecution that they hope will turn moderate risks into low risks.
“It’s needles and haystacks,” says Sherman. “On the one hand, the dangerous ‘needles’ are too rare for anyone to meet often enough to spot them on sight. On the other, the ‘hay’ poses no threat and keeping them in custody wastes resources and may even do more harm than good.” A randomised controlled trial is currently under way in Durham to test the use of Checkpoint among those forecast as moderate risk.
HART is also being refreshed with more recent data – a step that Barnes explains will be an important part of this sort of tool: “A human decision-maker might adapt immediately to a changing context – such as a prioritisation of certain offences, like hate crime – but the same cannot necessarily be said of an algorithmic tool. This suggests the need for careful and constant scrutiny of the predictors used and for frequently refreshing the algorithm with more recent historical data.”
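Continuing the illustrative sketch above – again an assumption about how such a refresh could be done, not a description of HART’s own pipeline – the forest might be re-fitted periodically on a rolling window of recent cases whose two-year follow-up is complete:

```python
# Continuing the illustrative sketch: re-fit the forest on recent, fully
# followed-up cases. The "decision_date" column is an assumption for the example.
from datetime import datetime, timedelta

def refresh_model(data, model, years_of_history=5, follow_up_years=2):
    """Re-train on recent custody decisions whose two-year outcome is known."""
    now = datetime.now()
    window_start = now - timedelta(days=365 * (years_of_history + follow_up_years))
    follow_up_cutoff = now - timedelta(days=365 * follow_up_years)

    # Keep decisions made recently enough to reflect current practice, but old
    # enough that the two-year outcome has been fully observed.
    recent = data[(data["decision_date"] >= window_start) &
                  (data["decision_date"] <= follow_up_cutoff)]

    model.fit(recent[feature_columns], recent["two_year_risk_outcome"])
    return model
```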
No prediction tool can be perfect. An independent validation study of HART found an overall accuracy of around 63%. But, says Barnes, the real power of machine learning comes not from the avoidance of any error at all but from deciding which errors you most want to avoid.
“Not all errors are equal,” says Sheena Urwin, head of criminal justice at Durham Constabulary and a graduate of the Institute of Criminology’s Police Executive Master of Studies Programme. “The worst error would be if the model forecasts low and the offender turned out high.”
“In consultation with the Durham police, we built a system that is 98% accurate at avoiding this most dangerous form of error – the ‘false negative’ – the offender who is predicted to be relatively safe, but then goes on to commit a serious violent offence,” adds Barnes. “AI is infinitely adjustable and when constructing an AI tool it’s important to weigh up the most ethically appropriate route to take.”
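One common way to express this kind of asymmetric cost in a random forest – continuing the illustrative sketch above, and not a description of HART’s actual configuration – is to weight the classes unequally, so that mislabelling a genuinely high-risk case as safe is penalised far more heavily than over-predicting risk:

```python
# Continuing the illustrative sketch: unequal class weights make the model far
# more reluctant to label a genuinely high-risk case as safe. The weights are
# purely illustrative, not HART's actual cost scheme.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

cost_sensitive_model = RandomForestClassifier(
    n_estimators=500,
    class_weight={"high": 10.0, "moderate": 1.0, "low": 1.0},
    random_state=0,
)
cost_sensitive_model.fit(X_train, y_train)

# Measure the specific error that matters most here: how often a truly
# high-risk case is forecast as something safer (the "false negative").
y_pred = cost_sensitive_model.predict(X_test)
truly_high = (y_test == "high").to_numpy()
missed_high_rate = np.mean(y_pred[truly_high] != "high")
print("Proportion of high-risk cases forecast as safer:", missed_high_rate)
```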
The researchers also stress that HART’s output is for guidance only, and that the ultimate decision is that of the police officer in charge.
“HART uses Durham’s data and so it’s only relevant for offences committed in the jurisdiction of Durham Constabulary. This limitation is one of the reasons why such models should be regarded as supporting human decision-makers not replacing them,” explains Barnes. “These technologies are not, of themselves, silver bullets for law enforcement, and neither are they sinister machinations of a so-called surveillance state.”
Some decisions, says Sherman, have too great an impact on society and the welfare of individuals for them to be influenced by an emerging technology.
Where AI-based tools show great promise, however, is in using forecasts of offenders’ risk levels for effective ‘triage’, as Sherman describes: “The police service is under pressure to do more with less, to target resources more efficiently, and to keep the public safe.
“The tool helps identify the few ‘needles in the haystack’ who pose a major danger to the community, and whose release should be subject to additional layers of review. At the same time, better triaging can lead to the right offenders receiving release decisions that benefit both them and society.”
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.