UK Police Force Trials ‘Risk of Reoffending’ AI

Every day UK police forces have to make hundreds of calls on whether or not to release individuals under arrest on bail pending trial. The decision to grant bail rests on the perceived likelihood of the individual committing another crime should they be released, and it has traditionally relied on the experience of senior police officers. But Durham Constabulary is now trialling an AI algorithm called Hart – the Harm Assessment Risk Tool – to establish whether its predictions prove more accurate than human intuition for bail decisions.

Durham Constabulary is proving to be a particularly technologically progressive police force. Hart was developed jointly by the force’s own engineers and Cambridge University academics. It is informed by five years of data on whether individuals arrested by the force went on to reoffend within two years of release. Each individual is described by 33 different data points, or metrics, ranging from previous criminal record and type of crime to age and even postcode.

The machine learning technique behind the AI is called a ‘random forest’. It consists of 509 ‘decision trees’. A decision tree is essentially a chain of questions, where each answer leads on to the next question. Simple versions of such decision trees are common in social media memes: ‘Do you like x – yes or no?’ ‘If yes, do you dislike y?’ ‘If no, do you like z?’, and so on until the questions narrow down to a final category or conclusion.
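For illustration only, the sketch below shows what a 509-tree random forest looks like in code, assuming scikit-learn’s RandomForestClassifier and entirely made-up features standing in for the kind of metrics described above. The feature names, values and labels here are hypothetical and do not reflect Hart’s actual data or implementation.

```python
# Illustrative sketch only: a random forest with 509 decision trees, trained on
# synthetic stand-ins for the sort of metrics the article mentions (prior
# offences, offence type, age, postcode area). Hypothetical data throughout.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 1000

# Hypothetical feature matrix: one row per arrested individual.
X = np.column_stack([
    rng.integers(0, 20, n),   # number of prior offences
    rng.integers(0, 5, n),    # offence type, encoded as an integer
    rng.integers(18, 70, n),  # age
    rng.integers(0, 50, n),   # postcode area, encoded as an integer
])
# Hypothetical label: 1 if the individual reoffended within two years.
y = rng.integers(0, 2, n)

# Each tree in the forest asks a chain of yes/no questions about the features;
# the forest combines the trees' votes into a single risk estimate.
forest = RandomForestClassifier(n_estimators=509, random_state=0)
forest.fit(X, y)

# Estimated probability of reoffending for a new, hypothetical individual.
print(forest.predict_proba([[3, 1, 25, 12]]))
```

In practice the strength of a random forest is that the many trees are trained on different random subsets of the data and features, so their combined vote is more robust than any single chain of questions.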

Hart’s testing period is due to end in April of this year. At that point the AI’s accuracy over the trial’s three years will be compared with that of the final human decision makers. No bail decision has so far been granted or denied on the basis of Hart’s assessment, and the human decision makers have been unaware of Hart’s interpretation of an offender’s risk of reoffending.

A 2016 trial of an earlier version of Hart did show a significant discrepancy between the risk assessments of the AI and those of human police officers: they reached different conclusions 56% of the time, which suggests that either the AI or the human decision makers were getting bail decisions wrong. Whether the Hart trial continues will depend on whether its results over the past three years prove significantly more accurate than those of police officers.

Civil rights groups are concerned that making criminal justice decisions on the basis of data points will unfairly pigeonhole particular groups according to abstract metrics such as postcode or race. Comparable systems already used in the U.S., such as ‘Compas’, which as well as assessing the risk of reoffending also influences sentencing decisions, have been widely criticised. One study showed that black offenders were more likely than their white peers to be profiled by Compas as a higher reoffending risk.

However, another study, conducted by Megan Stevenson, a professor at George Mason University, found that a different risk assessment algorithm whose outcomes benefited white defendants more than black ones was not inherently biased. Rather, the data were skewed by the fact that judges in predominantly white communities followed the algorithm’s recommendations more closely.

Durham Constabulary’s police chief Michael Barton believes that algorithms such as Hart should eventually be less biased than human decision makers, “because the problem with people — custody sergeants — [is] they have all of those inherent biases that human beings have.”
