
June 9, 2020

AI sentencing tools need to be closely scrutinised, says new study

Credit: CC0 Public Domain

In a paper published by the Behavioral Sciences & Law journal, experts from the University of Surrey take a critical look at the growing use of algorithmic risk assessment tools, which act as a form of expert scientific evidence in a growing number of criminal cases.

The review argues that because of several issues, such as the biases of the developers and weak statistical evidence of the AI's predictive performance, judges should act as gatekeepers and closely scrutinise whether such tools should be used at all.

The paper outlines three steps for judges to consider when weighing such evidence.

Dr. Melissa Hamilton, author of the paper and Reader in Law and Criminal Justice at the University of Surrey's School of Law, said: "These emerging AI tools have the potential to offer benefits to judges in sentencing, but close attention needs to be paid to whether they are trustworthy. If used carelessly, these tools will do a disservice to the defendants on the receiving end."

More information: Melissa Hamilton, Judicial gatekeeping on scientific validity with risk assessment tools, Behavioral Sciences & the Law (2020).

Provided by University of Surrey


