Event: Predicting Guilt and Automating [De]incarceration

Passing along this event invitation from the Foresight Institute. It's a very different exploration of Criminal Justice issues with a forward-thinking mentality. Thought some of you might be interested.

*Predicting Guilt and Automating [De]incarceration: Algorithms in the US Criminal Justice System - a Foresight salon with Peter Eckersley*
About this Event

*Predicting Guilt and Automating [De]incarceration -- Algorithms in the US Criminal Justice System -- a Foresight Strengthening Civilization salon series with Peter Eckersley in discussion with Lou de Kerhuelvez.*

*Are we ready for AI judges?*

As automation is increasingly deployed to assist or replace human decisions, it becomes crucial to evaluate the potential social and ethical consequences of AI-powered decision-making.

Peter Eckersley will be discussing the Report on Algorithmic Risk Assessment Tools in the U.S. Criminal Justice System <https://www.partnershiponai.org/report-on-machine-learning-in-risk-assessment-tools-in-the-u-s-criminal-justice-system/> recently published by The Partnership on AI (PAI) <https://www.partnershiponai.org/>.

This report raises serious concerns about risk assessment tools in the U.S. criminal justice system, particularly in the context of pretrial detention. Issues include:

* Bias in the tools themselves;
* Problems with the human-tool interface;
* Questions of governance, transparency, and accountability.

These concerns are nearly universal in the AI research community, as they apply to most attempts to use data to train statistical models or to create heuristics for making decisions that have social and ethical implications.

Peter led the Partnership on AI's effort to convene the machine learning research community and produce a shared position on the algorithmic risk assessment tools that are in widespread use throughout the US criminal justice system and have now been mandated by California legislation.

There was widespread agreement that the current tools are deeply flawed on statistical, procedural, and bias grounds, though some disagreement about whether they could conceivably be improved enough to be constructive. To synthesize across those views, the report identified 10 requirements that would need to be met before the tools could even conceivably be appropriate for the incarcerative purposes for which they are often employed.

This salon will outline both what PAI learned along the way, and how this debate fits into the larger context of mass incarceration and criminal justice reform in the United States.

Read the report here: https://www.partnershiponai.org/report-on-machine-learning-in-risk-assessment-tools-in-the-u-s-criminal-justice-system/


Although the email doesn't mention a cost, the event page shows tickets are $25.

Love & Liberty,

((( starchild )))