
Roundtable on Uses of Artificial Intelligence in the Criminal Justice System 

 

Oxford, November 23-24, 2017

Dunedin, December 11-12, 2017

 



Artificial Intelligence (AI) systems are increasingly widely used within the field of criminal justice. Police forces use AI systems to predict crime hotspots, to identify individuals at risk, and to assist on-the-ground policing and detective work. AI systems are also used in the courts to gauge a defendant’s risk of reoffending and to inform bail, sentencing and parole decisions. Similar systems are in use in prisons to inform rehabilitation programmes.

While statistical systems have been in use in criminal justice for many years, the systems coming into use now are increasingly sophisticated, and are finding a variety of new roles. Their increasing prevalence raises many questions. How should the accuracy of these systems be measured? How can we ensure their operation is not biased towards particular social groups? How can we inspect the processes through which they reach a decision? How should human decision-makers interact with such systems? What ethical and legal frameworks do we need to ensure good practice in the use of such systems? 

Many of these questions arise wherever AI systems are used to inform decision-making, whether in public policy or by the companies who shape our experiences on the internet. But the decisions made in the criminal justice system have a particularly large impact on people’s lives, so the use of AI systems in these decisions demands particularly urgent attention.

This roundtable brings together experts working on AI and criminal justice from several perspectives. It includes lawyers, policy researchers, AI technologists, statisticians, ethicists and police officers. There will be two separate events, one in Oxford and one in Dunedin. Each event will be structured around five issues that we think are especially urgent to discuss for AI/statistical systems used at any stage of the criminal justice process.
  • Accuracy: How reliable are the system’s predictions or judgements? How can it be tested for accuracy? Should the results of such evaluations be made public?
  • Bias: Is the system discriminatory towards particular social groups? Is bias ever acceptable, if it leads to higher accuracy? Are there ways of removing bias without compromising accuracy?
  • Control: How do human decision-makers interact with the system? How can we most productively combine human decisions with the system’s processes? What control should humans have over the system’s outputs?
  • Transparency: Should there be a requirement that the system’s outputs be ‘explainable’? If so, how can/should explanations be provided? Can these explanations be provided without infringing on individuals’ privacy, or (for commercial systems) disclosing proprietary code?
  • Oversight and Regulation: What ethical or legal frameworks should be established to ensure good practice on all of the above issues?