
Beyond Bureaucracy 2026: Applied AI in the Criminal Justice System

This was a free one-day online event featuring experts from academia, industry, and public service, organised by Dr Ruth Spence of Middlesex University and herEthical AI.

Dr Ruth Spence is a senior research fellow at Middlesex University. Her research focuses on the use of data and AI in policing and criminal justice. Her work bridges academic research and real-world police practice.


Thursday 5th February, 2026

This conference is now over

Presentations and slides below

Beyond Bureaucracy was a free one-day conference bringing together academics, industry leaders, policymakers, and frontline practitioners to explore how artificial intelligence is reshaping the criminal justice system.
 

The event was designed to showcase some of the current best practice in how AI is already being implemented across policing, law, and justice, while also providing space to confront the ethical, practical, and operational challenges.


10:00-10:30 AM GMT

Andrew Fahey

Forensic Analytics

Abstract: Andrew will examine best practice in deploying AI across policing, highlighting UK case studies where AI is improving efficiency, investigative capability, and public contact. He will also address challenges including data quality, culture, skills, governance, and ethical and legal risks such as bias, privacy, transparency, and accountability, and offer practical recommendations for policymakers and practitioners.

Bio: Andrew is Public Affairs Director at Forensic Analytics, which provides law enforcement agencies worldwide with digital forensics software and training. He has worked as a digital forensic expert with the UN at the Special Tribunal for Lebanon in The Hague, and with the Metropolitan Police Service, including on the 7 July 2005 London bombings.

10:50-11:20 AM GMT

Dr Uchenna Nnawuchi & Dr Carlisle George

Middlesex University

Abstract: Uchenna and Carlisle will focus on judicial due process and automation bias. They will explore how risk-prediction tools can subtly shape judicial reasoning, embed historical inequalities, and obscure accountability, raising concerns about fairness, transparency, and the preservation of human judgment in judicial decision-making.

Bio: Dr Uchenna Nnawuchi is a lawyer and Lecturer at the School of Law at Sheffield University. He is also an associate member of the ALERT (Aspects of Law and Ethics Related to Technology) Research Group at Middlesex University. His research focuses on law and technology, including AI governance and accountability in automated decision-making systems.
Dr Carlisle George is a lawyer and Associate Professor at Middlesex University. Among other roles, he is the Research Convenor of the ALERT (Aspects of Law and Ethics Related to Technology) Research Group and the Chair of the Ethics Committee of the Faculty of Science and Technology. His research interests include legal and ethical aspects of emerging technologies, including the use of AI in various sectors.

12:00-12:30 PM GMT

Nicola Cain

Handley Gill Ltd

Abstract: Nicola will present on deploying live facial recognition in UK policing. She will cover the legal, regulatory, and ethical context, implementation challenges, and the compliance measures adopted to safeguard rights, ensure accountability, and maintain public trust, alongside reflections on emerging policy and regulation, including recent Home Office consultations.

Bio: Nicola is Principal Consultant at Handley Gill Ltd, an award-winning legal and regulatory compliance consultancy specialising in AI governance, data protection, online safety, information access, human rights, GRC and ESG. Nicola advised Hampshire Police on its deployment of live facial recognition technology, and was a finalist in the PICCASO Awards 2025 in the Responsible AI and Data Ethics Initiative or Leadership category. 

1:30-2:00 PM GMT

Stephen Grix

Meganexus

Abstract: As AI enters criminal justice, the challenge is using automation responsibly without undermining fairness or human judgment. Steve will explore where AI can support high-volume tasks and where human agency must remain, and he will introduce the Justice-SAMR Framework, a practical tool for evaluating AI initiatives ethically and effectively.

Bio: Steve has 20 years' experience in the justice sector, specialising in prison education and digital innovation. Over the past five years he has worked as a consultant for the Prisons Digital Learning and Data Team before joining Meganexus to explore AI-focused innovation, looking at how emerging technologies can responsibly transform criminal justice services.

11:30-12:00 PM GMT

Ruth Reymundo Mandel

Safety Nexus AI | Safe & Together Institute

Abstract: Ruth will challenge the current reliance on survivor behaviour, institutional SOPs, and historical system data (such as case notes, removals, and past interventions) as the foundation for AI decision-making in child welfare, domestic violence, and justice systems, and will introduce The Credible Expert QA, a survivor- and culture-led framework for ethical AI development in govtech and social care.

Bio: Ruth is co-founder of the Safe & Together Institute, a robust e-learning ecosystem with more than 400 certified trainers that has trained 50,000 frontline workers over the past 10 years. She helped envision web-based resources and tools for frontline practitioners. Her podcast, Partnered With a Survivor, attracts leading speakers from across the sector worldwide.

10:30-10:50 AM GMT

Desi Yunitasari & Devi Yusvitasari

University of Melbourne

Abstract: Desi and Devi will speak about how digital and data-driven systems shape Indonesia’s criminal justice process, and what this means for women survivors of gender-based violence. Focusing on police, prosecution, and court platforms, they will argue that without trauma-informed safeguards, efficiency-driven digitisation can embed structural bias.

Bio: Desi and Devi are Master of Laws students at the University of Melbourne and LPDP scholars. Their research focuses on gender-based violence, women’s and children’s rights, and the implementation of international human rights norms within Indonesian legal and policy frameworks, combining doctrinal analysis with advocacy-oriented research.

11:30-12:00 PM GMT

David Lydon

Canterbury Christ Church University 

Abstract: David will explore the ethical implications of AI in UK policing from a global perspective. He will examine accountability gaps related to environmental harm, labour exploitation, and impacts on Indigenous communities, arguing that existing human rights frameworks are insufficient and proposing a broader, global ethics model to address transnational and systemic harms.

Bio: David is an interdisciplinary criminologist, researcher, and course director at Canterbury Christ Church University. His research interests include AI and Digital, Data and Technology (DDaT) in policing, visual and creative criminology, surveillance studies, and interdisciplinary research methods. Current research projects include (1) AI literacy and DDaT in policing, and (2) creative representations of surveillance and control.

12:30-1:00 PM GMT

Tamara Polajnar

herEthical AI

Abstract: Tamara will talk about how language-focused AI can surface harmful patterns, such as victim-blaming, coercive control, and APP/romance fraud, across case files, chats, and judgments, without replacing human judgment. Drawing on deployments of herEthical AI, she will outline how trauma-informed, explainable tools can support fairer decisions, protect vulnerable people, and drive measurable cultural change, rather than functioning as black-box automation.

Bio: Tamara is the CEO of herEthical AI, a startup specialising in AI solutions that optimise organisational performance and culture in the Criminal Justice System in order to improve the experience for survivors of domestic abuse and gender-based violence. She has a PhD in computer science and extensive AI experience.

2:00-2:30 PM GMT

Rachel Lovell

Cleveland State University

Abstract: Rachel will examine how rape myths and bias shape police decision-making through written case narratives. Using NLP analysis of sexual assault reports, she will show how negative and abbreviated narratives are associated with unfounded classifications, highlighting how language and bias can obstruct justice and harm victims’ access to accountability.

Bio: Rachel Lovell, PhD, is an Associate Professor of Criminology and Director of the Criminology Research Center at Cleveland State University in Cleveland, Ohio. An applied criminologist and methodologist, her research focuses on gender-based violence and victimization, including sexual assault, sexual assault kits, human trafficking, and intimate partner violence.

Let's Get Social

Tag us on social media
#beyond26 #AIinCJS #AIinPolicing #AI
