Why the West Midlands Police AI error is a wake-up call for all
Our academics warn that without clear guardrails and governance for AI, organisations risk reputational and ethical failure.
West Midlands Police’s admission that AI was to blame for its error of judgement in the decision to ban Maccabi Tel Aviv football fans serves as a very public example of the risks posed by the unintended misuse of AI.
The case highlights the critical need for organisations to implement clear guardrails and governance to ensure AI is used responsibly and accurately by all. It also demonstrates that when leaders don’t fully understand the tools shaping decisions in their organisation, they expose themselves, and their institutions, to reputational and ethical failure.
We asked some of our academics for their take on the story and the wider implications for business:
Benjamin Laker, Professor of Leadership
“Cases like this are rarely about rogue AI. They’re about governance failure. When organisations deploy AI without clear rules, training, and accountability, leaders end up denying responsibility for systems they’ve authorised. That erodes public trust fast. Senior leaders are accountable not just for decisions, but for the decision systems they allow to operate. If you can’t explain those systems clearly and honestly, the problem isn’t the technology – it’s leadership.”
Dr Rodrigo Perez-Vega, Associate Professor of Marketing
“The case highlights the need for digital governance around the use of AI within organisations. This event reveals two important things about how we’re using these technologies. Firstly, AI is no longer an ‘emerging’ technology; it is already embedded at the core of decision-making. AI is no longer just a tool for administrative efficiency; it is actively informing high-stakes intelligence and operational decisions. In this case, AI was a fundamental, invisible participant in the ‘room’ where decisions are made.
“Secondly, since the use is so prevalent and goes beyond administrative efficiencies, now more than ever, it is important to consider the accountability of those decisions. The incident shows that we must take personal and professional responsibility for verifying the accuracy of AI outcomes. AI is a powerful assistant, but it lacks the capacity for truth-seeking. In any role, we should remain the final arbiter of truth. Trusting an AI output without rigorous, manual verification isn't just a technical oversight; it is a failure of leadership and due diligence.”
Dr Nadeem Khan, Lecturer in Governance Policy and Leadership
“An underlying governance capability factor is the ‘pace gap’ between public services and private or individual entities in accessing AI technologies, deploying funds, and engaging up-to-date expertise.
“AI public governance requires dedicated resourcing to enable faster and deeper evaluative assessment to support human decision-making. This is vital for societal trust and confidence in public services. Leaders need to be more AI-aware and tech-savvy in assessing the validity and reliability of social and material evidence.”
Professor Keiichi Nakata, Director of AI, World of Work Institute
“Generative AI can be a powerful tool if used responsibly and proportionately to the potential outcome, but there appear to have been a number of issues in the governance of AI use at WMP. The lack of clarity from senior officials about whether AI tools were used in the first place, and the use of AI tools that official guidelines did not approve, reveal a disconnect between AI policy and practice, and a lack of awareness of the potential harm when generative AI tools are used uncritically. This also hints at the need for better training. This is a wake-up call for organisations to take governance around the use of AI seriously.”
If you’d like to speak to any of our faculty experts, contact pr@henley.ac.uk.