
A big week for AI - the political reaction


There was a mixture of high expectations and a feeling of fatigue in the run-up to the AI Safety Summit. There are hopes that the Summit, which brings together major players in tech as well as governments, will lead to some sort of consensus on the “safe” use of AI, or at least kick-start the global conversation. On the other hand, many discussions and debates have already taken place, with varying perspectives and emphases, creating a feeling that we have been here before.

The significance of this event, however, is that it is happening.

It is no surprise that the US issued the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence just before the Summit to reaffirm its presence and its position in this debate. To some, this may feel too political, and the adoption of the term “frontier models” – presumably proposed by the Frontier Model Forum founded by Microsoft, Anthropic, Google, and OpenAI – somewhat too tech-giant-driven. But whatever the specific interest of each party might be, it demonstrates a strong political will and delivers the message that countries and businesses are taking AI seriously.

The EU’s AI Act and the US Blueprint for an AI Bill of Rights issued in 2022 (among others, including Google’s responsible AI principles) have already set out basic principles for the responsible use of AI. In terms of the “what”, a set of key principles is needed, and no one will dispute that any technology should be used responsibly. In terms of the “how”, the EU’s AI Act has chosen a risk-based approach, which is also difficult to argue against. What is refreshing about this Executive Order is that it clearly indicates “who” should be doing “what”. There could be debate about the suitability of the agencies assigned to the specific tasks, but at least it is pragmatic, and some actions will be taken beyond principles and debates. Intentionally or not, the word “regulate” occurs only once in the Executive Order. Some see regulation as a hindrance to innovation, but it has many benefits, including providing confidence in, and safety for, the technology – it is the operationalisation of regulation, such as “red tape”, that slows down development, rather than regulation itself.

The current explosion of interest in AI research brings back memories of the ’70s and ’80s and the likes of DARPA in the US, Alvey in the UK, Esprit in Europe, and Japan’s Fifth Generation Computer Systems project, all of which were heavily backed by governments. These are now often associated with the onset of an “AI winter” for not delivering what people expected, not to mention that there was hardly any consideration of responsible AI. But the business environment in which AI technologies are being used and developed is markedly different from those times. The AI Safety Summit and similar global conversations need to deliver tangible results to maximise the benefits of AI technologies as public goods.

Professor Keiichi Nakata

Head of Business Informatics, Systems and Accounting (BISA)
Published 2 November 2023
Topics:
Leading insights AI and automation
