
UK sets out plans for AI regulation


Gary Flood Correspondent




The UK Government has laid out a blueprint for a UK-centred approach to the combined threat and opportunity that AI represents for employment.

The Department for Science, Innovation and Technology (DSIT) said that the key to making AI work for us all must be driving “responsible” innovation and maintaining public trust in this “revolutionary” technology.

It said AI’s rapid development is raising questions about the future risks it could pose to the public’s privacy, human rights and even safety – as well as concerns over the potential unfairness of using AI tools to make decisions that impact people’s lives, such as assessing a loan or mortgage application. Hence the need for a new national blueprint for the UK’s existing “world class” regulators.

This blueprint will follow five core principles:

  • Safety, security and robustness - Applications of AI should function in a secure, safe and robust way where risks are carefully managed.
  • Transparency and explainability - Organisations developing and deploying AI should be able to communicate when and how it is used and explain a system’s decision making process in a level of detail that matches the risks posed by the use of AI.
  • Fairness - AI should be used in a way which complies with the UK’s existing laws, for example the Equality Act 2010 or UK GDPR, and must not discriminate against individuals or create unfair commercial outcomes.
  • Accountability and governance - Measures are needed to ensure there is appropriate oversight of the way AI is being used and clear accountability for the outcomes.
  • Contestability and redress - Citizens need to have clear routes to dispute harmful outcomes or decisions generated by AI.

The Government believes that AI regulation informed by these five principles will be the best way to create “the right environment for artificial intelligence to flourish safely in the UK”.

In addition, the rules need to be light touch: the Government says it wants to avoid heavy-handed legislation that could stifle innovation, and instead intends to take an “adaptable approach” to AI regulation.

No single watchdog

Instead of creating any new single AI watchdog, the idea is to spread responsibility for monitoring all this over a clutch of existing bodies, such as the Health and Safety Executive, the Equality and Human Rights Commission and the Competition and Markets Authority. These bodies are now expected to come forward with tailored, context-specific approaches that suit the way AI is being used in their sectors.

That work needs to happen over the next 12 months, with regulators now tasked to issue practical guidance tailored to their area of responsibility. They are also being asked to create AI risk tools and resources, such as risk assessment templates, to set out how to implement these principles in their sectors.

Interestingly, “when parliamentary time allows”, legislation could also be introduced to ensure regulators consider the principles consistently.

At launch of the white paper, Science, Innovation and Technology Secretary Michelle Donelan said the UK’s approach is based on “strong principles” that will allow people to trust businesses to unleash this “technology of tomorrow”.

Industry welcome

Welcoming the UK’s approach to AI regulation, Lila Ibrahim, chief operating officer and UK AI council member, DeepMind, noted that AI has the potential to advance science and benefit humanity in numerous ways, but that this “transformative technology” can only reach its full potential if it is trusted – something that requires public and private partnership in the spirit of pioneering responsibly.

“The UK’s proposed context driven approach will help regulation keep pace with the development of AI, support innovation and mitigate future risks,” she stated.

“Both our business and our customers will benefit from agile, context driven AI regulation, which will enable us to continue to lead the technical and quality assurance innovations for safety-critical industrial AI applications while remaining compliant with the standards of integrity, responsibility and trust that society demands from AI developers,” added the chief technology officer of Rolls-Royce, Grazia Vittadini.

Commenting on the approach HMG has chosen on AI regulation, Sue Daley, director for tech and innovation at techUK, said that her organisation welcomes a context-specific, principle-based approach to governing AI that promotes innovation.

But she added: “The Government must now prioritise building the necessary regulatory capacity, expertise, and coordination” to cash out these ideas.

Alongside the white paper, DSIT yesterday announced £2 million to fund a new trial environment where businesses can test how regulation could be applied to AI products and services.

In this so-called ‘sandbox,’ innovators could safely test new ideas to market without being blocked by “rulebook barriers”.

The department wants both organisations and individuals involved in the AI sector to provide feedback on the white paper through a consultation which launched Wednesday 29 March and will run until Tuesday 21 June. To do so, see the full policy paper.
