MPs’ committee sounds warning on AI governance


Mark Say, Managing Editor


Parliament’s Science, Innovation and Technology Committee has said the next Government should be ready to legislate on AI if it encounters gaps in the powers of any of the regulators.

It published a report on the governance of AI just before the end of the Parliament in the run-up to the general election.

The report says that while the current sectoral approach to regulation is right, AI is evolving quickly and gaps could emerge in the ability of regulators to protect the public.

It identifies the most far-reaching challenge as the way AI can operate as a ‘black box’ – its decision-making processes being unexplainable – while having very strong predictive powers. In the face of this, there must be stronger testing of the outputs to assess their power and acuity.

There are also concerns at suggestions that the new AI Safety Institute has been unable to access some developers’ models to carry out pre-deployment safety testing. The report says the next Government should identify any developers that have refused access, name them and report their reason for refusing.

Backing regulators

The committee concludes that in a world in which some AI developers command vast resources, UK regulators must be equipped to hold them to account. It says the £10 million announced to support the UK’s sectoral regulators, particularly Ofcom, as they respond to the growing prevalence of AI in the private and public sectors will be “clearly insufficient to meet the challenge, particularly when compared to even the UK revenues of leading AI developers”.

Recommendations in the report include that the Government should address the 12 challenges for AI governance that the committee set out in an interim report in January, be ready to introduce new AI-specific legislation, produce quarterly reviews of the effectiveness of regulation, and provide further financial support.

Change in thinking

Committee chair Greg Clark MP said: “The overarching 'black box' challenge of some AI models means we will need to change the way we think about assessing the technology. Biases may not be detectable in the construction of models, so there will need to be a bigger emphasis on testing the outputs of models to see if they have unacceptable consequences.

“The Bletchley Park Summit resulted in an agreement that developers would submit new models to the AI Safety Institute. We are calling for the next government to publicly name any AI developers who do not submit their models for pre-deployment safety testing.

“It is right to work through existing regulators, but the next government should stand ready to legislate quickly if it turns out that any of the many regulators lack the statutory powers to be effective. We are worried that UK regulators are under-resourced compared to the finance that major developers can command.

“The current Government has been active and forward looking on AI and has amassed a talented group of expert advisers in Whitehall. Important challenges await the next administration and in this, the committee’s final substantive report of this Parliament, we set out an agenda that the new Government should follow to attain the transformational benefits of AI while safeguarding hard won public protections.”
