The Department for Science, Technology and Innovation (DSIT) has announced over £100 million in funding to support research and innovation in AI and develop the skills of regulators in the field.
The announcement has been made as part of the response to the consultation on the AI regulation white paper, with indications that it will include a research hub for healthcare and provide a more agile approach to regulation.
Nearly £90 million of the funding will go to nine hubs for research in areas including healthcare, chemistry and mathematics, while £2 million from the Arts and Humanities Research Council (AHRC) will support new projects to define what responsible AI looks like in sectors including education and policing.
£19 million will go towards 21 projects to develop trusted and responsible AI and machine learning solutions. This will be funded through the Accelerating Trustworthy AI Phase 2 competition, supported through the UKRI Technology Missions Fund and delivered by the Innovate UK BridgeAI programme.
In addition, DSIT has pledged £10 million to upskill regulators to address the risks and harness the opportunities of the technology. The fund will support new research and the development of practical tools to monitor and address risks and opportunities in their sectors.
This will be supported by the spring launch of a steering committee to support and guide the activities of a formal regulator coordination structure.
Rapid response to risks
The department said the UK Government wants to build on work by regulators such as the Information Commissioner’s Office by enabling them to respond rapidly to emerging risks while giving developers room to innovate.
Key regulators, including Ofcom and the Competition and Markets Authority, have been asked to publish their approach to managing the technology by 30 April. They will need to set out AI-related risks in their areas, detail their current skillset to address them, and set a plan for how they will regulate over the coming years.
Secretary of State for Science, Innovation and Technology Michelle Donelan said: “I am personally driven by AI's potential to transform our public services and the economy for the better – leading to new treatments for cruel diseases like cancer and dementia, and opening the door to advanced skills and technology that will power the British economy of the future.
“AI is moving fast, but we have shown that humans can move just as fast. By taking an agile, sector-specific approach, we have begun to grip the risks immediately, which in turn is paving the way for the UK to become one of the first countries in the world to reap the benefits of AI safely.”
The announcement drew a positive response from IT industry association techUK. Its CEO, Julian David, commented: “We’re pleased to see this update to the government’s thinking on AI regulation, and especially the firm recognition that new legislation will be needed to address the risks posed by rapid developments in highly capable general purpose systems.
“Moving quickly here while thinking carefully about the details will be crucial to balancing innovation and risk mitigation, and to the UK’s international leadership in AI governance more broadly.
“We look forward to seeing the government work through this challenge at pace.”
A welcome also came from BCS, The Chartered Institute for IT. It published a statement from AI expert Adam Leon Smith saying: “The UK is relying on its existing legal framework to regulate AI in areas that affect a lot of people, like employment. Even with ‘old fashioned’ AI, we need to balance the risks with the opportunities.
“It is, therefore, right that the Government moves to fund and empower those existing regulators with the tools they need to do their job.
“We also need to remember that this future will be shaped by AI professionals. Managing the risk of AI and building public trust will be most effective when the people creating it are professionally registered and accountable to clear standards.”