A House of Lords committee has said the UK Government’s approach to AI has become too narrowly focused on safety and is neglecting the opportunities the technology offers.
The Communications and Digital Committee has published a report on the issue, saying large language models (LLMs) will produce epoch-defining changes comparable with the invention of the internet.
But it warns there is a “real and growing” risk of regulations stifling the market, and says there is a need to prioritise open competition and transparency. Without this, a small number of tech firms may rapidly consolidate control of a critical market and stifle new players.
The publication has come after a period in which the UK Government has promoted itself as a worldwide leader in promoting safe deployments of AI, with the establishment of the AI Safety Institute and its backing for a set of guidelines for the secure development of the technology.
The Lords committee has welcomed the moves, but says in the report that apocalyptic concerns about threats to human existence are exaggerated and must not distract policy makers from responding to more immediate issues.
Vision for benefits
It says a more positive vision for LLMs is needed to realise the social and economic benefits, and to enable the UK to compete globally. Key measures include more support for AI start-ups, boosting computing infrastructure, improving skills and exploring options for an ‘in-house’ sovereign UK LLM.
There is also a need to support copyright holders so they are not exploited by LLM developers, and to ensure that tech firms do not use data without permission.
The committee sets out 10 core recommendations, including that the Government rebalances its strategy for AI towards the opportunities it offers, and makes market competition an explicit AI policy objective, with enhanced governance and transparency. These would be accompanied by a series of measures to boost opportunities, such as supporting academic spinouts and looking at the option of a sovereign LLM capability.
The paper also advocates a more nuanced approach to the debate around open and closed AI models, respect for copyright, and rapid preparations in the face of protracted international competition and technological turbulence.
Other recommendations are more reflective of the worries about AI. These include addressing immediate security risks from LLMs, reviewing catastrophic risks, empowering regulators and ensuring that regulation is proportionate.
Chair of the committee Baroness Stowell said: “The rapid development of AI large language models is likely to have a profound effect on society, comparable to the introduction of the internet. That makes it vital for the Government to get its approach right and not miss out on opportunities – particularly not if this is out of caution for far-off and improbable risks. We need to address risks in order to be able to take advantage of the opportunities – but we need to be proportionate and practical. We must avoid the UK missing out on a potential AI goldrush.
“One lesson from the way technology markets have developed since the inception of the internet is the danger of market dominance by a small group of companies. The Government must ensure exaggerated predictions of an AI driven apocalypse, coming from some of the tech firms, do not lead it to policies that close down open source AI development or exclude innovative smaller players from developing AI services.
“We must be careful to avoid regulatory capture by the established technology companies in an area where regulators will be scrabbling to keep up with rapidly developing technology.”