A new government report says there are notable deficiencies in the UK’s regulatory and governance framework for artificial intelligence in the public sector.
Published by the Committee on Standards in Public Life, it says there is no need for a specialised AI regulator, but that regulators must adapt to the challenges that AI poses to their sectors, and there is an urgent need for guidance and regulation on the issues of transparency and data bias.
Titled Artificial Intelligence and Public Standards, the report says that AI offers the possibility of improved public standards in some areas, but that the current lack of information about its use in the public sector risks undermining transparency.
There are three risks to accountability: AI may obscure the chain of organisational accountability; undermine the attribution of responsibility for key decisions made by public officials; and inhibit public officials from providing meaningful explanations for decisions reached by AI.
There are also concerns that data bias could undermine objectivity.
The report says that explainable AI is a realistic goal for the public sector, but that further work is needed on the data bias issues, and there is a need for effective governance to mitigate the risks.
It points to initiatives such as the Guide to Using Artificial Intelligence in the Public Sector, issued by the Government Digital Service and the Alan Turing Institute, but says that governance is a work in progress, that there is some confusion over multiple sets of ethical principles, and that the Centre for Data Ethics and Innovation does not yet have a clearly defined purpose.
Consistency and authority
In response, it urges the Government to establish consistent and authoritative ethical principles and issue guidance that is easier to use. In addition, procurement processes should be reformed and the Digital Marketplace should offer greater assistance to public bodies seeking technologies that are compliant with public standards.
There are also recommendations for providers of public services, including: evaluate the risk of any AI system to public standards; take account of diversity to overcome any data bias; ensure that responsibility for AI systems is clearly allocated and documented; set oversight mechanisms for proper scrutiny; and always inform citizens of their right to appeal against automated and AI-assisted decisions, and of how to do so.
Jonathan Evans, chair of the committee, said: “Artificial intelligence – and in particular, machine learning – will transform the way public sector organisations make decisions and deliver public services. Demonstrating high standards will help realise the huge potential benefits of AI in public service delivery. However, it is clear that the public need greater reassurance about the use of AI in the public sector.
“Public sector organisations are not sufficiently transparent about their use of AI and it is too difficult to find out where machine learning is currently being used in government.
“Explanations for decisions made by machine learning are important for public accountability. Explainable AI is a realistic and attainable goal for the public sector - so long as public sector organisations and private companies prioritise public standards when they are designing and building AI systems.”