Committee report urges changes in procurement rules and the Technology Code of Practice to encourage adoption of artificial intelligence in public services
Public procurement regulations should be changed to give UK based companies a better chance to provide artificial intelligence (AI) solutions to the public sector, according to a new report from the House of Lords.
Its Select Committee on Artificial Intelligence makes the recommendation in its new report, AI in the UK: Ready, Willing and Able?, along with a call for changes to the Technology Code of Practice and a series of proposals to support the ethical use of AI.
The report endorses the earlier recommendation of the Hall-Pesenti Review for an increase in the use of AI in the public sector, and says the Government should look to promote this through changes to the procurement rules.
There would be scope to do this after the UK leaves the EU, with UK companies invited to tender and given the greatest opportunity to take part. This could be supported by the Government Digital Service and Crown Commercial Service amending the Technology Code of Practice so that AI solutions would have to be considered from the start – as is now expected for cloud solutions and the better use of data – and that incentives be provided for UK firms.
It urges the Government to “be bold” in its approach to the procurement of AI, encouraging the development of possible solutions to public policy challenges through limited speculative investment. This should be accompanied by support for businesses that convert ideas to prototypes and help determine which ideas are viable.
While this would come at a cost, it would be offset by the value that AI systems generate for the taxpayer, the report says.
It also recommends the setting up of an online bulletin board to advertise challenges that the Government Office for AI and GovTech Catalyst have identified across the public sector. This could create the potential for innovative AI-based solutions.
The recent announcement of the creation of the GovTech Catalyst also wins approval for its potential to improve the development of AI in the UK.
A section on AI in healthcare says the NHS is not yet close to harnessing the technology's potential and needs to collaborate with developers, but that this will require a more thorough digitisation of its practices and records in consistent formats.
It also refers to the well publicised controversy over the exchange of patient data between the Royal Free Hospital and AI company DeepMind, and says that by the end of this year NHS England and the National Data Guardian for Health and Care should publish a framework for sharing NHS data. Among its provisions should be that patients are made aware of the use of their data and given an opt-out, and that SMEs should have access.
Overall, the report places a strong emphasis on the ethical implications of AI, urging the industry to make ethics a central element of the technology’s development.
Chair of the committee Lord Clement-Jones (pictured) said: “The UK contains leading AI companies, a dynamic academic research culture, and a vigorous start-up ecosystem as well as a host of legal, ethical, financial and linguistic strengths. We should make the most of this environment, but it is essential that ethics take centre stage in AI’s development and use.
“AI is not without its risks and the adoption of the principles proposed by the committee will help to mitigate these. An ethical approach ensures the public trusts this technology and sees the benefits of using it. It will also prepare them to challenge its misuse.”
This was accompanied by the proposal of five principles: that AI should be developed for the common good and benefit of humanity; that it should operate on principles of intelligibility and fairness; that it should not be used to diminish data rights or privacy; that all citizens should have the right to be educated alongside AI; and that it should never be given the autonomous power to hurt, destroy or deceive human beings.
The committee called for these principles to form the basis of a cross-sector AI code that could be adopted nationally and internationally.
Other notable elements of the report include a call for the Government, working with the Competition and Markets Authority, to proactively review the use and potential monopolisation of data by big technology companies, and a recommendation that people be informed when AI is used to make significant or sensitive decisions that could affect them.
It drew a positive response from IT industry association techUK. Sue Daley, head of programme for cloud, data and analytics, said: "The five key principles identified are aligned with current thinking and highlight the importance of ensuring human needs and values remain at the core of technological innovation.
"If we get the policy and regulatory framework right there is no reason why the UK can't be a world leader in the development and effective safe use of AI."