Government Office for Science points to issues around decision making and the legal framework in using artificial intelligence in public services
Government has been presented with some cautionary words over its future use of artificial intelligence (AI), particularly over the legal constraints and its use in decision making.
The Government Office for Science has issued a report on the broad outlook for the technology, including a section that highlights its possible use in the public sector.
It conveys a generally positive perspective, saying that AI could provide a number of benefits, especially with the growth of deep learning, which involves combining layers of neural networks to identify features of a dataset. These include making existing services more efficient by anticipating demand and more effectively deploying resources, and supporting decision making with a swift analysis of pertinent information.
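The "layers of neural networks" idea mentioned above can be sketched in a few lines: each layer transforms its input so that later layers can pick out higher-level features of a dataset. This is a minimal illustrative example, not code from the report; all weights and values here are made up.

```python
def relu(x):
    # Common activation function: keep positive values, zero out the rest.
    return [max(0.0, v) for v in x]

def dense(inputs, weights, biases):
    """One fully connected layer: output_j = sum_i(inputs_i * w_ij) + b_j."""
    return [
        sum(i * w for i, w in zip(inputs, col)) + b
        for col, b in zip(weights, biases)
    ]

# Two stacked layers form a tiny "deep" network over a 3-feature input.
x = [0.5, -1.2, 3.0]
h = relu(dense(x, [[0.2, 0.8, -0.5], [1.0, 0.0, 0.3]], [0.1, -0.2]))  # learned features
y = dense(h, [[0.7, -0.4]], [0.0])                                    # final score
print(y)
```

In a real system the weights would be learned from data rather than hand-written, and frameworks handle the layer stacking, but the structure is the same: features feed forward through successive layers.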
But the latter could have its limitations, especially if the public become wary of the possibility of key decisions being made by machines.
The report says it is likely that a human will have to be kept in the loop for many types of decision, with AI used for support. But this could raise new tensions: if the human never questions the advice of a machine, AI could appear to be the de facto decision maker; and if they do question it, they may be thought to be acting recklessly, especially if their own decision is shown to be poor.
Questions and transparency
Government bodies using AI for this purpose will have to be ready to answer questions about its influence, and be transparent about its role in decision making.
They will also need to understand how legal frameworks, such as the Data Protection Act and the EU General Data Protection Regulation, apply to the use of AI. For example, deep learning could involve processing personal data, possibly unintentionally, in circumstances where it is not clear that consent has been obtained.
“Understanding the opportunities and risks associated with more advanced artificial intelligence will only be possible through trials and experimentation,” the report says. “For government analysts to be able to explore cutting edge techniques it may be desirable to establish sandbox areas where the potential of this technology can be investigated in a safe and controlled environment.”
It also points to wider questions around privacy and consent, and to finding the right approaches to accountability for decisions made or supported by AI.
The publication prompted a response from Digital Catapult, the organisation backed by Innovate UK that supports SMEs in the digital sector, emphasising the potential of AI in healthcare.
Its chief technology officer Marko Balabanovic said: “One of the biggest fields where AI can make a difference is healthcare: improving lives, reducing costs and providing opportunities for UK companies to grow to serve large global markets.
“AI for health relies on health data, and the UK has an unparalleled opportunity to open up data flows between centralised health organisations and faster moving innovative companies and research groups. We’ve recently seen proof that this can happen, with two London hospitals working with DeepMind (a Google-owned leader in machine learning), sharing over a million patient records.
“This is encouraging. We would like to build on these examples to enable competitive advantage for innovative UK AI companies.
“The size of the prize is huge, but transparency and control over our personal data are paramount. The good news is that the UK has world leading capability to tackle this dilemma.”
Image by www.flickr.com/photos/cblue98/ CC 2.0