What can the public sector do with AI?

27/01/17

Mark Say, Managing Editor

Analysis: Artificial intelligence is currently big on potential, short on applications in delivering services – but small steps could provide the best approach to finding solutions

Great idea, big potential, but few applications so far and a lot to learn. This sums up the outlook for how public services could make use of artificial intelligence (AI), the technology that is stirring up hopes and fears, and is already surrounded by an aura of inevitability.

There is some debate about its definition, but it is generally seen as a stream of computing developed to carry out tasks that usually require human intelligence, and to learn from what it takes in.

It came in for a new round of attention last week when the Government’s chief scientific adviser, Sir Mark Walport, delivered a Turing Institute lecture on its potential. It was notable for emphasising the technology’s overall significance rather than offering much precision on how it could be used: Walport spoke of applications in justice, welfare, education and medicine, but largely in broad terms.

This reflected the tone of the Government Office for Science report published last November, which conveyed the potential in generalities: supporting decision-making; anticipating demand for existing services and deploying them more effectively; making decisions more transparent; and helping departments better understand the groups they serve. But it did not get into specific applications.

Chatbot projects

A couple of recent announcements point to relatively simple implementations on the frontline. There is growing interest in chatbots – which use AI to simulate human conversation – and Enfield Council will be the first local authority to pick up on the technology when it launches the Amelia virtual service agent, developed by IPSoft, in the spring.

Its first use will be in responding to enquiries on building and planning controls, reflecting a view that AI could initially be suited to dealing with mid-volume, mid-complexity transactions. But Rocco Labellarte, the council’s interim assistant IT director, told the recent Local Digital Transformation conference that the technology could mature within two or three years to take on more complex processes in areas such as social care.

“What’s really fascinating is that if local authorities pull together to work in this area, because of the complexity of our services, we can build these things together,” he said.

A group of North London clinical commissioning groups is also looking to put a chatbot to work, in the form of an app developed by Babylon Health for a basic triage process that provides advice on urgent healthcare. It draws on the deep learning element of AI to respond to an enquiry, ask follow-up questions and offer advice.
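
Babylon has not published how its app works beyond that description, but the basic shape of a triage exchange – respond to a reported symptom, ask follow-up questions, then give advice – can be sketched in a few lines. Everything in this toy example (the symptoms, questions and escalation advice) is an invented placeholder, not the Babylon app’s logic.

    # Sketch only: a toy triage flow - respond, ask follow-ups, advise.
    # Babylon's real system uses deep learning; these rules are invented.
    FOLLOW_UPS = {
        "chest pain": [
            ("Is the pain severe or spreading to your arm or jaw?", "call 999 now"),
            ("Has it lasted more than 15 minutes?", "call NHS 111"),
        ],
        "headache": [
            ("Did it come on suddenly and severely?", "call NHS 111"),
            ("Do you also have a fever or a stiff neck?", "call NHS 111"),
        ],
    }

    def triage(symptom: str, answers: list[bool]) -> str:
        """Return advice for a symptom given yes/no answers to its follow-ups."""
        questions = FOLLOW_UPS.get(symptom.lower())
        if questions is None:
            return "No guidance available - please contact your GP surgery."
        for (_question, escalation), said_yes in zip(questions, answers):
            if said_yes:
                return f"Based on your answers, please {escalation}."
        return "Nothing urgent indicated - self-care advice and GP follow-up."

    # Example: a 'yes' to the second chest pain question escalates to NHS 111.
    print(triage("chest pain", [False, True]))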

IT support

Other specific possibilities are being identified, but with degrees of caution. Steve Robinson, managing director of IT service management firm Littlefish – which supports a number of public sector clients, one of which requires it to keep a watching brief on AI – says he can see the potential for the technology used in Amelia to provide service desk support.

“We could look at deploying it today for more straightforward activities. If you think of it on a service desk it could take a user through a password reset process and probably do it in a way that seems quite humanistic,” he says.

“An AI interface as the first point of contact for tasks like this, in which you are taken through a process, should be simple. In fact, I believe up to 70-80% of tasks could potentially be delivered through an AI interface in the end. A benefit of this is the complete removal of the potential for human error, leading to better efficiency.”

But he adds that for more nuanced processes, which account for a lot of the volume in IT support, the technology is still a couple of years away from having a full diagnostic capability, and that this is holding his company back from an immediate commitment. He suggests that the next viable step could be where the AI has sufficient understanding to know when it needs to hand off an enquiry to a human operative.
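
Littlefish has not described an implementation, but the hand-off pattern Robinson outlines can be sketched: the assistant automates only the requests it recognises with high confidence and routes everything else to a person. The classifier, intent names and threshold below are assumptions for illustration, not the company’s system.

    # Sketch only: confidence-based hand-off from an AI service desk
    # assistant to a human operative. Names and numbers are placeholders.
    from dataclasses import dataclass

    CONFIDENCE_THRESHOLD = 0.8                  # assumed cut-off, tuned per desk
    AUTOMATED_INTENTS = {"password_reset", "unlock_account", "vpn_setup"}

    @dataclass
    class Prediction:
        intent: str        # e.g. "password_reset", from a hypothetical classifier
        confidence: float  # 0.0 to 1.0

    def route(prediction: Prediction) -> str:
        """Automate the request if the AI is confident and the task is simple;
        otherwise escalate to a human agent."""
        if (prediction.confidence >= CONFIDENCE_THRESHOLD
                and prediction.intent in AUTOMATED_INTENTS):
            return f"automate:{prediction.intent}"
        return "escalate:human_agent"

    # A clear password reset is automated; an ambiguous fault goes to a human.
    print(route(Prediction("password_reset", 0.93)))
    print(route(Prediction("printer_fault", 0.55)))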

There have been other projections that, although coming from outside the UK, would clearly be applicable here. The Stanford School of Engineering has highlighted some in its report, Artificial Intelligence and Life in 2030, which sees applications in several areas.

Traffic and transport

These include the smarter management of traffic, using real-time sensors and cameras to predict flows and optimise traffic light timing, and a new dimension to public transport, with AI directing people towards ridesharing.
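
The Stanford report does not set out an algorithm, but the idea can be illustrated with a toy example: split a junction’s green time across approaches in proportion to the queues its sensors measure. Real adaptive systems are far more sophisticated, and all the numbers below are invented.

    # Toy illustration of sensor-driven traffic light timing: each approach
    # gets a share of the green budget proportional to its measured queue,
    # clamped within safety bounds. Figures are assumed placeholders.
    MIN_GREEN, MAX_GREEN = 10, 60    # seconds, assumed safety bounds
    CYCLE_GREEN_BUDGET = 90          # assumed total green seconds per cycle

    def split_green_time(queue_counts: dict[str, int]) -> dict[str, int]:
        """Allocate green seconds to each approach in proportion to its queue."""
        total = sum(queue_counts.values()) or 1
        return {
            approach: int(min(MAX_GREEN, max(MIN_GREEN, CYCLE_GREEN_BUDGET * count / total)))
            for approach, count in queue_counts.items()
        }

    # Example: the busy northbound approach gets most of the cycle.
    print(split_green_time({"north": 42, "south": 12, "east": 6, "west": 5}))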

It points to education, at school and higher levels, where AI could be used in intelligent tutoring systems and online learning. The technology is already showing the capability to sustain a human-machine dialogue and to mimic the role of a good human tutor by, for example, providing hints when a student gets stuck on a problem. Systems have already been devised for subjects including maths, geography, computer literacy and languages.

The report says that a few cities have begun deploying AI for public security, drawing on data from surveillance cameras and drones to spot anomalies that point to a possible crime and to support more predictive policing. It could also be used to help police manage crime scenes, with tools to prioritise tasks and allocate resources.

There are also possibilities for social care in combining AI with robotics, providing robotic carers to support elderly people in their homes.

The report also emphasises the potential in healthcare, in tasks such as using robotics to support surgery or drawing on machine learning to predict which patients are at risk. But it says that progress here has been restrained by the difficulties of integrating the technology into the large, complex systems used in healthcare, and by worries over clumsy interactions between computers and humans.

Risk issue

For now most of this is at the speculative stage: while a few applications are being prototyped, it is difficult to find anything in widespread use. As with any technology, it needs early adopters to take on a degree of risk, feed into the development of AI and show the benefits when they emerge. It is also difficult to make a case for investment when the technology is not yet proven.

Public services think tank Reform has been monitoring the early work on AI and suggests that it could gain momentum by contributing to the long-term demand for more cost-effective services. Researcher Alexander Hitchcock says this could apply particularly to healthcare.

“There are challenges in terms of funding the NHS, and if the government can create a debate about the means to improve the NHS, reduce its cost with technology and say ‘This is how AI works’, it can be a useful means to incorporate it into public services,” he says.

He adds that it could be possible to build momentum through an early focus on low-risk initiatives.

“What government should be doing is focusing on the small wins and aiming for a snowball effect. So in the next few years it can use the technology that is proven to work and is not a high risk, like the more basic administrative stuff.

“It can create a momentum in the use of AI to show internally in government what it can do, even though it might be less innovative than some of the stuff coming from the private sector. That will involve piloting, probably at local level and with small budgets.”

Ethics arguments

Meanwhile, there is also the prospect of resistance around some ethical issues. AI would often make use of personal data, and could do so in a way that reignites arguments over the boundaries of sharing data and combining it from different sources.

The prospect of replacing humans with machines, however relevant or appropriate it might be to specific tasks, is also going to stir up opposition, as will the reluctance of many people to interact with a machine rather than a human.

Then there is the issue of how far we should go in enabling machines to make decisions previously in the realm of the human. Sir Mark Walport made the point in his speech that, while AI could be valuable in the justice system for sorting through huge volumes of case law and assessing the circumstances to determine a sentence, the idea of it making the decision would make a lot of people nervous.

These issues will be difficult to resolve, but they are not going to block the adoption of AI in the public sector: within a few years it will offer solutions for handling a lot of the pressures faced by organisations in all areas.

In fact, the sector is likely to contribute not just to finding applications for the technology but to resolving the ethical issues. The culture within public authorities often makes them more inclined than private enterprise to consider the social implications of a new approach, and they could be equipped to strike the right balance sooner rather than later.

Government is still finding its way into the possibilities of AI, but as it does so it can also show the way to others.

Image by www.flickr.com/photos/cblue98/, CC 2.0
