

Government science chief highlights AI questions

20/01/17

Sir Mark Walport says great potential in public services is accompanied by a series of awkward issues and possible threats

Artificial intelligence (AI) will have applications across public services, but society needs to be careful in how it is used, the Government’s chief scientific advisor has warned.

Sir Mark Walport used a lecture at the Alan Turing Institute to warn that, despite the immense potential of AI, it could bring new vulnerabilities and would not necessarily be objective in supporting decision-makers.

“I think the potential applications in terms of delivering services better extend across the whole of the public service realm, be it justice, welfare, education or medicine,” he said.

The ability of systems driven by machine learning to compile and analyse huge amounts of information could provide benefits in several areas. Walport cited as an example AI's capacity to exceed that of lawyers in amassing the body of case law, which could provide valuable support for all elements of the justice system.

But even with the benefits there would be limitations.

“Most of us would agree that having judges who are informed by the corpus of knowledge is important, but would we really think it is a good idea to have AI systems that would work out the sentencing to be applied?” he asked.

“These are interesting questions, and it’s probably best to see the role of the computer as the assistant rather than as a replacement for the human being.”

Danger of bias

This relates to a common misconception about the objectivity of AI: despite such perceptions, it can be as subjective as humans if not used correctly.

“The idea that AI is not going to be subject to its own prejudices and stereotyping is completely wrong,” Walport said. “It can reflect the cognitive biases of the people who do the programming or the information that is fed into it.

“If the data that goes in is biased, then the output will be biased.”
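The point can be made concrete with a toy sketch. In the (entirely invented) example below, a trivial "model" simply learns the majority decision for each group in its training data; because the historical labels are skewed, the learned rule reproduces the skew exactly. The group names, labels and rates are illustrative assumptions, not anything from Walport's lecture.

```python
from collections import Counter

# Invented training data: historical decisions are skewed by group.
training_data = (
    [("group_a", "approve")] * 80 + [("group_a", "reject")] * 20 +
    [("group_b", "approve")] * 20 + [("group_b", "reject")] * 80
)

def train(data):
    """Learn the majority label per group - a caricature of a classifier."""
    counts = {}
    for group, label in data:
        counts.setdefault(group, Counter())[label] += 1
    return {g: c.most_common(1)[0][0] for g, c in counts.items()}

model = train(training_data)
print(model)  # the skew in the data becomes the rule
```

The "model" never sees anything about the individuals involved, yet its output mirrors the historical imbalance: biased data in, biased decisions out.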

Over time this is likely to lead to questions about the need for regulation of AI and the use of algorithms, although he acknowledged that there is a widespread reluctance to increase bureaucratic burdens on organisations using the technology.

He also warned that increasing the use of AI could create new vulnerabilities in society, reflecting the perception that the increased dependence on IT has already done so. In the worst case a widespread breakdown could have catastrophic consequences.

Walport highlighted another potentially difficult issue in the public reactions to AI.

“With all the interactions with chatbots and other machine learning, AI implementations, is it right when we don’t know if we’re interacting with a human or a computer program?” he asked. “At the moment I don’t think we do, and I don’t know the answer, but it’s an interesting question.”
