
Philosophy, public services and artificial intelligence

23/02/17


Interview: Professor Luciano Floridi of the Oxford Internet Institute calls for some serious thought about what we really want to do with AI

Luciano Floridi has a job title that might seem odd at first glance but has a strong underlying logic: professor of philosophy and ethics of information at the Oxford Internet Institute at the University of Oxford.

Anyone who thinks seriously about technology’s role in society has to bring an element of philosophy to the mix. It involves critical discussion and ethical questions in addressing the core issue of whether innovations will be good or bad for people in the long term.

It underpins Floridi’s work on the implications of digital technology for people’s lives and society, and leads him into areas beyond those highlighted by many of the tech evangelists. These include the implications for public services, and while he is no alarmist, he says governments should be careful in planning for what they want to achieve.

This is particularly pertinent at the moment for artificial intelligence (AI). In recent months a few public authorities have announced plans to use it in specific services, the Government’s chief scientific adviser Sir Mark Walport has said the opportunities it presents are accompanied by threats, and the think tank Reform has forecast it could replace 250,000 public sector jobs by 2030.

Floridi says that, as AI becomes more prevalent, there is a danger of the public reacting against it.

Too pervasive

“I think there will probably be a backlash,” he says. “There are many other forces pressing forward, so I wouldn’t be surprised if in a generation or two we still have the world we have now but with too much AI: it’s too pervasive and in everything we are producing.

“You design what future is preferable and try to move in that direction, but there are forces pulling and pushing in every direction: commercial, political and social.”

He does not argue for standing in the way of AI in public services, but says government should aim to give it a firm steer, keeping in mind the social implications of how it develops. He uses the metaphor of being “in the driving seat of a very fast car”, saying this would be better than seeing how things develop and then reacting to any social ill effects.

“You have to have a design for the environment you want,” he says. “The role of government is remarkably significant, if it wants to pick it up. It should say ‘This is the world we want to live in’ rather than see how it develops and try to regulate afterwards.”

Which leads to the question: what can government do?

“We need more education in the sense of telling people what it is about, so there is more knowledge, less fear, and people can get a better sense of the potential advantages. The presumption is that ‘I’m the government, I’m on your side, I’m going to do something good for you’. Maybe, maybe not.

“Second, everybody knows we need to be more imaginative about the public-private relationship here. We could be in the hands of six or so companies in the development of AI, the ones making all the difference, so inevitably a good agreement with the corporate research and development world would be very welcome.”

He acknowledges that this brings its own difficulties, and that corporate organisations are usually going to be driven by their own views on market opportunities, but says government holds much larger volumes of a major asset for its development: data.

Data bargain

“That is the primary resource to train any algorithm,” he says. “Machine learning without the data is no good. So there is a bargain there to say we have the data, you know the algorithms, and maybe we can find a partnership for the common good.”

There is a precedent for this, he says, in the way Norway’s government has, since the 1970s, sold the country’s oil through a joint enterprise with the industry, directed at managing the resource for the nation’s benefit.

“Think in terms of these new technologies as the oil of the future and that we need an agreement with commercial partners,” he says. “It would be wonderful to see the government in partnership with companies to exploit data on the one side and algorithms on the other for the benefit of business and society.”

On a broader front, Floridi says it is inevitable that AI will prompt changes in the way that people interact with technology, in public as much as private services. However good the technology becomes, there will be some limits on its flexibility, and that will force people to adapt.

He does not see it as something to worry about, and believes that most people will be able to interact with AI in a way that is good for them. But on a societal level there is a need to identify a long-term aim for where the technology is taking us.

Conceptual need

This can be related to the way the idea of sustainability has developed in the management of the environment. Floridi says that, with the digital environment being increasingly populated by autonomous and intelligent agents, we need a similar concept as a framework for its development – but we have not yet worked out what it should be.

“I’m a bit tentative because this is uncharted territory,” he says. “What is the equivalent of sustainability in a digital environment? We need to think about it.”

It reflects the fact that AI is breaking new ground and developing at a pace that makes it difficult to foresee all of the implications, never mind create a clear sense of what it should do for society.

This might prompt its advocates to pause for thought about what the public sector, and society as a whole, really wants to achieve from making the technology part of everyday life.

Photo courtesy of Ian Scott
