Interview: Dr Bill Mitchell, director of policy at BCS, talks about the need for a new mindset in the public sector approach to data and AI
Ask Dr Bill Mitchell about why the public sector should be careful in its use of machine learning and artificial intelligence and he’ll raise the subject of a ‘virtual pigeon’.
“Machine learning has the actual intelligence of an actual pigeon,” he says. “It’s not a super intelligent magical being. So we’re basically talking about a virtual pigeon making decisions, and how much would you trust a pigeon making decisions about you?”
It may sound lighthearted but it’s an important issue for the director of policy at BCS – The Chartered Institute for IT. The organisation has recently warned that there is still a need for human review of decisions made by AI, and he has authored a report on how the UK can set a gold standard for the ethical use of the technology.
This comes within the remit of Mitchell – who has previously been a computer scientist at Manchester, Surrey and Southampton universities and worked for Motorola – aiming to bring together the BCS’s focus on professionalism in IT with the public sector’s efforts to apply technology in a way that works for the public good.
The virtual pigeon is part of that. He refers to a study, reported in the journal Science, showing that real pigeons can be trained to spot signs of cancer in images of biopsied tissue, and says AI systems basically work on the same premise: they can be trained to identify specific features from mountains of data, but not yet to match humans in making decisions that take account of a much wider context.
“If you get into that mindset you are more likely to put in place the right safeguards around AI systems,” he says. “If you think of them as artificially intelligent pigeons rather than AI people you are more inclined to take a more responsible view.”
Consequences and exceptions
He acknowledges the benefits of AI and less sophisticated forms of automation for the public sector, not least in providing significant savings when it is under immense financial pressure. But he emphasises the need to think at the design stage about exceptions to the intended consequences and make provision for humans to deal with them.
AI systems “can’t take into account the wider context but make fairly narrow decisions based on very constrained data. You really have to design from the start for the fact that there will be exceptions that have to be handled separately, and for an escalation process depending on the severity of what’s going on, and build it into your automation.”
Part of the problem is that currently there are no outstanding examples of good practice, but Mitchell says there is a precedent of sorts in Civil Service standards for the procurement of AI systems. These can ensure that systems are ethical, that people are treated fairly, and that those involved understand what fair means.
“But what’s not clear,” he adds, “is how far those standards have been adopted across the rest of government, and it isn’t clear whether the private sector is following similar standards when developing services and systems for the public sector.”
He says there is potential for government to make quick progress by telling private sector bodies providing services for the public sector that they should follow the Civil Service standards and provide evidence that they are doing so. But he is concerned that this could take second place behind the pressure to keep down costs, which in turn prompts a need to think about where the true value lies.
“Value for money criteria take account of costs rather than outcomes,” he says. “If value for money looked at the benefit to society it would lead to looking for evidence that this was happening.”
Benefits and risks
His position on AI reflects a broader concern about the use of data in public services: he acknowledges that it can be highly beneficial when tailored to individuals, but there can often be unintended consequences. This is compounded by the rapid development of new technology, and more systematic thinking is needed to mitigate the risks.
“People are very good at thinking about the typical use case where it is all lovely and everybody gets what they want, but what fails almost every time is doing a proper rigorous analysis of unintended consequences,” he says. “They need to be thought through, identified and mitigated against.
“System engineering is about anticipating unintended consequences and building in mechanisms to handle exceptions. That’s what good engineers do and what good professional practice is about, but it’s often the bit missing in what policy makers think about.”
Mitchell argues that government’s struggle to keep up with this derives partly from ministers not staying in their positions long enough to fully understand the factors and see initiatives through to the end. But there has also been insufficient engagement with IT professionals – a shortcoming he sees in the Government’s recently published proposals for changing the UK’s data regulation regime.
These have been described as laying the ground for a more flexible approach, and while he agrees there is scope to improve the General Data Protection Regulation to reduce the bureaucratic burden, he is not sure the Government is approaching it with the right mindset.
Aspirations and practicalities
“If we look at the current proposals around loosening up data regulations, it’s fine that we have that kind of aspiration, but when you go into the detail it’s not clear that the professionals on the ground will be able to make it work day in and day out, or that they want those reforms.
“There is an example in data protection impact assessments. From our consultations with members they are fine with them and know how they work, but the current proposed reforms are for getting rid of them. Why?
“On the public sector side the thing that needs a lot more clarity is what we mean by the responsible use of data. What does it mean to be ethical? There are a number of places in the consultation where it says ‘Do you agree that we can loosen regulatory procedure with the right safeguards in place?’, but then the question is ‘What are the right safeguards?’ I’m thinking ‘Don’t you need to tell us the right safeguards?’”
“There needs to be more clarity over the safeguards and a better understanding of the responsible use of data, which is where you have responsible organisations, and they are competent, ethical and accountable. I think that should be front and foremost in the work around the public sector.
“It’s fine to be clever in doing things with the data you have as long as you can demonstrate you are ethical, competent and can show how you are being accountable. Where is the transparency and openness? If you can show those I’m prepared to trust you to use my data in more innovative ways than are allowed at the moment.”
Need for trust
It relates to the consensus view of a need to build public trust in how personal data is used, and Mitchell sees this as being closely related to the cause of building IT professionalism in public and private sectors.
He cites a BCS public survey from last year in which 64% of respondents said they would like chartered professionals to develop AI systems that affect their personal lives, because those are the type of people they would trust. But there is a shortage of such professionals, which he sees as a major challenge for the way forward.
“Often people not involved assume IT professionals are in the background hacking away with code, but IT professionals are the ones who have to talk to all the people from different business units and get them to understand the overall impact of introducing new systems, how to do change management, how to make sure they know what users really want, and how to make sure this thing is future proof and able to withstand strange events.
“That requires a lot of teamwork and finding people who can do that is quite hard.”
As a coda he identifies another area in which government needs to do more, although the UK needs to work with others to make it possible: developing standards for the data needed to hit decarbonisation targets. This would underpin the deployment of sensors and systems to manage features such as traffic and energy supplies.
“Part of the ethical question is making sure we are capable of reducing our carbon footprint by having those in place,” he says. “But it feels again as if there are a lot of good aspirations, but in terms of delivering it on the ground it is not really happening yet.”
“This is where there is not sufficient leadership in government to ensure they are working globally to achieve the right data standards. When we talk about global warming the data has to be standardised and shared globally, and we need to get the technologies in place to understand what is going on in the different systems.
“There needs to be more focus in looking at how the technology will enable that decarbonisation.”
Image from BCS