Nesta proposes principles for public sector AI

22/02/18

Director of government innovation calls for feedback on 10 principles for use of algorithms in decision making

Public authorities should be transparent about their use of algorithms to support their decisions, covering not just their purpose but also the assumptions and inputs that go into them, according to a new set of proposed principles.

Innovation foundation Nesta has published 10 draft principles for the use of artificial intelligence in public services, with a call for questions and comments on how they could be amended before being used by the public sector.

Its director of government innovation, Eddie Copeland, has outlined the points in a blogpost that places a strong emphasis on transparency and on making algorithms available for evaluation and audit.

“The application of AI that seems likely to cause citizens most concern is where machine learning is used to create algorithms that automate or assist with decision making and assessments by public sector staff,” he says.

“While some such decisions and assessments are minor in their impact, such as whether to issue a parking fine, others have potentially life-changing consequences, like whether to offer an individual council housing or give them probation. The logic that sits behind those decisions is therefore of serious consequence.”

He also makes the point that, while people can choose not to deal with a private sector organisation that uses algorithms and their personal data, public authorities usually have a monopoly on specific services, so people cannot opt out.

Possible code

This has prompted the drafting of the 10 principles that might go into a code of standards for the public sector, each accompanied by a rationale and relevant questions.

The emphasis on transparency is set out from the start: the first principle is that every algorithm should be accompanied by a description of what it aims to do, and the second that details of the data used and the underlying assumptions should also be published. The latter should also involve a risk assessment, reflecting concerns that algorithms can embed the biases of the people who programme them.

Other principles involve publishing the inputs used by an algorithm, telling people when their treatment has been influenced by an algorithm, and publishing the results of public authorities’ own evaluations of the impact of the algorithms they use.

There are also proposals to:

  • Categorise algorithms on a risk scale in line with the impact they could have.
  • Develop identical sandbox versions of algorithms for auditors to use.
  • Name a member of senior staff as responsible for any actions taken as a result of an algorithmic decision.
  • Create an insurance scheme for public authorities that use algorithms in high-risk areas, which could be used to compensate people affected by a mistaken decision.

Copeland says in the blog that the principles are a working draft and that Nesta is open to comments.