NAO raises questions over DWP use of machine learning

13/07/23
[Image: DWP entrance plaque. Source: GOV.UK, Open Government Licence v3.0]

The National Audit Office (NAO) has highlighted risks in the use of machine learning by the Department for Work and Pensions (DWP) to identify fraud and error in benefits claims.

It has raised the issue in its latest report on the department’s accounts, which includes a section on how the technology has been used.

The work is part of a planned spend of around £70 million on advanced analytics over three years, with the aim of saving around £6.1 billion by 2030-31.

The report says DWP began using a machine learning model in 2021-22, training an algorithm on historical claimant data and fraud referrals for universal credit advances to flag new claims that could contain fraud or error. Similar models have been piloted for other features of universal credit – people living together, self-employment, capital and housing.
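
The report does not describe the model itself, but the approach it outlines is a familiar supervised learning pattern: train on historically labelled claims, then score new ones. The sketch below illustrates that pattern only; the features, model choice and threshold are illustrative assumptions, not DWP's actual system.

```python
# Minimal sketch of the pipeline the report describes: a model trained on
# historical claims labelled by past fraud referrals, then used to score
# new claims. All feature names and values here are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical historical data: one row per claim, labelled by referral outcome
historical = pd.DataFrame({
    "advance_amount":   [250, 900, 120, 760, 430, 980],
    "days_since_claim": [3, 1, 14, 2, 7, 1],
    "prior_claims":     [0, 4, 1, 3, 0, 5],
    "fraud_referral":   [0, 1, 0, 1, 0, 1],  # label from past referrals
})

features = ["advance_amount", "days_since_claim", "prior_claims"]
model = LogisticRegression().fit(historical[features], historical["fraud_referral"])

# Score incoming claims and flag high-risk ones for manual review by a
# caseworker; the model takes no decision on the claim itself.
new_claims = pd.DataFrame({
    "advance_amount":   [880, 150],
    "days_since_claim": [1, 10],
    "prior_claims":     [4, 0],
})
risk = model.predict_proba(new_claims[features])[:, 1]
new_claims["refer_for_review"] = risk > 0.5  # threshold is an assumption
print(new_claims)
```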

Claims identified as containing possible fraud or error are referred to a caseworker, who then carries out a manual review. There is no automated decision making, and a fairness analysis is carried out weekly, prompted by some evidence of bias towards older claimants.
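
The report does not specify how the weekly fairness analysis works. One common form such a check can take, sketched below under that assumption, is to compare the model's referral rate across demographic bands; the data, age bands and disparity threshold are all illustrative.

```python
# Hypothetical fairness check: compare referral rates across age bands
# against the overall referral rate.
import pandas as pd

referrals = pd.DataFrame({
    "age_band": ["16-24", "16-24", "25-49", "25-49", "50+", "50+", "50+"],
    "referred": [0, 1, 0, 0, 1, 1, 0],  # 1 = flagged for manual review
})

overall_rate = referrals["referred"].mean()
by_band = referrals.groupby("age_band")["referred"].mean()

# Flag any band whose referral rate diverges markedly from the overall rate,
# e.g. a band of older claimants being referred disproportionately often.
disparity = by_band / overall_rate
print(disparity[disparity > 1.25])  # threshold is an assumption
```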

Inherent risk

NAO points out that machine learning brings an inherent risk of the algorithms being biased towards selecting claims from groups of people with protected characteristics, which may be due to the model’s design or the data used.

It says DWP faces a challenge in balancing transparency over the use of the technology with not tipping off fraudsters. But the department should be able to provide assurance that it is not treating any group of customers unfairly.

It also acknowledges that the department has established tight governance and controls, with safeguards to assess the impact that using the model has on different claimants. However, the ability to test for unfair impacts across protected characteristics is currently limited – in part because claimants do not always answer the optional questions about their demographics.

In addition, personal data is segregated on the analytical platforms for security reasons, although there are plans to incorporate it into the fairness analysis soon.

There are also plans to make the fairness analysis more comprehensive.
