UK government departments are "at the mercy" of external software developers when it comes to testing the quality of software code running vital public services, with no set testing or quality standards of their own, according to research by one supplier.
The statement follows publication of responses to a series of Freedom of Information (FOI) requests submitted to 26 UK government departments and agencies by software quality checking specialist Coverity, asking about their arrangements for testing the quality of externally-developed software code.
Respondent numbers were not large - of the 26 questioned, eight did not respond at all, and a further five said the cost of response would be prohibitive, as permitted under FOI law (questions which would cost more than £600 to answer may legally be ignored).
However, the results from those who did respond suggest a high proportion of government software is developed by third parties, with six out of nine relevant responses stating that at least 70% of their software was developed externally. The Department for Business, Innovation and Skills (BIS), the Department for Environment, Food and Rural Affairs (DEFRA) and the Department for Work and Pensions (DWP) said 100% or close to 100% of their software was written by third parties; the Department of Energy and Climate Change (DECC) said 90%; the Department of Health (DH), 75%; the Ministry of Justice (MOJ), 70%; and the Department for International Development (DFID), 25%. The Department for Transport (DfT) and the Foreign and Commonwealth Office (FCO) said none, with the latter not developing any software at all.
Costs are also significant: when asked about external software development spending, most respondents could not give a figure, but estimates from those who did ranged from £517.1m at DWP down to £0.35m at DFID.
When it came to bug-checking, however, the pattern was hugely varied. When asked whether they had specific service level agreements (SLAs) setting out the number of bugs or errors that would be acceptable in a given piece of software when it returns from the outsourcers, only two of seven relevant responses - DEFRA and MOJ - stated that SLAs were in place for code quality. One other department, DWP, said arrangements varied from contract to contract; and four (DFID, BIS, DH and DECC) said no SLAs were in place.
When asked how they measure the quality of software when it returns from the outsourcers, five departments responded. DWP said software quality is measured "through a series of testing phases"; DFID said "All software development conforms to an agreed quality plan"; MOJ said testing standards are defined in the department's digital strategy; DH responded that its managers "expect outsourcers to have their own standards"; and BIS said the information was not recorded.
"There is no universally agreed upon standard for measuring the quality and security of software code across government departments", Coverity European marketing director Richard Walker told UKAuthority.com this week.
"In response to the FOI request, only five departments were able to provide greater detail on the processes they had in place, and of these, each one was different.
"For example, DEFRA stated their classifications of bugs ranged from minor cosmetic flaws to complete system failure. Criteria for accepting software involved completely resolving the most severe functional flaws, while action plans were required to resolve existing non-vital or superficial bugs.
"However, several departments had no specific Service Level Agreement metrics, with for example the Department of Health stating it expected outsourcers to have their own standards in place, while having no SLA in place to determine this. Additionally, some departments classified bugs only as a result of end users feeding back on their experience of using the software."
While the Government Service Design Manual drawn up by the Government Digital Service in the Cabinet Office does detail the importance of code testing to government software, an agreed set of standards for acceptable error levels needs to be applied across government, Walker said. "The first step is to have government departments agree to an acceptable standard for code quality that reflects the importance of code to government operations."
Next, an iterative testing cycle is needed, starting as early as possible in the development process, he said. "By the time it reaches the end user it is often too late to make significant changes, but if it is analysed and changed early in the development process, major flaws can be kept to a minimum.
"As it stands, too many departments are at the mercy of external third parties, which often have no obligation to ensure their code has been extensively tested."
The consequences of government software failure could potentially be severe, causing not just service disruption but loss of sensitive personal data, Walker said.
"Government software supports processes that are fundamentally important to how a country runs, and how people access and organise central aspects of their life", he said. "This can range from the delivery of benefits payments to individuals doing their tax returns.
"If such services are slow, or don't function properly due to software defects, it could lead to the stalling of [services] and potentially threaten the privacy of personal details that are stored digitally."
Pictured: Wikimedia beetle logo, by Isarra / Wikimedia Commons