Alan Turing Institute report advises embracing AI in national security

26/04/24
Image source: iStock.com/metamorworks

The Alan Turing Institute reports that artificial intelligence (AI) must be embraced and used in national security decision-making, or the Government risks losing out on the opportunity the technology presents.

Government Communications Headquarters (GCHQ) and the Joint Intelligence Organisation (JIO) commissioned the Alan Turing Institute and its Centre for Emerging Technology and Security (CETaS) to research and write the report.

The report's key findings include that AI will be vital to national security decision-making, as it can identify patterns, trends and anomalies at a speed and scale that human security staff cannot match. This means AI can help intelligence analysts understand complex issues, but the Institute warns of dangers presented by the technology and the need for investment in educating its users.

"Our research has found that AI is a critical tool for the intelligence analysis and assessment community," said Dr Alexander Babuta, director of The Alan Turing Institute's Centre for Emerging Technology and Security.

He added: "But it also introduces new dimensions of uncertainty, which must be effectively communicated to those making high-stakes decisions based on AI-enriched insights. As the national institute for AI, we will continue to support the UK intelligence community with independent, evidence-based research to maximise the many opportunities that AI offers to help keep the country safe."

Transforming intelligence

The report says AI will transform intelligence analysis through faster data processing and increased accuracy. It also highlights the risks of using AI, which it describes as "the potential to exacerbate dimensions of uncertainty inherent in intelligence analysis and assessment". It therefore suggests that "additional guidance" will be required for those using AI in national security decision-making.

The Institute also sees a failure to embrace AI in the field as a risk, one that could weaken the value of intelligence assessments. At the same time, the report states there must be "continuous monitoring and evaluation" of AI to prevent bias from affecting results.

Training and guidance for strategic decision-makers and intelligence analysts using AI will be important to ensure they understand the new set of "uncertainties" that AI-based intelligence will create, the Institute says. This training should extend to senior leaders such as the director general, permanent secretaries and ministers.

"AI is not new to GCHQ or the intelligence assessment community, but the accelerating pace of change is," said Anne Keast-Butler, director of GCHQ. "In an increasingly contested and volatile world, we need to continue to exploit AI to identify threats and emerging risks, alongside our important contribution to ensuring AI safety and security."
