RUSI report highlights problems for AI in national security

28/04/20

Mark Say, Managing Editor

AI shows a lot of potential in supporting national security, but cannot completely replace human judgement and raises serious issues around privacy, according to a new thinktank report.

The Royal United Services Institute (RUSI) has published a paper on the policy considerations for AI in UK national security following talks with a range of interested parties and a review of earlier reports on the subject.

It says the technology could be deployed by intelligence agencies in three ways: to automate administrative processes; to identify cyber security threats; and to support intelligence analysis, drawing on natural language processing, audiovisual analysis, filtering and triage of material, and behavioural analytics. It could be particularly useful in collating data from a large number of sources and flagging the significant elements for human review.

The requirement for AI becomes more pressing as malicious actors begin to use the same technology to create threats, especially to digital security and through the use of 'deepfake' technology to create disinformation.

But the report warns that none of these could replace human judgement, and that systems attempting to predict human behaviour at the individual level are likely to be of limited value in assessing possible threats.

It also says there are opportunities and risks relating to privacy. On the one hand, AI could reduce intrusion by minimising the amount of personal data that goes to human review; on the other, it could lead to more material being processed, some of which could be misused.

Algorithmic risks

There are particular risks in algorithmic profiling, which could be seen as unfairly biased and needs safeguards in internal processes, and in the 'black box' nature of some AI methods, where the system's inner workings are not visible to the user.

The latter can undermine accountability in decision-making, so systems need to be designed in a way that allows users without technical expertise to interpret and critically assess key technical information.

Along with this, despite a proliferation of ethical principles for AI, it remains unclear how these should be used in practice, the report says. This suggests the need for additional sector-specific guidance.

It does not prescribe any detailed solutions, but says it is crucial for the intelligence community to engage with external stakeholders in developing its policy for the use of AI, and to draw on lessons from other sectors.

“An agile approach within the existing oversight regime to anticipating and understanding the opportunities and risks presented by new AI capabilities will be essential to ensure the UK intelligence community can adapt in response to the rapidly evolving technological environment and threat landscape,” it says.
