
i.AI sets out five assurance principles for AI

16/08/24

The Incubator for AI (i.AI) has published a set of assurance principles for the deployment of AI technologies for the public good.

It said the move is aimed at building public trust in the use of AI systems by ensuring they are ethical, equitable and of benefit to wider society.

i.AI’s public engagement and strategy manager, Farzana Chowdhury, said the principles are aligned with the Generative AI Framework for Government, the AI Regulation White Paper and the Department for Science, Innovation and Technology’s (DSIT) Introduction to AI Assurance.

First of the five principles is safety, security and robustness, requiring consideration of the extent of human oversight of the systems and the safety measures in place to mitigate potential harms.

Second is that systems are appropriately transparent and explainable, which involves questions such as whether it is possible to make the code available as open source and whether any decision made by the AI can be properly explained to someone who is not a specialist.

Reducing bias

Third is fairness, aimed at reducing any potential bias in a system, which depends on the quality and representativeness of the data used.

Fourth is improvability and openness to challenge, which requires the ability to report and address issues, and to engage with stakeholders to ensure well-rounded development.

Fifth is the need to be accountable with clear governance.

“We also need to ensure that our actions are proportionate,” Chowdhury said. “We are testing a lot of ideas quickly, many of which will fail before ever interacting with a real user, so it is important we don’t put undue burden on our early stage products.”

She said that i.AI – which works within DSIT to promote the use of AI for the public good – will continue to iterate the principles and is embedding them into its processes.

Three-stage approach

It is taking three main factors into account, the first being to check, at the development stage, that the AI behaves as expected under a variety of conditions.

The second is that all of its products include a ‘human in the loop’ approach, designing systems that support workers, and that it explores a range of questions to understand the nature of the tool being developed. These include whether it has conversational capabilities, how explainable its decisions are and what the scale of its impact is.

The third is to take into account external market factors, notably the latest developments in AI algorithms, machine learning techniques and data processing methodologies. This is accompanied by keeping a watch on market dynamics, such as the introduction of new regulations, evolving user expectations and increased competition.

The assurance principles are likely to have a strong influence at least in central government, given that in March the then administration announced that any government AI projects would have to involve collaboration with i.AI – which was then part of the Cabinet Office.

Chowdhury added that this is part of a wider strand of work for i.AI’s product strategy.

