Organisations developing services using artificial intelligence need to be clear about its purpose and pay close attention to transparency, according to newly published guidance.
The Information Commissioner’s Office (ICO) and the Alan Turing Institute have published the document, aimed at helping organisations explain the processes, services and decisions delivered or assisted by AI.
It reflects continuing concerns about the ethical and data protection implications of the technology in public and private services.
The guidance consists of three parts, the first of which covers the basics of explaining AI. This is aimed at data protection officers and compliance teams, but is relevant to everyone involved in the development of AI systems.
It outlines potential benefits and risks, with a warning that failing to explain decisions assisted by AI could lead to regulatory action and damage to an organisation’s reputation.
The second part deals with explaining AI in practice, outlining a series of tasks that include selecting priority explanations depending on use cases and the impact on individuals, collecting and pre-processing data, and building a system to extract relevant information from a range of explanation types.
Along with these is the need to translate the rationale behind a system’s results into usable and easily understandable reasons.
The third part covers explaining what AI means for an organisation. This includes setting out policy and procedures, such as explaining why a specific AI model was selected, keeping an audit trail and providing the relevant documentation.
The publication follows the ICO’s recently outlined steps to eliminate bias in the use of AI. It includes an emphasis on algorithmic fairness and a willingness to add or remove data on different groups.