The Council of Europe - the 47-nation body that oversees the European Court of Human Rights - has added its voice to a growing chorus of calls for measures to prevent abuses involving artificial intelligence technologies in government.
New guidelines on AI and data protection published this week state that protection of human rights should be “an essential prerequisite when developing or adopting AI applications, in particular when they are used in decision making processes”.
They continue: "AI developers, manufacturers and service providers should adopt a ‘human rights by design’ approach and avoid any potential biases, including unintentional or hidden, and the risk of discrimination or other adverse impacts on the human rights and fundamental freedoms of data subjects."
As "baseline measures" that "should" be followed, the guidelines are not directly binding on the Council of Europe, much less on member states, which include Russia and Turkey as well as the UK. (The council is a separate body from the European Union.)
But they are likely to be referred to in future rulings by the European Court of Human Rights in cases involving governments’ AI implementations. The court last year ruled that bulk telecommunications interceptions by the UK government violated the European Convention on Human Rights.
The guidelines reflect principles set out last year by the European Commission for the Efficiency of Justice, a specialist agency of the council (which has no connection with the European Commission), on the use of AI in judicial systems. These set five core principles, headed by a 'principle of respect of fundamental rights' and followed by principles of non-discrimination, quality and transparency, which would ensure that data processing methods and algorithms are accessible and understandable. A final principle requires systems to be 'under user control'.
The council’s interest in the topic reflects growing concerns about the fairness and transparency of AI systems deployed to make decisions about individuals.
In December the annual report of the AI Now Institute, a New York University research group, described this as an "intensifying problem space". Noting that "AI systems allow automation of surveillance capabilities far beyond the limits of human review and hand coded analytics", it warned: "Without adequate transparency, accountability, and oversight, these systems risk introducing and reinforcing unfair and arbitrary practices in critical government determinations and policies."
Potential threats from AI also feature in the latest Global Risks Report presented at this month’s World Economic Forum conference in Davos, which observes: “The use of new technologies to monitor or control civil society is likely to have deepening geopolitical ramifications.”
Expert commentators have pointed out that meaningful regulation of AI would have to be enforced on an international basis. English barrister Jacob Turner, author of a new book Robot Rules (Palgrave Macmillan), has proposed that the starting point could be an international treaty to set general principles along the lines of the 1967 Outer Space Treaty. An 'International Academy for AI Law and Regulation' could develop and disseminate knowledge and expertise in international AI law, Turner says.
However, in the current international climate of suspicion, particularly between the USA and China, it will be hard to turn such aspirations into reality.