
A Hippocratic Oath for AI developers? It may only be a matter of time

19/05/17


Guest blog by Ben Dellot, associate director, economy, enterprise and manufacturing, RSA (Royal Society for the encouragement of Arts, Manufactures and Commerce) 

Developments in artificial intelligence (AI) and robotics are picking up pace. But are policymakers and regulators ready for the ethical fallout?

What new institutions and practices – from AI watchdogs to oaths for developers – should be put in place to maximise the social benefit of these technologies, while limiting their potential for harm?

New AI and robotic systems are beginning to challenge human superiority in a variety of tasks. Machines are now capable of identifying cancers in medical images, dealing with customer queries through retail and banking chatbots, and organising smart transport solutions to manage traffic flows.

Much of the commentary surrounding these technologies has focused on what they mean for the future of work. Less attention has been paid to the broader ethical dilemmas they pose: seldom do we hear how they might affect matters of privacy, discrimination, fairness and self-determination.

Dilemmas

  • Discrimination: Machine learning systems trained on legacy datasets can reinforce biases in decision making. For example, employers using AI-powered recruitment software could lock out skilled candidates whose attributes fail to mirror those in the ‘training set’ (see the sketch after this list). It’s not hard to imagine AI systems being used to exclude certain groups from purchasing insurance, for instance if algorithms predict they have a high risk of contracting a chronic illness.

  • Privacy: Given that data is the fuel powering artificial intelligence systems, many of the companies developing them will need to harvest and store ever greater amounts of our personal information. This is fine insofar as we consent to the extra tracking, but what happens when sharing becomes so normalised that divulging our most sensitive details starts to feel like an obligation? The fact that tech companies can store our data securely is no guarantee that our privacy will be protected.

  • Agency: Artificial intelligence will help to address stubborn challenges relating to healthcare, education and energy efficiency. But as a commercial tool, it will also be used to steer the behaviour of consumers, with potential consequences for human agency and free will. Addictive app design – or ‘captology’ – is already a point of contention in Silicon Valley, and we should wonder what AI could be used to do in the hands of some of its leaders.

  • Authenticity: The most sophisticated AI systems will not only be able to replicate human abilities; one day they will also be able to mimic human nature and pass themselves off as real people. Since companies are under no obligation to disclose whether their interface is a human or an AI, many people could falsely believe they are interacting with another person. Not everyone will think this is an issue, but consider the reaction in some quarters to the cute Paro robot, which has proved effective in calming dementia patients but has also been criticised for removing the human from an important caring role.
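To make the discrimination point concrete, here is a minimal sketch of how a model trained on biased legacy decisions inherits that bias. The scenario, numbers and variable names are invented for illustration, and scikit-learn simply stands in for whatever system a real recruiter might use:

```python
# Illustrative only: invented data showing how a model trained on biased
# historical hiring decisions learns to penalise a protected attribute.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

skill = rng.normal(size=n)            # genuine ability
group = rng.integers(0, 2, size=n)    # protected attribute (0 or 1)

# Legacy outcomes: past recruiters rewarded skill but marked down group 1.
hired = (skill - 1.0 * group + rng.normal(scale=0.5, size=n)) > 0

# A new 'AI recruitment' model trained on those legacy decisions.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

print("weight on skill:", model.coef_[0][0])   # positive, as expected
print("weight on group:", model.coef_[0][1])   # strongly negative: bias learned

# Two equally skilled candidates from different groups get different scores.
candidates = np.array([[1.0, 0.0], [1.0, 1.0]])
print("hire probabilities:", model.predict_proba(candidates)[:, 1])
```

Run this and the weight on the protected attribute comes out strongly negative: two equally skilled candidates receive very different hire probabilities purely because of group membership – exactly the kind of baked-in bias an external audit would need to surface.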

Keeping AI in check

There is often no easy answer as to what is right and wrong: while these machines may harm some users, they will also deliver big gains to others.

Retailers have long used marketing to encourage people to buy more of their goods, so why is it inappropriate to use more powerful adverts underpinned by AI? The time that nurses in care homes can devote to older patients is increasingly squeezed, so why not use robotics to at least plug some of the gap?

AI developers, policymakers and regulators cannot answer these questions alone. But they can start taking steps to limit the most obvious potential for harm, based on what we know to be broad societal values.

The largest tech companies – Apple, Amazon, Google, IBM, Microsoft and Facebook – have already committed to creating new standards to guide the development of artificial intelligence. Likewise, a recent European Parliament investigation recommended the development of an advisory code for robotic engineers, as well as ‘electronic personhood’ for the most sophisticated robots to ensure their behaviour is captured by legal systems.

Sandboxes and software deposits

Other ideas include regulatory ‘sandboxes’ that would give AI developers more freedom to experiment, but under the close supervision of the authorities, and ‘software deposits’ for private code that would give consumer rights organisations and government inspectors the opportunity to audit algorithms behind closed doors.

There have even been calls to institute a Hippocratic Oath for AI developers. This would have the advantage of going straight to the source of potential issues – the people who write the code. An oath might also help to concentrate the minds of the programming community as a whole. Inspiration can be taken from the way the IEEE, a technical professional association in the US, has begun drafting a framework for the ‘ethically aligned design’ of AI.

It’s still early days, but it’s important that we begin experimenting with different protections and institutions, and that we arrive at a package of measures sooner rather than later. Leave it too long and we may find the technology running away from us, with knock-on effects for everyone – users, developers and tech companies alike.

If you would like to hear more, Ben will be speaking at UKAuthority's Rise of the Bots event on 20 June in London. More details and registration here.

The full version of the blogpost can be viewed here.
