
Public ‘sees benefits but supports regulation of AI’

06/06/23

Mark Say, Managing Editor


Image: AI chip on motherboard (source: istock.com/Andy)

More than 60% of the British public support laws and regulations to guide the use of AI, according to the results of a national survey commissioned by the Ada Lovelace Institute and The Alan Turing Institute.

They have published a report on the survey of over 4,000 adults, which also had substantial input from the London School of Economics’ Methodology Department. It comes at a time when conversations around AI regulation and the need to mitigate its risks are intensifying.

The survey, carried out by Kantar Public Voice, asked about specific and clearly described uses of AI, as the technology is open to multiple interpretations and often poorly understood.

It asked about attitudes towards the relevant governance. When asked what would make them more comfortable with the use of AI, almost two-thirds (62%) chose ‘laws and regulations that prohibit certain uses of technologies and guide the use of all AI technologies’ and 59% chose ‘clear procedures for appealing to a human against an AI decision’.

Areas of benefit

In general, it found that the public see clear benefits for many uses of AI, particularly technologies relating to health, science and security.

When offered 17 examples of AI technologies to consider, respondents thought the benefits outweighed the concerns for 10 of them. For example, 88% said AI is beneficial for assessing the risk of cancer, 76% could see the benefit of virtual reality in education and 74% thought climate research simulations could be advanced using the technology.

The survey also showed that people often think speed, efficiency and improving accessibility are the main advantages of AI. For example, 82% thought that earlier detection is a benefit of using AI with cancer scans and 70% that speeding up border control is a benefit of facial recognition technology.

However, attitudes vary across different technologies. Almost two-thirds (64%) were concerned that workplaces will rely too heavily on AI for recruitment, rather than using professional judgement, and 61% were worried that AI will be less able than employers and recruiters to take account of individual circumstances.

Public concerns extend beyond the use of AI in the workplace. For example, 72% expressed concern about driverless cars and 71% about autonomous weapons. Over three-quarters (78%) said they worry that the use of robotic care assistants in hospitals and nursing homes would mean patients missing out on human interaction, and over half (57%) were concerned that smart speakers gather personal information that could be shared with third parties.

Contextual factors

Awareness of AI technologies also varies greatly depending on context. Some 93% of respondents were aware of the use of AI in facial recognition for unlocking mobile phones, but only 19% were aware of its use in assessing eligibility for social welfare benefits.

Andrew Strait, associate director at the Ada Lovelace Institute, said: “AI technologies are developing faster than ever and more organisations, in both the private and public sector, are expanding their use of AI. However, it is important that companies and policymakers are aware of public expectations and concerns.

“Our research provides a detailed picture of how the public perceive the use of AI across a range of contexts. We hope that it will help AI companies and policymakers understand and respond to the public’s nuanced attitudes towards AI and its regulation.”

Professor Helen Margetts, programme director for public policy at The Alan Turing Institute and principal investigator, said: “The survey showed that for the majority of technologies, people saw more benefits than concerns. But their views of these technologies were highly nuanced, in that they could see benefits and concerns simultaneously.

“Studies like this can be helpful in considering the development and deployment of AI, especially with the advent of newer generations of AI such as ChatGPT. People's clear support for the regulation of AI showed how important it is to get the governance right, to ensure that the uses of these technologies embody fairness and transparency, and that people can benefit from them safely.”
