New report trumpets the potential of artificial intelligence in healthcare, but also points to the need for safeguards in its adoption
NHS leaders need to look closely at including artificial intelligence (AI) in their transformation plans, but should be ready to take a cautious approach and put substantial effort into getting the supporting data right, according to a new report on the subject.
Think tank Reform has said the technology has great potential for improving the detection of health issues and helping hospitals to operate more efficiently, but it will not be feasible if the NHS continues to use paper records and its IT systems do not communicate with each other.
The NHS also needs to think carefully about its objectives in applying AI, and to look closely at the ethical issues.
Titled Thinking on its own: AI in the NHS, the report highlights some of the benefits that have already been identified. These include: speeding up interpretations of breast cancer scans; providing cognitive behavioural therapy to mental health patients; keeping medical staff up to date with scientific articles; and improving administrative tasks such as scheduling operations.
But the use of AI has so far been piecemeal, and few of the benefits have been realised. This is partly because health service data is not yet up to the job, especially in secondary care: it needs to be fully digitised and kept accurate and up to date.
One of the report’s recommendations is that the NHS should continue its efforts to fully digitise its data and ensure it is all generated in a machine-readable format. It also says NHS Digital should improve its monitoring of data quality by making submissions to the Data Quality Maturity Index mandatory.
While the report advocates embedding AI in service transformation plans, it says this should be done with caution, and that AI should not be used as a tool for deciding objectives or outcomes. It also sees a case for a framework for the procurement of AI systems, to ensure they do not hinder transformation or create a burden for healthcare professionals.
It points to some of the difficult issues in the adoption of AI, such as the line between decision support and decision making when used by clinicians, the potential for the algorithms to change as the data fed into them changes, and understanding how a system is making sense of the data it receives.
This gives rise to a number of other recommendations, including:
- Tech companies operating AI algorithms in the NHS should be held accountable for system failures.
- The Department of Health (DoH), Care Quality Commission and Medicines and Healthcare products Regulatory Agency should develop clear guidelines for medical staff using AI for decision support.
- The development of a framework of ‘AI explainability’, requiring every organisation deploying an application to explain its purpose clearly on its website, along with what type of data is being used and how anonymity is being protected.
- A framework to protect NHS organisations from being charged high fees by private companies to use the relevant algorithms.
The overall conclusion is that AI could help to narrow the gaps in healthcare identified by the NHS Five Year Forward View, but that there is a long way to go: it will take time for people to trust the technology, and much will depend on interface design and usability.
Danger of mistakes
Eleonora Harwich, head of digital and tech innovation at Reform, said: “The NHS has experienced difficulties in the past in realising the benefits of technology. Given the big hype around AI, there would be a danger of replicating past mistakes, when a radically different approach to technological adoption is required.
“It is crucial to understand what AI can do to help reform the NHS and the challenges that will have to be tackled to fully reap the benefits of this technology.”
NHS Digital has responded to the report with a focus on data. Its director of data, Professor Daniel Ray, agreed on the importance of ensuring the data is right, and said the organisation is encouraging its partners to evaluate the quality of their data through the Data Quality Maturity Index.
"We also need to make sure that the data provided for use in AI algorithms is designed with the best interests of patients at the forefront of all decision making.
"To do this we need to overcome the challenges of understanding the decisions AI algorithms make when using data. These include the extent to which AI algorithms have been tested, and whether an algorithm that works well in one group of patients is transferable to another. We also need to assess whether, when an AI algorithm makes a decision, it is always programmed to act in the best interests of the patient.
"In specialist areas AI has great potential for success and there are good examples of this starting to happen in the NHS, but we need to understand and evaluate this to move it forwards."
Image by Rolling Dice, CC BY 2.0, via Flickr