Careful consideration of key issues combined with a strong master data management platform can support use of the technology for the public good, writes Ross McIntosh, business development director at Civica
Support for people living with dementia is not where it needs to be, and that is partly because the best use is not being made of the relevant data.
Bo Ruan, head of data at the Alzheimer's Society, has made the point that better data, used in the right context, could help people with dementia to live independently for longer, to ensure they understand their rights and entitlements and to deal with their financial arrangements. It is disappointing that this is still an ambition rather than common practice, given the immense amount of data now being collected in public services.
It is also part of a broader question of whether that data is currently being treated as a business asset. If it is not, it becomes a liability by default, so the focus should be on ensuring it is treated as an asset. This is especially important with the rise of artificial intelligence as a potentially valuable tool for dealing with big societal challenges – and a growing realisation that AI needs the right high quality data to make this possible.
This formed the basis of a recent UKA Live discussion supported by Civica, with contributions from Bo Ruan, John Campbell, digital transformation programme director for the Scottish Government, Robert Musekiwa, digital transformation consultant at Birmingham City Council, Chan Phung, chief innovation officer at Koi-Consulting, and UKAuthority publisher Helen Olsen Bedford.
There was agreement that, despite pockets of good practice, there is a lot of scope for improvement in the public sector’s use of data. Among the problems are that:
- senior leaders often acknowledge the importance of data at conceptual level but do not always know the practical steps to support actions towards improvement;
- often there is not enough consideration related to the source and nature of data before it is applied;
- and the data is often not fit for purpose, being inaccurate, out of date or not appropriate to support a specific use case.
In many organisations there is still a culture of tolerating and working around poor data. This is due to insufficient attention to data literacy among staff, no appreciation that data could be used beyond the silo in which it is collected, familiar problems in extracting it from legacy systems for a new purpose, and failures to apply data standards. The discussion raised the point that many officials are unfamiliar with existing standards – such as HL7 for interoperability in healthcare and SAVVI for vulnerability data – or, just as significantly, they are aware of them but do not know how to apply them.
These factors have hindered progress in abstracting data from service silos in a way that it could be used to good effect for the business generally and in AI applications – with the appropriate measures on information governance to ensure public trust. Overcoming the barriers requires careful thought and effort in a number of areas, as highlighted in the discussion.
Among them is the need to ensure the accuracy, completeness and currency of the data, and that the way it is used is right for the purpose. Also, by sharing data internally and with external partners, many more use cases can be supported, creating further benefits and broader value for the organisation, customers and partners.
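The three quality dimensions named above – accuracy, completeness and currency – are checks that can be automated. A minimal sketch in Python, using entirely hypothetical records and field names rather than any specific platform's data model:

```python
from datetime import date, timedelta

# Hypothetical records illustrating two of the checks named above:
# completeness (no missing fields) and currency (recently updated).
records = [
    {"id": 1, "name": "A. Smith", "updated": date(2024, 3, 1)},
    {"id": 2, "name": "", "updated": date(2020, 1, 15)},
]

def quality_report(recs, max_age_days=365, today=date(2024, 6, 1)):
    """Flag records that fail simple completeness and currency checks."""
    issues = []
    for r in recs:
        if not r["name"]:
            issues.append((r["id"], "incomplete: missing name"))
        if (today - r["updated"]) > timedelta(days=max_age_days):
            issues.append((r["id"], "stale: last update too old"))
    return issues
```

In this sketch record 2 would be flagged twice – once as incomplete and once as stale – while record 1 passes; real deployments would add accuracy rules such as format validation against reference data.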
Crucial human input
There has been a lot of talk about the value of large language models (LLMs), a type of AI algorithm that uses deep learning techniques and very large datasets to understand, generate and predict new content. But the discussion brought up a warning against putting too much faith in these: while they work very fast, they are not as intelligent as a skilled human, and they may inherit biases from the way the data was initially generated.
Bo Ruan highlighted this in pointing out that dementia is unfortunately often misdiagnosed and that, until the factors behind this are corrected, there is a danger of training an AI model that will continue to make the same mistakes and reinforce existing biases.
It raises the question of whether the skills are available to train a model to shed any bias. Rob Musekiwa suggested this may never be fully achievable, as there are issues around social factors, diversity and technical interoperability that will always be subject to change. There will always be a need for intelligent human input, possibly working through a ‘feedback loop’, to monitor and correct any bias.
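The feedback loop described here can be sketched simply: a human reviewer checks model outputs, and any overrides are collected for later retraining. All names in this Python sketch are illustrative assumptions, not part of any system discussed above:

```python
# A minimal sketch of a human-in-the-loop feedback cycle: predictions the
# reviewer disagrees with are logged as corrections, which would then feed
# into retraining or adjusting the model.
def feedback_loop(predictions, human_review):
    """Collect corrections where the human reviewer overrides the model."""
    corrections = []
    for case, predicted in predictions:
        verdict = human_review(case, predicted)
        if verdict != predicted:
            corrections.append((case, verdict))
    return corrections
```

The design point is that the loop never silently accepts the model's answer: every output passes through the reviewer, and only disagreements generate new training signal.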
Along with this is the question of having the right model and technology in place for the abstraction of data from service silos. It has to fit into the organisation’s enterprise architecture, with the ability to extract data from different systems for an ‘appropriate view’ relevant to the issue to which AI is being applied.
This is where a master data management (MDM) platform plays an important role, making it possible to match and merge data from different sources into a rich, single view that is accepted as relevant to the purpose – so achieving trust in its use. This view can be examined for the removal of biases and provides a foundation for innovation in obtaining the insights and capabilities promised by AI.
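The match-and-merge step at the heart of MDM can be illustrated in a few lines. This Python sketch assumes two hypothetical service silos holding partial views of the same person, joined on an exact shared identifier for simplicity; it is not a description of Civica's product, and real platforms use probabilistic or rule-based matching rather than an exact key:

```python
# Two hypothetical service silos with partial views of the same person.
housing = {"NHS123": {"name": "Jane Doe", "address": "1 High St"}}
social_care = {"NHS123": {"name": "Jane Doe", "care_plan": "weekly visit"}}

def match_and_merge(*silos):
    """Build a single 'golden record' per identifier across silos."""
    golden = {}
    for silo in silos:
        for key, attrs in silo.items():
            # Later silos fill in or overwrite attributes for the same key.
            golden.setdefault(key, {}).update(attrs)
    return golden
```

The merged record combines the housing address with the social care plan – the kind of ‘appropriate view’ that no single silo could provide on its own.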
It is an area in which the possibilities are still being explored with much to learn, but a robust MDM platform will do a lot to ensure that an organisation’s data is a valuable asset fit for the future.
MDM is a powerful tool and, coupled with human experience and empathy, provides a compelling solution to ensure that data works for the public good.
Civica supports public sector organisations to be effective in addressing their data management challenges. Get in touch with them here or read more about Civica Master Data Management.
Watch the full UKA Live discussion below: