Analysis: The prospect of an emerging market for data raises the question of how to quantify its potential
It’s now conventional wisdom that data has a value that any organisation, public or private sector, needs to exploit. But how do you assess that value and what are the prospects for building a functional market in data from different sources?
This is more than a hypothetical question, given that public authorities are aiming to raise their game in exploiting their data, and to increase its value by combining it with data from other sources. It takes time and effort to extract and analyse your own data – a real cost in itself – more so to obtain it from other sources, and some of those sources could be attaching price tags in the not-too-distant future.
The effects could be felt across public services, especially in the development of smart places that will only thrive with data coming from a range of sources.
Such questions underpinned some of the talk at last week’s Moving Urban Data Markets conference, staged by a group of interested organisations and hosted by the Greater London Authority (GLA).
A couple of attractive ideas were dispelled. One was that authorities will find all the solutions in open data; concerns around sensitivity and commercial value will keep some of it out of the open realm. More likely is a reliance on a combination of open and closed data, and the development of ecosystems in which data comes from different sources for specific purposes.
The other was that it will be straightforward to identify the available sources, then pick, process and analyse the data. In practice the work can lead to dead ends and be wasted, and if there is a financial cost it prompts a round of agonising over whether the expense is justifiable. Which brings it back to the question of assessing the value upfront.
Among the ideas put forward were assessing data's potential to support innovation, to contribute to social justice, and to help provide better outcomes in services. If there was a common theme it was the need to think beyond organisational efficiency to those areas where an organisation can show it works for the public good.
A more detailed perspective came from Alanus von Radecki, head of urban governance and innovation at Germany’s Fraunhofer Institute for Industrial Engineering. He presented a cumulative data value framework that breaks down the costs and potential benefits of single datasets.
Costs come largely from recurring investments in data management and digital infrastructure; benefits can come from more efficient services, better use of infrastructure and resources, reductions in pollution, improved levels of public health, better risk predictions, new income opportunities and job creation.
Use cases and costs
Quantifying all these is not easy, but as a body of use cases becomes available, with evidence of the benefits, it could be possible to place a monetary value on them. As that becomes credible it provides a step towards assessing a return on investment, and helping authorities decide if it is worth the cost of obtaining data.
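The kind of cost-benefit tally such a framework implies can be sketched in a few lines of code. This is an illustrative sketch only, not the Fraunhofer framework itself: the cost and benefit categories and all of the figures below are hypothetical assumptions, standing in for the monetised estimates a real body of use cases would provide.

```python
# Illustrative only: a simplified annual cost-benefit tally for one dataset.
# Categories and figures are hypothetical assumptions, not real estimates.
from dataclasses import dataclass


@dataclass
class DatasetCase:
    name: str
    recurring_costs: dict      # e.g. data management, infrastructure (per year)
    estimated_benefits: dict   # monetised benefit estimates (per year)

    def annual_cost(self) -> int:
        return sum(self.recurring_costs.values())

    def annual_benefit(self) -> int:
        return sum(self.estimated_benefits.values())

    def net_value(self) -> int:
        # Simple net position: benefits minus costs for the year.
        return self.annual_benefit() - self.annual_cost()

    def roi(self) -> float:
        # Return on investment as a fraction of annual cost.
        return self.annual_benefit() / self.annual_cost() - 1


case = DatasetCase(
    name="air quality sensor data",
    recurring_costs={"data management": 40_000, "infrastructure": 25_000},
    estimated_benefits={
        "more efficient services": 50_000,
        "reduced pollution (monetised)": 30_000,
    },
)
print(f"{case.name}: net {case.net_value():,} per year, ROI {case.roi():.0%}")
```

The hard part, as the article notes, is not the arithmetic but filling in credible monetary values for each benefit line – which is where a growing body of evidenced use cases would come in.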
Andrew Collinge, assistant director of the GLA, said there is a need for more use cases demonstrating the potential for solving problems and supporting decision-making, and that the London Office of Data Analytics (LODA) is looking at pilot projects. Over time such initiatives could contribute to a firmer understanding of the value of different types of data.
Public authorities are only beginning to find their way around this issue, and the various ideas need refining over time. There is going to be experimentation, and hopefully this will lead to some clear methodology and good practice.
It might be that there will always be a fuzzy element to this, and organisations will have to take punts on some investments in data; but the greater the availability of clear numbers the better, and plenty of data leaders will be glad of any progress in the field.
Image from justgrimes, CC BY 2.0 through flickr