
Sound sensors ‘could support smart cities and assistive tech’

24/06/19

Sound sensors could be used to support smart city initiatives and as assistive technology to help vulnerable people live at home, a leading academic in the field has said.

Professor Mark Plumbley of the Centre for Vision, Speech and Signal Processing at the University of Surrey highlighted the potential in a presentation staged by the Connected Places Catapult in London last week.

He said the possibilities are being expanded by the ability to combine the sensors with machine learning technology to obtain accurate data on the origin of specific sounds.

Plumbley is focusing on the two possibilities as part of a submission to the Turing Institute to become one of its AI Fellows under the Government’s AI Sector Deal.

He said the Dementia Research Institute’s Care Technology Centre is exploring the use of sound sensors to help people with dementia live at home for longer, although the technology could also support people with other medical conditions.

Early intervention

“It can work in detecting changes of activity for people with conditions such as urinary tract infections,” he said. “If you can pick up the changes in behaviour that go along with that early, you can make an intervention that helps someone get better and stay at home.

“It could be that with the internet of things (IoT) solution you need a sensor on every plug, door and on the stairs, but you may be able to replace those with sound sensors in a couple of rooms. You can spot the sound of a tap being turned on or the microwave beeping or the kettle boiling.”

Regular patterns can be identified for such sounds, and when a system picks up sharp deviations it could pass on an alert for an intervention. Machine learning algorithms could automate this without the process becoming too rigid.
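As a rough illustration of this pattern-and-deviation idea (a minimal sketch under assumptions of our own, not a description of any system Plumbley presented), the Python snippet below assumes an upstream sound recogniser has already counted daily occurrences of events such as the kettle boiling; the event names, counts and threshold are hypothetical.

# Illustrative sketch only: assumes detected sound events (e.g. "kettle") have
# already been counted per day by an upstream recogniser; all names and numbers
# here are hypothetical.
from statistics import mean, stdev

def build_baseline(history: dict[str, list[int]]) -> dict[str, tuple[float, float]]:
    """Learn the usual daily count (mean and spread) for each sound event."""
    return {event: (mean(counts), stdev(counts)) for event, counts in history.items()}

def flag_deviations(today: dict[str, int],
                    baseline: dict[str, tuple[float, float]],
                    threshold: float = 2.0) -> list[str]:
    """Return events whose count today deviates sharply from the usual pattern."""
    alerts = []
    for event, (mu, sigma) in baseline.items():
        count = today.get(event, 0)
        if sigma > 0 and abs(count - mu) / sigma > threshold:
            alerts.append(event)
    return alerts

# Two weeks of typical kettle use, then an unusually quiet day triggers an alert.
history = {"kettle": [5, 4, 6, 5, 5, 4, 6, 5, 5, 6, 4, 5, 6, 5]}
print(flag_deviations({"kettle": 0}, build_baseline(history)))  # ['kettle']

In a real deployment the deviation test would sit behind a trained sound recogniser and a more forgiving model of daily routine, but the basic shape, a learned baseline plus an alert on sharp departures, matches the behaviour described above.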

As examples of the potential in smart cities, he pointed to two ways in which sound sensors could be used. One is in traffic monitoring, such as the Department for Transport’s plan to run trials of a prototype ‘noise camera’ on selected roads to support the enforcement of traffic noise regulations.

Another is in measuring the annoyance to people caused by noise from nearby factories, an issue that Plumbley said has attracted the interest of the Environment Agency.

Better models

“It could be about building better models of how people are affected by different noises,” he said. “It could be like a digital twin to predict the behaviour of people, to assess which noises are particularly annoying and you want to do something about, and others that people don’t mind as much.

“It could be used to improve wellbeing in those areas.”

Other possibilities include providing architects and planners with new tools to design buildings and create a more comfortable environment.

Plumbley highlighted two elements in the data that could be collected: acoustic scene classification, which can convey settings such as a busy city street, market or train station; and event detection, which in the context of an office could be a door knock or slam, speech, the sound of a printer or a keyboard click.

These can be combined with an audio tagging framework and used with machine learning to gain a better understanding of the sound issues in different environments.
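As a simple illustration of how scene classification, event detection and tagging might fit together (a minimal sketch with stub functions standing in for trained models, not a description of the Surrey framework), a tagging record could pair a scene label with the events detected in the same clip:

# Illustrative sketch only: the classifiers are hypothetical stubs standing in
# for trained machine learning models.
from dataclasses import dataclass, field

@dataclass
class TaggedClip:
    clip_id: str
    scene: str                                        # e.g. "office", "busy street"
    events: list[str] = field(default_factory=list)   # e.g. "door slam", "keyboard click"

def classify_scene(clip_id: str) -> str:
    """Stub acoustic scene classifier; a real model would analyse the audio."""
    return "office"

def detect_events(clip_id: str) -> list[str]:
    """Stub sound event detector; a real model would return timestamped events."""
    return ["door knock", "keyboard click", "printer"]

def tag_clip(clip_id: str) -> TaggedClip:
    """Merge the two outputs into a single audio tagging record."""
    return TaggedClip(clip_id, classify_scene(clip_id), detect_events(clip_id))

print(tag_clip("recording_001.wav"))

Keeping the scene label and the event list in a single record is what would let later analysis ask environment-level questions, such as which sounds dominate a given setting, of the kind described above.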

He warned, however, that any efforts would have to take account of the familiar demands of data management, such as protecting privacy and guarding against bias in how the data is used. He also noted that machine learning devices remain expensive.

“We’re at a point where we can show research demonstrator proof of concept for recognising sounds, but we need data to bring researchers on board and to consider machine learning,” he said.

Image by T.Equity13 CC BY-SA 2.0
