Latest Audio Technology: Teaching Machines To Listen, Not Just to Speech

Speech recognition is no longer the latest technology in the world of artificial intelligence. Apple’s Siri and the voice-activated home and personal digital assistants that followed, such as Amazon’s Echo and Alexa and Google Home, have for several years been able to listen to human speech and respond. While perhaps not yet quite mainstream, younger generations are increasingly using voice search for things like the weather forecast or setting reminders, and the habit will inevitably spread up the generations from these younger early adopters.

However, one British start-up, Audio Analytics, is taking things further. The young company is working on the latest technology in machine hearing: AI that can identify, put into context and respond to a huge range of everyday and less common sounds. These could include a window breaking, a dog barking, a baby crying, a collision or a strong wind. But why?

There is potentially a huge range of reasons and applications. A security system able to hear a window smashing is one step ahead of one that only responds to movement inside a property or other protected premises. And noise-cancelling headphones can occasionally lead their wearers into danger, which is why Audio Analytics has signed a partnership agreement with headphone maker Bragi.

The company wants to produce headphones that embed Audio Analytics’ technology to alert wearers if an emergency siren they may not notice is sounding in their vicinity. This would prevent anyone wearing the headphones from stepping onto an ordinarily safe pedestrian crossing, blissfully unaware that a police car, fire engine or ambulance is hurtling towards them. Driverless vehicles are another obvious application, and there are plans to explore uses in healthcare, such as monitoring patients’ audible symptoms: coughs, sneezes and wheezes.
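How might software tell one class of sound from another? A toy illustration is below: a sharp, high-frequency event (such as breaking glass) makes a waveform cross zero far more often per second than a low-frequency rumble (such as wind). This sketch is not Audio Analytics’ method; real sound-recognition systems use machine-learned models over rich spectral features, and the function names, threshold and synthetic signals here are invented for illustration only.

```python
import math

def zero_crossing_rate(samples):
    # Fraction of adjacent sample pairs that change sign: a crude
    # proxy for how much high-frequency content the signal carries.
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a >= 0) != (b >= 0)
    )
    return crossings / (len(samples) - 1)

def classify_sound(samples):
    # Hypothetical two-class rule with an arbitrary threshold;
    # production systems learn such boundaries from labelled data.
    if zero_crossing_rate(samples) > 0.1:
        return "high-frequency event"
    return "low-frequency rumble"

# Synthetic one-second test tones sampled at 16 kHz.
RATE = 16000
glass_like = [math.sin(2 * math.pi * 4000 * t / RATE) for t in range(RATE)]
wind_like = [math.sin(2 * math.pi * 40 * t / RATE) for t in range(RATE)]

print(classify_sound(glass_like))  # high-frequency event
print(classify_sound(wind_like))   # low-frequency rumble
```

Even this crude feature separates the two synthetic signals cleanly; the engineering challenge the company faces is doing the same for thousands of messy, overlapping real-world sounds.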

Audio Analytics was founded in 2010 by Chris Mitchell, who had just completed a PhD on AI technology for recognising sounds rather than speech. There was no company in the field he could apply to for employment, so rather than change direction he decided to start his own. It has turned out to be worth the risk. The company has raised $8.5 million (£6.62 million) from investors and has established partnerships with big tech companies such as Intel and Cisco, as well as Centrica’s smart home unit Hive and product-specific companies like the headphone maker Bragi. It now has 42 employees in Cambridge and a sales office in San Francisco, establishing a presence in the U.S. market.

Mitchell believes that AI able to interpret sounds as well as speech is key not only to the practical applications of Audio Analytics’ technology already mentioned. He believes that adding sound recognition as another string to their bow will hugely improve voice-activated assistants:

“When devices respond to these sounds, they suddenly take on less inanimate properties. They tend to [feel] more caring, more understanding . . . more of the things you want from modern consumer electronics.”
