In connection with his guest visit to Trondheim and NTNU, Prof. John Chowning will give a concert at Vitensenteret on 6 November.
Update! The lecture has been postponed by 15 minutes, to 14:30.
The World Health Organization (WHO) has updated its recommendations on exposure to aircraft noise, but the underlying datasets are being questioned by our senior researcher Truls Gjestland.
We previously wrote about our work on deep neural networks for speech enhancement. In late August, we presented our newest results as a paper and a poster at the speech technology conference Interspeech 2017 in Stockholm, Sweden.
A lot of people could use some help with their hearing, but getting a hearing aid has traditionally been a big, time-consuming, and expensive step. As we reported earlier, the Oslo-based company Listen has therefore been developing an app, in collaboration with SINTEF, that turns your iPhone into a hearing aid.
Using deep learning to improve the intelligibility of noise-corrupted speech signals
Speech is key to our ability to communicate. We already use recorded speech to communicate remotely with other humans, and we will become more and more used to machines that simply ‘listen’ to us. However, we want our phones, laptops, hearing aids, and voice-controlled and/or Internet of Things (IoT) devices to work in every environment, and the majority of environments are noisy.
This creates the need for speech enhancement techniques that remove noise from recorded speech signals. Yet, as of today, there are no noise-filtering strategies that significantly help people understand single-channel noisy speech, and even state-of-the-art voice assistants fail miserably in noisy environments. Some recent publications on speech enhancement suggest that deep learning, a machine-learning subfield based on deep neural networks (DNNs), will become a game-changer in the field. See for example the reference below.
In this blog post we will walk through a relatively simple application of deep learning to speech enhancement. Scroll to the end of this post if you just want to hear what the resulting enhanced samples can sound like.
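To give a flavour of how such a system fits together, here is a minimal sketch of the common mask-estimation approach: a small feed-forward network maps noisy magnitude spectra to a time-frequency mask in [0, 1], which is then multiplied onto the noisy spectrogram. This is an illustration only, not our trained model; the frame sizes, layer width, and random (untrained) weights are assumptions chosen for brevity, and a real system would learn its weights from pairs of noisy and clean speech.

```python
import numpy as np

rng = np.random.default_rng(0)

def stft_mag(signal, frame=256, hop=128):
    """Magnitude spectrogram via a simple Hann-windowed STFT."""
    win = np.hanning(frame)
    n_frames = 1 + (len(signal) - frame) // hop
    frames = np.stack([signal[i * hop:i * hop + frame] * win
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))  # (n_frames, frame//2 + 1)

def mlp_mask(x, w1, b1, w2, b2):
    """Toy feed-forward net: one ReLU hidden layer, sigmoid output in [0, 1]."""
    h = np.maximum(0.0, x @ w1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))

# Synthetic one-second "noisy speech" at 8 kHz: a tone buried in white noise.
fs = 8000
t = np.arange(fs) / fs
noisy = np.sin(2 * np.pi * 440 * t) + 0.5 * rng.standard_normal(fs)

X = stft_mag(noisy)                      # features: noisy magnitude spectra
n_bins = X.shape[1]
hidden = 64
w1 = 0.1 * rng.standard_normal((n_bins, hidden)); b1 = np.zeros(hidden)
w2 = 0.1 * rng.standard_normal((hidden, n_bins)); b2 = np.zeros(n_bins)

mask = mlp_mask(X, w1, b1, w2, b2)       # predicted time-frequency mask
enhanced_mag = mask * X                  # suppress bins the mask deems noisy

print(X.shape, mask.shape)
```

Because the mask is bounded between 0 and 1, the enhanced magnitudes can never exceed the noisy ones; the network can only attenuate, never amplify, which is one reason mask estimation is a robust training target.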
Sharleen has a hearing impairment and could not understand what the teacher was saying. Her father thought he needed her at home to look after the goats. Unfortunately, Sharleen is just one of many children lacking help.
WHO has estimated that over 5% of the world’s population – 360 million people – have a hearing impairment (328 million adults and 32 million children), and that the majority of children with hearing impairment live in low-income countries. In contrast, less than 2% of the hearing aids produced in 2005 went to low-income countries.
Traditional hearing devices are advanced equipment: expensive, fragile, and not developed for the Third World. Specialised personnel and complex infrastructure are needed for the individual fitting process, which reduces the usefulness of such complex hearing aids to a minimum in low-income countries, where trained people and specialists are scarce.
With funding from the Research Council of Norway, SINTEF’s project “I Hear You”, starting in early 2017, aims to help children like Sharleen by ensuring access to education for the hearing impaired.
Noise-induced hearing loss is a result of exposure to loud sounds over a long time, or to a single extremely loud impulse. In addition to permanent loss of hearing, tinnitus is a common symptom. While this is the most common permanent injury in the world, it is also preventable through hearing protection equipment and safe working practices.
We have acted as scientific consultants for a new five-minute information video from Honeywell on this topic. You can see the entire video here!
Our very own Truls Gjestland has been elected a Fellow of the Acoustical Society of America, by action of its Executive Council. This honour was bestowed on Truls for the contributions he has made throughout his career to research and standards development on the effects of transportation noise on communities.
The formal announcement and Fellowship certificate presentation will be made at the ASA meeting in Honolulu, on 30 November 2016.
Horns are used in many fields, including musical wind instruments and loudspeakers. The physics in the two cases is of course the same: sound propagation in a flaring duct open at one end. Therefore we can in principle use the same simulation methods for both cases. But what we want to obtain from the horn simulation can be very different.
A very important requirement for horn loudspeakers is directivity control. This entails directing sound into a specific region in front of the horn, giving the same frequency response inside that region and little sound outside it. Any simulation method for horn speakers must be capable of predicting directivity. Horn speakers should not be resonant, but should present a constant and smooth acoustic load to the driving unit, so this is also an important, but somewhat less critical, factor in the design.
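To illustrate what predicting directivity means in the simplest possible terms, here is a sketch using a uniform line array of in-phase point sources. This is a textbook radiation model, not a horn simulation, and the element count, spacing, and frequencies are arbitrary assumptions; it simply shows the basic phenomenon a constant-directivity horn must manage: the beam narrows as frequency rises.

```python
import numpy as np

def line_array_directivity(theta, n_src, spacing, freq, c=343.0):
    """Far-field directivity of n_src in-phase point sources on a line.

    theta: angle from the array normal (rad); spacing in metres.
    Returns the normalised pressure magnitude (1.0 on-axis).
    """
    k = 2 * np.pi * freq / c
    psi = k * spacing * np.sin(theta)
    # Magnitude of the sum of N unit phasors exp(j*n*psi); on-axis limit is 1.
    num = np.sin(n_src * psi / 2)
    den = n_src * np.sin(psi / 2)
    out = np.ones_like(psi)
    nz = np.abs(den) > 1e-12
    out[nz] = np.abs(num[nz] / den[nz])
    return out

theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
low = line_array_directivity(theta, n_src=8, spacing=0.05, freq=500.0)
high = line_array_directivity(theta, n_src=8, spacing=0.05, freq=4000.0)

def beamwidth_deg(d):
    """Angular span where the response is within 6 dB of the on-axis value."""
    inside = np.degrees(theta[d >= 10 ** (-6 / 20)])
    return inside.max() - inside.min()

print(beamwidth_deg(low), beamwidth_deg(high))  # beam narrows with frequency
```

A real horn simulation would of course model the flaring duct itself, but the same output, pressure magnitude as a function of angle and frequency, is what any directivity-capable method must deliver.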
For wind instruments, we are usually interested in the resonance frequencies. These are important for the tone, intonation and playability, and it is useful if we can predict them when designing the instrument. Any simulation method must therefore be able to predict these frequencies accurately. Alternatively, we may have a valuable old instrument and want to determine its internal shape without cutting it into pieces. Then we can use an optimization algorithm to solve the inverse problem of finding the internal shape from measured resonance frequencies. For this, the simulation method must be fast.
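As a sketch of the forward problem, the snippet below computes the plane-wave input impedance of a duct approximated by short cylindrical segments chained through lossless transmission matrices, with an idealised zero-impedance open mouth (no radiation load or end correction). Resonance frequencies then show up as maxima of the input impedance magnitude. This is a generic textbook approach, not the method used in any particular project here, and all dimensions are arbitrary assumptions; a plain cylinder closed at the driving end and open at the far end makes a convenient sanity check, since it should resonate near odd multiples of c/4L.

```python
import numpy as np

RHO, C = 1.2, 343.0  # air density (kg/m^3) and speed of sound (m/s)

def input_impedance(freqs, radii, seg_len, z_load=0.0):
    """Plane-wave input impedance of a duct built from short cylindrical
    segments, each described by a lossless transmission matrix."""
    z_in = np.empty(len(freqs), dtype=complex)
    for i, f in enumerate(freqs):
        k = 2 * np.pi * f / C
        # Chain the segment matrices from throat to mouth.
        T = np.eye(2, dtype=complex)
        for r in radii:
            zc = RHO * C / (np.pi * r**2)       # characteristic impedance
            kl = k * seg_len
            seg = np.array([[np.cos(kl), 1j * zc * np.sin(kl)],
                            [1j * np.sin(kl) / zc, np.cos(kl)]])
            T = T @ seg
        a, b, c_, d = T.ravel()
        # Load impedance at the mouth (ideal open end: z_load = 0, Zin = B/D).
        z_in[i] = (a * z_load + b) / (c_ * z_load + d)
    return z_in

# Sanity check: a uniform closed-open tube of length 0.5 m has its first
# impedance maximum near c / (4 * L) = 171.5 Hz.
n_seg, length = 50, 0.5
radii = np.full(n_seg, 0.01)              # 1 cm radius throughout
freqs = np.linspace(50, 300, 1000)
z = np.abs(input_impedance(freqs, radii, length / n_seg))
f_res = freqs[np.argmax(z)]               # first resonance frequency
print(f_res)
```

Replacing the constant radius profile with a flaring one (and adding a proper radiation impedance at the mouth) turns the same machinery into a basic horn or wind-instrument bore model, and its speed is what makes it usable inside an inverse-problem optimization loop.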