Hearing aids - orientation at the cocktail party
Hearing means converting sound waves into neuronal, that is, electrical signals and evaluating them in the brain. Digital systems are reproducing more and more of the skills of acoustic perception.


On the way from sound waves to electrical impulses, air movements are first transmitted through the human hearing system via the eardrum and auditory ossicles to the basilar membrane of the inner ear. Its vibrations deflect the hairs of the inner hair cells, which in turn generate ion currents and release messenger substances for transmission along the auditory nerve. If this complex system is severely disrupted by illness or aging, the consequences for those affected are serious, because a large part of human communication takes place through speech.

Depending on the cause of the impairment, hearing aids make life easier for those affected. Analogue devices usually only split the acoustic signal picked up by a microphone into low, medium and high frequencies and amplify each of these three channels separately. Digital systems - on the market since 1996 - distinguish up to 22 frequency bands, which they analyze separately and amplify individually. In addition, developers are trying to exploit the technical possibilities to enable "comfortable" listening. People who are hard of hearing often find loud sounds particularly unpleasant, presumably as a result of defective outer hair cells. Normally these structures expand the dynamic range of hearing by attenuating strong oscillations of the basilar membrane and amplifying weak ones. An automatic system, adjustable individually for each frequency band, therefore boosts quiet signals strongly and loud ones only slightly.
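This level-dependent amplification can be sketched as a simple gain curve for one frequency band. The threshold, compression ratio and gain figures below are illustrative values, not taken from any real device:

```python
def compressive_gain_db(level_db, threshold_db=45.0, ratio=3.0, gain_below_db=25.0):
    """Gain (in dB) applied within one frequency band.

    Below the threshold every sound receives the full gain; above it
    the gain shrinks, so the output level grows only 1/ratio dB per
    input dB. Quiet sounds are boosted strongly, loud ones barely.
    """
    if level_db <= threshold_db:
        return gain_below_db
    reduction = (level_db - threshold_db) * (1.0 - 1.0 / ratio)
    return max(gain_below_db - reduction, 0.0)

# Quiet speech gets the full boost, a loud clatter almost none:
print(compressive_gain_db(40.0))  # → 25.0
print(compressive_gain_db(90.0))  # → 0.0
```

A digital aid runs one such curve, individually fitted to the wearer, in each of its frequency bands.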

Background noise is another problem, as it interferes with the speech signal. As a first step, hearing aids therefore filter out frequencies below 100 to 200 hertz, such as those generated by engines. Digital systems also search the various frequency channels for voice signals, for example using statistical properties, and then use filters to suppress the interference.
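That first filtering step can be sketched very crudely in the frequency domain. Real devices use low-latency time-domain filters rather than this brick-wall FFT approach, and the 150 Hz cutoff is just an example value within the range named above:

```python
import numpy as np

def suppress_low_frequencies(signal, sample_rate, cutoff_hz=150.0):
    """Zero out spectral components below the cutoff (engine rumble etc.)."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    spectrum[freqs < cutoff_hz] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

# An 80 Hz hum plus a 1 kHz tone; only the tone survives the filter.
sr = 8000
t = np.arange(sr) / sr
noisy = np.sin(2 * np.pi * 80 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)
clean = suppress_low_frequencies(noisy, sr)
```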
Even the cocktail party effect can now be reproduced by hearing aids. A person with normal hearing has an arsenal of physiological mechanisms for understanding a conversation partner even at high noise levels. For example, if the signal comes from the left but the disturbance from the right, the brain preferentially processes the information from the left ear (head shadow effect). But even with only slight differences in direction, healthy hearing can distinguish signal from noise. Imitating this requires a directional microphone characteristic: sensitivity is at its maximum towards the front and low to the sides (the so-called cardioid characteristic).

Digital hearing aids work with two to three microphones coupled together. If the signal processors do not detect a voice signal, the system picks up sound equally from all directions. Otherwise, they control the microphones so that a directional characteristic is created, with the minimum of sensitivity pointing towards the disturbing noise source. Even if that source is moving - a passing car, say - the disturbance can be attenuated in this way without the hearing-impaired person noticing the process. Unfortunately, today's speech recognition methods reach their limits as soon as more than one person is speaking.
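The directional pattern of such a coupled microphone pair can be sketched with a delay-and-subtract model for a plane wave: the rear microphone's signal is delayed by the acoustic travel time across the spacing and subtracted, so sound from behind cancels completely. The spacing and test frequency below are illustrative values:

```python
import numpy as np

def cardioid_response(angle_deg, mic_spacing=0.01, speed_of_sound=343.0, freq=1000.0):
    """Magnitude response of a two-microphone differential array.

    angle_deg = 0 means sound arrives from the front, 180 from behind.
    With the internal delay equal to the travel time across the spacing,
    sound from the rear cancels: the cardioid pattern.
    """
    tau = mic_spacing / speed_of_sound                       # internal delay
    theta = np.deg2rad(angle_deg)
    ext_delay = mic_spacing * np.cos(theta) / speed_of_sound # path difference
    omega = 2.0 * np.pi * freq
    # front microphone minus the delayed rear microphone
    return abs(1.0 - np.exp(-1j * omega * (tau + ext_delay)))

# The null points backwards, towards the noise source:
print(round(cardioid_response(180.0), 3))  # → 0.0
```

Adaptive systems go one step further and vary the internal delay so that the null tracks a moving noise source.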
Did you know?
- Feedback occurs when sound produced by the loudspeaker is picked up by the microphone and amplified again. Digital processors identify the interference and its center frequency, and very narrow-band filters eliminate the annoying howling.
- Wind at the microphone openings creates a low-frequency, fluctuating noise. The spectra at the individual microphones are largely uncorrelated - this is the cue for the hearing aid's controller to, for example, amplify low frequencies less.
- The ear trumpet, used until the late 19th century, amplified sound by 10 to 30 decibels, but only between about 500 and 2000 hertz. Yet it is precisely the higher frequencies, which are important for understanding speech, that are often the first to be lost in hearing impairment.
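The narrow-band feedback suppression mentioned in the first point above can be sketched with a second-order notch filter. The sample rate, howl frequency and pole radius here are illustrative; real devices first estimate the howl's center frequency (e.g. from a spectral peak) and re-tune the notch as it drifts:

```python
import numpy as np

def notch_filter(signal, sample_rate, howl_hz, r=0.98):
    """Very narrow-band IIR notch centred on the detected howl frequency.

    Zeros sit exactly on the unit circle at howl_hz (complete rejection
    there); poles just inside, at radius r, keep the notch narrow so
    neighbouring speech frequencies pass almost untouched.
    """
    w0 = 2.0 * np.pi * howl_hz / sample_rate
    b = [1.0, -2.0 * np.cos(w0), 1.0]        # feed-forward coefficients
    a = [1.0, -2.0 * r * np.cos(w0), r * r]  # feedback coefficients
    y = np.zeros(len(signal))
    x1 = x2 = y1 = y2 = 0.0
    for i, x in enumerate(signal):
        yi = b[0] * x + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1 = x1, float(x)
        y2, y1 = y1, yi
        y[i] = yi
    return y

# A pure 3 kHz "howl" is almost completely removed after a short transient.
sr = 16000
t = np.arange(sr) / sr
howl = np.sin(2 * np.pi * 3000 * t)
quiet = notch_filter(howl, sr, 3000.0)
```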
"Science in Everyday Life" is a regular rubric in Spectrum of Science. A collection of particularly beautiful articles in this category has just been published as a dossier. © Spectrum of Science