Modeling Normal and Impaired Hearing with Deep Neural Networks Optimized for Ecological Tasks
Abstract:
Computational models that perform real-world hearing tasks using cochlear input could help link the peripheral effects of hearing loss to their perceptual consequences. We trained a deep artificial neural network to jointly localize and recognize speech, voices, and environmental sounds using simulated auditory nerve representations of naturalistic auditory scenes. Once the model was trained, we compared its auditory behavior to that of humans. Despite never being fit to human data, the model replicated aspects of binaural speech perception in humans with normal hearing, accounting for the effects of noise, reverberation, and spatial separation between speech and noise. Psychoacoustic thresholds measured from the model were also human-like, mirroring human patterns of amplitude modulation processing and binaural unmasking. To investigate the perceptual consequences of hearing loss, we altered the model’s peripheral input and measured the impact on behavior. Simulations of plausible and idealized hearing loss phenotypes suggest that both outer hair cell loss and auditory nerve fiber loss contribute to real-world hearing difficulties, with each also producing distinct behavioral outcomes in psychoacoustic listening tests. Machine learning models that operate on simulated auditory nerve input can predict aspects of hearing-impaired behavior and may help disentangle the perceptual consequences of different types of hearing loss.
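To make the pipeline concrete, the sketch below illustrates the general approach in PyTorch: a peripheral stage that converts sound into a simulated auditory-nerve-like representation, a single network trained jointly on localization and recognition, and a hearing-loss manipulation applied only to the peripheral stage. Every component here is a hypothetical placeholder; the stub peripheral function, the network architecture, the output sizes, and the ohc_loss and anf_survival parameters are illustrative assumptions, not the study’s actual auditory nerve model or network.

```python
import torch
import torch.nn as nn

def simulated_auditory_nerve(waveform: torch.Tensor, n_channels: int = 50,
                             ohc_loss: float = 0.0, anf_survival: float = 1.0):
    """Toy stand-in for an auditory nerve model, returning a fake
    "cochleagram" of shape (batch, ears, frequency_channels, time_frames).
    ohc_loss attenuates high-frequency channels (a crude proxy for outer
    hair cell damage); anf_survival silences a random subset of channels
    (a crude proxy for auditory nerve fiber loss)."""
    batch = waveform.shape[0]
    cochleagram = torch.rand(batch, 2, n_channels, 100)  # placeholder response
    # Graded attenuation, worst at the highest-frequency channels.
    atten = 1.0 - ohc_loss * torch.linspace(0, 1, n_channels).view(1, 1, -1, 1)
    # Randomly zero out a fraction (1 - anf_survival) of "fibers".
    mask = (torch.rand(n_channels) < anf_survival).float().view(1, 1, -1, 1)
    return cochleagram * atten * mask

class JointAuditoryNet(nn.Module):
    """Minimal two-headed network: one head reports source location,
    the other reports a recognition class (sizes are illustrative)."""
    def __init__(self, n_locations: int = 72, n_classes: int = 500):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)), nn.Flatten(),
            nn.Linear(16 * 8 * 8, 128), nn.ReLU())
        self.loc_head = nn.Linear(128, n_locations)
        self.rec_head = nn.Linear(128, n_classes)

    def forward(self, x):
        h = self.backbone(x)
        return self.loc_head(h), self.rec_head(h)

# One joint training step on a synthetic batch.
model = JointAuditoryNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
waves = torch.randn(4, 16000)                # four dummy waveforms
x = simulated_auditory_nerve(waves)          # "normal hearing" input
loc_logits, rec_logits = model(x)
loss = (nn.functional.cross_entropy(loc_logits, torch.randint(0, 72, (4,)))
        + nn.functional.cross_entropy(rec_logits, torch.randint(0, 500, (4,))))
loss.backward()
opt.step()

# Hearing-loss simulation: alter only the peripheral stage, keep the
# trained weights fixed, and measure how behavior changes.
x_impaired = simulated_auditory_nerve(waves, ohc_loss=0.5, anf_survival=0.6)
```

The key design point this sketch captures is that the network itself is never retrained for impaired hearing: hearing loss is modeled purely as a change to the peripheral input, so any behavioral change is attributable to the altered periphery.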