Assessing Listening Effort and Listening Stress in Hearing Aid Users with Audiovisual Virtual Reality Environments
Abstract:
This study explored the effects of multi-modal sensor integration (MMSI) in hearing aids. MMSI enhances automatic listening support by estimating listening intentions from head and body movements, conversation activity, and acoustic scene analysis. Traditional hearing aid evaluations often overlook the impact of these movements because they require participants to remain still. We introduced a novel method using Virtual Reality (VR) to incorporate natural movements into listening tasks. Twenty-five hearing-impaired participants wearing premium hearing aids performed a dynamic audiovisual scene analysis task followed by a static speech comprehension task. First, participants had to locate a target speaker among interfering talkers and noise. They then remained still, facing the target location, while listening to a 33-second news clip, and answered a two-choice question about its content. Scene complexity was varied through the number and spatial distribution of interfering talkers. Measures of speech comprehension, listening effort (assessed via pupillometry), and listening stress (assessed via heart-rate monitoring) were sensitive both to the number of competing talkers and to the activation of MMSI technology. This innovative approach demonstrates the effectiveness of VR and non-invasive physiological measures in evaluating hearing aids under realistic conditions.