Exploring Dynamic Binaural Rendering Approaches in 6DoF Acoustic Augmented Reality
Abstract:
Previous work has shown that real and virtual sound sources can be blended seamlessly, albeit under very controlled conditions. These conditions mainly involve environments with very little reverberation, the use of individually measured head-related transfer functions, and restricting the participant's movements to head rotations only (i.e. no translations). The study presented here attempts to move beyond some of these limitations, towards conditions closer to real-life interactions. Spatial room impulse responses (SRIRs) were measured in a reverberant space at 66 grid positions for each of seven separate sound sources. The measured SRIRs were then rendered binaurally, allowing movement in up to six degrees of freedom (6DoF), using several different approaches. These approaches varied in how strongly they simplified typical reverberant sound field characteristics, for example by rendering signals at lower spatial accuracy, using artificial reverberators, or employing a smaller number of measured positions. A listening experiment was devised to compare these approaches, and preliminary results will be presented.