One of the hallmarks of the HTC Vive Pro is the removable premium audio headset built into it. That headset provides an extremely immersive experience in games designed to take advantage of it. These games look for rooms of a certain size and use sound cues tailored to that space. If a game knows your play space is exactly 5×5, it can use cues that fit your space.
Not every game is built this way; many use generic Head-Related Transfer Functions (HRTFs) to render audio around the player. This more generic rendering gets the job done and provides some sense of immersion in certain scenarios. However, it falls short of what a truly customized experience can provide in terms of sound design.
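To give a feel for what "generic" spatial rendering means, here is a minimal sketch of binaural cue generation. It is not a real HRTF (which also captures per-ear spectral filtering); it models only the two simplest cues a generic renderer relies on, interaural level difference (ILD) and interaural time difference (ITD). The constants and the Woodworth spherical-head approximation are standard textbook values, but everything else here is an illustrative assumption, not how any particular engine does it.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s, in air at roughly room temperature
HEAD_RADIUS = 0.0875     # rough average human head radius in metres

def spatialize(azimuth_deg):
    """Toy binaural cues for a source at `azimuth_deg`
    (-90 = hard left, 0 = straight ahead, +90 = hard right).

    Returns (left_gain, right_gain, itd_seconds); a positive ITD
    means the sound reaches the right ear first.
    """
    az = math.radians(azimuth_deg)
    # Interaural time difference via Woodworth's spherical-head model.
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (abs(az) + math.sin(abs(az)))
    itd = math.copysign(itd, az)  # sign encodes which ear the sound reaches first
    # Interaural level difference via constant-power panning:
    # map [-90, +90] degrees onto a [0, 90] degree pan angle.
    pan = (az + math.pi / 2) / 2
    left_gain, right_gain = math.cos(pan), math.sin(pan)
    return left_gain, right_gain, itd
```

A source dead ahead yields equal gains and zero delay; a source at +90° yields nearly all right-channel energy and an ITD of roughly 0.65 ms. These cues are the same for every listener, which is exactly why generic HRTFs feel less precise than individualized audio.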
That’s where a new experiment may provide a path forward for VR developers.
Using Visual Cues to Improve Sound Immersion
Sound localization is your ability to understand where a sound is coming from and to pinpoint its direction (typically by turning your head and looking toward the source). We do this all the time without appreciating what a remarkable brain function it really is.
The test used a beeping sound that was sometimes paired with a visual component. The idea was to expose users to just 60 seconds of it, like a calibration tool, in three distinct phases:
- Unimodal vs. Multimodal – First, a sphere highlighted where the sound was coming from
- Multimodal Mapping with Impact Sound – A visualization that highlighted the effect of sound moving
- Remapping with Impact Sound Only – Essentially no visualization
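The three phases above can be sketched as a simple calibration schedule. To be clear, the phase names come from the article, but the per-phase durations and the flag names (`show_sphere`, `show_impact_visual`) are hypothetical assumptions for illustration; the study only reports a 60-second total.

```python
from dataclasses import dataclass

@dataclass
class Phase:
    name: str
    show_sphere: bool         # highlight the sound source visually
    show_impact_visual: bool  # visualize the effect of the sound moving
    duration_s: float         # assumed split; only the 60 s total is reported

# Hypothetical 60-second calibration schedule based on the article's three phases.
CALIBRATION = [
    Phase("unimodal_vs_multimodal", show_sphere=True,  show_impact_visual=False, duration_s=20.0),
    Phase("multimodal_with_impact", show_sphere=True,  show_impact_visual=True,  duration_s=20.0),
    Phase("remapping_sound_only",   show_sphere=False, show_impact_visual=False, duration_s=20.0),
]
```

The point of the progression is weaning: visual support is strongest in the first phase, richer in the second, and removed entirely in the third, leaving the user to localize by sound alone.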
In the video above, you can see this for yourself, although without context it’s a little tricky to tell what’s happening. Essentially, you will notice that following the sound first with your eyes and then with your head is much easier than simply trying to identify where the sound is coming from. It’s difficult to tell how accurate your own testing might be, but taking the test gives you an idea of the effect.
With minimal stimuli, users still turn their heads but may miss some cues.
However, the results showed that pairing visual and audio cues has a powerful impact on our ability to perceive sound three-dimensionally.
Sound Immersion in Fitness Games
This study shows that as little as 60 seconds of this kind of calibration can give you a better understanding of the audio happening in the world around you. That has huge implications for fitness games that aren’t built for your contained space. In an open-world game like Sairento or Fallout 4, you could be exposed to a brief audio calibration and find yourself even more immersed without spending any additional money on hardware.
Of course, HD audio systems will always provide the luxury experience, and we recommend diving headfirst into them, but this is great news for open-world VR games. Most of us simply don’t have a warehouse at our disposal, so these improvements in audio comprehension will be important for sound design in future games.
The fitness benefits of audio cues that appear to be coming from all directions are interesting. In a shooter game, we’ll be more prone to seek cover. It would not be hard to imagine a “skiing” style game where users have to dodge obstacles approaching them at high speeds by moving, leaning and squatting in VR.
It’s difficult to contextualize exactly what this means for development because we really haven’t seen any examples in the wild. What we do know is that this development should greatly improve the quality of sound in VR. We expect these effects to be felt most strongly in triple-A games, and we look forward to the unique experiences this innovation will present.