Auditory illusions illustrate sound source localization
In the everyday world, listeners and sound sources move, which presents a challenge for localizing sound sources. The subtle differences between the sounds received at each ear, known as auditory spatial cues, are used to locate sound sources. However, these cues change when either the sound source or the listener moves. This means that when a listener moves, the auditory spatial cues indicate that the sound source has moved, even if it is stationary. In the everyday world, however, this is not what we perceive: a stationary sound source is not heard as moving when the listener moves. Professor Yost's research is focused on understanding how the auditory system determines that a stationary sound source has not moved when listener movement causes the auditory spatial cues to change.
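To make the ambiguity concrete, here is a minimal numerical sketch (our illustration, not code from the study; angles and function names are our own). It shows that a stationary source heard by a turning listener produces exactly the same sequence of head-relative azimuths, and hence the same auditory spatial cues, as a moving source heard by a stationary listener:

```python
# Illustrative sketch of the localization ambiguity (hypothetical values).
# Angles in degrees; the world frame is fixed, the head frame turns.

def head_relative_azimuth(source_world_deg, head_yaw_deg):
    """Azimuth of the source relative to the head (what the ears 'see')."""
    return (source_world_deg - head_yaw_deg + 180) % 360 - 180

# Case 1: stationary source at 30 deg; listener turns from 0 to 20 deg.
case1 = [head_relative_azimuth(30, yaw) for yaw in (0, 10, 20)]

# Case 2: stationary listener; source moves from 30 deg to 10 deg.
case2 = [head_relative_azimuth(src, 0) for src in (30, 20, 10)]

print(case1)  # [30, 20, 10]
print(case2)  # [30, 20, 10] -- an identical cue trajectory
# The ears alone cannot distinguish the two cases; information about
# head position is needed to resolve the ambiguity.
```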
In 1940, Hans Wallach described an auditory illusion created by rotating the listener and the sound source in the same plane. If the sound source rotates at twice the speed of the listener's head, the sound is perceived as stationary, originating from the position directly opposite the one from which it started. This is related to another phenomenon known as front-back confusion, in which, when the head remains stationary, a sound located behind a listener can appear to emanate from in front, and vice versa.
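The geometry behind the illusion can be sketched in a few lines (a simplified illustration under our own assumptions, not the study's code). Interaural cues are front-back ambiguous: a source at head-relative azimuth a produces roughly the same cues as one at 180 − a. For a source rotating at twice the head's speed, that front-back mirror image sits at a constant world position, which is what the listener hears:

```python
# Simplified sketch of Wallach's illusion (illustrative values only).
# Azimuths in degrees: 0 = initially straight ahead, positive = clockwise.

def wrap(deg):
    """Wrap an angle into the range (-180, 180]."""
    return (deg + 180) % 360 - 180

HEAD_SPEED = 10  # deg per time step (hypothetical value)
for t in range(4):
    head = HEAD_SPEED * t        # head rotates at 1x speed
    source = 2 * HEAD_SPEED * t  # source rotates at 2x speed, same direction
    rel = wrap(source - head)    # head-relative azimuth given by the cues
    mirror_world = wrap((180 - rel) + head)  # front-back mirror, world frame
    print(t, rel, mirror_world)

# Output: mirror_world stays at -180 deg (i.e., 180 deg) on every step,
# so the moving source is heard as a stationary one located directly
# opposite its starting azimuth of 0 deg.
```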
A forgotten mystery
Much of Wallach's research was forgotten with the outbreak of war in Europe, and his experiments have only recently been reproduced, but modern researchers in sound source localization (such as Prof Yost) are coming to understand the importance of his observations. Wallach reasoned that, “Two sets of sensory data enter into the perceptual process of localization, (1) the changing binaural cues and (2) the data representing the changing position of the head.”
Prof Yost has picked up where Wallach left off, having already identified these features independently before discovering Wallach's earlier work. He believes, based on research in vision, that “sound source localization requires the interaction of two sets of cues, one based on auditory spatial cues and another that indicates the position of the head (or body, or perhaps the eyes). Both sets of cues are required for the brain to determine the location of sound sources in the everyday world.”
To test this hypothesis, the team are using two complementary approaches. The first involves experiments examining the role of auditory spatial cues and their interaction with head-position cues when listeners and/or sound sources move. The second concerns the cues used to determine the position of the head (or body, or eyes) when listeners and/or sound sources move and listeners are asked to judge where a sound is coming from.
Sound source localization requires the interaction of two sets of cues, one based on auditory spatial cues and another that indicates the position of the head
A very modern approach
This research is important for understanding the basic concepts of how we perceive and process dynamic auditory cues, though this relatively simple premise has proven difficult to tease out in practice. It is clear that complex neural computation is at work to process the various cues, so discoveries in this field have the potential to inform other areas of neural computation. The relevance of the work has also grown as virtual reality (VR) has developed. The need for VR developers and programmers to produce virtual auditory scenes to accompany the visual elements (within the strict computational constraints imposed by available technology) is a unique driver for this field of research.
“The vast majority of what is known about sound source localization comes from studies in which the listener and the sound source were stationary,” says Prof Yost. “I decided that a study of sound source localization when listeners and sounds move might be a valuable research undertaking.”
Detective tools
With generous funding from Arizona State University, a specialised facility, believed to be the first of its kind in the world, was constructed to investigate these phenomena empirically. A chair that allows listeners to be immobilised and precisely rotated sits within an array of loudspeakers in an echo-reduced room, so that sounds can be presented from multiple sources or rotated about the subject. Alongside new psychophysical procedures, developed to study sound source localization when listeners and sound sources are manipulated independently, this facility is shedding light on a much under-researched topic.
From these experiments the team have already repeated and confirmed Wallach's initial observations that small movements of the head are both necessary and sufficient to disambiguate front-back confusions. They have also recreated the illusion in which subjects mistake a rotating sound source for a stationary one originating directly opposite its true starting location. The group has further shown that the illusion requires low-pass filtered stimuli, which provide fewer auditory cues and are therefore likely to generate front-back confusions.
In the everyday world, sound source locations are referenced to positions in the environment (a world-centric reference system), but the auditory cues for sound source location indicate locations relative to the head (a head-centric reference system). Prof Yost clarifies, “We deal with a general hypothesis that the world-centric location of sound sources requires the auditory system to have information about auditory cues used for sound source location and cues about head position.”
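A minimal sketch of this general hypothesis (our illustration, assuming a simple additive combination of the two cue types; not Prof Yost's model) shows how a world-centric source position can be recovered by combining the head-centric azimuth given by the auditory cues with a head-position cue:

```python
# Illustrative sketch: combining head-centric auditory cues with a
# head-position cue to recover a world-centric source location.

def wrap(deg):
    """Wrap an angle into the range (-180, 180]."""
    return (deg + 180) % 360 - 180

def world_centric(head_centric_deg, head_yaw_deg):
    """Auditory cue (head frame) + head-position cue = world-frame azimuth."""
    return wrap(head_centric_deg + head_yaw_deg)

# A stationary source at 45 deg in the world, sampled as the head turns:
for yaw in (0, 15, 30):
    cue = wrap(45 - yaw)            # what the ears report (changing)
    print(world_centric(cue, yaw))  # 45, 45, 45 -- perceived as stationary
```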
We assume and hope others, who do health care research, will follow up on our work – Prof William Yost
Sleuthing skills
Using the same basic approach as for the illusion, the group have shown that the perception of sound rotation when sources and listeners rotate is based on information from the acoustic, visual, and, perhaps, vestibular systems (parts of the inner ear and brain that help control balance and eye movements). This led them to conclude that sound source localization is a multisystem process: localizing sound sources cannot be based solely on the processing of auditory spatial cues.
This work is uncovering a new understanding of the basic systems underlying sound source localization, which is vitally important to those working to treat hearing impairments. Prof Yost is aware of the potential applications of this research, saying, “we assume and hope others, who do health care research, will follow up on our work.” But the real driver for him is the thrill of discovery: investigating a mystery that stretches back into the last century and beyond, to the core of our perception of the world around us.
Hans Wallach published three papers in a row in the late 1930s, and they have usually been cited in terms of the role head motion might play in reducing front-back sound source localization confusions. While the papers require some effort to understand, they are full of incredible insights regarding sound source localization. It just took me a while to discover those insights.
What are the main challenges in investigating neural computation of auditory cues?
There are several. First, the differences between the sounds arriving at the two ears that are partially responsible for sound source localization can be incredibly small, e.g., a few microseconds. Second, reflections from surfaces (e.g., the ground or walls) near a listener can adversely affect the ability to localize a sound source, so ideally you want to reduce those reflections as much as possible. Third, most of the time multiple systems provide simultaneous information about head position; it takes a lab like ours to be able to study one potential head-position cue at a time.
How has the funding from Oculus VR impacted your research?
The Oculus VR funding complements our NIH funding in providing sufficient resources to run the lab. Our interaction with Oculus VR also opens up possible new applications of what we are doing, beyond the issues of sensory impairment that are of interest to the NIH.
How does movement of listener and sound source enable you to probe neural computation?
First, as sound has no spatial dimensions and there are no auditory spatial receptors, determining the location of a sound source from sound alone requires the brain to compute some aspect of the interaction of sound with obstacles (e.g., the head) in the path the sound travels to the ears. For instance, sound from a source will usually reach one ear before the other. A computation of that interaural time difference can be used to indicate the relative location of the sound source. Likewise, the integration of cues about the position of the head with auditory spatial cues can only occur within the brain.
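As a rough illustration of the interaural time difference (ITD) computation described here, the sketch below uses the classic Woodworth spherical-head approximation (our choice of model; the parameter values are typical textbook figures, not from the study). It also shows why the usable differences can be so small: a one-degree shift near straight ahead corresponds to only about nine microseconds.

```python
# Illustrative ITD sketch using the Woodworth spherical-head formula:
# ITD = (a / c) * (theta + sin(theta)), for a distant source.
import math

HEAD_RADIUS_M = 0.0875   # approximate average adult head radius (metres)
SPEED_OF_SOUND = 343.0   # metres per second in air at room temperature

def itd_seconds(azimuth_deg):
    """ITD for a distant source at the given azimuth (0 = front, 90 = side)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))

for az in (0, 1, 45, 90):
    print(f"{az:3d} deg -> {itd_seconds(az) * 1e6:6.1f} microseconds")
# 0 deg -> 0.0; 1 deg -> ~8.9; 45 deg -> ~380.8; 90 deg -> ~655.9.
# The brain resolves azimuth from timing differences on this scale.
```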
What do you see as the main applications of your research?
At a basic science level, the work should provide valuable insights into how the brain performs neural computations. In terms of hearing impairment, we have already shown that patients fitted with cochlear implants (CIs) experience front-back sound source localization confusions for stimuli that people with normal hearing perceive without such confusions. However, if CI patients are allowed to move their heads, they no longer experience these front-back confusions. So there are possibilities for improving the diagnosis and treatment of the sound source localization challenges faced by people with hearing impairment. Virtual environments are mostly about having people “virtually move” through a scene and perceive it as they would if they actually moved through it. Our work might provide insights into how this can be done efficiently for an auditory scene.
Professor Yost studies the auditory brain and, specifically, how we are able to locate sounds.
Funding
- Oculus VR, LLC
- Arizona State University (ASU)
- NIH
Collaborators
- Dr Michael Torben Pastore, Post-Doctoral Fellow
- Dr Michael Dorman, Professor Emeritus, Speech and Hearing Science, ASU
Contact
William Yost, PhD
Research Professor
Speech and Hearing Science
Arizona State University
PO Box 870102
Tempe AZ, 85287-0102
USA
E: [email protected]
T: +1 480 727 7148
W: https://isearch.asu.edu/profile/1099656
Creative Commons Licence
(CC BY-NC-ND 4.0) This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Share: You can copy and redistribute the material in any medium or format