Hearing impairment can be psychologically and socially devastating, yet in many cases it is poorly understood or goes entirely unrecognised. This is partly because many forms of hearing impairment, including difficulty understanding spoken language or locating the source of a sound in a noisy environment, result not from abnormalities in the structure or function of the ear, but from errors in the transmission of sound signals within the brain. This ‘auditory processing deficit’ is remarkably common, affecting perhaps as many as ten percent of school children in the US.
Our brains are composed of billions of nerve cells (neurons), many of which are occupied with processing the sensory information received by our eyes, ears and other sense organs. Signals pass from neuron to neuron in the form of ‘action potentials’: sudden, rapid changes in electrical activity at the surface of a neuron, which are relayed to adjacent neurons. How and when these action potentials are generated depends upon the particular anatomical and molecular features of each neuron, from those of the inner ear where sound is detected, to those of the brainstem where it is encoded, and finally to those of the brain where it is interpreted.
All in the timing
In many cases, auditory processing deficits are thought to result from errors in the timing of action potentials within the brainstem. Work in the ‘CAPLAB’ – Northwestern’s Central Auditory Physiology Laboratory – led by Dr Jason Tait Sanchez, focuses on how neurons have evolved to generate extremely fast and precisely timed action potentials in response to sound, and how their design is regulated during normal development. It is only through understanding how hearing is accomplished successfully that we will be able to identify the errors in the system that result in hearing impairment and, ultimately, find ways to treat them.
The key weapon in Dr Sanchez’s repertoire is the humble chicken. The hearing systems of birds (including chickens) and mammals (including us) share comparable components at many levels. For instance, the mammalian ‘anteroventral cochlear nucleus’ – a part of the brainstem connected by auditory nerve cells to the inner ear – has a direct analogue in the chicken known as the ‘nucleus magnocellularis.’ Since neuroscientists, for obvious reasons, cannot study the brains of developing humans, chickens provide an ideal model system. In fact, the hearing of chickens is more similar to that of humans than is the hearing of many other mammals, such as rats and mice, which hear mainly high frequencies. Chickens can hear both high and low frequency sounds, and it is lower frequency sounds in particular that are crucial to speech recognition and sound localisation in humans.
Down in the brainstem
A fundamental feature of auditory neurons that enables them to send well-timed signals to the brain is their specialisation, in both time and space, into different types, each essential to the proper functioning of the whole hearing system. Firstly, Dr Sanchez and others have shown that, during early development of the chicken nucleus magnocellularis, neurons arising at different developmental times show different characteristics, with early-developing neurons producing action potentials in response to low frequency sounds, and later-developing ones to higher frequencies (see Figure 3).
But the complexity doesn’t stop there. The nucleus magnocellularis can also be broken down spatially into different parts responding to different sound frequencies – a phenomenon termed ‘tonotopy.’ Scientists have divided the nucleus magnocellularis into ‘caudolateral’ (towards the lower frequency edge of the structure) and ‘rostromedial’ (towards the higher frequency edge) sections (Figure 1). Dr Sanchez and his collaborator, Dr Yuan Wang, have recently shown that the neurons of the caudolateral part – which respond to low frequency sounds – are more excitable, smaller and extensively branched, while those of the rostromedial part are less excitable, larger and simpler in structure (see Figure 2). Furthermore, even the caudolateral nucleus magnocellularis itself can be divided into two parts, each with neurons of a distinct, characteristic structure and role.
Tracing the connections to and from these neurons has indicated that the different types receive signals of different frequencies from the ear, respond to different levels of stimulation, express unique patterns of ion channels and release different chemical messengers, producing action potentials of different strengths and frequencies. These hitherto unrecognised levels of specialisation within the brainstem, particularly with regard to processing low frequency sound signals, may be crucial for auditory perception and scene analysis, including speech recognition.
It’s about action
Action potentials are generated through the movement of charged ions – namely sodium and potassium ions – through channels in a neuron’s boundary membrane, with different channels opening in response to different levels of electrical stimulation across that membrane. Dr Sanchez’s research has characterised differences in the activity of the genes encoding these channels, which in turn predict the precise auditory input to which each neuron is ‘tuned.’ Using computer modelling, his team have helped explain how specialised channels are defined, and how synergistic interactions between channels dedicated to sodium or potassium ions enable action potentials to occur in quick succession, contributing to more rapid and precise responses to complex auditory signals.
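The interplay between sodium and potassium channels can be illustrated with a classic Hodgkin–Huxley-style simulation. The sketch below is a generic textbook model with standard squid-axon parameters – not CAPLAB’s own code – but it captures the idea: injecting a steady current drives repeated, precisely shaped spikes as the sodium and potassium conductances open and close in sequence.

```python
import numpy as np

def simulate_hh(i_inj=10.0, t_max=50.0, dt=0.01):
    """Hodgkin-Huxley neuron (standard squid-axon parameters).
    i_inj: injected current (uA/cm^2); returns (time_ms, voltage_mV)."""
    g_na, e_na = 120.0, 50.0   # sodium conductance (mS/cm^2) and reversal (mV)
    g_k,  e_k  = 36.0, -77.0   # delayed-rectifier potassium
    g_l,  e_l  = 0.3, -54.4    # leak
    c_m = 1.0                  # membrane capacitance (uF/cm^2)

    # Voltage-dependent opening/closing rates of the channel 'gates'
    a_m = lambda v: 0.1 * (v + 40) / (1 - np.exp(-(v + 40) / 10))
    b_m = lambda v: 4.0 * np.exp(-(v + 65) / 18)
    a_h = lambda v: 0.07 * np.exp(-(v + 65) / 20)
    b_h = lambda v: 1.0 / (1 + np.exp(-(v + 35) / 10))
    a_n = lambda v: 0.01 * (v + 55) / (1 - np.exp(-(v + 55) / 10))
    b_n = lambda v: 0.125 * np.exp(-(v + 65) / 80)

    v = -65.0                          # resting potential (mV)
    m = a_m(v) / (a_m(v) + b_m(v))     # gates start at steady state
    h = a_h(v) / (a_h(v) + b_h(v))
    n = a_n(v) / (a_n(v) + b_n(v))

    steps = int(t_max / dt)
    trace = np.empty(steps)
    for i in range(steps):
        i_na = g_na * m**3 * h * (v - e_na)  # fast sodium current
        i_k = g_k * n**4 * (v - e_k)         # potassium current
        i_l = g_l * (v - e_l)
        v += dt * (i_inj - i_na - i_k - i_l) / c_m
        m += dt * (a_m(v) * (1 - m) - b_m(v) * m)
        h += dt * (a_h(v) * (1 - h) - b_h(v) * h)
        n += dt * (a_n(v) * (1 - n) - b_n(v) * n)
        trace[i] = v
    return np.arange(steps) * dt, trace

t, v = simulate_hh()
# Count spikes as upward crossings of 0 mV
spikes = int(np.sum((v[1:] >= 0) & (v[:-1] < 0)))
```

Raising or lowering the channel conductances in such a model shifts how quickly the neuron can fire again after each spike – the kind of question the CAPLAB models probe with far more biologically detailed channel types.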
Perhaps the final frontier in achieving a full understanding of how sounds are perceived by the brain is to understand how the crucial differences between neurons are generated during development. Generating a tonotopic axis of specialised auditory neurons requires orchestrated and sophisticated biological regulation to set up a precise gradient of regulatory molecules, known as ‘neurotrophins,’ which control the growth and functional development of neurons.
Dr Sanchez’s current work, funded by the US National Institutes of Health, will explore in more detail how ion channel characteristics are controlled by interacting molecular receptors known as AMPA-receptors and NMDA-receptors in the brainstem. He will explore the role of these molecules during development, and their potential implications for normal hearing and impairments, using physiological and biochemical assays and even state-of-the-art genetic manipulation techniques. Ultimately, his work may identify molecular targets for genetic, pharmacological, or stem cell therapies to treat auditory processing deficits, transforming the lives of many affected by these debilitating but little-understood disorders.
Soundwaves travel down the ear canal of the outer ear and are converted into mechanical energy by structures within the middle ear. This mechanical conversion helps offset the resistance of the fluid-filled inner ear, where sensory receptors – known as hair cells – convert the dispersion of fluid energy into electrical potentials. The electrical potential generated by hair cells triggers the release of a chemical (a neurotransmitter) that binds to receptors located on adjacent nerve fibres. Here, the all-important, ultrafast and extremely well-timed action potential is generated and sent through numerous downstream auditory structures (five, to be exact) until it ultimately reaches the auditory cortex, where the electrical activity is encoded as the cognitive perception of sound.
How is it possible to determine what a chicken can hear?
There are several objective and subjective methods used to determine the hearing specificity (i.e., frequency range) and sensitivity (i.e., lowest level of perceived sound) of many vertebrates, and such studies – spanning nearly half a century – have determined what a chicken can hear. As one might expect, the methods and the ages of the animals vary across studies, but they largely comprise (1) electrophysiological recordings (from individual cells to scalp recordings of electrical activity in response to sound), typically in embryos, and (2) behavioural paradigms (animals trained to perform a task in response to sounds varying in frequency and intensity), typically in hatchlings. Despite differences across studies, an accurate profile of chicken hearing has emerged that is generally accepted within the scientific community that studies avian hearing.
Why is the timing of action potentials so crucial to accurate hearing?
In the auditory system, timing is everything. The ability of an auditory neuron to fire an action potential accurately at a specific time point of a stimulus is an effective way to encode the temporal patterns of sound. This phenomenon, known as ‘phase-locking’, is best described as the consistent, well-timed firing of action potentials at a given phase of a periodic stimulus such as a sound wave. Aberrant action potential firing, and the subsequent breakdown in the ability to correctly and accurately encode the temporal elements of sound, is thought to contribute to numerous auditory problems.
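The degree of phase-locking is commonly quantified with a ‘vector strength’ statistic: each spike time is mapped to a point on the unit circle according to its phase within the stimulus cycle, and the length of the mean vector measures how tightly spikes cluster at one phase. The sketch below is a generic illustration using simulated spike trains, not data from the studies described here.

```python
import numpy as np

def vector_strength(spike_times_ms, freq_hz):
    """Vector strength R in [0, 1]: 1 = perfect phase-locking, ~0 = random."""
    phases = 2 * np.pi * (freq_hz / 1000.0) * np.asarray(spike_times_ms)
    return float(np.abs(np.mean(np.exp(1j * phases))))

rng = np.random.default_rng(0)
freq = 200.0      # stimulus frequency (Hz); period = 5 ms
n_spikes = 2000

# Phase-locked train: one spike per cycle, with 0.1 ms timing jitter
locked = np.arange(n_spikes) * 5.0 + rng.normal(0.0, 0.1, n_spikes)

# Unlocked train: spikes scattered uniformly over the same duration
scattered = rng.uniform(0.0, n_spikes * 5.0, n_spikes)

r_locked = vector_strength(locked, freq)      # close to 1
r_scattered = vector_strength(scattered, freq)  # close to 0
```

A breakdown in temporal encoding of the kind described above would show up in such a measure as a drop in vector strength, even if the neuron still fires the same number of spikes.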
How can computer models help in your research?
Occasionally, experimental attempts at addressing highly specific mechanisms of action potential generation are limited by the pharmacological methods available to scientists. That is, the specificity and sensitivity of the available drugs that block or alter ion channel properties are not always as ‘specific’ or ‘sensitive’ as one would like, resulting in off-target effects on other ion channels. The control of ion channel function can instead be investigated using computational modelling. For example, we recently showed that removing a very specific type of potassium channel from our model neuron regulated low frequency action potential firing, and the model also revealed real-time, dynamic interactions between other ion channels that we could not profile experimentally with drugs.
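This kind of ‘in silico knockout’ can be illustrated with a toy model. The sketch below uses generic parameters chosen purely for illustration – it is not the CAPLAB model – and adds a low-threshold potassium (KLT-like) conductance to a simple integrate-and-fire neuron. Deleting that one conductance in the model, something no drug can do with perfect selectivity, converts a suppressed response into repetitive firing.

```python
import numpy as np

def count_spikes(g_klt, i_inj=3.0, t_max=200.0, dt=0.05):
    """Integrate-and-fire neuron with an optional low-threshold potassium
    (KLT-like) conductance; returns the number of spikes fired.
    Units: mV and ms; conductances and current in arbitrary matched units."""
    g_leak, e_leak = 0.1, -65.0    # leak conductance and reversal
    e_k = -90.0                    # potassium reversal potential
    v_thresh, v_reset = -45.0, -60.0
    v, spikes = -60.0, 0
    for _ in range(int(t_max / dt)):
        # KLT activation rises steeply just below spike threshold,
        # opposing any depolarisation before it can reach threshold
        w = 1.0 / (1.0 + np.exp(-(v + 50.0) / 5.0))
        dv = -g_leak * (v - e_leak) - g_klt * w * (v - e_k) + i_inj
        v += dt * dv
        if v >= v_thresh:          # spike-and-reset
            spikes += 1
            v = v_reset
    return spikes

with_klt = count_spikes(g_klt=0.3)     # KLT present: firing suppressed
without_klt = count_spikes(g_klt=0.0)  # 'knockout': repetitive firing
```

Because the model’s state is fully observable at every time step, the moment-by-moment tug-of-war between conductances can be inspected directly – the ‘real-time and dynamic interaction’ that drug experiments cannot resolve.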
What kinds of treatments for hearing impairment might emerge from this research?
The ultimate goal is not only to provide valuable insight into normal ion channel function, but also to elucidate and pharmacologically target the ion channels thought to be responsible for channelopathies: diseases caused by dysfunction of ion channels or of the proteins and genes that regulate them. The most widely recognised example in the auditory system is tinnitus (the hearing of sound when no external sound is present). Tinnitus is a symptom with numerous underlying aetiologies, such as noise-induced hearing loss, ageing and medication. Although the exact mechanisms remain elusive, aberrant excitability via specific ion channel dysfunction is thought to be a key contributor.
Dr Sanchez’s lab explores the developmental mechanisms responsible for the precise encoding of sound in the auditory brainstem. Through an in-depth understanding of auditory development, his aim is to provide pharmacological targets that improve auditory pathophysiologies.
- National Institute on Deafness and Other Communication Disorders (NIDCD)
- Knowles Hearing Research Center, Northwestern University
- Dr Diego Zorrio (Florida State University)
- Dr Xiaoyu Wang (Florida State University)
- Dr Sanchez’s mentees: Dr Ting Lu, Hui Hong and Momoko Takahashi (Northwestern University)
Dr Sanchez earned his PhD from Kent State University, his MSc from Michigan State University and his BSc from the University of Northern Colorado. Dr Sanchez was clinically trained in audiology at Cleveland Clinic and completed postdoctoral training at the University of Washington. He is currently Assistant Professor at Northwestern University.
Yuan Wang, PhD, is Assistant Professor at the Department of Biomedical Sciences, Florida State University. She earned her PhD in biophysics from the Chinese Academy of Sciences. She completed her postdoctoral training at the University of California, San Diego and the University of Washington.
Jason Tait Sanchez, PhD CCC-A
Frances Searle Building, 2240 Campus Drive
Room 2-254, Evanston, IL 60208
T: +1 847 491 4648