Auditory processing enables us to connect with others and the environment across space.
Our sense of hearing allows us to register, orient to, and process sound waves, and to create responses to those sounds.
We often detect the presence and location of objects and activity before we can see them. We can respond to sounds through our behaviours and by making our own sounds – our vocal communication.
Our ability to communicate with others efficiently and effectively is heavily dependent on hearing and making sense of speech sounds. However, the development of language is not entirely dependent on hearing. Deaf children learn to understand and use sign language using vision and somatosensory skills; what is important is that exposure to sign language begins from birth. Deaf children consistently produce about 50% more manual shapes – ‘hand babbling’ – than their hearing peers during the first 10–14 months of life.
Sound waves are all around us, travelling through air, water, and solids. Some animals ‘hear’ through their skin or body hairs. Humans have a fairly narrow range of hearing compared to other animals. However, we seem better able to discriminate tiny differences in frequency, and our discriminative ability is heightened for sounds of particular human relevance, such as speech and music.
As with all our sensory systems, we have peripheral receiving structures. Sound reaches our brain from two ears, which lets us better locate a sound in space and judge its distance. The ear comprises the outer, middle, and inner ear. Sound waves are channelled by the outer ear to the ear drum (tympanic membrane); its vibrations are then amplified by the middle ear bones and transmitted into the inner ear.
The inner ear contains the sensory end organ, the cochlea, a coiled tube filled with fluid and lined with thousands of hair cells.
These hair cells sit in the Organ of Corti, a sensory cell layer that runs along the length of the basilar membrane within the cochlea. Movement of the fluid within the cochlea moves the basilar membrane and stimulates the Organ of Corti, converting the fluid vibrations into a neural signal. This travels along the cochlear (8th cranial) nerve into the brain, where the impulses can be interpreted as individual sound frequencies.
Central Auditory Processing (CAP) occurs via the central auditory nerve pathways and the structures they reach. Starting with the brainstem, auditory processing is undertaken by extensive structures in the brain.
- Brainstem nuclei involved in auditory processing are numerous (cochlear nuclei, superior olivary nuclei, lateral lemniscus, inferior colliculus).
- Thalamic nuclei (medial geniculate nuclei) are actively engaged in sensory processing, integrating and regulating the flow of information.
- Cerebellum. This structure is involved in most auditory processing tasks, including spatial and temporal aspects and emotional responses to sounds. Cerebellar and brainstem nuclei, together with the amygdala, are important for habituating to sounds and reducing startle reflex activity. Damage or differences in the cerebellum may inhibit this maturation. Newer parts of the cerebellum are involved in auditory tasks ranging from listening to sounds to habitual learning of auditory and speech patterns.
- The expansion of the cerebellum in humans is thought to be linked to the development of speech. Temporal sequences of auditory inputs are associated with particular events, enabling the precise timing of relevant motor responses, including speech.
- Impaired activation of the cerebellum is implicated in many neurodevelopmental disorders, including ASD, ADHD, DCD, and dyslexia. This may affect the way sounds are processed, including the emotional salience of sounds and the spatiotemporal aspects that couple sounds to movement representations and underpin our sense of rhythm.
- The proximity of speech recognition to speech articulation mechanisms in the cerebellum is apparent. Cerebellar disorders can give rise to ataxic dysarthria, which is characterised by distinct articulatory and phonatory deficits.
- Basal ganglia – there is growing evidence for the involvement of the basal ganglia in decoding emotional information from vocal cue sequences and in the rhythmic aspects of speech.
- Amygdala – the amygdala is particularly sensitive to sounds with emotional meaning: vocalisations, crying, or music. It plays a central role in auditory fear conditioning.
- Cortical level. There are multiple stages of cortical processing, beginning with the primary auditory cortex.
- Primary auditory cortex. This contains a map of the cochlea: each point in the cochlea corresponds to cells in the auditory cortex, similar to the homunculus map seen in the primary somatosensory cortex.
- Non-primary auditory cortex areas are involved in distinct aspects of speech perception including attending to one voice over another or background noise.
- Insular cortex – recent studies suggest that the insula monitors the emotional reaction in our own voice production.
It is now apparent that there is a significant system of descending circuits within the auditory system that modulate auditory processing at every level, even acting on the hair cells in the cochlea. These descending circuits help to modulate auditory attention based on the relevance of environmental cues, learned behaviours, and the emotional state of the individual.
In our next blog, you will learn all about auditory system development.