Biopsychology Chapter 6 Review
What sort of process occurs when the ossicles transmit vibrations from the tympanic membrane to the oval window?
When the ossicles transmit vibrations from the tympanic membrane to the oval window, waves or ripples are created in the fluid of the scala vestibuli, which in turn cause the basilar membrane to ripple, like shaking out a rug
Functions of the middle ear (pt. 2)
The middle ear is equipped with the equivalent of a volume control, which helps protect against the damaging forces of extremely loud noises. Two tiny muscles - the tensor tympani and the stapedius - attach to the ends of the chain of ossicles. Within about 200 milliseconds of the arrival of a loud sound, these muscles contract, stiffening the chain of ossicles and damping the vibrations transmitted to the inner ear. Interestingly, the middle-ear muscles activate just before we produce self-made sounds like speech or coughing, which is why we don't perceive our own sounds as distractingly loud.
Outer ear (pinna)
The oddly shaped fleshy objects that most people call ears are properly known as pinnae (singular pinna)
Functions of the outer ear
The outer ear directs sound into the inner parts of the ear
Inner ear
The part of the inner ear that ultimately converts vibrations from sound into neural activity - the coiled, fluid-filled cochlea (from the Greek kochlos, "snail") - is a marvel of miniaturization. In an adult, the cochlea measures only about 9 millimeters in diameter at its widest point - roughly the size of a pea. Fully unrolled, the cochlea would be about 35-40 millimeters long. The cochlea is a coil of three parallel canals: 1) the scala vestibuli (vestibular canal), 2) the scala media (middle canal), and 3) the scala tympani (tympanic canal)
Fundamental
The predominant frequency of an auditory tone; the fundamental is the basic frequency, and harmonics are multiples of the fundamental. For example, if the fundamental is 440 Hz, the harmonics are 880 Hz, 1,320 Hz, 1,760 Hz, and so on
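As a quick illustration of the multiples relationship described above, here is a minimal Python sketch (the 440 Hz value comes from the example; the function name is just for illustration):

```python
# Minimal sketch: harmonics are integer multiples of the fundamental frequency.
def harmonics(fundamental_hz, count=4):
    """Return the first `count` multiples of the fundamental (including it)."""
    return [fundamental_hz * n for n in range(1, count + 1)]

print(harmonics(440))  # [440, 880, 1320, 1760] -> matches the example above
```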
Function of the ridges and valleys of the pinna
The ridges and valleys of the pinna modify the character of sound that reaches the middle ear. Some frequencies of sound are enhanced; others are suppressed. For example, the shape of the human ear especially increases the reception of sounds between 2,000 and 5,000 Hz- a frequency range that is important for speech perception. The shape of an external ear- and, in many species, the direction in which it is being pointed-provides additional cues about the direction and distance of the source of sounds
Functions of stereocilia on the basilar membrane
The rippling of the basilar membrane is converted into neural activity through the actions of the hair cells. Each hair cell features a sloping brush of minuscule hairs called stereocilia (singular stereocilium) on its upper surface. The hair cells - especially the stereocilia themselves - thus form a mechanical bridge between the basilar membrane below and the overlying tectorial membrane, and they are forced to bend when sounds cause the basilar membrane to ripple.
Functions of the scala media
The scala media contains the receptor system, called the organ of Corti, that converts vibration (from sound) into neural activity. It consists of three main structures: 1) the auditory sensory cells, called hair cells, which are embedded in the basilar membrane, 2) an elaborate framework of supporting cells, and 3) the auditory nerve terminals that transmit neural signals to and from the brain.
Why is vestibular information important?
Vestibular information is crucial for planning body movements, maintaining balance against gravity, and smoothly directing sensory organs like the eyes and ears toward specific locations. The nerve pathways from the vestibular system have strong connections to brain regions responsible for the planning and control of movement
Perception of sound
We perceive a repetitive pattern of local increases and decreases in air pressure as sounds. Usually this oscillation is caused by a vibrating object, such as a loudspeaker or a person's larynx during speech. A single alternation of compression and expansion of air is called one cycle
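To make the idea of a cycle concrete, the toy Python/NumPy sketch below generates one second of a pure tone as a repeating pressure oscillation; the 440 Hz tone and 44.1 kHz sample rate are illustrative choices, not values from the text:

```python
import numpy as np

# One second of a pure tone: a repeating pattern of compression (+) and
# rarefaction (-) around the baseline air pressure.
sample_rate = 44_100          # samples per second (illustrative)
frequency = 440.0             # Hz; one cycle = one compression plus one expansion
t = np.arange(sample_rate) / sample_rate
pressure = np.sin(2 * np.pi * frequency * t)

# The number of cycles completed in one second equals the frequency.
upward_crossings = np.sum((pressure[:-1] < 0) & (pressure[1:] >= 0))
print(upward_crossings)       # ~440 cycles in one second
```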
Harmonic
A multiple of a particular frequency called the fundamental
Functions of the inner ear
In the inner ear, the mechanical force of sounds is transduced into neural activity: the action potentials that inform the brain about sounds
Middle ear
A collection of tiny structures made of membranes, muscle, and bone - essentially a tiny biological microphone - links the ear canal to the neural receptor cells of the inner ear. This middle ear consists of the taut tympanic membrane (eardrum) sealing the end of the ear canal, plus a chain of tiny bones, called ossicles, that mechanically couple the tympanic membrane to the inner ear at a specialized patch of membrane called the oval window. These ossicles, the smallest bones in the body, are called the malleus (hammer), the incus (anvil), and the stapes (stirrup)
Temporal coding theory
A complementary account called the temporal coding theory proposes that the frequency of auditory stimuli is encoded in the rate of firing of auditory neurons. For example, a 500 Hz sound might cause some auditory neurons to fire 500 action potentials per second. Volleys of action potentials being produced at this rate, by a number of neurons with similar tunings, provide the brain with a reliable additional source of pitch information
What causes problems with sound waves reaching the cochlea, trouble converting sound waves into action potentials, and dysfunction of the brain mechanisms that process sound?
1. Before anything even happens in the nervous system, the ear may fail to convert the sound vibrations in the air into waves of fluid within the cochlea. This form of hearing loss, called conduction deafness, often comes about when vibrations of the eardrum can no longer be conveyed by the ossicles to the oval window of the cochlea. 2. Even if the vibration is successfully conducted to the cochlea, the sensory apparatus of the cochlea - the hair cells - may fail to respond to the ripples created in the basilar membrane and thus fail to create the action potentials that inform the brain about sounds. This form of hearing loss, termed sensorineural deafness, is most often due to the permanent damage or destruction of hair cells by a variety of causes. Some people are born with genetic abnormalities that interfere with the function of hair cells. Many more people acquire sensorineural deafness during their lives as a result of being exposed to extremely loud sounds - overamplified music, nearby gunshots, and industrial noises are important examples - or because of medical problems such as infections and adverse drug effects (certain antibiotics, such as streptomycin, are particularly ototoxic). If you don't think it can happen to you, think again. Anyone listening to something for more than 5 hours per week at 89 dB or louder is already exceeding workplace limits for hearing safety, yet many personal music players and music at concerts and clubs exceed 100 dB. Fortunately, earplugs are available that attenuate all frequencies equally, making concerts a little quieter without muffling the music. Long-term exposure to loud sounds can cause tinnitus - persistent ringing in the ear - and/or a profound loss of hearing for the frequencies being listened to at such high volume. 3. For the action potentials sent from the cochlea to be of any use, the auditory areas of the brain must process and interpret them in meaningful ways. Central deafness occurs when auditory brain areas are damaged by, for example, strokes, tumors, or traumatic injuries. As you might expect from our earlier discussion of auditory processing in the brain, this type of deafness almost never involves a simple loss of auditory sensitivity. Afflicted individuals can often hear a normal range of pure tones but are impaired in the perception of complex, behaviorally relevant sounds. An example in humans is word deafness: selective trouble with speech sounds despite normal speech and normal hearing for nonverbal sounds. In cortical deafness - a rare syndrome involving bilateral lesions of auditory cortex - patients have a more complete impairment, struggling to recognize all complex sounds, whether verbal or nonverbal.
What happens as a result of the hair cells transducing movements of the basilar membrane into electrical signals?
1. Even a tiny deflection of stereocilia produces a large and rapid depolarization of the hair cells. 2. This depolarization results from the operation of a special type of large and nonselective ion channel found on stereocilia. 3. Like spring-loaded trapdoors, these channels are mechanically popped open as stereocilia bend, allowing an inrush of potassium and calcium ions. 4. The resulting depolarization opens voltage-gated calcium channels at the base of the hair cell, which in turn causes synaptic vesicles there to fuse with the presynaptic membrane and release neurotransmitter, stimulating adjacent nerve fibers. 5. The stereocilia channels snap shut again in a fraction of a millisecond as the hair cell sways back. 6. This ability to rapidly switch on and off allows hair cells to accurately track the rapid oscillation of the basilar membrane with exquisite sensitivity
Functions of the basilar membrane
A crucial feature of the basilar membrane is that it is tapered- it's much wider at the apex of the cochlea than at the base. Thanks to its taper, each successive location along the basilar membrane responds most strongly to a different frequency of sound. High frequencies thus produce their greatest effects near the base, where the membrane is narrow and comparatively stiff; low-frequency sounds produce a larger response near the apex, where the membrane is wider and floppier.
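The taper described above gives an orderly place-to-frequency map along the membrane. As a rough numeric illustration (not from the text), the Python sketch below uses the commonly cited Greenwood approximation for the human cochlea; the constants are approximate and vary across sources:

```python
def place_to_frequency(x):
    """Approximate best frequency (Hz) at relative position x along the
    basilar membrane, where x = 0 is the apex (wide, floppy) and x = 1 is
    the base (narrow, stiff). Constants follow the commonly quoted
    Greenwood fit for humans and are only approximate."""
    return 165.4 * (10 ** (2.1 * x) - 0.88)

for x in (0.0, 0.5, 1.0):
    print(f"x = {x:.1f} -> ~{place_to_frequency(x):,.0f} Hz")
# Apex (~20 Hz) responds best to low frequencies, base (~20,000 Hz) to high.
```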
How does the ear provide a sense of balance?
1. The receptors of the vestibular system are hair cells - just like the ones in the cochlea - whose bending ultimately produces action potentials. The cilia of these hair cells are embedded in a gelatinous mass inside an enlarged chamber called the ampulla that lies at the base of each semicircular canal. 2. Movement of the head in one axis sets up a flow of the fluid in the semicircular canal that lies in the same plane, deflecting the stereocilia in the ampulla and signaling the brain that the head has moved. Working together, the three semicircular canals accurately track the rotation of the head. 3. The utricle and saccule each contain an otolithic membrane (a gelatinous sheet studded with tiny crystals; otolith literally means 'ear stone') which, thanks to its mass, lags slightly when the head moves. 4. This bends nearby hair cells, stimulating them to track straight-line acceleration and deceleration - the final signals that the brain needs in order to calculate the position and movement of the body in three-dimensional space
Place Coding Theory
According to the place coding theory, the pitch of a sound is determined by the location of activated hair cells along the length of the basilar membrane. So, activation of receptors near the base of the cochlea (which is narrow and stiff and responds to high frequencies) signals treble, and activation of receptors nearer the apex (which is wide and floppy and responds to low frequencies) signals bass
Amplitude
Amplitude is usually measured as sound pressure in dynes per square centimeter (dyn/cm²). Our perception of amplitude is loudness, expressed in decibels (dB). The decibel scale is logarithmic: zero decibels is approximately the threshold of human hearing, a whisper is about 20 dB, and a departing jetliner a couple hundred feet overhead - a sound with roughly a million times the pressure of the faintest audible sound - is about 120 dB
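Because the scale is logarithmic, equal dB steps correspond to equal ratios of sound pressure. Here is a small Python sketch of the standard sound-pressure-level formula; the 20 µPa reference is the conventional threshold value from acoustics, not a figure stated in the text:

```python
import math

P_REF = 20e-6  # reference pressure in pascals (~threshold of human hearing)

def spl_db(pressure_pa):
    """Sound pressure level in dB relative to the 20 µPa reference."""
    return 20 * math.log10(pressure_pa / P_REF)

print(spl_db(P_REF))          # 0 dB   -> approximate threshold of hearing
print(spl_db(P_REF * 10))     # 20 dB  -> roughly a whisper
print(spl_db(P_REF * 1e6))    # 120 dB -> roughly a departing jetliner
```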
Functions of the pinna
Aside from their occasional utility as handles and jewelry hangers, the pinnae funnel sound waves into the second part of the external ear: the ear canal (or auditory canal)
What is the function of tonotopic organization?
At every level of the auditory system, from cochlea to auditory cortex, auditory pathways display tonotopic organization; that is, the pathways for different tones are spatially arranged like a topographic map (topos is Greek for 'place') from low frequency (sounds we perceive as lower-pitched or 'bass') to high frequency (perceived as higher-pitched or 'treble'). Furthermore, at the higher levels of the auditory system, auditory neurons are not only excited by specific frequencies but also inhibited by neighboring frequencies, resulting in much sharper tuning of the frequency responses of these cells. This sharpening helps us discriminate tiny differences in the frequencies of sounds.
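The sharpening-by-inhibition idea can be illustrated with a toy calculation (all numbers are illustrative, not from the text): subtracting inhibition driven by neighboring frequencies from a cell's broad excitatory response narrows its effective tuning curve.

```python
import numpy as np

# Toy model: broad excitatory tuning around 1,000 Hz minus even broader
# inhibition from neighboring frequencies.
freqs = np.linspace(500, 1500, 11)                 # test tones in Hz (illustrative)
excite = np.exp(-((freqs - 1000) / 200) ** 2)      # broad excitatory response
inhibit = 0.6 * np.exp(-((freqs - 1000) / 400) ** 2)  # broader inhibition
sharpened = np.clip(excite - inhibit, 0, None)

for f, raw, sharp in zip(freqs, excite, sharpened):
    print(f"{f:6.0f} Hz  excitation {raw:.2f}  after inhibition {sharp:.2f}")
# The response after inhibition falls off faster away from 1,000 Hz,
# i.e., the cell's frequency tuning is sharper.
```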
What does brain-imaging show us?
Brain-imaging studies in humans have confirmed that many sounds (tones, noises, and so on) activate the primary auditory cortex (A1), which is located on the upper surface of the temporal lobes. Speech sounds produce similar activation, but they also activate other, more specialized auditory areas. Interestingly, at least some of these regions are activated when hearing people try to lip-read - that is, to understand someone by watching that person's lips without auditory cues. This suggests that the auditory cortex integrates other, nonauditory, information with sounds.
What does experimental evidence on the place coding theory and temporal coding theory reveal?
Experimental evidence indicates that we rely on both of these processes to discriminate the pitch of sounds. Temporal coding is most evident at lower frequencies up to about 4,000 Hz: auditory neurons can fire a maximum of only about 1,000 action potentials per second, but to a limited extent they can encode sound frequencies that are multiples of the action potential frequency. Beyond about 4,000 Hz, however, this encoding becomes impossible, and pitch discrimination relies on place coding of pitch along the basilar membrane.
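A back-of-the-envelope Python sketch of the limit described above (the specific numbers are illustrative): a single neuron capped at roughly 1,000 spikes per second cannot follow a 4,000 Hz tone cycle by cycle, but a volley of neurons that each lock onto every few cycles can jointly mark them all; beyond about 4,000 Hz this strategy fails and place coding must carry pitch.

```python
import math

MAX_RATE = 1000.0  # approximate ceiling on a single neuron's firing rate (spikes/s)

for tone_hz in (500, 1000, 4000):
    # Each phase-locked neuron can at best fire on every `skip`-th cycle.
    skip = math.ceil(tone_hz / MAX_RATE)
    print(f"{tone_hz} Hz tone: each neuron fires on every {skip} cycle(s); "
          f"a volley of ~{skip} staggered neurons can mark every cycle")
```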
What occurs when auditory inputs are distributed within the brain?
First, the auditory fibers terminate in the (sensibly named) cochlear nuclei, where some initial processing occurs. Output from the cochlear nuclei primarily projects to the superior olivary nuclei, each of which receives input from both right and left cochlear nuclei. This bilateral input makes the superior olivary nucleus the first brain site at which binaural (two-ear) processing occurs.
Frequency
Frequency is the number of cycles per second, measured in hertz (Hz). So, concert A (the A above middle C) on a piano has a frequency of 440 Hz. Our perception of frequency is termed pitch
Inner Hair Cells vs. Outer Hair Cells
In the human cochlea, the hair cells are organized into a single row of about 3,500 inner hair cells (IHCs, called inner because they are closer to the central axis of the coiled cochlea) and about 12,000 outer hair cells (OHCs) in three rows. Fibers of the vestibulocochlear nerve (cranial nerve VIII) contact the bases of the hair cells. Some of these fibers do indeed convey sound information to the brain, but the neural connections of the cochlea are a little more complicated than this. In fact, there are four kinds of neural connections with hair cells, each relying on a different neurotransmitter.
Interaural intensity differences (IIDs)
Interaural intensity differences are differences in loudness at the two ears (interaural means 'between the ears'). Depending on the species- and the placement and characteristics of their pinnae-intensity differences occur because the head casts a sound shadow, preventing sounds originating on one side (called off-axis sounds) from reaching both ears with equal loudness. The head shadow (or sound shadow) effect is most pronounced for higher-frequency sounds
Interaural temporal differences (ITDs)
Interaural temporal differences are differences between the two ears in the time of arrival of sounds. They arise because one ear is always a little closer to an off-axis sound than the other ear is. Two kinds of temporal (time) differences are present in a sound: onset disparity, which is the difference between the two ears in hearing the beginning of the sound; and ongoing phase disparity, which is the continuing mismatch between the two ears in the arrival of all the peaks and troughs that make up the sound wave
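To get a feel for how small interaural time differences are, here is a rough Python estimate; the head width and speed of sound are stock physics values, not figures from the text:

```python
SPEED_OF_SOUND = 343.0   # m/s in air at room temperature (approximate)
HEAD_WIDTH = 0.20        # metres between the ears (rough adult value)

# A sound coming from directly to one side travels roughly one head-width
# farther to reach the far ear, so the largest interaural time difference is:
max_itd_s = HEAD_WIDTH / SPEED_OF_SOUND
print(f"Maximum ITD ~ {max_itd_s * 1e6:.0f} microseconds")   # ~600 microseconds

# A sound from straight ahead reaches both ears at once: onset disparity ~ 0.
```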
Inner ear and its connection to our sense of balance
Like hearing, our sense of balance is the product of the inner ear, relying on several small structures that adjoin the cochlea and are known collectively as the vestibular system (from the Latin 'vestibulum', reflecting the fact that the system lies in hollow spaces in the temporal bone). In fact, it is generally accepted that the auditory organ evolved from the vestibular system, although the ossicles probably evolved from parts of the jaw. The most obvious components of the vestibular system are the three fluid-filled semicircular canals, plus two bulbs called the saccule and the utricle that are located near the ends of the semicircular canals. Notice that the three canals are oriented in the three different planes in which the head can rotate- nodding up and down (technically known as pitch), shaking from side to side (yaw), and tilting left and right (roll).
How does music affect the auditory cortex?
Music also shapes the responses of the auditory cortex. It might not surprise you to learn that the auditory cortex of trained musicians shows a bigger response to musical sounds than that of nonmusicians. After all, when two people differ in any skill, their brains must be different in some way, and maybe people born with brains that are more responsive to complex sounds are also more likely to become musicians. The surprising part is that the extent to which a musician's brain is extra sensitive to musical notes is correlated with the age at which they began their serious training in music: the earlier the training began, the bigger the difference in auditory cortex in adulthood. This finding indicates that intense musical experience in development alters the functioning of the auditory cortex later in life. By adulthood, the portion of the primary auditory cortex where music is first processed, called Heschl's gyrus, is more than twice as large in professional musicians as in nonmusicians, and more than twice as strongly activated by music. Furthermore, cortical regions that process music are reportedly influenced by the brain's mesolimbic reward system to attach a reward value to music that is new to us
How do auditory signals run from cochlea to the cortex?
On each side of your head, about 30,000-50,000 auditory fibers from the cochlea make up the auditory part of the vestibulocochlear nerve (cranial nerve VIII), and most of these afferent fibers carry information from the IHCs (each of which stimulates several nerve fibers) to the brain. If we record from these IHC afferents, we find that each one has a maximum sensitivity to sound of a particular frequency but will also respond to neighboring frequencies if the sound is loud enough.
What occurs when vestibular information enters the brainstem?
On entering the brainstem, many of the vestibular fibers of the vestibulocochlear nerve (cranial nerve VIII) terminate in the vestibular nuclei, while some fibers project directly in a complex manner to motor areas throughout the brain, including motor nuclei of the eye muscles, the thalamus, and the cerebral cortex.
Functions of the middle ear (pt. 1)
Sound waves in the air strike the tympanic membrane and cause it to vibrate with the same frequency as the sound; as a result, the ossicles start moving too. Because of how they are attached to the eardrum, the ossicles concentrate and amplify the vibrations, focusing the pressures collected from the relatively large tympanic membrane onto the small oval window. This amplification is crucial for converting vibrations in air into movements of fluid in the inner ear
Timbre
The characteristic sound quality of a musical instrument, as determined by the relative intensities of its various harmonics; when different instruments play the same note, the notes differ in the relative intensities of the various harmonics, and there are subtle qualitative differences between instruments in the way they commence, shape, and sustain the sound; these differences are what give each instrument its characteristic voice or timbre
What are the different categories of the fibers of the vestibulocochlear nerve?
The fibers are distinguished as follows: 1) IHC afferents convey to the brain the action potentials that provide the perception of sounds; IHC afferents make up about 95% of the fibers leading to the brain. 2) IHC efferents lead from the brain to the IHCs, through which the brain can control the responsiveness of IHCs. 3) OHC afferents convey information to the brain about the mechanical state of the basilar membrane, but not the perception of sounds themselves. 4) OHC efferents lead from the brain and enable it to activate a remarkable property of OHCs, making them change their length almost instantaneously. Through this electromechanical action, the brain continually modifies the stiffness of regions of the basilar membrane, resulting in both sharpened tuning and pronounced amplification
What is another function of the ridges and valleys of the ear?
The structure of the external ear provides yet another localization cue. As we mentioned earlier, the hills and valleys of the external ear selectively reinforce some frequencies in a complex sound and diminish others. This process is known as spectral filtering and the frequencies that are affected depend on the angle at which the sound arrives at those peaks and valleys. That angle varies, of course, depending on where the sound came from; these spectral cues provide critical information about the vertical localization (or elevation) of a sound source. Without them, you would have a hard time knowing whether a sound from straight in front of you came from the ground or from the treetops. The various binaural and spectral cues used for sound localization converge and are integrated in the inferior colliculus.
Functions of the superior olivary nuclei
The superior olivary nuclei pass information derived from both ears to the inferior colliculi, which are the primary auditory centers of the midbrain. Outputs of the inferior colliculi go to the medial geniculate nuclei of the thalamus. Pathways from the medial geniculate nuclei extend to several auditory cortical areas.
What are the unique capabilities of the auditory cortex?
The unique capabilities of the auditory cortex result from a sensitivity that is fine-tuned by experience as we grow. Human infants have diverse hearing capabilities at birth, but their hearing for complex speech sounds in particular becomes more precise and rapid through exposure to the speech of their families and other people. Newborns can distinguish all the different sounds that are made in any human language. But as they develop, they get better at distinguishing sounds in the language(s) they hear, and worse at distinguishing sounds that occur in other languages. Similarly, early experience with binaural hearing, compared with equivalent monaural (one-eared) hearing, has a significant effect on the ability of children to localize sound sources later in life. Studies with lab animals confirm that experience with sounds of a particular frequency can cause a rapid retuning of auditory neurons
What would happen if we lacked a sense of balance?
Without our sense of balance, it would be a challenge to simply stand on two feet. When you use an elevator, you clearly sense that your body is rising or falling, despite the sameness of your surroundings. When you turn your head, take a tight curve in your car, or bounce through the seas in a boat, your continual awareness of motion allows you to plan further movements and anticipate changes in perception due to movement of your head. And of course, too much of this sort of stimulation can make you lose your lunch.
Functions of the auditory system
Your auditory system detects changes in the vibration of air molecules that are caused by sound sources, sensing both the intensity of sounds (measured in decibels [dB] and perceived as loudness) and their frequency (measured in cycles per second, or hertz [Hz], and perceived as pitch)