Quizlet Chapter 5 HF


The worker's concerns illustrate the effects of three different types of sound. What are they?

A worker at a small manufacturing company was becoming increasingly frustrated by the noise at her workplace. It was unpleasant and stressful, and she came home each day with a ringing in her ears and a headache. What particularly concerned her was an incident the day before when she could not hear the emergency alarm go off on her own equipment, a failure of hearing that nearly led to an injury. Asked by her husband why she did not wear earplugs to muffle the noise, she said, "They're uncomfortable. I'd be even less likely to hear the alarm, and besides, it would be harder to talk with the worker on the next machine, and that's one of the few pleasures I have on the job." She was relieved that an inspector from the Occupational Safety and Health Administration (OSHA) would be visiting the plant in the next few days to evaluate her complaints. The worker's concerns illustrate the effects of three different types of sound: the undesirable noise of the workplace, the critical warning of the alarm, and the important communications through speech. Our ability to process these three sources of acoustic information, whether we want to (alarms and speech) or not (noise), and the influence of this processing on performance, health, and comfort are the focus of the first part of this chapter. We conclude by discussing three other sensory channels: tactile, proprioceptive-kinesthetic, and vestibular, as well as their integration. These senses play a smaller, but significant, role in the design of human-machine systems.

Which has the highest frequency?

-

Which sound has the greatest amplitude?

-

Psychophysical Scaling of Sound - how many sones? How many decibels, at how many Hz, define one sone? Loudness doubles with each how many decibels?

1 sone = the loudness of a 40 dB tone of 1,000 Hz; loudness doubles with each 10 dB increase
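A quick way to see the sone arithmetic (a minimal Python sketch of the rule of thumb above, valid only as an approximation for tones near 1,000 Hz):

```python
def sones(db_spl):
    """Approximate loudness in sones of a 1,000 Hz tone, using the
    card's rule of thumb: 40 dB = 1 sone, and loudness doubles
    with every 10 dB increase."""
    return 2 ** ((db_spl - 40) / 10)

print(sones(40))  # 1.0 sone (the reference point)
print(sones(50))  # 2.0 sones: 10 dB more sounds twice as loud
print(sones(80))  # 16.0 sones: 40 dB more is 2^4 times as loud, not twice
```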

Alarm Design - what four steps are needed for the design?

1. Conduct environment/task analysis - must understand what sounds/noises (and their qualities) are associated with the job
2. Make sure alarms are within humans' capability of discrimination by varying them on different dimensions:
• Pitch (low to high), Envelope (rising/falling pitch), Timbre (quality), and Rhythm (synchronous vs. asynchronous)
3. Design specific qualities of sound
• For example: use pulses to create a unique sound and to give the perception of an approaching, then receding sound to create a sense of urgency
4. Establish a repeating sequence
• After the initial alert, repetitions may be less intense

Criteria for good alarms? 5 items

1. Must be heard above background noise (approx. 30 dB above)
2. Avoid excessive intensity
• Should not be above the danger level for hearing (85-90 dB)
• Using a very different frequency may help (especially if this conflicts with criterion #1)
3. Should not be too startling
4. Should not disrupt processing of other signals
• Do not want the alarm to mask speech or other important signals
5. Should be informative, not confusing
• Should communicate the appropriate actions

Haptic Responding Experiment

?- notes from class

What happens when experienced users use alarms beyond those that may have been originally intended by the designer?

An important facet of alarms is that experienced users often employ them for a wide range of uses beyond those that may have been originally intended by the designer (i.e., to alert to a dangerous condition of which the user is not aware [200]). For example, one study of alarm use in hospitals noted that anesthesiologists use alarms as a means of verifying the results of their decisions or as simple reminders of the time at which a certain procedure must be performed [184]. One can imagine using an automobile headway monitoring alert of "too close" simply as a means of establishing the minimum safe headway that will "just keep the alert silent."

Nature of masking - principles of design - three items

As our worker at the beginning of the chapter discovered, sounds can be masked by other sounds. The nature of masking is actually quite complex [175], but a few of the most important principles for design are the following:
1. The minimum intensity difference necessary to ensure that a sound can be heard is around 15 dB above the mask.
2. Sounds tend to be masked more by sounds in a critical frequency band surrounding the sound that is masked.
3. Low-frequency sounds mask high-frequency sounds more than the converse. Thus, a woman's voice is more likely to be masked by male voices than a man's voice would be masked by female voices, even if both are speaking at the same intensity level.

Parts of the Ear: Tympanic Membrane (ear drum)- what does it do?

At the end of the ear canal; vibrates to sound pressure (like a drum head)

Parts of the Ear: Cochlea- What function does it have? What does it look like?

Cochlea - "snail-like organ" where mechanical energy is transduced to electrical nerve energy, by way Hair Cells along the waving Basilar Membrane that "fire" when they are bent against the rigid Tectorial Membrane of the Organ of Corti, which sends a signal along the Auditory Nerve to the brain.

Anatomy of the Ear - what does the outer ear do? What does the middle ear do? What does the inner ear do?

Converts sound energy (outer ear) to mechanical energy (middle ear) to electrical nerve energy (inner ear), then sends the signal to the brain.

False Alarms - Cry Wolf Syndrome - what is it? How do you avoid it? (6 items)

Cry Wolf Syndrome - the human operator fails to respond to an alarm due to the large number of false alarms in the past. To avoid "Cry Wolf Syndrome":
• Adjust the alarm criterion to allow fewer false alarms without appreciably increasing misses
• Use more complex algorithms to determine the true threshold
• Use more than one signal measure
• Train operators on the tradeoffs of false alarms/misses
• Understand actual false alarm rates
• Use multiple alert levels (denote different urgency states)

Frequency - what is the definition? What is it a property of?

Cycles per second (Hertz), perceived as pitch. (Properties of Sound)

What are the dangers of Occupational Noise? 4 items.

Dangers of excessive noise:
• Hearing loss - caused by exposure to loud noises; some hearing loss is expected with age (higher frequencies)
• Loss of sensitivity while noise is present
• Temporary Threshold Shift (TTS) - loss of hearing that lingers after the noise is terminated (post-rock concert); tinnitus, or ringing in the ears; 100 dB for 100 min causes a 60 dB TTS
• Permanent Threshold Shift (PTS) - occupational deafness caused by long-term exposure (especially high frequencies)

Designing alarms- environmental and task analysis can identify what? What are three things to do for effective alarms?

Designing alarms well can avoid, or at least minimize, the potential costs described above. First, as we have noted, environmental and task analysis can identify the quality and intensity of other sounds (noise or communications) that might characterize the environment in which the alarm is presented, to guarantee detectability and minimize disruption of other essential tasks.

Second, to guarantee informativeness and to minimize confusability, designers should try to make alarm sounds as different from each other as possible by capitalizing on the various dimensions along which sounds differ. These dimensions include: pitch (fundamental pitch or frequency band), envelope (e.g., rising, woop woop; constant, beep beep), rhythm (e.g., synchronous da da da versus asynchronous da da da da), and timbre (e.g., a horn versus a flute). Two alarms will be most discriminable (and least confusable) if they are constructed at points on opposite ends of all four of the above dimensions, similar to selecting colors from distant points in the color space.

Third, combine the elements of sound to create the overall alarm system. Patterson [182] recommends the procedure outlined in Figure 5.10. The top of Figure 5.10 shows the smallest components of the sound—pulses—that occur over 100 to 500 msec. These show an acoustic wave with rounded onsets and offsets. The middle row shows bursts of pulses that play out over 1 to 2 seconds, with a distinctive rhythm and pitch contour. The bottom row shows how these bursts of varying intensity combine into the overall alarm, which might play over 10 to 40 seconds.

At the top of the figure, each individual pulse in the alarm is configured with an envelope rise time that is not too abrupt (i.e., at least 20 msec) to avoid the "startle" created by more abrupt rises [183]. The set of pulses in the alarm sequence, shown in the middle of the figure, is configured with two goals in mind: (1) the pauses between each pulse can be used to create a unique rhythm that can help minimize confusion; and (2) the increase then decrease in intensity gives the perception of an approaching then receding sound, which creates a psychological sense of urgency.

Finally, the bottom row of Figure 5.10 shows how repeated presentations of the bursts can be implemented. The first two presentations may be at high intensity to guarantee their initial detection (first sequence) and identification (first or second sequence). Under the assumption that the operator has probably been alerted, the third and fourth sequences can be less intense to minimize annoyance and possible masking of other sounds (e.g., the voice communications that may be initiated by the alarming condition). An intelligent alarm system may infer, after a few sequences, that no action has been taken and then repeat the sequence at a higher intensity.
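To make the pulse-and-burst structure concrete, here is a minimal Python/NumPy sketch. Only the rounded onset of at least 20 ms and the rising-then-falling burst intensity come from the text; the pitch, pulse duration, gap lengths, and intensity levels are illustrative assumptions, not Patterson's values.

```python
import numpy as np

FS = 44_100  # sample rate in Hz (assumed; not specified in the text)

def alarm_pulse(freq_hz=800.0, dur_s=0.2, rise_s=0.02, fs=FS):
    """One alarm pulse with rounded onset/offset. A rise time of at
    least 20 ms (rise_s=0.02) avoids the startle produced by abrupt
    onsets; 800 Hz and 200 ms are illustrative choices."""
    t = np.arange(int(dur_s * fs)) / fs
    tone = np.sin(2 * np.pi * freq_hz * t)
    n = int(rise_s * fs)
    ramp = 0.5 * (1 - np.cos(np.pi * np.linspace(0, 1, n)))  # half-cosine ramp
    env = np.ones_like(t)
    env[:n] = ramp           # rounded onset
    env[-n:] = ramp[::-1]    # rounded offset
    return tone * env

def alarm_burst(levels=(0.5, 0.8, 1.0, 0.6), gap_s=0.1, fs=FS):
    """A burst of pulses whose rising-then-falling intensity mimics an
    approaching-then-receding sound (urgency); the inter-pulse gaps
    create a distinctive rhythm."""
    gap = np.zeros(int(gap_s * fs))
    parts = []
    for level in levels:
        parts += [level * alarm_pulse(fs=fs), gap]
    return np.concatenate(parts[:-1])  # drop the trailing gap

burst = alarm_burst()
print(f"burst duration: {len(burst) / FS:.2f} s")  # about 1.10 s
```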

Cons of false alarms?

False alarms, such as those discussed in terms of the human signal detector in Chapter 4, also plague warning systems because warnings do not always indicate an actual hazard. When sensing low-intensity signals from the environment (a small increase in temperature, a wisp of smoke), the system sometimes makes mistakes, inferring that nothing has happened when it has (the miss) or inferring that something has happened when it has not (the false alarm) [189]. From a human performance perspective, the obvious concern is that users may come to distrust the alarm system and perhaps ignore it even when it provides valid information [193, 194]. Worse yet, users may attempt to disable the annoying alarms. Many of these concerns are related to the issue of trust in automation.

Five steps can help mitigate the problems of false alarms. What are they?

Five steps can help mitigate the problems of false alarms. First, it is possible that the alarm criterion itself has been set to such an extremely sensitive value that readjustment to allow fewer false alarms will still not appreciably increase the miss rate. Second, more sophisticated decision algorithms within the system may be developed to improve the sensitivity of the alarm system, a step that was taken to address the problems with the aircraft ground proximity warning system. Third, users can be trained about the inevitable tradeoff between misses and false alarms and can therefore be taught to accept the false alarm rates as an inevitable consequence of automated protection in an uncertain, probabilistic world rather than as a system failure. (This acceptance will be more likely if alarms are made more noticeable by means other than sheer loudness [197, 198].) Fourth, designers should try to provide the user with the "raw data" or conditions that triggered the alarm, at least by making available the tools that can verify the alarm's accuracy. Finally, a graded or likelihood alarm system can provide more than a single level of alert, so that two (or more) levels can signal to the human the system's own confidence that the alarming conditions are present. Evidence in the fuzzy middle ground (e.g., the odor from a slightly burnt piece of toast), which previously might have triggered the full fire alarm, now triggers a signal of noticeable but reduced intensity [199]. This mid-level signal might be likened to a caution, with the more certain alert likened to a warning.

Acceleration - what happens at G-force tolerances of +/- 2 Gz, +/- 3-4 Gz, and +5.5 Gz? What are prevention and protection measures against high G-forces?

High G-force tolerances:
• +/- 2 Gz - pressure on butt, drooping face, noticeable weight increase
• +/- 3-4 Gz - difficult to move, loss of fine motor movements, speech affected
• +5.5 Gz - negative blood pressure -> GLOC (G-induced loss of consciousness) or grayout (passengers may black out sooner)
• Higher tolerances (>10 G) possible in the Gx plane (forward acceleration) - weight on chest, difficulty breathing
Prevention/Protection:
• G-suit - squeezes blood out of extremities; increases tolerance by 2 G
• Active Straining Maneuver (Blue Angels) - pull head down, slow forceful breathing, tensing of muscles; increases tolerance by 1.5 G

In addition to amplitude and frequency, two other critical dimensions of the sound stimulus are associated with temporal characteristics. What are they?

In addition to amplitude and frequency, two other critical dimensions of the sound stimulus are associated with its temporal characteristics, sometimes referred to as the envelope in which a sound occurs, and its location. The temporal characteristics are what may distinguish the wailing of a siren from the steady blast of a car horn, and the location (relative to the hearer) is, of course, what might distinguish the siren of a fire truck pulling up behind from that of a fire truck about to cross the intersection ahead [163]. The envelope of sound is particularly critical in describing the sound of speech. Figure 5.2 (top) shows the timeline of sound waves of someone saying the /d/ sound as in "day". Such signals are more coherently presented by the power spectrum, as shown in Figure 5.2 (middle). However, for speech, unlike noise or tones, many of the key properties are captured in the time-dependent changes in the power spectrum—the sound envelope. To represent this information graphically, speech is typically described in a spectrogram, shown in Figure 5.2 (bottom). One can think of each vertical slice of the spectrogram as the momentary power spectrum existing at the time labeled on the horizontal axis. Darker areas indicate more power. The spectral content of the signal changes as the time axis moves from left to right. The particular speech signal in Figure 5.2 (bottom) begins with a short burst of relatively high frequency sound and finishes with a longer, lower frequency component. Collectively this pattern characterizes the sound of a human voice saying the /d/ sound. As a comparison, Figure 5.3 shows the timeline, power spectrum, and spectrogram of an artificial sound, that of a collision warning alert.

Sound - what is the definition?

The vibration of air molecules. (Properties of Sound)

Is all noise bad?

It is important to identify certain circumstances in which softer noise may actually be helpful. For example, low levels of continuous noise (the hum of a fan) can mask the more disruptive and startling effects of discontinuous or distracting noise (the loud ticking of a clock at night or the conversation in the next room). Soft background music may accomplish the same objective. These effects also depend on the individual, with some people much more prone to being annoyed by noise [213]. Under certain circumstances, noise can perform an alerting function that maintains a higher level of vigilance [214] (see also Chapter 6). For this reason, many seek out coffee shops for their engaging level of noise. More generally, the background soundscape of a design studio, hotel lobby, restaurant, or home can have broad implications for productivity and positive feelings.

This last point brings us back to one final issue that we have touched on repeatedly: the importance of task analysis. The full impact of adjusting sound frequency and intensity levels on performance cannot be predicted without a clear understanding of who will listen to them, who must listen to them, what sounds will be present, when the sounds will be present, and how sound affects task performance. Furthermore, a task analysis can show that one person's noise may be another person's "signal" (as is often the case with conversation).

Cognitive Influence on Auditory Perception - top-down influences are associated with what?

Just as cognition influences visual perception, it also influences auditory perception. Top-down influences associated with expectations shape what we hear, particularly as we localize sounds, interpret alarms, and understand what others are saying. Such cognitive influences also affect how annoying a particular sound might be.

The Vestibular Senses- where are these located?

Located deep within the inner ear are two sets of receptors, located in the semicircular canals and in the vestibular sacs. These receptors convey information to the brain regarding the angular and linear accelerations of the body, respectively. Thus, when I turn my head with my eyes shut, I "know" that I am turning, not only because kinesthetic feedback from my neck tells me so but also because there is an angular acceleration experienced by the semicircular canals. Associated with the three axes along which the head can rotate, there are three semicircular canals, one aligned to each axis. Correspondingly, the vestibular sacs (along with the tactile sense from the "seat of the pants") inform the passenger or driver of linear acceleration or braking in a car. These organs also provide constant information about the accelerative force of gravity downward, and hence they are continuously used to maintain our sense of balance (knowing which way is up and correcting for departures). When gravity is absent, as in outer space, designers might create "artificial gravity" by rotating the spacecraft around an axis.

Loudness and Pitch - define loudness. There are two important reasons why loudness and intensity do not directly correspond; what are they?

Loudness is a psychological experience that correlates with, but is not identical to, the physical measurement of sound intensity. There are two important reasons why loudness and intensity do not directly correspond: the psychophysical scale of loudness and the modifying effect of pitch.

Equal Loudness Curves - loudness is impacted by what? Humans are sensitive to sounds between which frequencies? All tones on a contour are how loud? 1 phon equals what? If all tones are played at the same volume, which frequency will appear the loudest? Is that sound heard by everyone?

Loudness is affected by sound frequency. Humans are sensitive to sounds between 20 Hz and 20,000 Hz, but are most sensitive in the 1,000-4,000 Hz range. All tones along a contour are equally loud. 1 phon = the perceived loudness of a 1,000 Hz tone at 1 dB SPL (more generally, a tone has a loudness level of N phons if it sounds as loud as a 1,000 Hz tone at N dB). If all of these tones are played at the same intensity, the 2,000 Hz tone should appear to be the loudest. The 20 kHz tone will not be heard by some people.

5.6.1 Touch: Tactile and Haptic Senses - the combination of tactile with auditory and visual is often referred to as "multi-modal". We see the importance of these sensory channels in the following examples: 6 items.

Lying just under the skin are sensory receptors that respond to pressure on the skin and relay information to the brain regarding the subtle changes in force applied by the hands and fingers (or other parts of the body) as they interact with physical things in the environment. Along with the sensation of pressure, these senses, tightly coupled with the proprioceptive sense of finger position, also provide haptic information regarding the shape of manipulated objects [216]. The combination of tactile with auditory and visual channels is often referred to as "multi-modal". We see the importance of these sensory channels in the following examples:
1. A problem with the membrane keyboards sometimes found on calculators is that they do not offer the same "feel" (tactile feedback) when the fingers are positioned on the button as do mechanical keys (see Chapter 9).
2. Gloves, to be worn in cold weather (or in hazardous operations), should be designed to maintain some tactile feedback if manipulation is required [217].
3. Early concern about the confusion that pilots experienced between two very different controls—the landing gear and the flaps—was addressed by redesigning the control handles to feel quite distinct. The landing gear control felt like a wheel—the plane's tire—while the flap control felt like a rectangular flap. Incidentally, this design also made the controls feel and look somewhat like the systems they activate; see Chapter 9, where we discuss control design.
4. The tactile sense is well structured as an alternative channel to convey both spatial and symbolic information for the blind through the braille alphabet.
5. Designers of virtual environments, which we discuss in Chapter 10, attempt to provide artificial sensations of touch and feel via electrical stimulation to the fingers as the hand manipulates "virtual objects" [218], or use tactile stimulation to enable people to "see" well enough to catch a ball rolled across when they are blindfolded [219].
6. In situations of high visual load, tactile displays can be used to call attention to important discrete events [220]. Such tactile alerts cannot convey as much information as more conventional auditory and visual alerts, but are found to be more noticeable than either of the others, particularly in workplace environments characterized by a wide range of both relevant and irrelevant sights and sounds.

Auditory Influence on Cognition: Noise and Annoyance. Measurement of the irritating qualities of environmental noise levels follows somewhat different procedures from what?

Noise in residential or city environments, while presenting less of a health hazard than at the workplace, is still an important human factors concern, and even the health hazard is not entirely absent. Meecham [208], for example, reported that the death rate from heart attacks of elderly residents near the Los Angeles Airport was significantly higher than the rate recorded in a demographically equivalent nearby area that did not receive the excessive noise of aircraft landings and takeoffs.

Measurement of the irritating qualities of environmental noise levels follows somewhat different procedures from the measurement of workplace dangers. In particular, in addition to the key component of intensity level, there are a number of other "irritant" factors that increase annoyance. For example, high frequencies are more irritating than low frequencies. Airplane noise is more irritating than traffic noise of the same level. Nighttime noise is more irritating than daytime noise. Noise in the summer is more irritating than in the winter (when windows are likely to be closed). While these and other considerations cannot be precisely factored into an equation to predict "irritability," it is nevertheless possible to estimate their contributions in predicting the effects of environmental noise on resident complaints. One study found that the percentage of people "highly annoyed" by residential noise follows a logistic function of the mean day and night sound intensity (see Equation 5.7). For noise levels above 70 dB, it is roughly linear (see Equation 5.8) [209]. A noise level of 80 dB would lead approximately 52% of people (20 + 3.2 × 10) to be highly annoyed.

Noise concentrated at a single frequency is more noticeable and annoying than when distributed more broadly. Apple capitalized on this phenomenon when it created asymmetric fan blades for the cooling fans of its laptops, helping the computer maintain an illusion of quiet operation and avoid annoyance.
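The logistic form (Equation 5.7) is not reproduced on this card, but the worked example implies the linear approximation (Equation 5.8) sketched below for levels above 70 dB; treat the coefficients as inferred from the 80 dB example rather than quoted from the source.

```python
def pct_highly_annoyed(level_db):
    """Linear approximation for mean day-night sound levels above
    ~70 dB, inferred from the text's example: 20% at 70 dB,
    rising 3.2 percentage points per dB."""
    return 20 + 3.2 * (level_db - 70)

print(pct_highly_annoyed(80))  # 52.0, matching the text's 20 + 3.2 x 10
```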

The vestibular senses play two important roles related to what?

Not surprisingly, the vestibular senses are most important for human-system interaction when the systems either move directly (as vehicles) or simulate motion (as vehicle simulators or virtual environments). The vestibular senses play two important (and potentially negative) roles here, related to spatial disorientation and to motion sickness.

Instead, one may simply transmit the symbolic contents of the message (e.g., the letters of the words) and then allow a speech synthesizer at the other end to reproduce the necessary sounds. Then, the issue of importance becomes the level of fidelity of the voice synthesizer necessary to do what three things?

Of course, with the increasing availability of digital communications and voice synthesizers, the issue of transmitting voice quality with minimum bandwidth has lessened in importance. Instead, one may simply transmit the symbolic contents of the message (e.g., the letters of the words) and then allow a speech synthesizer at the other end to reproduce the necessary sounds. (This eliminates the uniquely human, nonverbal aspects of communications—a result that may not be desirable when talking on the telephone.) Then, the issue of importance becomes the level of fidelity of the voice synthesizer necessary to (1) produce recognizable speech, (2) produce recognizable speech that can be heard in noise, and (3) support "easy listening." The third issue is particularly important, as Pisoni [204] has found that listening to synthetic speech takes more mental resources than does listening to natural speech. Thus, listening to synthetic speech can produce greater interference with other ongoing tasks that must be accomplished concurrently with the listening task (see Chapter 6) and, in turn, will be more disrupted by the mental demands of those concurrent tasks.

The voice, unlike the printed word, is transient. Once a word is spoken, it is gone and cannot be referred back to. The human information-processing system is designed to prolong the duration of the spoken word for a few seconds through what is called echoic memory. Beyond this time, however, spoken information must be actively rehearsed, a demand that competes for resources with other tasks. Hence, when messages are more than a few words, they should be delivered visually or at least backed up with a redundant and more permanent "visual echo."

Besides the obvious solutions of "turning up the volume" (which may not work if this amplifies the noise level as well and so does not change the signal-to-noise ratio) or talking louder, there may be other more effective ways to enhance the amplitude of speech or warning sound signals relative to the background noise. First, careful consideration of the spectral content of the masking noise may allow one to use signal spectra that have less overlap with the noise content. For example, the spectral content of synthetic voice messages or alarms can be chosen to lie in frequency regions where the ambient noise levels are lower. Since lower frequency noise masks higher frequency signals more than the other way around, this relation can also be exploited by using lower frequency signals. Also, synthetic speech devices or earphones can often be used to bring the source of the signal closer to the operator's ear than if the source were at a more centralized location where it must compete more with ambient noise.

Parts of the Ear: Ossicles, Malleus (hammer), Incus (anvil), Stapes (stirrups), Oval Window- what do they do?

Ossicles - the bones of the middle ear that convert sound to mechanical energy. The Malleus (hammer) is the largest bone and receives vibration from the ear drum; it strikes the Incus (anvil), which is hinged to the smallest bone, the Stapes (stirrups), which presses on the Oval Window of the cochlea.

There are two different approaches to measuring speech communications. What are they? What is SII and how does it differ from AI?

Our example at the beginning of the chapter illustrated the worker's concern with her ability to communicate with her neighbor in the workplace. A more tragic illustration of communications breakdown contributed to the 1977 collision between two jumbo jets on the runway at Tenerife airport in the Canary Islands, in which over 500 lives were lost [173]. One of the jets, a KLM 747, was poised at the end of the runway, engines primed, and the pilot was in a hurry to take off while it was still possible before the already poor visibility got worse and the airport closed operations. Meanwhile, the other jet, a Pan American airplane that had just landed, was still on the same runway, trying to find its way in the fog. The air traffic controller instructed the pilot of the KLM: "Okay, stand by for takeoff and I will call." Unfortunately, because of a less than perfect radio channel and because of the KLM pilot's extreme desire to proceed with the takeoff, he apparently heard just the words "Okay ... take off." The takeoff proceeded until the aircraft collided with the Pan Am 747, which had not yet cleared the runway.

There are two different approaches to measuring speech communications, based on bottom-up and top-down processing respectively. The bottom-up approach derives some objective measure of speech quality. It is most appropriate in measuring the potential degrading effects of noise. Thus, the speech intelligibility index (SII), similar to the articulation index (AI), represents the signal-to-noise ratio (dB of speech sound minus dB of background noise) across a range of the frequency spectrum where useful speech information is located. Figure 5.12 shows how to calculate AI with four different frequency bands. This measure can be weighted by the different frequency bands, providing greater weight to the ratios within bands that contribute relatively more heavily to the speech signal. Remember that the higher frequencies are home to the most important consonants.

SII differs from the Articulation Index (AI) because it considers a broader range of factors affecting speech intelligibility, such as upward spreading of masking to higher frequencies, level distortion, and how the importance of noise frequencies depends on the type of speech (e.g., isolated words compared to sentences). SII also reflects the benefit of visual information associated with face-to-face communication. These additional considerations lead to a much more complicated analysis procedure that has been implemented in readily available software tools.

This calculation makes it possible to predict how the background noise will interfere with speech communication and how much amplification or noise mitigation might avoid these problems. When the SII is below 0.2, communication is severely impaired and few words are understood; above 0.5, communication is typically good, with most words heard correctly.

SII does not consider reverberation and the detrimental effects of elements of the sound signal that arrive after the direct sound, where the direct sound is the sound wave that first hits the ear. Sound that arrives after the direct sound comes from reflections from the surroundings, such as walls or the ceiling. Sounds arriving 50 ms after the direct sound interfere with intelligibility and are quantified in terms of C50, which is sometimes termed sound clarity. C50 is the ratio of the signal (sound in the first 50 ms) to the noise (sound following the initial 50 ms). A signal-to-noise ratio of 4 dB is considered very good.
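A sketch of the band-weighted AI idea in Python. The clip-to-0-30 dB convention is the classic articulation index procedure; the band levels and importance weights below are hypothetical stand-ins, not the values in Figure 5.12.

```python
def articulation_index(speech_db, noise_db, weights):
    """AI-style calculation over frequency bands: per-band SNR is
    clipped to the 0-30 dB range useful for speech, scaled to 0-1,
    and combined with band-importance weights (summing to 1)."""
    ai = 0.0
    for s, n, w in zip(speech_db, noise_db, weights):
        snr = min(max(s - n, 0.0), 30.0)  # clip SNR to 0..30 dB
        ai += w * (snr / 30.0)
    return ai

# Four hypothetical bands; higher-frequency bands get more weight
# because the critical consonants live there.
print(articulation_index(speech_db=[65, 62, 60, 55],
                         noise_db=[60, 50, 40, 45],
                         weights=[0.15, 0.25, 0.35, 0.25]))  # ~0.44
```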

Permanent Threshold Shift (PTS)- what is it and when does it occur? Increases with what? Give an example?

Permanent Threshold Shift (PTS) is the third form of noise-induced hearing loss experienced by our worker, and it has the most serious implications for worker health. This measure describes the "occupational deafness" that may set in after workers have been exposed to months or years of high-intensity noise at the workplace. PTS tends to be more pronounced at higher frequencies, usually greatest at around 4,000 Hz [172]. Workplace noise that is concentrated in a certain frequency range has a particularly strong effect on hearing within that frequency range. Note in Figure 5.2 that the consonant /d/ is located in that range, as are many other consonants. Consonants are most critical for the discrimination of speech sounds; hence a PTS can be particularly devastating for conversational understanding. Like the TTS, the PTS is greater with both louder and longer prior exposure to noise [176]. Age contributes to a large portion of hearing loss, particularly in the high-frequency regions, a factor that should be considered in the design of alarm systems, particularly in nursing homes.

Parts of the Ear: Pinna- What does it do?

Pinna - collects sound, helps localization (Holds up glasses)

Proprioception & Kinesthesis what are the defined as. What happens when they provide conflicting info?

Proprioception - Receptors in the limbs provide information of limb position in space. Kinesthesis - Receptors in the muscles provide information about limb motion. This subject's proprioception and vision are providing conflicting information about his limb position. This not only makes this stacking task difficult, but could lead to motion sickness symptoms.

Psychophysical scaling - relates to what? What is an example of it? What should designers assume in relation to psychophysical scaling? Equal increases in sound intensity do not create equal what?

Psychophysical scaling relates the physical stimulus to what people perceive. A simple form of discrimination characterizes the ability of people to notice the change or difference in simple dimensional values, for example, a small change in the height of a bar graph, the brightness of an indicator, or the intensity of a sound. In the classic study of psychophysics (the relation between psychological sensations and physical stimulation), such difference thresholds are called Just Noticeable Differences, or JNDs. Designers should assume that people cannot reliably detect differences that are less than a JND. For example, if a person is meant to detect fluctuations in a sound, those fluctuations should be scaled to be greater than a JND.

Along many sensory continua, the JND for judging intensity differences increases in proportion to the absolute amount of intensity, a simple relationship described by Weber's law (Equation 5.6): ∆I/I = K. Here ∆I is the change in intensity, I is the absolute level of intensity, and K is a constant, defined separately for different sensory continua (such as the brightness of lights, the loudness of sounds, or the length of lines). Importantly, Weber's law also describes the psychological reaction to changes in other non-sensory quantities. For example, how much a change in the cost of an item means to you (i.e., whether the cost difference is above or below a JND) depends on the cost of the item. You may stop riding the bus if the bus fare is increased by $1.00, from $0.50 to $1.50; the increase was clearly greater than a JND of cost. However, if an air fare increased by the same $1.00 amount (from $432 to $433), this would probably have little influence on your choice of whether or not to buy the plane ticket. The $1.00 increase is less than a JND compared to the $432 cost.

Equal increases in sound intensity (on the decibel scale) do not create equal increases in loudness; for example, an 80 dB sound does not sound twice as loud as a 40 dB sound. Instead, the scale that relates physical intensity to the psychological experience of loudness, expressed in units called sones, is shown in Figure 5.8. One sone is established arbitrarily as the loudness of a 40 dB tone of 1,000 Hz. A tone twice as loud will be two sones. As an approximation, we can say that loudness doubles with each 10 dB increase in sound intensity. For example, an increase of 20 dB would be associated with a sound approximately four times as loud. However, the loudness of a given intensity level is also influenced by the frequency (pitch) of the sound, and so we must now consider that influence.
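A minimal sketch of Weber's law as a design check, using the card's bus fare and air fare example; the Weber fraction K for cost is a hypothetical value chosen for illustration.

```python
def is_noticeable(delta_i, i, k):
    """Weber's law (Equation 5.6): a change is above one JND when
    delta_I / I >= K, where K is an empirical constant for the
    continuum in question."""
    return delta_i / i >= k

K_COST = 0.1  # hypothetical Weber fraction for prices
print(is_noticeable(1.00, 0.50, K_COST))   # True: the bus fare jump is salient
print(is_noticeable(1.00, 432.0, K_COST))  # False: the air fare jump is not
```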

5.1.2 Sound Intensity - similar to light reaching the eye, what three things happen to sound reaching the ear? Sound energy at the source is defined by what two quantities? How does sound travel in an open space? As a ratio, the decibel scale can be used in two ways; what are they?

Similar to light reaching the eye, sound reaching the ear begins at a source, spreads through space, and reflects off surfaces. Sound energy at the source is defined in terms of watts (W), and the square of the pressure of the sound waves (P) is proportional to this energy. In an open space, sound energy spreads across a sphere as it radiates, so the intensity of the sound energy per square meter decreases with the square of the distance. The ear transforms sound pressure variation (Figure 5.3) into the sensation of hearing, and it is this sound pressure variation that can be measured. When describing the effects on hearing, the amplitude is typically expressed as a ratio of sound pressure, P, to a reference P0. Table 5.1 summarizes these measures and their units.

As a ratio, the decibel scale can be used in two ways: as a measure of absolute intensity relative to a standard reference (P and P0) and as a ratio of two sounds (P1 and P2). As a measure of absolute intensity, P is the sound pressure being measured, and P0 is a reference value near the threshold of hearing (i.e., the faintest sound that can be heard under optimal conditions). This reference value is a pure tone of 1,000 Hz at 20 micropascals (2 × 10^-5 Newtons/m²). Decibels represent the ratio of a given sound to the threshold of hearing. Most commonly, when people use dBs to refer to sounds, they mean dB relative to this threshold. Table 5.2 shows examples of everyday sounds along the decibel scale, together with their sound pressure levels.

Because it is a ratio measure, the decibel scale can also characterize the ratio of two audible sounds; for example, the OSHA inspector at the plant may wish to determine how much louder the alarm is than the ambient background noise. Using the ratio of the two sound pressures, we might say it is 15 dB more intense. As another example, we might characterize a set of earplugs as reducing the noise level by 20 dB.

Because sound is measured in dB, a ratio on a logarithmic scale, the combined sound pressure level produced by multiple sound sources cannot be determined by simply adding the dB values. For example, the combination of a passing bus (90 dB) and a passing car (70 dB) is not 160 dB. Instead, Equation 5.1 must be used, where Ln is the sound pressure level in dB of each of N sources.
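Equation 5.1 itself is not reproduced on this card; the sketch below assumes the standard level-summation rule, L_total = 10 log10(Σ 10^(Ln/10)), which matches the bus-plus-car example in the text.

```python
import math

def combine_spl(levels_db):
    """Combine sound pressure levels from independent sources:
    L_total = 10 * log10(sum(10 ** (L_n / 10)))."""
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels_db))

# Bus (90 dB) plus car (70 dB): about 90.04 dB, not 160 dB.
print(combine_spl([90, 70]))
```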

Sound Field - because the power of the sound is spread over an area proportional to the square of the distance, what happens? Reflection of sound off walls or other surfaces is termed what?

Similar to light, sound propagates from its source and reflects off the surfaces it hits. Figure 5.4 and Equation 5.2 show how the intensity of sound changes with distance. Because the power of the sound is spread over an area that is proportional to the square of the distance, each doubling of the distance leads to a 6 dB decline in the intensity of the sound (see Equation 5.2).

Equation 5.2 relies on the assumption that sound propagates uniformly from a point source. Such a situation is called a free field; when substantial sound energy reflects off nearby surfaces, it is a reverberant field. The near field is where the sound source is so close that it does not act as a uniformly radiating source. The near field concept is important because measuring sound in this range will be imprecise. The wavelength of the relevant sound waves and the dimensions of the source define the near field: it is the larger of the length of the relevant wavelength (e.g., about 2 meters for 161.5 Hz, assuming a speed of sound at sea level of 343 m/sec) or twice the longest dimension of the sound source (e.g., approximately 0.5 meters for a lawn mower).

Reflection of sound off walls or other surfaces is termed reverberation and has three important effects. First, it provides a sense of space and affects the feel of a room. The reverberations of a small room provide a sense of intimacy. Second, reverberation can interfere with speech communication if the reflected sound is delayed by more than 50 ms, but it can improve it if the delay is less than 50 ms. Third, reverberations increase the sound pressure level beyond that predicted by Equation 5.2. For that reason, measuring the sound produced by a machine where it will actually be used is an important aspect of estimating whether it poses a risk to workers' hearing, a topic we discuss in the next section.
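A small sketch of the free-field distance relationship described above, assuming Equation 5.2 takes the standard form L2 = L1 - 20 log10(d2/d1):

```python
import math

def spl_at_distance(l1_db, d1_m, d2_m):
    """Free-field attenuation with distance: each doubling of the
    distance drops the level by 6 dB."""
    return l1_db - 20 * math.log10(d2_m / d1_m)

print(spl_at_distance(90, 1, 2))  # 84.0 dB: -6 dB per doubling
print(spl_at_distance(90, 1, 4))  # 78.0 dB after two doublings
```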

Sopite Syndrome - what is it? Why is it dangerous? What are some examples? Is there remediation? What is it a major cause of?

Sopite Syndrome - motion-induced drowsiness
• A subset of motion sickness symptoms, but sometimes the sole manifestation
• Dangerous because victims are often not aware of its onset or the likelihood of onset
• Found to affect passengers and operators of cars, trucks, ships, helicopters, planes, and simulators
• No known prevention techniques (many motion sickness medications increase drowsiness)
• May be a major cause of accidents and of military pilot training washout

5.1.1 Amplitude, Frequency, Envelope, and Location - the height of each bar represents what, and the wave is plotted as what? Why is the power spectrum important? What is the lowest perceptible frequency? What is the highest?

Sound can be represented as a sum of many sine waves, each with a different amplitude and frequency. Figure 5.1a shows the variation of sound pressure over time, and 5.1b shows three of the many sine waves that make up this complex signal. The line at the top of Figure 5.1b shows a high frequency signal and the line at the bottom shows a low frequency signal. These are typically plotted as a power spectrum, as shown in 5.1c. The horizontal position of each bar represents the frequency of the wave, expressed in cycles/second or Hertz (Hz). The height of each bar reflects the amplitude of the wave and is typically plotted as the square of the amplitude, or the power. Figure 5.1d shows the power spectrum of the full range of the many sine waves that make up a more complex sound signal. The power spectrum is important because it shows the range of frequencies in a given sound. Similar to light, people can only perceive a limited range of sound frequencies. The lowest perceptible frequency is approximately 20 Hz and the highest 20,000 Hz. Above this range is ultrasound; below this range, sound is felt more than heard. People are most sensitive to sounds in the range of 2,000-5,000 Hz.

Decibel Scale - what are the intensity in decibels and the number of times above the threshold of hearing (TOH) for each of the following: jet at take-off, threshold of pain, front row of a rock concert, Walkman at maximum volume, vacuum cleaner, busy street, normal conversation, quiet office, whisper, normal breathing, and threshold of hearing?

Sound intensity (dB) = 20 log (P1/P2), where P2 is the threshold of hearing (TOH).
• Jet at take-off (ear damage likely): 140 dB; 10^14 × TOH
• Threshold of pain: 130 dB; 10^13 × TOH
• Front row of a rock concert: 110 dB; 10^11 × TOH
• Walkman at maximum volume: 100 dB; 10^10 × TOH
• Vacuum cleaner: 80 dB; 10^8 × TOH
• Busy street: 70 dB; 10^7 × TOH
• Normal conversation: 60 dB; 10^6 × TOH
• Quiet office: 40 dB; 10^4 × TOH
• Whisper: 20 dB; 10^2 × TOH
• Normal breathing: 10 dB; 10^1 × TOH
• Threshold of hearing: 0 dB; 10^0 × TOH
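Note the two ratio conventions in play: the card's formula is in pressure terms (20 log), while the "times above TOH" column is an intensity (power) ratio (10 log). A short Python check:

```python
def pressure_ratio(db):
    """P1/P2 from the card's formula: dB = 20 * log10(P1/P2)."""
    return 10 ** (db / 20)

def intensity_ratio(db):
    """I1/I2 from dB = 10 * log10(I1/I2); this is the
    'times above TOH' column in the list above."""
    return 10 ** (db / 10)

print(intensity_ratio(140))  # 1e14, matching the jet take-off row
print(pressure_ratio(140))   # 1e7: the same sound in pressure terms
```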

Sound Sources and Noise Mitigation - sound pressure levels can be measured by what? What is the difference between the A and C scales? What do the B and Z scales measure? Who protects workers? What measure do they use for noise exposure? What does the noise have to exceed to be deemed hazardous?

Sound pressure levels can be measured by a sound intensity meter. This meter has a series of scales that weight frequency ranges differently. In particular, the A scale weights the frequencies of the sound to reflect the characteristics of human hearing, providing the greatest weighting at those frequencies where we are most sensitive. The C scale weights all frequencies nearly equally and is therefore less closely correlated with the characteristics of human hearing; it is used for very high sound pressure levels. As you might expect, the B scale weights frequencies in a manner between the A and C weightings, and the Z scale weights all frequencies equally. Only the A scale is commonly used to assess noise levels in the workplace, typically indicated as dBA.

During the last few decades in the United States, the Occupational Safety and Health Administration (OSHA) has taken steps to protect workers from the hazardous effects of prolonged noise in the workplace by establishing standards that can be used to trigger remediating action (CFR 29 1910.95; [164]). Even brief exposure to sounds over 100 dB can cause permanent hearing damage, and 85 dB sounds can lead to hearing damage with prolonged exposure. Intense sounds can damage hearing, and the longer people are exposed to intense sound, the greater the damage. Of course, many workers do not experience continuous noise of these levels but may be exposed to bursts of intense noise followed by periods of greater quiet. How would you combine these exposures to estimate the risk to a worker? OSHA provides a means of converting the varied time histories of noise exposure into a single equivalent standard—the time weighted average (TWA), which combines intensity of exposure with exposure duration.

If the TWA exceeds 85 dBA (a weighted measure of noise intensity), the action level, employers are required to implement a hearing protection plan in which ear protection devices are made available, instruction is given to workers regarding potential damage to hearing and steps that can be taken to avoid that damage, and regular hearing testing is implemented. If the TWA is above 90 dBA, the permissible exposure level, then the employer is required to take steps toward noise reduction through procedures that we will discuss. Beyond the workplace, the popularity of portable music players has introduced a new source of sound that can damage hearing: in one sample of college students, 58% were found to expose themselves to a TWA greater than 85 dBA [165].
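A sketch of the dose arithmetic behind the TWA. OSHA's 29 CFR 1910.95 uses a 90 dBA criterion with a 5 dB exchange rate; the shift profile below is hypothetical, and the actual standard should be consulted for any compliance work.

```python
def permissible_hours(level_dba):
    """Permissible duration at a given level with a 90 dBA criterion
    and 5 dB exchange rate: T = 8 / 2**((L - 90) / 5)."""
    return 8 / 2 ** ((level_dba - 90) / 5)

def noise_dose(exposures):
    """Daily noise dose (%) from (level_dBA, hours) pairs:
    D = 100 * sum(C_i / T_i). A dose of 100% corresponds to a
    TWA of 90 dBA, the permissible exposure level."""
    return 100 * sum(hours / permissible_hours(level)
                     for level, hours in exposures)

# Hypothetical shift: 4 h at 85 dBA, 2 h at 95 dBA, rest quiet.
print(noise_dose([(85, 4), (95, 2)]))  # 75.0 -> under a 100% dose
```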

Amplitude - what is the definition? What is it a property of?

Sound pressure, perceived as loudness. (Properties of Sound)

The Source: Equipment and tool selection- what methods can be used to reduce noise?

The Source: Equipment and tool selection can often reduce the noise to an appropriate level (for case studies see [166]). Ventilation systems, fans, and handtools vary in the sounds they produce, so noise can be reduced simply by buying quiet equipment. The noise of vibrating metal, the source of loud sounds in many industrial settings, can be attenuated by using damping material such as rubber. One should also consider that the irritation of noise is considerably greater in the high-frequency region (a shrill, piercing whine) than in the mid- or low-frequency region (a low rumble). Hence, to some extent the choice of tool can reduce the irritating quality of its noise.

Motion Disturbances - what is spatial disorientation? What is vection? What is the somatogravic illusion? What is motion sickness? What are treatments for motion disturbances? (2 items)

Spatial Disorientation - vestibular illusion that tricks the brain into thinking the body is in a different position than it actually is.
Vection - the illusion of self-motion induced by visual cues.
Somatogravic Illusion - acceleration creates the illusion that the plane is nose-up; deceleration feels like the plane is nose-down.
Motion Sickness - nausea, disorientation, and fatigue attributed to disturbance of the vestibular system, caused when vision and the inner ear send conflicting (decoupled) signals.
Treatments:
• Medications - antihistamines (Dramamine), dopamine blockers or anti-psychotics (Thorazine), anti-nausea (serotonin) drugs, and scopolamine (anticholinergic)
• Behavioral strategies - sit facing front with a front-window view; eat bland foods such as bread, bananas, and rice. If on a boat, stay in the middle (less rocking) and look forward at the horizon, not at the waves.

Spatial disorientation - define it. What is an example?

Spatial disorientation, an illusion of motion, occurs because certain vehicles, particularly aircraft, place occupants in situations of sustained acceleration and non-vertical orientation for which the human body is not naturally adapted. Hence, for example, when a pilot is flying in clouds without sight of the ground or horizon, the vestibular senses may sometimes be "tricked" into thinking that up is in a different direction from where it really is. This presents a real danger that has contributed to the loss of control of aircraft.

Specific design criteria for alarms include what? 6 items.

Specific design criteria for alarms include those shown in Figure 5.10. Generally, alarm design should avoid two opposing problems: failed detection, as experienced by our factory worker at the beginning of the chapter, and annoyance, as experienced by the pilot.

1. Most critically, the alarm must be heard above the ambient background noise. This means that the noise spectrum must be carefully measured at the location where everyone must respond to the alarm. Then, the alarm should be set at least 15 dB above the noise level, and to guarantee detection, 30 dB above the noise level. It is also wise to include components of the alarm at several different frequencies, well distributed across the spectrum, in case the particular malfunction that triggered the alarm creates its own noise (e.g., the whine of a malfunctioning engine) that exceeds the ambient noise level. Figure 5.11 shows this specification for a flight deck warning.

2. The alarm should not exceed the danger level for hearing, whenever this condition can be avoided. (Obviously, if the ambient noise level is close to the danger level, one has no choice but to make the alarm louder by criterion 1, which is most important.) This danger level is around 85 to 90 dB. Careful selection of the alarm's frequencies can often be used to meet both of the above criteria. For example, if the ambient noise level is very intense (90 dB) but only in the high frequency range, it would be counterproductive to impose a 120 dB alarm in that same frequency range when several less intense components in a lower frequency range could be heard. (A sketch of this level-selection logic follows after this list.)

3. Ideally, the alarm should not startle. As noted in Figure 5.10, this can be addressed by tuning the rise time of the alarm pulse so that it is at least 20 ms.

4. In contrast to the experience of the British pilot, the alarm should not interfere with other signals (e.g., other simultaneous alarms) or any background speech communications that may be essential to deal with the alarm. This criterion implies that a careful task analysis should be performed of the conditions under which the alarm might sound and of the necessary communications tasks to be undertaken as a consequence of that alarm.

5. The alarm should be informative, signaling to the listener the nature of the emergency and, ideally, some indication of the appropriate action to take. The criticality of this informativeness criterion can be seen in one alarm system found in an intensive care unit of a hospital (an environment often in need of alarm remediation [182, 184]). The unit contained six patients, each monitored by a device with 10 different possible alarms: 60 potential signals that the staff may have had to rapidly identify. Some aircraft have been known to contain at least 16 different auditory alerts, each of which, when heard, is supposed to trigger identification of the alarming condition in the pilot's mind. Such alarms are often found to be wanting in this regard.

6. In addition to being informative, the alarm must not be confusable with other alarms that may be heard in the same context. As you will recall from our discussion of vision in Chapter 4, this means that the alarm should not impose on the human's restrictive limits of absolute judgment. Just four different alarms may be the maximum allowable to meet this criterion if the alarms differ from each other on only a single physical dimension, such as pitch.
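A toy sketch combining criteria 1 and 2 above (the 15 dB minimum, the 30 dB detection guarantee, and the 85 dB hearing danger ceiling); it shows how criterion 1 dominates when the background is loud enough to force the alarm past the ceiling.

```python
def alarm_level(noise_db, hearing_danger_db=85):
    """Pick an alarm intensity: at least 15 dB above background
    (30 dB to guarantee detection), capped at the hearing danger
    level except where criterion 1 forces it higher."""
    minimum = noise_db + 15     # criterion 1: minimum audibility
    guaranteed = noise_db + 30  # criterion 1: guaranteed detection
    level = min(guaranteed, hearing_danger_db)  # criterion 2 cap
    return max(level, minimum)  # criterion 1 overrides the cap

print(alarm_level(50))  # 80 dB: 30 dB above noise, below danger level
print(alarm_level(75))  # 90 dB: criterion 1 forces past the 85 dB cap
```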

Speech Perception: Speech Measures - define the Articulation Index and Speech Intelligibility. What is the McGurk Effect?

Speech communication measures:
• Articulation Index (bottom-up) - signal-to-noise ratio (speech dB - background noise dB); higher frequencies are more vulnerable to being masked by noise
• Speech Intelligibility (top-down) - percentage of items correctly heard
McGurk Effect - demonstrates top-down processing of speech and the importance of redundant visual information for perception
https://www.youtube.com/watch?v=G-lN8vWm3m0

Speech distortion- While the AI can objectively characterize the damaging effect of noise on bottom-up processing of speech, it cannot do what? While the bottom-up influences of these effects cannot be as accurately quantified as the effects of noise, there are what?

Speech distortion. While the AI can objectively characterize the damaging effect of noise on bottom-up processing of speech, it cannot do the same thing with regard to distortions. Distortions may result from a variety of causes, for example, clipping of the beginning and ends of words, reduced bandwidth of high-demand communications channels, echoes and reverberations, and even the low quality of some digitized synthetic speech signals [204]. While the bottom-up influences of these effects cannot be as accurately quantified as the effects of noise, there are nevertheless important human factors guidelines that can be employed to minimize their negative impact on voice recognition. One issue that has received particular attention from acoustic engineers is how to minimize the distortions resulting when the high-information speech signal must be somehow "filtered" to be conveyed over a channel of lower bandwidth (e.g., through digitized speech). For example, a raw speech waveform may contain over 59,000 bits of information per second [205]. Transmitting the raw waveform over a single communications channel might overly restrict that channel, which perhaps must also be shared with several other signals at the same time. There are, however, a variety of ways to reduce the information content of a speech signal. One may filter out the high frequencies, digitize the signal to discrete levels, clip out bits of the signal, or reduce the range of amplitudes by clipping out the middle range. Human factors studies have been able to inform the engineer which way works best by preserving the maximum amount of speech intelligibility for a given resolution in information content. For example, amplitude reduction seems to preserve more speech quality and intelligibility than does frequency filtering, and frequency filtering is much better if only very low and high frequencies are eliminated [205].

Sense of Touch: Tactile and Haptic - What is tactile (three receptor types)? What is haptic?

Tactile - cutaneous or somatosensory sense provided by receptors just under the skin. Types of receptors:
• Thermoreceptors - detect heat/cold
• Mechanoreceptors - detect pressure
• Nociceptors - detect noxious stimuli (e.g., caustic substances)
Haptic - shape information provided through manipulation of the fingers. One such device provides haptic information to aid in performing a tracking task: the user feels the button pop out and must move the stick in the same direction to maintain course.

Tactile Situation Awareness System

Tactile (vibrotactile) stimulation used to prevent spatial disorientation in pilots.

Temporary Threshold Shift (TTS) - what is it and when does it occur? It increases with what? Give an example.

Temporary Threshold Shift (TTS) is the second form of noise-induced hearing loss [172], which occurs after exposure to intense sounds. If our worker steps away from the machine to a quieter place to answer the telephone, she may still have some difficulty hearing because of the "carryover" effect of the previous noise exposure. This temporary threshold shift is large immediately after the noise is terminated but declines over the following minutes as hearing is "recovered" (Figure 5.7). The TTS is typically expressed as the amount of hearing loss (shift in threshold, in dB) present two minutes after the noise source has terminated, and it increases with longer and more intense noise exposure. The TTS can be quite large: for example, the TTS after exposure to 100-dB noise for 100 minutes is 60 dB, meaning you might not be able to hear a normal conversation after a loud concert.

The Environment: Sound path - the path from the sound source to the human can also be altered in several ways. How can it be altered?

The Environment: Sound path or path from the sound source to the human can also be altered in several ways. Changing the environment near the source, for example, is illustrated in Figure 5.5, which shows the attenuation in noise achieved by surrounding a piece of equipment with a plexiglass shield. Sound absorbing walls, ceilings, and floors can also be very effective in reducing the noise coming from reverberations. Finally, there are many circumstances when repositioning workers relative to the source of noise can be effective. The effectiveness of such relocation is considerably enhanced when the noise emanates from only a single source. This is more likely to be the case if the source is present in a more sound-absorbent environment (less reverberating).

The Listener: Ear protection - two generic types. What should be considered when using ear protection in the workplace? How do we compensate when we can't hear?

The Listener: Ear protection is a possible solution if noise cannot be reduced to acceptable levels at the source or path. Ear protection devices, which must be made available when noise levels exceed the action level, are of two generic types: earplugs, which fit inside the ear, and earmuffs, which fit over the top of the ear. As commercially available products, each is provided with a certified noise reduction rating (NRR), expressed in decibels, and each may also have very different spectral characteristics (i.e., different decibel reduction across the spectrum). For both kinds of devices, the manufacturer's specified NRR is typically greater (more optimistic) than the actual noise reduction experienced by users in the workplace [167]. This is because the manufacturer's NRR value is typically computed under ideal laboratory conditions, whereas users in the workplace may not always wear the device properly. Comfort must also be considered in assessing hearing protection in the workplace: devices that are annoying and uncomfortable may go unused in spite of their safety effectiveness (see Chapter 14). Interestingly, concerns such as that voiced by the worker at the beginning of this chapter, that hearing protection will not allow her to hear conversations, are not always well grounded. The ability to hear conversation is based on the signal-to-noise ratio. Depending on the precise spectral characteristics and amplitude of the noise and the signal and on the noise-reduction function, wearing such devices may actually enhance rather than reduce the signal-to-noise ratio, even as both signal and noise intensity are reduced. The benefit of earplugs in increasing the signal-to-noise ratio is greatest with louder noises, above about 80 to 85 dB. Finally, it is important to note that the adaptive characteristics of the human speaker may themselves produce some unexpected consequences for speech comprehension. We automatically adjust our voice level, in part, on the basis of the intensity of sound that we hear, talking louder when we are in a noisy environment [170, 171] or when we are listening to loud stereo music through headphones. Hence, it is not surprising that speakers in a noisy environment talk about 2 to 4 dB softer (and also somewhat faster) when they are wearing ear protectors than when they are not. This means that hearing conversations can be more difficult in environments in which all participants wear protective devices, unless speakers are trained to avoid this automatic reduction in the loudness of their voice.
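The gap between labeled and field NRR is often handled with a derating rule. The sketch below uses the commonly cited OSHA convention (subtract 7 dB, then halve); that convention is an added assumption, not part of this passage:

# Sketch: estimate protected exposure from a labeled NRR, using the commonly
# cited OSHA field-derating convention. The convention itself is an assumption
# added here for illustration, not something stated in the card above.
def protected_twa(twa_dba, nrr_db):
    field_attenuation = (nrr_db - 7) / 2      # lab NRR is optimistic in the field
    return twa_dba - field_attenuation

print(protected_twa(95, 29))   # -> 84.0 dBA, just under the 85-dBA action level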

Detection and Localization- The auditory system is not as well suited for what? Unlike visual stimuli that require people to direct their eyes to the source of information, the auditory system is what?

The auditory system is not as well suited as vision for precise spatial localization but nevertheless has some very useful capabilities in this regard, given the differences in the acoustic patterns of a single sound processed by the two ears [177, 178]. The ability to identify the location of sounds is better in azimuth (e.g., left-right) than in elevation, and front-back confusions are also prominent. Overall, sound localization is less precise than visual localization. Unlike visual stimuli, which require people to direct their eyes to the source of information, the auditory system is omnidirectional; we can sense auditory signals no matter how we are oriented. Furthermore, it is much more difficult to "close our ears" than it is to close our eyes [180]. For these and other reasons, auditory warnings induce a greater level of compliance than do visual warnings.
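One of the binaural cues behind azimuth localization, the interaural time difference, can be approximated with Woodworth's classic spherical-head formula; the head radius assumed here is a typical textbook value:

# Sketch: interaural time difference (ITD), one cue for left-right (azimuth)
# localization, using Woodworth's spherical-head approximation.
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

print(f"{itd_seconds(90) * 1e6:.0f} microseconds")  # ~660 us for a source at the side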

Alarms - how should effective alarms be designed?

The design of effective alarms, like the one that was nearly missed by the worker in our opening story, depends very much on matching the modality of the alarm (e.g., visual or auditory) to the requirements of the task. If a task analysis indicates that an alarm signal must be sensed, like a fire alarm, it should be given an auditory form, although redundancy in the visual or tactile channel may be worthwhile when there is a high level of background noise or for people who do not hear well. While the choice of modality seems straightforward, the issue of how auditory alarms should be designed is more complicated. Consider the following quotation from a British pilot, taken from an incident report, which illustrates many of the problems with auditory alarms: "I was flying in a Jetstream at night when my peaceful revelry was shattered by the stall audio warning, the stick shaker, and several warning lights. The effect was exactly what was not intended; I was frightened numb for several seconds and drawn off instruments trying to work out how to cancel the audio/visual assault, rather than taking what should be instinctive actions. The combined assault is so loud and bright that it is impossible to talk to the other crew member, and action is invariably taken to cancel the cacophony before getting on with the actual problem."

Anatomy of the Ear- three primary components

The ear has three primary components responsible for differences in our hearing experience. As shown in Figure 5.6, the pinna both collects sound and, because of its asymmetrical shape, provides some information regarding where the sound is coming from (i.e., behind or in front). Mechanisms of the outer and middle ear (the eardrum or tympanic membrane, and the hammer, anvil, and stirrup bones) conduct and amplify the sound waves into the inner ear and are potential sources of breakdown or deafness (e.g., from a rupture of the eardrum or buildup of wax). The muscles of the middle ear respond to loud noises and reflexively contract to attenuate the amplitude of intense sound waves before they are conveyed to the inner ear. This aural reflex thus offers some protection to the inner ear. The inner ear, consisting of the cochlea, within which lies the basilar membrane, is where the physical movement of sound energy is transduced to electrical nerve energy that is then passed through the auditory nerve to the brain. This transduction is accomplished by displacement of tiny hair cells along the basilar membrane as the membrane moves differently to sounds of different frequency. Intense sound exposure can lead to selective hearing loss at particular frequencies as a result of damage to the hair cells at particular locations along the basilar membrane. Finally, the neural signals are compared between the two ears to determine the delay and amplitude differences between them. These differences provide another cue for sound localization, because these features are identical only if a sound is presented directly along the midplane of the listener.

The steps that should be taken to remediate the effects of noise might be very different, depending on the particular nature of the noise-related problem and the level of noise that exists before remediation. If noise problems relate to communication difficulties when the noise level is below 85 dBA, what should you do? If noise is above the action levels (a characteristic of many industrial workplaces), what should you do? If noise is a source of irritation and stress in the environment (e.g., residential noise from an airport or nearby freeway), what should you do? Which remediation target is the most preferred and which is the least?

The noise level at a facility cannot always be expressed by a single value but may vary from worker to worker, depending on his or her location relative to the source of noise. For this reason, TWAs might be best estimated using noise dosemeters, which are worn by individual workers and collect the data necessary to compute the TWA over the course of the day. The steps that should be taken to remediate the effects of noise might be very different, depending on the particular nature of the noise-related problem and the level of noise that exists before remediation. On the one hand, if noise problems relate to communication difficulties when the noise level is below 85 dBA (e.g., an idling truck), then signal enhancement procedures may be appropriate, such as increasing the volume of alarms. On the other hand, if noise is above the action levels (a characteristic of many industrial workplaces), then noise reduction procedures must be adopted because enhancing the signal intensity (e.g., louder alarms) will do little to alleviate the possible health and safety problems. Finally, if noise is a source of irritation and stress in the environment (e.g., residential noise from an airport or nearby freeway), then many of the sorts of solutions that might be appropriate in the workplace, like wearing earplugs, are obviously not applicable. We may choose to reduce noise in the workplace by focusing on the source, the path or environment, or the listener. The first is the most preferred method; the last is the least.
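A sketch of the TWA computation a dosemeter performs, built from dosemeter-style samples using the OSHA formulas (5-dB exchange rate, 90-dB criterion) from 29 CFR 1910.95; the workday samples are hypothetical:

# Sketch: compute an OSHA-style noise dose and TWA from sampled exposures.
import math

def allowed_hours(level_dba):
    """Permissible duration at a given level (8 hr at 90 dBA, halving per 5 dB)."""
    return 8 / 2 ** ((level_dba - 90) / 5)

def twa(samples):                       # samples: list of (level dBA, hours)
    dose = 100 * sum(hours / allowed_hours(level) for level, hours in samples)
    return 16.61 * math.log10(dose / 100) + 90

day = [(90, 2), (85, 4), (75, 2)]       # hypothetical workday
print(f"TWA = {twa(day):.1f} dBA")      # ~85.4: above the 85-dBA action level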

Auditory Environment - The stimulus for hearing is what? The amplitude of sound waves contributes to what?

The stimulus for hearing is sound, a compression and rarefaction of the air molecules, which is a wave with amplitude and frequency. This is similar to the fundamental characteristics of light discussed in Chapter 4, but with sound, the waves are acoustic rather than electromagnetic. The amplitude of sound waves contributes to the perception of the loudness of the sound and its potential to damage hearing, and the frequency contributes to the perception of its pitch. Before we discuss the subjective experience of loudness and pitch, we need to understand the physics of sound and its potential to damage hearing.
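Both physical amplitude and perceived loudness have standard conversions, sketched below: sound pressure level in dB (referenced to 20 micropascals) and loudness in sones (1 sone = a 40-dB tone at 1,000 Hz, doubling every 10 dB):

# Sketch: the two standard conversions behind "amplitude -> loudness".
import math

P_REF = 20e-6                                  # reference pressure in pascals

def spl_db(pressure_pa):
    return 20 * math.log10(pressure_pa / P_REF)

def sones(level_db):
    return 2 ** ((level_db - 40) / 10)         # valid near 1,000 Hz

print(spl_db(0.02))     # 0.02 Pa -> 60 dB SPL
print(sones(60))        # 60 dB -> 4 sones (twice as loud as 50 dB)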

Motion sickness - what plays a key role in motion sickness?

The vestibular senses also play a key role in motion sickness. Normally, our visual and vestibular senses convey compatible and redundant information to the brain regarding how we are oriented and how we are moving. However, there are certain circumstances in which these two channels become decoupled so that one sense tells the brain one thing and the other tells it something else. These are conditions that invite motion sickness [221, 222, 223]. One example of this decoupling results when the vestibular cues signal motion and the visual world does not. When riding in a vehicle with no view of the outside world (e.g., a toddler sitting low in the backseat of the car, a ship passenger below decks with the portholes closed, or an aircraft passenger flying in the clouds), the visual view forward, which is typically "framed" by a manufactured rectangular structure, provides no visual evidence of movement (or evidence of where the "true" horizon is). In contrast, the continuous rocking, rolling, or swaying of the vehicle provides very direct stimulation of movement to the vestibular senses of all three of these passengers. When the two senses are in conflict, motion sickness often results (a phenomenon that was embarrassingly experienced by the second author while in the Navy at his first turn to "general quarters" with the portholes closed below decks). Automated vehicles may produce a similar effect when people turn their attention inside the vehicle rather than staying focused on the road, so designers should consider tuning vehicle dynamics and passenger feedback systems to mitigate this risk.

Masking, Temporary Threshold Shift, and Permanent Threshold Shift - whose speaking voice has the higher base frequency, male or female?

The worker in our story was concerned about the effect of noise on her ability to hear at her workplace. When we examine the effects of noise, we consider three components of the potential hearing loss: masking, a loss of sensitivity to a signal while the noise is present; temporary threshold shift, a transient loss of sensitivity due to exposure to loud sounds; and permanent threshold shift, a permanent loss of hearing due to aging or repeated exposure to loud sounds. Masking of one sound by other sounds depends on both the intensity (power) and frequency of that signal [172]. These two variables are influenced by the speaker's gender and by the nature of the sound. First, since the female voice typically has a higher base frequency than the male, it is not surprising that the female voice is more vulnerable to masking by noise. Likewise, consonant sounds, like s and ch, have distinguishing features at very high frequencies, and high frequencies are more vulnerable to masking by low frequencies than the converse. Hence, it is not surprising that consonants are much more susceptible to masking and other disruptions than are vowels. This characteristic is particularly disconcerting because consonants typically transmit more information in speech than do vowels. One need only think of the likely possibility of confusing "fly to" with "fly through" in an aviation setting to realize the danger of such consonant confusion [173]. Miller and Nicely [174] provide a good analysis of the confusability between different consonant sounds.

There are also signal-enhancement techniques that emphasize more the redundancy associated with top-down processing. What are some examples?

There are also signal-enhancement techniques that emphasize more the redundancy associated with top-down processing. As one example, voice communication is far more effective in a face-to-face mode than it is when the listener cannot see the speaker [206]. This is because of the contributions made by many of the redundant cues provided by the lips [207], cues of which we are normally unaware unless they are gone or distorted. (To illustrate the important and automatic way we typically integrate sound and lip-reading, recall, if you can, the difficulty you may have in understanding the speech of poorly dubbed foreign films when speech and lip movement are not synchronized in a natural way.) Another form of redundancy is involved in the use of the phonetic alphabet ("alpha, bravo, charlie, ..."). In this case, more than a single sound is used to convey the content of each letter, so if one sound is destroyed (e.g., the consonant b), other sounds can unambiguously "fill in the gap" (ravo). In the context of communications measurement, improved top-down processing can also be achieved through the choice of vocabulary. Restricted vocabulary, common words, and standardization of communications procedures, such as that adopted in air traffic control (and further emphasized following the Tenerife disaster), will greatly restrict the number of possible utterances that could be heard at any given moment and hence will better allow perception to "make an educated guess" as to the meaning of a sound if the noise level is high.
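The phonetic-alphabet redundancy can be made concrete with a toy encoder; the four-letter dictionary is truncated for brevity:

# Sketch: the phonetic alphabet as built-in redundancy. Even if the first
# consonant of a code word is masked, the remainder still identifies the letter.
NATO = {"a": "alpha", "b": "bravo", "c": "charlie", "d": "delta"}  # truncated

def spell(word):
    return " ".join(NATO[ch] for ch in word.lower() if ch in NATO)

print(spell("cab"))          # "charlie alpha bravo"
# Losing a masked consonant still leaves a unique match: "ravo" -> bravo.
print([w for w in NATO.values() if w.endswith("ravo")])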

Vestibular System- What is it? What are Semicircular Canals and Vestibular Sacs? What are Otolith Organs?

Vestibular System - detects acceleration forces, maintains upright posture/balance, and controls eye position relative to the head.
Semicircular Canals - detect angular acceleration (rotation) in three axes. When the head rotates, fluid passes over the cupula (a gelatinous dome) and deflects the crista ampullaris (a cone-shaped collection of hair cells), bending the hair cells and triggering a signal to the brain.
• Anterior canal (roll), Posterior canal (pitch), Lateral canal (yaw)
Vestibular Sacs (Utricle & Saccule) - form the otolith organs and detect linear acceleration. Hair cells embedded in a jelly-like substance lag behind when the head moves; when motion becomes steady, the otoliths ("stones," small particles floating in the gel) catch up and the hairs are no longer bent.
• Utricle - detects left/right and forward/back acceleration; responsible for signaling eye movements
• Saccule - detects up/down acceleration; responsible for signaling muscles involved in posture stabilization

Voice alarms and meaningful sounds- benefits of voice alarms- limitations of voice alarms

Voice alarms and meaningful sounds, such as alarms composed of synthetic voice, provide one answer to the problems of discriminability and confusion. Unlike "symbolic" sounds, the hearer does not need to depend on an arbitrary learned connection to associate sound with meaning. The loud sounds "Engine fire!" or "Stall!" in the cockpit mean exactly what they seem to mean. Voice alarms are employed in several circumstances (the two aircraft warnings are an example). But voice alarms themselves have limitations that must be considered. First, they are likely to be more confusable with (and less discriminable from) a background of other voice communications, whether this is the ambient speech background at the time the alarm sounds, the task-related communications of dealing with the emergency, or concurrent voice alarms. Second, unless care is taken, they may be more susceptible to frequency-specific masking noise. Third, care must be taken if the meaning of such alarms is to be interpreted by listeners in a multilingual environment who are less familiar with the language of the voice. The preceding concerns with voice alarms suggest the advisability of using a redundant system that combines the alerting, distinctive features of the (nonspeech) alarm sound with the more informative features of synthetic voice [185]. Combining stimuli from multiple modalities often promotes more reliable performance, although not necessarily a faster response. Such redundancy gain is a fundamental principle of human performance that can be usefully employed in alarm system design. Another possible design that can address some of the problems associated with comprehension and masking is to synthesize alarm sounds that sound like the condition they represent, called auditory icons or earcons [186, 187]. Belz, Robinson, and Casali [188], for example, found that presenting hazard alarms to automobile drivers in the form of earcons (e.g., the sound of squealing tires representing a potential forward collision) significantly shortened driver response time relative to conventional auditory tones. In particular, to the extent that such signals sound like their action meanings, like crumpling paper signaling delete or squealing tires signaling braking, auditory icons can be quite effective in signaling actions.

Proprioception and Kinesthesis - Define them and provide examples.

We briefly introduced the proprioceptive channel in the previous section in the context of the brain's knowledge of finger position. In fact, a rich set of receptor systems, located within all of the muscles and joints of the body, convey to the brain an accurate representation of muscle contraction, joint angles, and limb position in space. The proprioceptive channel is tightly coupled with the kinesthetic channel, receptors within the joints and muscles, which convey a sense of the motion of the limbs as exercised by the muscles. Collectively, the two senses of kinesthesis and proprioception provide rich feedback that is critical for our everyday interactions with things in the environment. One particular area of relevance for these senses is in the design of manipulator controls, such as the joystick or mouse with a computer system, the steering wheel on a car, the clutch on a machine tool, and the control on an aircraft (see Chapter 9). As a particular example, an isometric control is one that does not move but responds only to pressure applied upon it. Hence, the isometric control cannot benefit from any proprioceptive feedback regarding how far a control has been displaced, since the control does not move at all. Early efforts to introduce isometric side-stick controllers in aircraft were, in fact, resisted by pilots because of this elimination of the "feel" associated with control movement.

At any given SII level, why will this percentage vary? The two letter strings abcdefghij and wcignspexl might both be heard at intensities with the same SII, but more letters of the first string would be correctly understood. Why?

While the merits of the bottom-up approach are clear, its limits in predicting the understandability of speech should become apparent when one considers the contributions of top-down processing to speech perception. For example, two letter strings, abcdefghij and wcignspexl, might both be heard at intensities with the same SII, but it is clear that more letters of the first string would be correctly understood [203]. Why? Because the listener's knowledge of the predictable sequence of letters in the alphabet allows perception to "fill in the gaps" and essentially guess the contents of a letter whose sensory clarity may be missing. This, of course, is the role of top-down processing. A measure that takes top-down processing into account is the speech intelligibility level (SIL). This index directly measures the percentage of items correctly heard. At any given SII level, this percentage will vary as a function of the listener's expectation of and knowledge about the message communicated. Whether you can hear a message depends on the complementary relationship between bottom-up processing (as measured by SII) and top-down processing (as influenced by expectations). Sentences that are known to listeners can be recognized with just as much accuracy as random isolated words, even though the latter are presented with nearly twice the bottom-up sensory quality. Combining the information describing the top-down influences on hearing with the bottom-up influences described by AI or SII makes it possible to anticipate when speech communication will likely fail. Thus, for example, automated readings of a phone number should slow down, and perhaps increase loudness slightly, for the critical and often random final four digits.
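A toy sketch (not a real speech model) of the letter-string example: knowing the alphabet lets a listener recover a masked character in the predictable string, but not in the random one:

# Toy sketch of top-down "filling in": a '?' marks a masked character.
import string

ALPHABET = string.ascii_lowercase

def fill_gap(heard):
    """Return completions of the heard string consistent with alphabet knowledge."""
    return [heard.replace("?", c) for c in ALPHABET
            if heard.replace("?", c) in ALPHABET]   # substring of a-z sequence?

print(fill_gap("abcdef?hij"))    # ['abcdefghij'] - context pins down the gap
print(fill_gap("wcignsp?xl"))    # [] - no context, the gap stays ambiguous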

Vision Substitution System - What was the experiment?

White, Saunders, Scadden, Bach-y-Rita, and Collins' (1970) vision substitution system converts a camera image to a pattern of vibration on the user's back. Subjects were able to discriminate a wide variety of different stimulus patterns and perceive relative distance.

Timbre - what is the definition? What is it a property of?

The quality of a sound; timbre is one of the properties of sound.

Noise Remediation- 8 items.

• Signal Enhancement - increase the signal-to-noise ratio (make the signal louder relative to the background)
• Noise Exposure Regulations - OSHA standards based on the Time-Weighted Average, calculated with a dosemeter (see the sketch after this list)
  - if TWA > 85 dB (action level), the employer must provide hearing protection
  - if TWA > 90 dB (permissible exposure level), the employer must take noise reduction measures
• The Source - select equipment and tools that have built-in sound dampening
• The Environment - use sound-attenuating or sound-absorbing materials to reduce transmission and reverberation
• White Noise - humming noise used to mask distracting sounds
• The Listener - ear protection such as earplugs (internal) or earmuffs (external)
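A tiny sketch mapping a measured TWA onto the OSHA obligations listed above; the thresholds follow the wording of this card:

# Sketch: required employer action given a TWA, per the thresholds above.
def required_action(twa_dba):
    if twa_dba > 90:       # permissible exposure level
        return "noise reduction measures required"
    if twa_dba > 85:       # action level
        return "hearing protection must be provided"
    return "no action required"

print(required_action(87))   # hearing protection must be provided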

