Chapter 1: Modes of Listening

first group

mode of listening that involves attachment to the experiential, emotional, body-connected, and subjective side of listening and includes: active, ambient, auditory scene, entrained, headphone, live, and narrative listening

second group

mode of listening that is more detached from emotion and body and concentrates listening around analytical, technical, and objective strategies. Causal, Reduced, Semantic, Recall, Aspectual, and Structural listening all fall into this second category.

reduced listening

analytical mode of listening originally proposed by composer and electronic music pioneer Pierre Schaeffer. focus is on the anatomy of particular sounds and sound shapes, the morphology of sound (the sound is born, lives, and dies in a particular way)

structural listening

analytical mode of listening that attempts to capture information related to how the various sources and events in a soundscape or piece of music are organized on both the micro and macro temporal levels. common to anyone who has done advanced study in music theory, music notation, and the analysis of music. a trained musician or experienced listener will be able to follow the harmonic and rhythmic structure of the notes, notice the formal elements, the large-scale repetitions, and the sectional divisions in the composition, and understand how they help shape the work. relies on a schema. (But trying to apply a classical music schema to another form of music like jazz can skew the impression of jazz music.)

sound source

A SOUND SOURCE is the physical object that accounts for a single stream in an auditory scene (e.g. a violin in a string quartet, a single loudspeaker in a stereo pair, the single human voice in a choir, a frog in a pond, etc.). To answer the question "what is the sound source?" identify the physics of the object that causes the vibration. Objects produce signature sounds and can be classed into a small number of categories: wood, metal, stretched skin or fabric, string, electronic, etc.

voice

A VOICE (or LAYER, or STREAM) is defined as a single thread of sound that might be made of just one sound source or multiple sound sources working in tandem. All music features a number of identifiable voices. A source is a sound that has a single cause, as in someone hitting a drum, or playing a violin, or starting a car. In music, sources are combined in ways to form a voice. For example, if a full choir of human singers were all singing the same melody, we would say there was only one "voice". Typically, composers writing for western style choirs write for four "voices" categorized by register and human voice type: the sopranos, the altos, the tenors, and the basses. It makes sense that the term voice is related to the human body. It turns out that we can use the human body to account for many things musical. For example, in the sound example of the solo saxophonist, the player uses her mouth to force a reed to buzz and cause the air column in the saxophone tube to produce specific tones. This technique is very similar to the way our actual vocal cords work to produce tone. Can you agree that the saxophone sound is an extension of the human voice process? The saxophone is external to the body but remains close because the energy source is the human breath. The connection is undeniable and we infer the human connection even if we have no actual knowledge of the source.

section

A portion of a larger piece of music that might last from approximately 15-20 seconds up to several minutes is called a SECTION. Time scales: frequency domain (thousandths of a second, spectral frequency detail), event domain (tenths of seconds, a single note), phrase (seconds, many notes), section (many seconds, many phrases), piece (minutes, many sections).

ambient music

All the organized sound that meets our ears in public places can be considered as part of the ambient music soundscape. Music that is used as background support for training videos is an example of ambient music. Music that is heard when walking into a clothing store is ambient music. Music heard from a passing car is ambient music. Music from a DJ spinning disks in a bar setting is ambient music. Music and sound are often used as sonic architecture of one sort or another. The sonic aspect is only one element in a complete sensory "scene" that includes people, architecture, setting, conversation and communal interaction.

inharmonic

Any combination of frequencies (partials, overtones) in the frequency domain that cannot be reduced to a set of roughly whole-number ratios relating to a real or virtual fundamental frequency is said to be inharmonic (contrast harmonic). When a waveform is complex and seriously inharmonic, it is identified as a form of noise. Bell sounds have distinct partials, which are often inharmonic (i.e. the majority of overtones or partials cannot be related to the fundamental frequency of the bell with whole-number ratios). Since our ears combine all the sounds into a composite image of "bell", we are not always aware of the individual frequency components (partials) that may be very strongly present in the sound. In the case of a bell, the strongest partials are always those that ring out the longest, and this is most often, but not always, the fundamental. Listen carefully to the resonant elements in the final seconds of the next bell. One can hear clearly that certain simple tones continue on to the end as all the other components die away. Bells are frequently used as a metaphor for cleansing and clarifying processes -- the very thing that happens when we ring bells. In the next sound example, a single bell is struck several times -- each time the bell is struck, the energy in the partials is reinvigorated and the composite of all the partials rings loudly again just as they begin to die away (a process of constant renewal). In this example, a sine tone has been added to the beginning which rises and matches the various partials of the bell. This demonstration begins to uncover how tones produced by objects or electronics can be related to one another and form harmony or consonance. Balinese Gamelan ensembles feature many musicians playing with an orchestra of inharmonic gongs, bells, metal bars, and drums. As the instruments themselves are most often inharmonic, each Gamelan is unique and is tuned to itself, making it impractical to exchange an instrument in one Gamelan with one from another. The tradition of composing with this kind of sound recognizes the existence of the fundamental frequencies -- but the inharmonic resonance heard in the Gamelan stands in strong contrast to the resonance of western ensembles, where harmonic resonances are the rule.
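
A minimal sketch of the whole-number-ratio test described above: each measured partial is checked against the nearest whole-number multiple of an assumed fundamental. The bell partial frequencies here are hypothetical values chosen only to illustrate a largely inharmonic, bell-like spectrum.

```python
# Classify partials as roughly harmonic or inharmonic relative to a fundamental.
# The partial frequencies below are hypothetical, for illustration only.

def classify_partials(f0, partials, tolerance=0.03):
    """Label a partial harmonic if it lies within `tolerance` (3%) of a
    whole-number multiple of f0, otherwise inharmonic."""
    labels = []
    for f in partials:
        ratio = f / f0
        nearest = round(ratio)
        is_harmonic = nearest >= 1 and abs(ratio - nearest) / nearest <= tolerance
        labels.append((f, "harmonic" if is_harmonic else "inharmonic"))
    return labels

bell_partials = [220.0, 445.0, 587.0, 736.0, 912.0, 1190.0]   # hypothetical bell spectrum
for freq, label in classify_partials(220.0, bell_partials):
    print(f"{freq:7.1f} Hz -> {label}")
```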

auditory scene analysis

Auditory scene analysis is concerned with our abilities to distinguish one sound from another, to locate sounds in space, to identify and segregate sound sources, or to focus on one particular source in a complex web of sources. Psychologist Albert Bregman coined the term. Bregman says "auditory scene analysis is the process by which the auditory system separates the individual sounds in natural-world situations... auditory scene analysis is difficult because the ear has access only to the single pressure wave that is the sum of the pressure waves coming from all the individual sound sources (such as human voices, or footsteps)...the incoming auditory information has to be partitioned, and the correct subset allocated to individual sounds, so that an accurate description may be formed for each. This process of grouping and segregating sensory data into separate mental representations, called auditory streams, has been named auditory scene analysis." In a nutshell, auditory scene analysis is the process by which we take in an auditory signal and decode that signal into separate sources, in part by coordinating with other senses and considering such aspects as the distance, direction, loudness, and frequency of the various individual sound sources in a scene. The ways we use and experience sound in the natural world can be applied to the fictional space of music. The following orchestral excerpt features a complex auditory scene with many identifiable streams. Taken as a whole, the music is complicated and dense, yet each independent part can be segregated from the whole and identified as unique.
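
A minimal illustrative sketch (not from the text) of Bregman's point that the ear receives only one summed pressure wave: two independent "sources" are synthesized and mixed into a single signal. The frequencies and durations are arbitrary assumptions.

```python
# Two sources, one summed pressure wave -- the starting problem of
# auditory scene analysis.
import numpy as np

sr = 44100                                   # samples per second
t = np.arange(int(0.5 * sr)) / sr            # half a second of time points

source_a = 0.6 * np.sin(2 * np.pi * 220 * t)   # e.g. a low voice-like tone
source_b = 0.3 * np.sin(2 * np.pi * 800 * t)   # e.g. a higher, quieter tone

mixture = source_a + source_b   # the eardrum only ever receives this single wave;
                                # re-separating it into streams is the job of
                                # auditory scene analysis
print(mixture.shape)
```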

frequency

Frequency is defined as a number of cycles per unit time and can be used to describe various cyclical processes, such as rotation, oscillations, or waves. Humans detect frequencies as the sensation of tone roughly between 12-20 cycles per second and upwards to 20,000 cycles per second. The frequency of acoustic waves is measured in cycles per second, or Hertz (Hz). The wings of flying honey bees produce an audible tone at approximately 180 Hz. Imagine the sound that a honey bee makes and you will be roughly in the auditory range of this frequency. Oscillations below the tone perception threshold (c. 0-12 Hz) are slow enough to be experienced as individual events. In contrast to the bee wing sound, imagine the sound made by the flapping wings of a flying pigeon (c. 6-12 Hz) or the spinning of the rotor of a helicopter (c. 3-6 Hz).
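
A minimal sketch relating frequency (cycles per second) to period, using the examples and rough perceptual boundary given above; the 12 Hz threshold is the approximate figure from the text, not an exact constant.

```python
# Convert frequency to period and note whether it falls above or below the
# rough tone-perception threshold described in the text.

def describe(frequency_hz):
    period_s = 1.0 / frequency_hz
    kind = "heard as a tone" if frequency_hz >= 12 else "heard as individual events"
    return f"{frequency_hz:6.1f} Hz (period {period_s * 1000:6.1f} ms) -> {kind}"

for name, hz in [("honey bee wings", 180), ("pigeon wings", 9), ("helicopter rotor", 4)]:
    print(f"{name:17s} {describe(hz)}")
```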

loudness

Loudness is concerned with how we perceive amplitude. The perception of relative loudness is not always directly correlated with amplitude (the technical term that deals with the amount of air pressure being displaced). Many factors can influence our perception of loudness: frequency, amplitude, timbre, and distance from the source. Loudness is a term that belongs to psychoacoustics. Simply put, it is how loud something appears to be at any given moment.
Almost Inaudible <-> Quiet <-> Normal Listening Level <-> Loud <-> Extremely Loud
The diagram below demonstrates with scientific accuracy that it requires more amplitude to perceive low and very high frequencies at the same loudness level as frequencies located in our human hearing "sweet spot".
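
A minimal sketch of the amplitude side of this distinction: decibels express a ratio of amplitudes (20 times the base-10 logarithm), but, as the entry notes, equal amplitude does not guarantee equal perceived loudness across frequencies. The amplitude values are illustrative.

```python
# Relative level in decibels for a few amplitude values; dB measures amplitude
# ratio, while loudness is the psychoacoustic impression built on top of it.
import math

def amplitude_to_db(amplitude, reference=1.0):
    """Level in decibels of an amplitude relative to a reference amplitude."""
    return 20.0 * math.log10(amplitude / reference)

for a in [1.0, 0.5, 0.1, 0.01]:
    print(f"amplitude {a:5.2f} -> {amplitude_to_db(a):7.2f} dB relative to full scale")
```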

entrainment

Marc Leman, in his book Embodied Music Cognition and Mediation Technology, says this about entrainment: "The hypothesis is that in a context where groups of people are together, corporeal imitation can lead to emergent behavior, as in concert halls where masses of people start waving their hands in the air at the same time, or when, after a concert, the applause becomes synchronized. In both cases, the emerging behavior of the audience results from the fact that subjects are imitating their neighbors... Arousal, attention, and the feeling of presence may be enhanced through this type of group resonance effect. Performers, too, seem to appreciate the effects of the allelo-mimetism in an audience, which they perceive through auditory, visual, and tactile channels. They perceive this effect as a global emerging effect of the audience, which can be very stimulating for their performance. Entrainment thus forms the dynamic multimodal framework from which musical magic, a peak experience of a group of people, may emerge. Given the widespread phenomenon of this so-called magic, it must be that humans are particularly sensitive to it."

morphology of sound

Morphology is the study of the form and structure of things. MORPHOLOGY OF SOUND is the careful tracking of how a piece of music or sound evolves through time. R. Murray Schafer defines sonic morphology as "the study of changing forms of sound across time or space". Morphology of sound involves a dissection of sound into related parts -- parts that can then be compared to similar parts in other sounds even when the other sounds are seemingly very different in nature and origin. The technique implies close examination of the sound and finite understanding of key sonic components over time. This is done to inform our brains of special attributes belonging to a sound event -- attributes that we might otherwise group into larger auditory scenes and thereby miss entirely. ExSound separates morphology of sound into two distinct spaces bounded by time, Global Morphology (large-scale structures) and Event Morphology (instant to instant structures).

repetition

Repetition is a key characteristic of all music. When a fragment or section of music is replayed and recognized by the listener, we hear musical form. Repetition can happen on all possible time scales related to music, including both small bits of music and large sections of music. Repetition is an important consideration of form and perception, particularly when a grid-time space is used. Repetition of small fragments or phrases of music is a common feature of most music, but live musicians will never play the repeated pattern exactly the same way. In that sense, there is no true repetition. Each repetition offers a chance to hear and experience the passage with slight changes. This effect, where repetition is really understood as a constantly varied repeated pattern, is lost in the electronic world, where a sample -- a fixed recorded fragment -- might be repeated verbatim. With fixed electronic sound, the repetition is real and precise and always the same. Music that does not feature any form of repetition does exist and can be considered special. Removing repetition as a forming principle in music poses special problems and challenges, and causes the listener to ask the question "how does this music hold itself together?"
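
A minimal sketch of the contrast described above: an electronic sample repeated verbatim is identical on every pass, unlike a live performer's repeats. The "sample" here is just a short hypothetical array of amplitude values.

```python
# Verbatim repetition of a fixed recorded fragment in the electronic domain.
import numpy as np

sample = np.array([0.0, 0.4, 0.9, 0.3, -0.2, -0.6])   # hypothetical recorded fragment
looped = np.tile(sample, 3)                             # exact, bit-for-bit repetition

# The first and second passes are identical -- prints True.
print(np.array_equal(looped[:len(sample)], looped[len(sample):2 * len(sample)]))
```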

soundscape

SOUNDSCAPE is a term coined by composer and environmentalist R. Murray Schafer. Soundscape treats all the sound in the world as a macrocosmic musical composition. Soundscape recording involves capturing sound from the natural world "as is". Soundscape recordings must not be confused with the natural soundscapes we experience in the real world. When natural soundscapes are captured by microphones and recording technology, something akin to a film is created: the soundscape recording is mediated through technology. In many instances, we respond to the quality and ingenuity of the recording, not just the source material. It is very difficult to capture the sound of a thunderstorm or ocean waves in a digital format. Spatial aspects are all but lost, and often must be simulated. A bad recording of a beautiful soundscape is akin to a poor recording of a great piece of music. Think of the sounds produced by natural-world phenomena. Water sounds: ocean, rain, flowing rivers, waterfalls, melting ice. Wind sounds: breeze, gusts, tornado, hurricane. Fire sounds: small wood fire, forest fire, house burning. Animal sounds: insects, mammals. Ecosystem sounds: rainforest.

tempo

TEMPO is defined as the number of beats per minute. In most music, the tempo varies from 60-120 beats per minute, roughly mirroring the range of the human pulse rate. Tempo is an essential element in all grid-based musics. Tempi can slow down gradually or accelerate, vacillate, or break down completely into open-time.
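
A minimal sketch of the definition above: beats per minute converted to the duration of a single beat in seconds. The tempo values are illustrative.

```python
# Beats per minute to seconds per beat.

def seconds_per_beat(bpm):
    return 60.0 / bpm

for bpm in [60, 90, 120]:
    print(f"{bpm:3d} BPM -> one beat every {seconds_per_beat(bpm):.2f} s")
```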

texture

TEXTURE refers to the structure of a weave of fiber or cloth. In music, texture describes how individual streams of musical activity are combined and layered over time. Textures in music often change and overlap, so it is sometimes difficult to pinpoint a single texture for a single section of music. Traditional music theory defines four general types of musical texture: monophony, heterophony, homophony, and polyphony.
monophonic <-> heterophonic <-> homophonic <-> polyphonic

tone

TONE, PITCH, and NOTE are three distinct terms that are closely linked and are often used synonymously. Each term denotes a sound that contains an audible fundamental frequency with or without overtones or partials. As long as there is an audible fundamental frequency, the individual partials can be harmonic, inharmonic, or a combination of the two. TONES can range from complex tones involving many partials, to simple tones involving only a single sinewave component. If a tone is very complex and very inharmonic, it is no longer perceived as a tone and is characterized as some type of noise signal. Noise is not considered a tone. In the next example, a repeating harmonic tone begins as a complex tone and progresses toward a simple tone. The fundamental frequency of the tone never changes. This is also a good example of a timbral transition from bright-->dull. PITCH refers to a psychoacoustic phenomenon whereby our auditory system situates tones as pitches in some "high" and "low" sounding region. In most cases, the perception of pitch for complex harmonic tones will be the same as the perception of the fundamental frequency, as all the partials will support and emphasize the fundamental. But it is not unusual for certain instruments to be perceived in pitch as one octave higher than their actual frequency components show. In the case of complex inharmonic tones, the perception of pitch can be very different for different ears and situations. In the next example, a series of complex inharmonic tones are heard, with all the tones sharing the same fundamental frequency. Although they have the same fundamental, the tones sound like they are shifting in PITCH (i.e. the perceived central pitch is not necessarily the same as the root fundamental of the tone). A NOTE refers to a harmonic pitch or tone, but in a more abstract, representational form. Most western instruments are designed to produce harmonic tones. Even though each instrument produces vastly different overall sound colors, each harmonic tone produced can be readily identified by its fundamental frequency. We call this reduced representation of sound a "note". A note signifies a fundamental frequency that belongs to some larger system of related fundamental frequencies or other notes. By reducing all harmonic sounds to a single abstract signifier, regardless of overall sound color, a powerful and controllable representation of sound is created. The implications of this are enormous and have empowered western composers for centuries in the creation of written musical scores that can be performed and resounded over and over again. Below is a sound example featuring different instruments/sources playing the note "A-440". The timbres of the various sources are quite different from one another, but we perceptually group the sounds together because of the solid and shared fundamental frequency. Each sound has a different overtone structure, which is the chief factor determining the timbre of the sound.
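
A minimal sketch of the "complex tone progressing toward a simple tone" idea: the fundamental stays fixed while upper harmonic partials are progressively removed, moving the timbre from bright toward dull. The fundamental frequency, partial counts, and durations are illustrative assumptions, not values from the text's sound example.

```python
# A complex harmonic tone thinned down, segment by segment, to a single sinewave.
import numpy as np

sr = 44100
f0 = 220.0                                 # fixed fundamental (illustrative)
t = np.arange(int(0.25 * sr)) / sr         # quarter-second segments

def harmonic_tone(n_partials):
    """Sum the first n harmonic partials of f0 with a 1/n amplitude roll-off."""
    tone = np.zeros_like(t)
    for n in range(1, n_partials + 1):
        tone += (1.0 / n) * np.sin(2 * np.pi * n * f0 * t)
    return tone / np.abs(tone).max()       # normalize each segment

segments = [harmonic_tone(n) for n in (16, 8, 4, 2, 1)]   # bright -> dull
signal = np.concatenate(segments)                          # fundamental never changes
print(signal.shape)
```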

note

TONE, PITCH, and NOTE are three distinct terms that are closely linked and are often used synonymously. Each term denotes a sound that contains an audible fundamental frequency with or without overtones or partials. As long as there is an audible fundamental frequency, the individual partials can be harmonic, inharmonic, or a combination of the two. Tones can range from complex tones involving many partials, to simple tones involving only a single sinewave component. If a tone is very complex and very inharmonic, it is no longer perceived as a tone and is characterized as some type of noise signal. Noise is not considered a tone. In the next example, a repeating harmonic tone begins as a complex tone and progresses toward a simple tone. The fundamental frequency of the tone never changes. This is also a good example of a timbral transition from bright-->dull. PITCH refers to a psychoacoustic phenomenon whereby our auditory system situates tones as pitches in some "high" and "low" sounding region. In most cases, the perception of pitch for complex harmonic tones will be the same as the perception of the fundamental frequency, as all the partials will support and emphasize the fundamental. But it is not unusual for certain instruments to be perceived in pitch as one octave higher than their actual frequency components show. In the case of complex inharmonic tones, the perception of pitch can be very different for different ears and situations. In the next example, a series of complex inharmonic tones are heard, with all the tones sharing the same fundamental frequency. Although they have the same fundamental, the tones sound like they are shifting in PITCH (i.e. the perceived central pitch is not necessarily the same as the root fundamental of the tone). A NOTE refers to a harmonic pitch or tone. As most western instruments produce harmonic tones, they can all play the same "note" and be in tune with one another. Below is a sound example featuring different instruments/sources playing the note "A-440". The timbres of the various sources are quite different from one another, but we perceptually group the sounds together because of the solid and shared fundamental frequency. Each sound has a different overtone structure, which is the chief factor determining the timbre of the sound.

duration

The duration of a sonic event in the event domain can be measured in two ways: 1) the actual duration of the event in time, measured from the onset of the event to the offset of the event, and 2) a measurement of the temporal distance between successive onsets. The first form of precise duration measurement is not the common way we determine duration in music. Perception of duration is most often defined by the time distance between the onset of one event and the onset of the next event, regardless of the actual full length of each individual event. In the next example, each bell continues to ring far beyond the onset of the next successive bell, but we calculate the durations as a series of onsets.
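
A minimal sketch of the two measurements described above, using hypothetical onset/offset times (in seconds) for bell strikes whose ring continues past the next onset.

```python
# Onset-to-offset durations versus inter-onset intervals for a series of events.

events = [(0.0, 4.0), (1.5, 5.5), (3.0, 7.0), (4.5, 8.5)]   # hypothetical (onset, offset) pairs

actual_durations = [offset - onset for onset, offset in events]
inter_onset = [events[i + 1][0] - events[i][0] for i in range(len(events) - 1)]

print("onset-to-offset durations:", actual_durations)   # how long each bell actually rings
print("inter-onset intervals:   ", inter_onset)         # what we tend to hear as the durations
```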

music notation

The most common type of MUSIC NOTATION is a reduction of the fundamental frequencies (pitch, note, tone) with indications for duration, temporal placement and loudness. These traditional western music scores are used by musicians to recreate musical performances, but they depend on something called performance practice -- the various traditions of performance that are handed down from musician to musician through practice, writing, and speaking.

harmonic

There is some ambiguity surrounding the use of this term. The term HARMONIC can refer to the Western music theory way of relating different tones with each other, as in a system of harmony (harmonic structure); or to describe a special relationship between frequency components in a complex sound (harmonic sound); or to refer to a single frequency above a fundamental frequency in a complex sound (guitar harmonic, or harmonics). ExSound replaces the term harmonics with the term partial to describe any single frequency component in a complex waveform. Complex waveforms can be broken down into individual frequency components (sine waves with varying amplitude over time). The individual component waveforms are called partials and they can be classed into two categories: harmonic partials and inharmonic partials. A partial is considered harmonic when its frequency is very close to a whole-number multiple of an identified fundamental frequency. The fundamental frequency is often called F0 (F-zero). If F0 is 100 Hertz, then harmonic partials might be found at 2 times the fundamental (200 Hz), 3 times the fundamental (300 Hz), and upwards through the audible range. The physics of stretched strings and blown pipes produce harmonic waveforms, as does the human voice. Another important characteristic of harmonic partials in natural systems like stretched strings is that their relative amplitude drops off at a rate inversely proportional to the partial number (i.e. partial number 2 is roughly 1/2 the amplitude of F0, partial 3 is 1/3 the amplitude of F0, etc.). The upper partials determine the color or timbre of the sound. The fundamental frequency (F0), also called the first partial, determines the pitch-class name. In the real world of music making and musical instruments, there is no such thing as a purely harmonic or inharmonic sound. Rather, there are degrees of harmonicity and inharmonicity. Higher-numbered partials such as 11, 13, 17, 23, and 49 can be considered at the threshold of the inharmonic. In that sense, all musical instruments produce notes/pitches/tones that contain inharmonic components. Western music has focused on organizing sound around the fundamental (F0). This makes sense, as the fundamental frequency (F0) is usually the most prominent perceptual component, its amplitude is much stronger than the other partials, and the higher partials reinforce that fundamental. Partials with relatively little amplitude are very hard to hear and can be shown to be totally imperceptible in many circumstances. But higher and lower amplitude partials can also strongly contribute to the overall spectrum of a sound and its sonic identity. In the "A4 Tones" sound example, there are several different harmonic spectra created by a variety of musical instruments. All the instruments are playing the same note/pitch/tone -- A 440. Notice how the color of the note changes considerably as the instrument changes. A good gauge of relative harmonicity is found in the ratio of any two frequencies. When two tones have a lower-order frequency ratio like 1:2 or 2:3, they can be considered more harmonic than higher-order ratios like 15:16. The more complex the frequency ratios, the more complex the interaction of the frequencies; this aspect of wave interaction is related to, but not synonymous with, discussions surrounding consonance and dissonance. All harmonic sounds have significant inharmonic and noise components. Oftentimes these inharmonic and noise-like components are very audible, but occur only at the onset of the sound, a temporal region of only about 100 milliseconds. In sounds produced by plucked strings such as guitars, the resonant harmonic partials quickly overtake the micro-noise stage, leaving only resonant harmonic partials to ring through.
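
A minimal sketch of the partial series described above: for the text's example of F0 = 100 Hz, the first few harmonic partials sit at whole-number multiples of F0, with the roughly 1/n amplitude roll-off mentioned for natural systems like stretched strings.

```python
# Harmonic partial frequencies and relative amplitudes for F0 = 100 Hz.

F0 = 100.0   # fundamental frequency in Hz, as in the text's example

for n in range(1, 9):
    frequency = n * F0    # harmonic partial n lies at n times the fundamental
    amplitude = 1.0 / n   # partial n is roughly 1/n the amplitude of F0
    print(f"partial {n}: {frequency:6.1f} Hz, relative amplitude {amplitude:.2f}")
```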

auditory scene listening

adapted from psychologist Albert Bregman's auditory scene analysis (a study concerned with our abilities to distinguish one sound from another, to locate sounds in space, to segregate sound streams, and to focus on one particular sound stream in a complex web of sources). accomplished by coordinating with other senses and considering distance, direction, duration, loudness, speed, and frequency.

Causal listening

analytical mode of listening that is concerned with the original source of a recorded sound (bell, car engine, clarinet) and any information that the sound can give about its nature. (use when you cannot identify the source)

live listening

being present at a concert or other site where music performance is taking place. involves presence, participation, social interaction, and the conscious awareness of others participating in the listening experience. recognize the extra-musical aspects involved in live listening

semantic listening

coined by Michel Chion. treating sound in a linguistic manner, as a code or language that must be interpreted (Morse code and spoken language). semantics is the study of meaning, primarily a philosophical and scientific study of the denotative and connotative aspects of language. music patterns often allude to temporal and dynamic behaviors of real-world phenomena. when sound is contextualized alongside images or story, the connotative aspect can be extremely powerful. movie soundtracks combine sonic symbols with visual symbols to create a hyper-intensified emotional experience. music that involves sung or spoken lyrics can combine with sound to intensify and broaden possible meanings.

aspectual listening

concept adapted from Drew Daniel of Matmos. recognizes that sound comes with many cultural references that inform and influence the way we hear music. once this aspect of music is understood, the experience of the music is forever altered

entrained listening

involves the body and the ways we move and synchronize our body motions with music while listening (as when we dance to music). most commercial music is designed to enhance entrainment

headphone listening

listening to music or sound in a closed, over-the-ears, headphone space. fully immersed in a binaural sound space. fine-detail listening is enabled and extraneous sonic influences are diminished or erased. isolation.

narrative listening

our ability to relate a musical unfolding to some kind of story or image. the maker of the sound might intend a reference to some specific image or action, but mostly, this effect is purely subjective and in the imagination of the listener. what kind of images a particular music evokes

Ambient Listening

presence of music as an element in the larger soundscape. senses are not focused on the music, but are unconsciously aware of the presence of music. envelops other activities such as work, study, conversations, and community. hearing music play through loudspeakers in a public place. unconscious. common to use music as sonic architecture, as sound that frames the space of our actions and perception

Active Listening

present in the sound at all times. multi-sensorial, full-body experience. does not stop to analyze or reflect (the music will keep going on). smooth, bumpy, turbulent, never changing, ever changing, fast, slow, static, or silent.

soundscape listening

recognition that natural soundscapes can be experienced and appreciated as a type of music that requires no other form of organization beyond how the sounds appear in the environment

recall listening

using memory alone to recall sound and reflect back on a performance or recent experience of sound. when we discuss sound, it is most often through the filter of recall listening

