Music Midterm 1

Characteristics of sound

Four basic qualities: amplitude, frequency, wave shape, and phase.

Synthesizer

A synthesizer is an electronic musical instrument containing several integrated components to create, modify, amplify, and play sounds.

Pitch

Pitch is our perceptual interpretation of frequency. We have our greatest sensitivity to frequencies between 200 and 2000 Hz, a range that takes up two-thirds of the distance on the basilar membrane. We perceive pitch logarithmically in relation to frequency.
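
A quick illustration of that logarithmic relation (a sketch of my own using the standard equal-tempered MIDI convention, not anything specific to the course materials):

```python
import math

def midi_to_hz(note: int, a4: float = 440.0) -> float:
    """Equal temperament: each semitone step multiplies frequency by 2**(1/12)."""
    return a4 * 2 ** ((note - 69) / 12)

# Equal pitch steps correspond to equal frequency RATIOS, not equal differences:
for note in (57, 69, 81):                    # A3, A4, A5
    print(note, round(midi_to_hz(note), 2))  # 220.0 Hz, 440.0 Hz, 880.0 Hz
```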

Four basic waveforms

Sine, triangle, sawtooth, pulse
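
A small numpy sketch of my own generating one cycle of each (sample count arbitrary; the pulse here uses a 50% duty cycle, which makes it a square wave):

```python
import numpy as np

N = 1000                        # samples in one cycle (arbitrary)
t = np.arange(N) / N            # normalized phase, 0..1

sine     = np.sin(2 * np.pi * t)
sawtooth = 2 * (t - np.floor(t + 0.5))    # ramps from -1 to 1, then jumps back
triangle = 2 * np.abs(sawtooth) - 1       # a folded (rectified) sawtooth
pulse    = np.where(t < 0.5, 1.0, -1.0)   # 50% duty cycle
```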

Wavelength

The distance in space required to complete a full cycle of a frequency.
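
Numerically, wavelength is the speed of sound divided by frequency (λ = v/f). A quick check, assuming roughly 343 m/s for air at room temperature:

```python
SPEED_OF_SOUND = 343.0                        # m/s in air at ~20 °C (assumed)

for freq_hz in (20, 440, 20_000):
    wavelength_m = SPEED_OF_SOUND / freq_hz   # lambda = v / f
    print(f"{freq_hz} Hz -> {wavelength_m:.3f} m")
# 20 Hz -> 17.150 m, 440 Hz -> 0.780 m, 20000 Hz -> 0.017 m
```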

How does a record player work?

The grooves on a record are etched with the amplitude and frequency fluctuations of a sound. Those micro-grooves are played back by a small needle (stylus) that, as it passes through them, traces these fluctuations; the resulting vibrations drive the cone or speaker to reproduce the sound.

Psychoacoustics

The way in which we perceive sound

"Free/Phase (Node 1 / Beacon)" by Mendi + Keith Obadike.

This work focuses on African American freedom songs as material for liberation, and on how individuals and groups use sound to spatially institute power. Beacon is a public sound art installation: it uses a large parabolic speaker to create a beam of sound that shines from the roof of the Chicago Cultural Center, playing African American freedom songs.

Time Stretching

Time stretching is the process of changing the speed or duration of an audio signal without affecting its pitch.
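
One standard technique for this is the phase vocoder. A minimal sketch using librosa's implementation (the file names are hypothetical; assumes librosa and soundfile are installed):

```python
import librosa
import soundfile as sf

# Load any audio clip (path is hypothetical)
y, sr = librosa.load("clip.wav", sr=None)

# Phase-vocoder time stretch: duration changes, pitch does not
slower = librosa.effects.time_stretch(y, rate=0.5)  # twice as long
faster = librosa.effects.time_stretch(y, rate=2.0)  # half as long

sf.write("clip_slow.wav", slower, sr)
sf.write("clip_fast.wav", faster, sr)
```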

George Lewis Voyager system

Between 1985 and 1987, Lewis wrote this system in the programming language Forth. It is an interactive virtual improvising orchestra that analyzes and responds to a performance in real time, generating both complex responses to the musicians' playing and independent behavior arising from the program's own interactive processes. All communication between the system and the improvisor takes place sonically.

Rarefaction

When air molecules are pulled apart

Sequencer

A music sequencer (or audio sequencer or simply sequencer) is a device or application software that can record, edit, or play back music, by handling note and performance information in several forms

Sample

A sample is a numeric representation of an analog sound that can be converted back to a voltage to drive a speaker.

Threshold of hearing

The smallest perceptible amplitude: 20 µPa (micropascals), used as the 0 dB SPL reference.
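
That 20 µPa value serves as the reference pressure p0 in the sound pressure level formula, dB SPL = 20 · log10(p / p0). A quick sketch:

```python
import math

P0 = 20e-6  # threshold of hearing: 20 micropascals (the 0 dB SPL reference)

def db_spl(pressure_pa: float) -> float:
    return 20 * math.log10(pressure_pa / P0)

print(db_spl(20e-6))  # 0.0  -> the threshold itself
print(db_spl(1.0))    # ~94  -> a loud sound (1 Pa is a standard mic calibration level)
print(db_spl(20.0))   # 120  -> around the threshold of pain
```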

Contact microphone

A contact mic, normally stuck to or clamped onto an instrument, receives most of its audio signal from the mechanical vibration of the object it is attached to. It is often used for mic'ing instruments where feedback or unwanted sounds from other instruments would be an issue. One of the drawbacks of using a contact microphone on acoustic instruments is that it emphasizes frequencies resonating at a particular spot on the instrument, thereby creating a misrepresentation of the overall timbre of the instrument. For this reason, many players, such as double bassists, have gone back to acoustic microphones.

Transducer

A device that converts variations in a physical quantity, such as pressure or brightness, into an electrical signal, or vice versa.

How does MIDI relate to capitalism

MIDI enabled universal connectivity and commercialized music technology. At the time, computer music sat outside the popular discourse of music making. MIDI turned everyone who used it into a more flexible worker in a world of commercial music of all kinds: anybody with a computer could make commercial music, and the field became more crowded overnight.

Speed of sound

The speed of sound in air is determined by the conditions of the medium itself (e.g., humidity, temperature, altitude); it is not dependent on the sound's amplitude, frequency, or wavelength. The speed at which sound propagates (travels from its source) is directly influenced by both the medium through which it travels and the factors affecting that medium, such as altitude, humidity, and temperature for gases like air.

MIDI

MIDI stands for Musical Instrument Digital Interface. The development of the MIDI system has been a major catalyst in the recent unprecedented explosion of music technology. MIDI has put powerful computer instrument networks and software in the hands of less technically versed musicians and amateurs and has provided new and time-saving tools for computer musicians.

Who is Max Mathews?

Max Mathews, who worked at Bell Labs alongside Claude Shannon and other computer music notables John Pierce and James Tenney (even Charles Dodge in his student days), produced a program that allowed a mainframe computer to digitally synthesize and play back sound. Though it had only one voice, one waveform, and little control over other parameters, it proved the concept that sound could be digitized, stored, and retrieved. Widely regarded as the father of computer music, he eventually went on to develop Music V, a much more robust and widely used digital synthesis program and the precursor to many more. In Mathews's own words: "Computer performance of music was born in 1957 when an IBM 704 in NYC played a 17 second composition on the Music I program which I wrote. The timbres and notes were not inspiring, but the technical breakthrough is still reverberating. Music I led me to Music II through V. A host of others wrote Music 10, Music 360, Music 15, CSound, Cmix. Many exciting pieces are now performed digitally."

Localization

Localization refers to our ability to place a sound source in space

Loudness

Loudness is the way in which we perceive amplitude. A particular change in amplitude is not necessarily perceived as a proportionate change in loudness, because our perception of loudness is influenced by both the frequency and the timbre of a sound. The "just noticeable difference," or JND, for amplitude — the minimal perceptible change in amplitude — varies by the starting amplitude and frequency, but in general it ranges between 0.2 and 0.4 dB.

How do Pussy Riot use electronic music, performance art, and social media for activism?

The activism of Pussy Riot is a form of sound art. Pussy Riot is an activist group that used electronic music, performance, and social media to push for reforms and make their demands heard. They released a new song and video following their pitch-invasion protest at Sunday's World Cup final. The song, titled "Track About Good Cop," references the statement the group released contrasting a "heavenly cop," who fights for the Russian people, with the "earthly cop" who represses them. They also posted a set of demands on their Twitter account regarding the freeing of people who have been illegally arrested.

Acoustics

The physics of sound

Waveform

A visual representation of an audio signal: a curve showing the shape of a wave at a given time; a pattern of sound-pressure variation (amplitude) in the time domain.

Defining features of sonic art

• Exhibitions often include music, kinetic sculpture, instruments activated by the wind or played by the public, conceptual art, sound effects, recorded readings of prose/poetry, visual artworks which also make sound, paintings of musical instruments, musical automatons, film, video, technological demonstrations, acoustic reenactments, interactive computer programs which produce sound, etc.
• In short, sound art seems to be a category which can include anything which has or makes sound, and in some cases even things that don't.
• Sometimes these 'Sound Art' exhibitions do not make the mistake of including absolutely everything under the sun, but then most often what is selected is simply music, or a diverse collection of musics with a new name.
• It is questionable whether Sound Art constitutes a new art form: why do we need a new name for things we already have a good name for?

Power

Amplitude over time. Unit: the watt (W).

Intensity

Amplitude over time, spread over an area. Unit: watts per square meter (W/m²).

Acoustemology

Defined (by Feld) as the local conditions of acoustic sensation, knowledge, and imagination embodied in the culturally particular sense of place: a sonic way of knowing a place, a way of attending to hearing, a way of absorbing.

Basic frequency filters

Lowpass - cuts high frequencies, passes lows
Highpass - cuts low frequencies, passes highs
Bandpass - passes a certain width or band of frequencies and cuts frequencies on either side of the band
Notch - cuts out a notch or band in the center and passes frequencies on either side of it
Others: lowshelf/highshelf, peak/notch, comb, allpass
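
A sketch of the first three filter types using SciPy's Butterworth designs (cutoff values arbitrary, my own example):

```python
import numpy as np
from scipy.signal import butter, lfilter

sr = 44100
t = np.arange(sr) / sr
sig = np.sin(2*np.pi*200*t) + np.sin(2*np.pi*5000*t)  # low + high component

b, a = butter(4, 1000, btype="lowpass", fs=sr)        # keeps the 200 Hz part
lows = lfilter(b, a, sig)

b, a = butter(4, 1000, btype="highpass", fs=sr)       # keeps the 5 kHz part
highs = lfilter(b, a, sig)

b, a = butter(4, [4000, 6000], btype="bandpass", fs=sr)  # band around 5 kHz
band = lfilter(b, a, sig)
```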

Algorithm

An algorithm is a procedure that can be written in a programming language as a set of instructions for a computer to follow.
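
For example, a toy algorithm written out as Python instructions (my own illustration):

```python
def linear_search(items, target):
    """Algorithm: examine each item in order; return the index of the first match."""
    for index, item in enumerate(items):
        if item == target:
            return index
    return -1  # not found

print(linear_search([3, 7, 1, 9], 9))  # prints 3
```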

What was John and Alan Lomax's main intention for recording culture?

John Lomax and his son Alan Lomax travelled to the Angola state penitentiary, taking a 315-pound disc recorder with them. They were interested in the preservation of culture through sound recording technology, fearing the obliteration of local and regional knowledge through modern progress. Their approach was ethnographic, and clear issues of representation emerge: a capitalist structure forms in which they are criticized for capitalizing on someone else's art.

Reverb

The stream of continuing sound is called reverberation. Because the reflected sound may continue to bounce off of many surfaces, a continuous stream of sound fuses into a single entity, which continues after the original sound ceases.

Harmonic spectrum

Timbre is dependent on a sound's harmonic makeup. Every musical instrument exhibits its own unique mixture of harmonics, called its harmonic spectrum. The vibrations in a single musical tone consist of a fundamental frequency with various harmonic components above it; together these form the harmonic spectrum, which helps us perceive the sound's timbre.
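
A sketch of the idea via additive synthesis (the harmonic weights below are made up): summing a fundamental and progressively weaker harmonics produces one timbre; a different set of weights produces another.

```python
import numpy as np

sr, f0 = 44100, 220.0               # sample rate, fundamental frequency (Hz)
t = np.arange(sr) / sr              # one second of time values

weights = [1.0, 0.5, 0.33, 0.25, 0.2]   # hypothetical harmonic spectrum

tone = sum(amp * np.sin(2 * np.pi * f0 * n * t)
           for n, amp in enumerate(weights, start=1))
tone /= np.max(np.abs(tone))        # normalize to the -1..1 range
```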

How have we talked about "voice" in lecture?

We read an excerpt by Amanda Weidman: we "find" our "voice" or discover an "inner voice"; we "have a voice" in matters or "give voice to" our ideas; we "voice concern" and are "vocal" in our opinions. A brief look at the Oxford English Dictionary shows us that the most basic, literal meaning of "voice" ("the sound produced by the vocal organs of humans or animals, considered as a general fact/phenomenon") is secondary in importance to a meaning that fuses that basic, literal sense to the notion of voice as an index or signal of identity: sound produced by and characteristic of a specific person/animal. Almost before we can speak of the sound itself, we attribute the voice to someone or something. Attributing voice to nonhuman entities (the collective, the mechanical, musical instruments) is a powerful way of making them intelligible, of endowing them with will and agency.

Phonautograph

"The Phonautograph by French inventor Edouard-Léon Scott (1817-79) (see Figure 1.27) is widely regarded as the first audio recorder, although it had no method for reproducing the sound. Scott made his first model in 1857. The Phonautograph detected sound using a horn connected to a diaphragm. The diaphragm was connected to a stylus that vibrated to incoming sound waves. The recording consisted of a visual analog of a sound inscribed on a piece of paper. It was used in the scientific study of sound waves and their shapes. The principle of converting a sound into a physical impression using a stylus was key, however, to the development of the first "talking machine" by Thomas A. Edison (1847-1931) in 1876."

Phonograph

"The principle of converting a sound into a physical impression using a stylus was key, however, to the development of the first "talking machine" by Thomas A. Edison (1847-1931) in 1876. Edison's first Phonograph inscribed a sound onto a sheet of tin foil wrapped around a rotating cylinder. The sound was played back using a stylus that amplified the vibrations recorded in the grooves of the tin foil."

Activism by Pauline Oliveros

"And Don't Call Them 'Lady' Composers": Pauline Oliveros wrote this article in 1970 about discrimination against female composers. Key ideas she emphasizes:
o It is still true that unless she is super-excellent, the woman in music will always be subjugated, while men of the same or lesser talent will find places for themselves.
o Society raises women in a way that they are unconsciously and consciously discouraged from partaking in non-domestic activities.
o However, statistics show that (1) female composers are emerging from musical oppression, and (2) composers are not ignored as much.
o Independent organizations like the Rockefeller Foundation support budding musicians and have allowed the second trend.
o Symphony and opera organizations are also realizing that the music of today's generation is what attracts people below 30 years of age.
o Critics should be more lenient and open-minded towards new music.
o Trend (1) is dependent on trend (2), because women were culturally deprived before but are now exposed to many musically rich things.
o She believes that the biggest problems of society will never be solved until an atmosphere of equality between men and women exists that can utilize creative energies optimally.
Her composition bids farewell to the system of polite morality of that age and its attendant institutionalized oppression of the female sex.

Vibrato

Aspect of timbre. A rapid, slight variation in pitch in singing or playing some musical instruments, producing a stronger or richer tone. A form of frequency modulation.

Christine Sun Kim work in relation to her deafness

Artist and TED Fellow Christine Sun Kim was born deaf, and she was taught to believe that sound wasn't a part of her life, that it was a hearing person's thing. Through her art, she discovered similarities between American Sign Language and music, and she realized that sound doesn't have to be known solely through the ears -- it can be felt, seen and experienced as an idea. In this endearing talk, she invites us to open our eyes and ears and participate in the rich treasure of visual language. Well, I watch how people behave and respond to sound. You people are like my loudspeakers, and amplify sound. I learn and mirror that behavior. At the same time, I've learned that I create sound, and I've seen how people respond to me. In Deaf culture, movement is equivalent to sound. This is a sign for "staff" in ASL. A typical staff contains five lines. Yet for me, signing it with my thumb sticking up like that doesn't feel natural. That's why you'll notice in my drawings, I stick to four lines on paper. Now sound has come into my art territory. Is it going to further distance me from art? I realized that doesn't have to be the case at all. I actually know sound. I know it so well that it doesn't have to be something just experienced through the ears. It could be felt tactually, or experienced as a visual, or even as an idea. So I decided to reclaim ownership of sound and to put it into my art practice. And everything that I had been taught regarding sound, I decided to do away with and unlearn. I started creating a new body of work. And when I presented this to the art community, I was blown away with the amount of support and attention I received. I realized: sound is like money, power, control -- social currency. So with sound as my new art medium, I delved into the world of music. And I was surprised to see the similarities between music and ASL. For example, a musical note cannot be fully captured and expressed on paper. And the same holds true for a concept in ASL. They're both highly spatial and highly inflected -- meaning that subtle changes can affect the entire meaning of both signs and sounds. I then started thinking, "What if I was to look at ASL through a musical lens?" If I was to create a sign and repeat it over and over, it could become like a piece of visual music. For example, this is the sign for "day," as the sun rises and sets. This is "all day." If I was to repeat it and slow it down, visually it looks like a piece of music. ASL is such a rich treasure that I'd like you to have the same experience. And I'd like to invite you to open your ears, to open your eyes, take part in our culture and experience our visual language. She painted live using feedback and speaker oscillations. This painting and her practice are situated in a gallery, where the work becomes a performance. She showcased a piece at MoMA involving a handheld device that could only be heard when held properly.

Binary

Binary numbers developed as a symbolic representation of computer circuits, which can be thought of as a series of switches that are either on or off. A single-place binary number is called a bit, which is short for "Binary digIT." Binary numbers are base-2, with each place representing the powers of two (as opposed to ten in our decimal system).
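
A quick illustration of base-2 place values:

```python
# Each binary place is a power of two: 1011 = 8 + 0 + 2 + 1 = 11
print(int("1011", 2))           # 11

for n in range(4):
    print(n, format(n, "04b"))  # 0 0000, 1 0001, 2 0010, 3 0011

print(2 ** 16)                  # 65536 -- the number of values in a 16-bit sample
```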

Rhythmicon

"During this time he invented the Rhythmicon, an early form of drum machine using photoelectric principles and a keyboard; the keyboard Theremin, a primitive synthesizer designed to emulate other musical instruments" "Cowell asked Theremin to make a special keyboard instrument that came to be known as the Rhythmicon. Depressing one of the keys resulted in a pitched rhythm that could be repeated automatically. It was possible to play multiple notes and rhythms by depressing more than one key at a time. The Rhythmicon worked on the principle of light beams being cast upon photoelectric cells to produce its electronic frequencies. Cowell used this device in a number of compositions during the 1930s."

Granular synthesis

"Granular synthesis introduced a different paradigm for conceptualizing sound signals. Based on the pioneering work in 1947 by Hungarian physicist Dennis Gabor (1900-79), granular synthesis breaks the audio signal down into small overlapping grains typically lasting no more than 50 microseconds. This concept was in stark contrast to traditional wave theory supported by the Fourier analysis of frequency cycles" the samples are not played back conventionally, but are instead split into small pieces of around 1 to 50 ms. These small pieces are called grains. Multiple grains may be layered on top of each other, and may play at different speeds, phases, volume, and frequency, among other parameters.

Oramics Machine

"In 1959, Oram established her own independent production company to produce a broader, more diverse range of sonic experiments for music, television, and motion pictures. Among her projects was the invention of an early synthesizer that produced electronic sounds by optically scanning hand-drawn images on sprocketed loops of clear 35mm film. The Oramics machine, as she called it, included ten such film loops that could be synchronously programmed, each equivalent to a recording track with added control functions. Some of the loops controlled the waveform, duration, and vibrato while others controlled timbre, amplitude, and pitch. The sprocketed loops rotated over a bank of photocells. The opaque images on the loops modulated a stream of light that was then transformed into voltages by the photocells. The voltages then triggered sound-generating oscillators, filters, and envelope shapers to create the music. Introduced in 1962, the Oramics machine was extraordinarily complicated to use. Oram was continually making improvements. Only a handful of composers used the instrument before it was overshadowed by a new generation of easier-to-use voltage-controlled synthesizers, such as those made by Robert Moog."

Luigi Russolo's thoughts on noise

"Russolo envisioned entirely new ways of making music through the use of noise. So devoted was he to this concept that he abandoned painting for a time to devote every working hour to the design and invention of new mechanical noisemakers to produce his music." Drawing inspiration from urban and industrial soundscapes, Russell arced that traditional orchestral instruments and composition are no longer capable of capturing the spirit of modern life, with its energy, speed and noise. "Russolo equated the state of classical music as being out of step with modern industrialized society. "Thus we are approaching noise-sound," he wrote in his preamble." "Russolo's solution for freeing music from its tonal prison was to "break at all cost from this restrictive circle of pure sounds and conquer the infinite variety of noise-sounds." He proposed making music from ambient noise and sounds from the environment, an idea that predated by many years any effective way of making remote audio recordings:" "he constructed a variety of mechanical noise-producing instruments that the pair called Intonarumori ("noise-intoners"). The Intonarumori were designed to produce "families" of sounds ranging from roars (thunders, explosions) to whistles (hisses, puffs), whispers (murmurs, grumbles), screeches (creaks, rustles), percussive noises (metal, wood), and imitations of animal and human voices."

Fairlight CMI

"The Fairlight CMI (Computer Music Instrument) digital synthesizer was introduced in Australia in 1979 and was designed by Peter Vogel and Kim Ryrie, using a dual-microprocessor architecture engineered by circuit designer Tony Furse. Providing a full complement of sound-design features, it was equipped with its own dedicated minicomputer, dual 8-inch disk drives, a six-octave touch-sensitive keyboard, and software for the creation and manipulation of sounds. Its most innovative feature was an analog-to-digital converter for processing incoming audio signals from analog sources. The Fairlight CMI was the first commercially available digital-sampling instrument. It featured a sequencer, 400 preset sounds, and the capability to create new tonal scales tuned in increments as small as one-hundredth of a semitone."

Telharmonium

In 1895, Thaddeus Cahill began development of the Telharmonium, a 200-ton array of geared electrical dynamos, the sine-tone whines of which were tapped and broadcast over telephone wires both in Telharmonium Hall and to hotels, with the ambition of being the first musical subscription service. Cahill paid attention to the work of Helmholtz, which inspired him to create mixtures of these sine tones to form more complex timbres, what we now call additive synthesis. The Telharmonium made use of telephone networks to transmit music from a central hub in midtown Manhattan to restaurants, hotels, and homes around the city. Subscribers could pick up their phone, ask the operator to connect them to the Telharmonium, and the wires of their phone line would be linked with the wires emerging from the Telharmonium station. The electrically generated tunes would then stream from their phone receiver, which was fitted with a large paper funnel to help pump up the volume. (The electric amplifier had not yet been invented.)

Range of human hearing

20-20,000 Hz

Rebecca Belmore's use of the megaphone.

A form of sound art. Belmore is notable for politically conscious performances and installations. She began this work in 1991 in Banff, creating a piece called "Speaking to Their Mother": she built a huge wooden megaphone and invited the public to speak into it, to amplify their wishes to the land. In her words, asking people to address the land directly was an attempt to hear political protest and poetic action. It was her response to the Oka crisis; she was interested in locating the Aboriginal voice on the land.

Programming language

A programming language is a formal language, which comprises a set of instructions used to produce various kinds of output. Programming languages are used to create programs that implement specific algorithms

How does a tape player work?

A tape player has two spools inside, and the tape is coiled in a tight roll on one side. The tape has a layer of ferric oxide. Recording impresses an electromagnetic signal on the metal-coated tape: the oxide particles on the tape are contorted and moved into different patterns, and those patterns are then transduced back into sound by a speaker. There is an electromagnetic tape head in a tape player; the electromagnet consists of an iron core wrapped with wire. During recording and playback, the audio signal is sent to the tape head to create a magnetic field.

Nyquist Theorem

According to this theorem, the highest reproducible frequency of a digital system will be less than one-half the sampling rate. From the opposite point of view, the sampling rate must be greater than twice the highest frequency we wish to reproduce.
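
A small demonstration with numbers of my own choosing: a frequency above the Nyquist limit produces exactly the same samples as a lower "alias" frequency, so it cannot be reproduced correctly.

```python
import numpy as np

sr = 1000                             # sampling rate -> Nyquist frequency is 500 Hz
t = np.arange(32) / sr

above = np.cos(2 * np.pi * 900 * t)   # 900 Hz, above Nyquist
alias = np.cos(2 * np.pi * 100 * t)   # folds down to |1000 - 900| = 100 Hz

print(np.allclose(above, alias))      # True: indistinguishable once sampled
```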

The history of record scratch

Afrika Bambaataa's "Looking for the Perfect Beat" was one of the earliest recordings with scratching in it. The record player was used as a musical instrument through scratching, and there are now multiple types of scratching techniques that DJs have perfected. DJ Scratch is a master of using the turntables as a musical instrument. Maria Chavez makes vibration sculptures with the record player: she uses chance and accidents to pull out moments that let sound pieces create themselves. She snaps records with the hole still intact and can make collages of multiple records that way.

Alvin Lucier's "I am Sitting in a Room".

Alvin Lucier's piece exploits the resonant frequencies of whatever room he is sitting in. Through a process of iterative feedback (recording and re-recording), he amplifies those frequencies until the sound that remains is very difficult to interpret as speech. It is a very self-contained piece of art, made in 1969. You need a microphone, two recorders, and a loudspeaker. He reads a specific text while people sit and watch how the piece unfolds. He used this process to smooth out any irregularities his speech might have (he was a stutterer).

Oscillator

An electronic oscillator is an electronic circuit that produces a periodic, oscillating electronic signal, often a sine wave or a square wave. An oscillator is a repeating waveform with a fundamental frequency and peak amplitude and it forms the basis of most popular synthesis techniques today. Aside from the frequency or pitch of the oscillator and its amplitude, one of the most important features is the shape of its waveform.

Envelope (ADSR)

An envelope generator (EG) in the basic patch (also called an ADSR) creates a changing stream of DC control voltage across the duration of a note. In the basic patch, two EGs are used. The first is applied to the VCA (voltage-controlled amplifier). It controls the onset of the note and its initial rise time to a maximum amplitude, decay time to a sustain level where it stays until the note is released, and then the final decay back to silence. A second EG is frequently applied to a filter's cutoff frequency and shapes the timbre over the course of a note in a similar fashion. Using two envelope generators allows the timbre to change independently of the amplitude.
Attack time: the time it takes to rise from 0 V to the maximum control-voltage level
Decay time: after reaching the maximum attack LEVEL, the time it takes to decay to a sustain LEVEL
Sustain level: the amplitude remains constant until the envelope is ungated, at which point it moves to the release phase
Release time: the time it takes to go from the sustain level back to 0 V. If the release time is set to 0, the note ends abruptly.
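
A piecewise-linear ADSR control signal in numpy (times and levels arbitrary), applied the way the first EG drives the VCA:

```python
import numpy as np

def adsr(attack, decay, sustain_level, sustain_time, release, sr=44100):
    """Times in seconds, levels 0..1; returns the control signal as an array."""
    a = np.linspace(0, 1, int(attack * sr), endpoint=False)            # rise to max
    d = np.linspace(1, sustain_level, int(decay * sr), endpoint=False) # fall to sustain
    s = np.full(int(sustain_time * sr), sustain_level)                 # hold while gated
    r = np.linspace(sustain_level, 0, int(release * sr))               # back to zero
    return np.concatenate([a, d, s, r])

env = adsr(attack=0.01, decay=0.1, sustain_level=0.6, sustain_time=0.5, release=0.3)
t = np.arange(len(env)) / 44100
note = env * np.sin(2 * np.pi * 440 * t)   # envelope shaping amplitude, like a VCA
```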

Tremolo

Aspect of timbre. A wavering effect in a musical tone, typically produced by rapid reiteration of a note, or sometimes by rapid repeated variation in the pitch of a note or by sounding two notes of slightly different pitches to produce prominent overtones. A form of amplitude modulation.

Ethical dilemmas in Damali Ayo's, "The Paint Mixers"

Damali Ayo moved from hardware store to hardware store with a hidden tape recorder. Her intention was to document the interactions she had with paint mixers: she asked the attendants to mix paint to the color of her skin. She feels that recording people without their permission is a good way to uncover the truth; by presenting these voices, reality becomes impossible to ignore. She didn't want the attendants to perform for the audio, she wanted them in their natural state. There is an ethical dilemma in her work, as the people did not know they were being recorded.

Steps for digital sampling of sound

Digital audio deals with two important things: sampling rate and bit depth. A digital system (computer) converts continuous acoustic waveforms from the real world into numeric binary data that can be stored and processed, and can be reproduced again as an analog signal.
analog waveform --> ADC sampling points --> wavetable of digital numbers --> DAC --> reconstructed analog waveform from sampled points
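
A toy version of the ADC/DAC chain above (rate and bit depth deliberately low): sample a waveform, quantize each sample to 4 bits, then scale back to "voltages."

```python
import numpy as np

sr, bits = 8000, 4                        # low values chosen to make the error visible
t = np.arange(sr // 100) / sr             # 10 ms of sampling points
analog = np.sin(2 * np.pi * 440 * t)      # the incoming waveform

levels = 2 ** bits                        # 4 bits -> 16 quantization values
wavetable = np.round((analog + 1) / 2 * (levels - 1))   # integers 0..15
restored = wavetable / (levels - 1) * 2 - 1             # DAC: back to -1..1

print(np.max(np.abs(analog - restored)))  # quantization error; shrinks as bits increase
```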

Electro-harmonic telegraph/ Musical Telegraph

Elisha Gray (who lost the telephone patent to Alexander Graham Bell by hours) patented the Electro-harmonic telegraph in 1876, which in effect produced musically-pitched hums from oscillating circuits controlled by piano-like keys (propagated acoustically through a vibrating washbasin, as amplification had not yet been invented). The 'instrument' eventually spanned two octaves of keys, and several public concerts were given. "A slightly more practical application of musical tones for the communication of information was the multiple harmonic telegraph, the most musical of which was invented in 1874 by American Elisha Gray (1835-1901). Gray was involved in the field of telegraph communication. He obtained his first telegraph patent in 1867 and was employed by the Western Electric Company as a supervisor. Gray is best known for his contentious patent dispute with Alexander Graham Bell over the design of the original telephone in 1876, a claim that Reis may have also contested had he not died in 1874. The first of Gray's so-called Musical Telegraphs, dating from 1874, had two telegraph keys, each with an electromagnet and a small strip of metal called a reed (see Figures 1.4 and 1.5). When a telegraph key was pressed, an electrical circuit was closed, causing the metal reed to vibrate at a certain frequency that was audible when electrically amplified. The resistance of each electromagnet was different, resulting in the creation of two different buzzing tones. Gray fashioned a loudspeaker using a membrane not unlike the one invented by Reis. Each key produced its own distinct tone and the keys could be [...]"

Trautonium

Friedrich Trautwein developed the Trautonium, and notably, this electronic instrument too attempted to free its music from the equal-tempered piano keyboard by providing a stretched-wire interface one played upon. While position on the wire controlled pitch, pressure on the wire controlled amplitude. Unlike the previous instruments mentioned, its circuitry was based on subtractive synthesis, creating rich timbres and then shaping them through attenuation. Again, several well-known composers embraced the instrument, perhaps none more than Oskar Sala, who continued to compose for and perform on the instrument into the 21st century. "The Trautonium of Dr. Friedrich Trautwein (1888-1956) was developed in Germany between 1928 and 1930." "The instrument had a fingerboard consisting of a metal plate about the width of a medium-sized keyboard instrument. Stretched only a few millimeters above the plate was a wire. Pressing the wire with a finger so that it touched the plate closed a circuit and sent electricity to a neon-tube oscillator, producing a tone. The monophonic instrument spanned three octaves, with the pitch going up from left to right along the fingerboard. Volume was controlled by a foot pedal. The fingerboard was marked with the position of notes on the chromatic scale to make it easier for a musician to play. By 1934, Trautwein had added a second fingerboard so that two notes could be played at once. At the same time, he introduced an ingenious feature for manually presetting notes to be played: a rail was mounted just a few centimeters above and running parallel to each of the two resistor wires." "The neon-tube oscillator produced a sawtooth waveform that was rich in harmonic sidebands. This waveform distinguished the sound of the Trautonium from that of the Theremin and Ondes Martenot, both of which used a beat frequency technology and produced waveforms with fewer harmonics."

Tape Loop

In music, tape loops are loops of magnetic tape used to create repetitive, rhythmic musical patterns or dense layers of sound when played on a tape recorder. Originating in the 1940s with the work of Pierre Schaeffer, they were used among contemporary composers of 1950s and 1960s, such as Steve Reich, Terry Riley, and Karlheinz Stockhausen, who used them to create phase patterns, rhythms, textures, and timbres. In a tape loop, sound is recorded on a section of magnetic tape and this tape is cut and spliced end-to-end, creating a circle or loop which can be played continuously, usually on a reel-to-reel machine.

John Cage - Silence

John Cage's famous piece 4'33" calls for performers and audience to experience 4 minutes and 33 seconds of silence. Cage used chance operations to compose music, and he described his compositions as being indeterminate of their performance. He said there is no such thing as silence. Cage challenged the interpretation of silence as "the time lapse between sounds useful to a variety of ends" (Cage 2011: 22) through his famous observation pronounced on entering the anechoic chamber at Harvard University in 1951: "I heard two sounds, one high and one low. When I described them to the engineer in charge, he informed me that the high one was my nervous system in operation, the low one my blood in circulation. . . . Until I die there will be sounds. And they will continue following my death. One need not fear about the future of music" (8). Simultaneously, he reaffirmed silence's central role in twentieth-century musical experimentalism, stating: "when none of these or other goals is present, silence becomes something else—not silence at all but sounds, the ambient sounds" (Cage 2011: 22). Thus how silence is understood depends in good measure on how the relationship between the listener and his or her surroundings is conceptualized. Cage's new understanding of the relation between silence and ambient sound provoked, in turn, new types of creative interventions in the arts.

Laurie Spiegel about computer music

Laurie Spiegel worked at Bell Labs, where she started working on a program called GROOVE, on which one could both synthesize sound and control it in real time. She said she traded away the really rich timbral variety of analog synthesizers in favor of the ability to put together complicated patterns and fluctuations of rhythm, pitch, and motivic development. Spiegel pinpoints a more pragmatic reason why women might have found their way to electronics: the DIY aspects of the synthesizer enabled them to bypass a sclerotic system that made it challenging to get your compositions performed. "You could create something that was actually music you could play for people, whereas if you wrote an orchestral score on paper, you'd be stuck with going around a totally male-dominated circuit of orchestral conductors trying to get someone to even look at the score. It was just very liberating to be able to work directly with the sound, not just creating but presenting. Then you could play it to people, get your work taken up by a choreographer, or used to score a film. Or put out your own LP. You could get the music out to the ears of the public directly, without having to go through a male power establishment." At the same time, the gender drum is not something Spiegel particularly cares to bang. "The number of people making music with computers when I started was so small, every person was simply treated as an individual," she insists. "I always felt like an outsider anyway, that was more important than being a woman. In a way I didn't identify as a woman, I identified as an individual."

Electronic Sackbut

Le Caine built the Electronic Sackbut between 1945 and 1948. It is now recognized to have been the first voltage-controlled synthesizer. In 1945, when the first Sackbut was built inside a desk, Le Caine visualized an instrument in which the operator would control three aspects of sound through operations on the keyboard in three co-ordinates of space: vertical pressure was to correspond to volume; lateral pressure to pitch change; and pressure away from the performer to timbre. The control devices were force sensitive. They would alter the sound in response to changes in pressure, something the operator could feel without carefully watching the controls. The timbre controls, however, were soon considerably expanded and could no longer be operated by a single device. Two innovative techniques stand out in the design of the Sackbut: the use of adjustable wave forms as timbres and the development of voltage control. It is in this regard that the Sackbut is recognized to be the forerunner of the synthesizers of the 1970's.

Triode vacuum tube

Lee De Forest invented the triode vacuum tube, which he called the Audion (or the Audion valve, as the British call it), around 1906. De Forest recognized the tube's use for wireless radio and amplification and, through heterodyning (a technique that derives an audible beat frequency from two signals above the human hearing range), he actually created an instrument from the tubes almost a decade later. "De Forest ushered in the first age of miniaturized electronics with the invention of the audion, or vacuum tube, in 1907. The function of a vacuum tube is to take a relatively weak electrical signal and amplify it. With its widespread availability by about 1919, electronic devices no longer required the enormous, power-sapping mechanical dynamos that made the Telharmonium ultimately impractical. The vacuum tube led to radio broadcasting, the amplification of musical instruments and microphones, and later innovations such as television and high-fidelity recording."

MPC

The MPC is a compact machine that serves as a holding cell for all types of samples, which you can play with 16 touch-sensitive pads. The Akai MPC (originally MIDI Production Center, now Music Production Controller) is an integrated digital sampling drum machine and MIDI sequencer designed by Roger Linn and produced by Akai from 1988 onwards. The MPC had a major influence on the development of electronic and hip hop music, allowing musicians and producers to create elaborate tracks without a studio and opening the way for new sampling techniques. The MPC 3000 was used by J Dilla. According to Vox, the ability to create percussion from any kind of sound turned sampling into a "new artform" and allowed for new styles of music. The MPC's affordability and accessibility had a "democratising" effect on music: artists could create tracks on a single machine without the need for a studio or music theory knowledge, and it was inviting to artists who did not play traditional instruments or who had no music education.

The process and concept of Steve Reich's "Come Out".

On a spring day in 1964, police in Harlem's 32nd precinct tried to beat a confession out of two black teenagers for a crime they did not commit. The young men, Wallace Baker and Daniel Hamm, were repeatedly bludgeoned with billy clubs while in custody, beaten with such force that they were taken to a nearby hospital for X-rays. But even after hours of abuse, the cops weren't about to allow Hamm to be admitted for treatment, since he was not visibly bleeding. Thinking fast, Hamm reached down to one of the swollen knots on his legs where the blood had clotted beneath his skin: "I had to, like, open the bruise up, and let some of the bruise blood come out to show them." And utilizing just that one sentence, composer Steve Reich made one of the most visceral pieces of music of the 20th century. Earlier, for "It's Gonna Rain," Reich had imagined a recorded line being sung in rounds and made two tape loops to test out his theory. But as he pressed play on his tape machines, a funny thing happened: The loops began in synch and then, as Reich recalls, "ever so gradually, the sound moved over my left ear and then down my left side and then slithered across the floor and began to reverberate and really echo. That whole process immediately struck me as a complete, seamless, uninterrupted way of making a piece that I had never anticipated." Rather than a round about the rain, the sound turned apocalyptic. "It's Gonna Rain" became Reich's first major composition. For "Come Out," civil rights activist Truman Nelson asked Reich to help with a benefit for the accused teenagers; Reich agreed to edit together Nelson's 20 hours of analog interview tapes into a coherent narrative pro bono, under one condition: permission to make a piece along the same lines as "It's Gonna Rain" if he found the right phrase. Nelson agreed. The composer says he was looking "to find the key phrase, the exact wording of which would sum up the whole situation... and the tone of Hamm's voice, the speech melody, and what he says encapsulated a lot of what was going on in the civil rights movement at that time." Reich hums the line's cadence over the phone. "When I heard that, I thought, This is going to make a really, really, really interesting piece." The composition opens with Hamm's statement repeated three times before the two tape loops begin to move out of phase with each other. That subtle shift at first gives Hamm's voice a slight echo and, by the three-minute mark, the words are swathed in reverb as the voices move out of synch. As the loops build, Hamm's concrete imagery transforms into something hazy and unrecognizable as speech. As writer Linda Winer once put it in describing Reich's tape works: "At first you hear the sense of the words—a common statement with cosmic vengeance inside. Then, like with any word repetition, the sounds become nonsense... And one is transfixed in a bizarre combination of the spiritual and the mechanistic."

Multitracking

Multitrack recording (MTR)—also known as multitracking, double tracking, or tracking—is a method of sound recording developed in 1955 that allows for the separate recording of multiple sound sources or of sound sources recorded at different times to create a cohesive whole. Multitracking became possible in the mid-1950s when the idea of simultaneously recording different audio channels to separate discrete "tracks" on the same reel-to-reel tape was developed. A "track" was simply a different channel recorded to its own discrete area on the tape whereby their relative sequence of recorded events would be preserved, and playback would be simultaneous or synchronized.

What is music

Music starts as sound, but something happens in the brain that transforms it. Repetition is a key factor. Humans can read the whole picture together: some animals can understand beat, while others can recognize pitch, but they never put the whole picture together. One uniquely human element is emotion; music is related to feelings.

What is musical form

Musical form is the structure of a musical composition. The term is regularly used in two senses: to denote a standard type, or genre, and to denote the procedures in a specific work. The nomenclature for the various musical formal types may be determined by the medium of performance, the technique of composition, or by function. To understand musical form, you need to understand patterns in sound. In most popular music, songs are divided into different sections.

Digital-to-analog converter (DAC)

No matter what the storage or creation method, digital samples must be converted back into analog voltage values to be amplified and reproduced as sound from a loudspeaker. The circuit required for this feat is the digital-to-analog converter, or DAC.
Sound buffer: almost all DACs have a buffer to store a large number of samples, to avoid the risk of running out of data during sound production.
Clock: each sample is "clocked" into the DAC's sample register at the sample rate via a clock-controlled gate. If samples are clocked into the DAC at the rate they were sampled, the original frequencies will be reproduced up to the Nyquist frequency. If the samples are clocked in at twice the rate, the frequencies will be doubled.
Switches: a '1' in a register place adds a voltage to the sum of that sample proportionate to its binary value by closing a switch (or another type of gate) and completing a circuit connection for its particular electronic value. For example, for a sample whose value is decimal 9 mV (binary 1001), the gates or switches for the binary places of '8' and '1' are closed, and the value of 9 mV is sent out of the DAC and held until the next sample is clocked into the register.
Summed voltage: if all the switches are closed by a sample of 1111, the output sums to 15 mV. If they are all open with a sample of 0000, we get 0 mV.
Smoothing filter: because the output of a DAC creates a stairstep wave instead of a smooth analog one, a smoothing (lowpass) filter tuned to the sampling rate reduces the discontinuity of those steps and the unwanted frequencies they can produce.

White noise

Noise containing many (ideally all) frequencies at equal intensity, varying randomly.

Frequency

The number of cycles per unit of time, measured in cycles per second (cps) or Hertz (Hz).

Timbre

Our perception of timbre, or tone quality, seems most closely related to the physical phenomena of unfolding partials in the spectrum of a sound, called the spectral envelope. It is what allows us to distinguish between two different instruments playing the same note at the same amplitude. What we expect of familiar sounds, say of a piano note, are certain characteristics that change over time.

Phase

Phase denotes a particular point in the cycle of a waveform, measured as an angle in degrees. It is normally not an audible characteristic of a single wave (but can be when we use very low-frequency waves as controls in synthesis). Phase is a very important factor in the interaction of one wave with another, either acoustically or electronically.
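
A quick demonstration of that interaction: two identical sine waves 180° out of phase cancel completely, while in-phase waves reinforce.

```python
import numpy as np

t = np.arange(1000) / 44100
a = np.sin(2 * np.pi * 440 * t)             # phase 0
b = np.sin(2 * np.pi * 440 * t + np.pi)     # 180 degrees out of phase

print(np.max(np.abs(a + b)))   # ~0.0: destructive interference (cancellation)
print(np.max(np.abs(a + a)))   # ~2.0: constructive interference (reinforcement)
```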

Richard Middletons understanding of repetitions

Richard Middleton (1990) argues that "while repetition is a feature of all music, of any sort, a high level of repetition may be a specific mark of 'the popular'" and that this allows an "enabling" of "an inclusive rather than exclusive audience" (Middleton 1990, p. 139). "There is no universal norm or convention" for the amount or type of repetition; "all music contains repetition - but in differing amounts and of an enormous variety of types." This is influenced by "the political economy of production; the 'psychic economy' of individuals; the musico-technological media of production and reproduction (oral, written, electric); and the weight of the syntactic conventions of music-historical traditions" (Middleton 1990, p. 268). In his chapter 'In the groove or blowing your mind? The pleasures of musical repetition', he argues that the most widely applicable aspect of popular music syntaxes is repetition, and that this in turn bears closely, in all its manifestations, on questions of like and dislike, boredom and excitement, tension and relaxation - in short, the dialectics of musical pleasure. Almost all popular songs, to a greater or lesser extent, fall under the power of repetition.

Sample Rate

Samples are taken at a regular time interval. The rate of sample measurement is called the sampling rate (or sampling frequency). The sampling rate determines the frequency response of the digitized sound. For CD quality, the sampling rate is 44,100 samples per second. We must sample at at least twice the highest frequency we wish to reproduce (the Nyquist theorem).

Bit Depth

Samples taken are then assigned numeric values that the computer or digital circuit can use, in a process called quantization. The number of available values is determined by the number of bits (0's and 1's) used for each sample, also called bit depth or bit resolution. Each additional bit doubles the number of values available (1-bit samples have 2 values, 2-bit samples have 4 values, etc.). The higher the bit depth, the more of a sound's dynamic range can be registered.

Wendy Carlos' contribution to the evolution of electronic music

She questioned why all the new synthesizer equipment wasn't being used for anything except academia-approved music. She ran into Bob Moog and was excited by his synthesizers, which weren't well known at the time. Moog made her a custom synthesizer, and with it she made the album Switched-On Bach, a synthesized version of Bach's music. This record switched everything: Wendy Carlos' Switched-On Bach is the reason the synthesizer is a household instrument today. The record won three Grammys. The Moog she used had 3 to 6 oscillators, white noise, envelopes, and the 4 basic waveforms. Gender has been a major point of comparison for her work. Wendy Carlos also scored A Clockwork Orange's soundtrack, which made use of the vocoder.

What is sound?

Sound is produced by a rapid variation in the average density or pressure of air molecules above and below the current atmospheric pressure. We perceive sound as these pressure fluctuations cause our eardrums to vibrate. When discussing sound, these usually minute changes in atmospheric pressure are referred to as sound pressure and the fluctuations in pressure as sound waves. Sound waves are produced by a vibrating body, be it an oboe reed, loudspeaker cone or jet engine. The vibrating sound source causes a disturbance to the surrounding air molecules, causing them to bounce off each other with a force proportional to the disturbance.

Subjective understanding of noise

Sound studies have found in noise a subject of deep fascination that cuts across disciplinary boundaries of history, anthropology, music, literature, media studies, philosophy, urban studies, and studies of science and technology. Noise is a crucial element of communicational and cultural networks, a hyperproductive quality of musical aesthetics, an excessive term of affective perception, and a key metaphor for the incommensurable paradoxes of modernity. "Wherever we are," John Cage famously claimed, "what we hear is mostly noise. When we ignore it, it disturbs us. When we listen to it, we find it fascinating" (1961: 3). We hear noise everywhere. But what do we listen to when we listen to noise? What kinds of noises does "noise" make? Noise is a material aspect of sound. It is discussed as a generalized property of sound (as "noisiness"); as a distinct sonic object within music, speech, or environmental sounds (as "a noise"); or as a totalizing qualifier for emergent styles (e.g., "that hip-hop stuff is all noise"). But its specific noise qualities are hard to define. The closest thing to a quantifiable form of noise is the abstraction of "white noise," in which all sound frequencies are present at the same time, at the same volume, across the vibrational spectrum (Kosko 2006). But in practice, noise is always "colored," filtered, limited, and changed by contexts of production and reception. Simple loudness is another factor: at the right decibel level, anything, regardless of its original source, can become noise. Noise, then, is not really a kind of sound but a metadiscourse of sound and its social interpretation. Noise is an essentially relational concept. It can only take on meaning by signifying something else, but it must remain incommensurably different from that thing that we do know and understand. Even in the fundamentally relativistic context of musical aesthetics, noise is defined by its mutual exclusion from the category of music. Yet noise is inherent in all musical sounds and their mediated reproductions; it has been used as musical material and can even be considered a musical genre in itself.

Analog-to-digital converter (ADC)

Sounds from the real world can be recorded and digitized using an analog-to-digital converter (ADC). The ADC samples the analog signal many times per second and stores the values in a table. Once sampled, these sounds can be manipulated.

Subtractive synthesis

Subtractive synthesis is a method of sound synthesis in which partials of an audio signal (often one rich in harmonics) are attenuated by a filter to alter the timbre of the sound. An oscillator is normally at the beginning of the audio chain, its waveform selected for its spectral characteristics. Filters farther down the audio chain then 'subtract' or modify frequencies in that spectrum, removing specific frequency components from the complex sound. Most voltage-controlled synthesizers are modelled around subtractive synthesis techniques, where the sound path begins with a waveform (or, these days, a sample) rich in partials; these frequencies are then partially removed (subtracted), or otherwise shaped, by a filter.
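
The chain in miniature (a sketch under my own parameter choices, not any particular synthesizer): start with a harmonically rich sawtooth, then lowpass-filter away the upper partials.

```python
import numpy as np
from scipy.signal import butter, lfilter

sr = 44100
t = np.arange(sr) / sr
saw = 2 * ((110 * t) % 1.0) - 1                # 110 Hz sawtooth: rich in harmonics

b, a = butter(4, 800, btype="lowpass", fs=sr)  # cutoff at 800 Hz (arbitrary)
darker = lfilter(b, a, saw)                    # same pitch, duller (subtracted) timbre
```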

MiniMoog D

The 1964 Moog Modular synthesizer set the gold standard for voltage-controlled synthesis in a single instrument design, and most software synthesizers (such as Native Instruments Absynth) and even synthesis programming languages still follow the basic architecture of that instrument. Genius designer/inventor Robert Moog's instrument was highly configurable via patching, or connecting modules via patch cords. It was not unusual to have dozens of patch cords routing audio and control signals to create a single sound. Subsequent lower-tier Moog models, such as the MiniMoog, hardwired some of the more common connections and were not as configurable. The Minimoog (or Mini-Moog) is a monophonic analog synthesizer, invented by Bill Hemsath and Robert Moog. It was released in 1970 by R.A. Moog Inc. (Moog Music after 1972), and production was stopped in 1981. It was re-designed by Robert Moog in 2002 and released as the Minimoog Voyager. In May 2016, Moog announced a limited-run "pilot production" reissue of the Model D, to be launched at Moogfest. It went into full production shortly afterwards, but Moog Music announced on June 27, 2017 that it was ending the production run of the Model D reissue. The Minimoog was designed in response to the use of synthesizers in rock and pop music. Large modular synthesizers were expensive, cumbersome, and delicate, and not ideal for live performance; the Minimoog was designed to include the most important parts of a modular synthesizer in a compact package, without the need for patch cords. It later surpassed this original purpose, however, and became a distinctive and popular instrument in its own right. It remains in demand today, over four decades after its introduction, for its intuitive design and powerful bass and lead sounds.

Analytical Engine

The Analytical Engine was a proposed mechanical general-purpose computer designed by English mathematician and computer pioneer Charles Babbage. It was first described in 1837 as the successor to Babbage's difference engine, a design for a mechanical computer. The Analytical Engine incorporated an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete. In other words, the logical structure of the Analytical Engine was essentially the same as that which has dominated computer design in the electronic era. Babbage was never able to complete construction of any of his machines due to conflicts with his chief engineer and inadequate funding. It was not until the late 1940s that the first general-purpose computers were actually built, more than a century after Babbage had proposed the pioneering Analytical Engine in 1837

Musique concrète

The French championed musique concrète, whose 'objet sonore' (sonic object) was to be derived from real-world acoustic events and their societal context. "Schaeffer came from the world of radio production and was considered a musical amateur by many of the composers he worked with. He expended an enormous amount of energy establishing a rigid set of criteria for musique concrète, which may be defined simply as "music made from recorded natural and man-made sounds," a familiar practice from today's perspective that requires little interpretation."

How the Kitchen sisters reconstruct reality in sound

The Kitchen Sisters go to a local bar and find a one-handed pool player named Ernest Morgan shooting pool and telling stories about his life. Nelson follows him with her microphone to record all his stories and all the sounds. However, they find a major problem when listening to their tape: the layers of sound that convey the overall vibrancy of the event (like background music or conversations) are impossible to edit without making the whole story sound disjointed. This is how they learned the craft. To reconstruct reality, they contact Ernest to record him again, this time controlling the sonic environment so they can isolate the clean sounds and layer them with recordings made on the first night. They take Ernest to a bar with fewer people and record all the details needed; then they go to a crowded bar at night and record all the background noises, including the same jukebox sounds that played previously. All these pieces are then layered to allow for a coherent and intriguing narrative. The result sounds as if it happened in real time, but it is a very dense layering and recreation of a scene.

How audio documentaries have influenced culture

The audio documentary is, perhaps, the most complex of the different audio forms discussed so far because it needs to tell an engaging story that often makes use of many of the approaches addressed previously in this chapter. When done well, this use of audio to put forth a creative treatment of actuality is the most rewarding for the researcher and the most engaging for an audience. Golding's "Voices From the Dust Bowl" is but one example of how archival recordings can be recontextualized in the present to tell a significant and appealing story that would not be possible, or as pleasurable, if one were to listen to each recording on its own. In this way, the historical audio documentary can help to rejuvenate an interest in old recordings by giving them a sense of historical significance. Whether audio recordings are created primarily for the purpose of making an original audio work, or whether one uses tapes found in an archive or attic, the function of the audio documentary is to tell a compelling story that relies on sound and voice. The process of doing so is a creative challenge that requires countless hours of editing and crafting. Through audio documentaries, culture is salvaged and preserved.

Mean Free Path

The average distance air molecules travel before striking one another. This distance depends on air pressure and temperature and therefore correlates with the speed of sound: the farther a molecule has to travel before striking another, the slower the speed at which sound propagates. The mean free path of air at sea level and room temperature is estimated to be between 34 and 65 nanometers.
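As a hedged illustration, the standard kinetic-theory estimate (a physics formula, not something given in this entry) is

\lambda = \frac{k_B T}{\sqrt{2}\,\pi d^2 p}

where k_B is Boltzmann's constant, T the temperature, p the pressure, and d an effective molecular diameter. Taking T = 293 K, p = 101{,}325 Pa, and d \approx 0.37 nm gives \lambda \approx 66 nm, close to the top of the quoted range; different assumed molecular diameters move the estimate across that range.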

Cassette culture

The boombox and the cassette tape were instrumental in the evolution of hip hop. The music spread around the city on cassette tapes played on boomboxes; the cassettes created culture. But this new cassette culture also raised issues of noise and politics, and it dealt with issues of racism. Cassette culture was changing musical spheres around the world. This can largely be seen in the developing world, where cassettes largely replaced vinyl records. Cassettes have served to decentralize and democratize the production and consumption of media. In India, the cassette transformed India's popular music industry from the monopoly of one LP manufacturer into a free-for-all among hundreds of local cassette producers. The result was a revolution in the quantity, quality, and variety of Indian music and in its patterns of dissemination and consumption. But cassettes were also used to perpetuate commercial vulgarity, such as music driven by misogyny. Hence cassette culture was a technological revolution and a vehicle for understanding global culture. Cassette culture is also the story of Ata Kak: a white collector capitalized on his music, but if not for him, no one would know about Ata Kak. Hence cassette culture is complicated.

Who is Ada Lovelace?

The daughter of famed poet Lord Byron, Augusta Ada Byron, Countess of Lovelace, better known as "Ada Lovelace," was born in London on December 10, 1815. Ada showed her gift for mathematics at an early age. She translated an article on an invention by Charles Babbage and added her own extensive notes. Because she introduced many computer concepts, Ada is considered the first computer programmer: she published the first algorithm intended to be carried out by the Analytical Engine. She was also the first to recognize the full potential of the computing machine, speculating that it might one day compose elaborate and scientific pieces of music.

Ethical questions raised by Orson Welles's "The War of the Worlds" radio broadcast.

The theatre maker Orson Welles shocked radio audiences with his rendition of The War of the Worlds on October 30, 1938. About a million US citizens believed the earth was being attacked, and after the truth emerged, individuals felt deceived. Many people wrote to the FCC to express fear about what the incident revealed about the power of media, and specifically of radio broadcast sound.

Vocoder

The vocoder is a device that takes in human speech, disassembles the speech signal, and turns it into a series of digital signals. The point of the vocoder is that it produces a signal with fewer bits (a slower-speed signal), which can therefore be transmitted over longer distances. The vocoder was created at Bell Labs by Homer Dudley. Cozmo D: "I got into synthesized music because it was what I could afford." The vocoder came out of military technology: the speech signal would be sampled and transferred over radio telephone, and the vocoder would reconstruct it into human speech at the other end. The system used a randomized code key stored on turntables, which played recordings of thermal noise that would be subtracted at the receiving end.
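The analysis-and-resynthesis idea can be sketched as a minimal channel vocoder, a hypothetical example rather than Dudley's original circuit: the speech (modulator) is split into frequency bands, each band's amplitude envelope is measured, and those envelopes are imposed on the same bands of a synthetic carrier. Assumes NumPy and SciPy; all names are illustrative:

import numpy as np
from scipy.signal import butter, sosfilt

SR = 16000  # sample rate in Hz

def band(signal, lo, hi):
    """Band-pass one channel with a 4th-order Butterworth filter."""
    sos = butter(4, [lo, hi], btype="band", fs=SR, output="sos")
    return sosfilt(sos, signal)

def envelope(signal, cutoff=30.0):
    """Track a band's amplitude envelope by rectifying, then smoothing."""
    sos = butter(2, cutoff, btype="low", fs=SR, output="sos")
    return np.maximum(sosfilt(sos, np.abs(signal)), 0.0)

def vocode(modulator, carrier, edges):
    """Sum carrier bands, each scaled by the modulator's band envelope."""
    out = np.zeros_like(carrier)
    for lo, hi in zip(edges[:-1], edges[1:]):
        out += band(carrier, lo, hi) * envelope(band(modulator, lo, hi))
    return out / (np.max(np.abs(out)) + 1e-12)  # normalize

# Example: impose a speech envelope onto a sawtooth carrier with 8
# log-spaced bands; noise stands in here for an actual speech recording.
t = np.arange(SR) / SR
speech = np.random.randn(SR)            # placeholder for recorded speech
saw = 2 * ((110 * t) % 1.0) - 1.0       # 110 Hz sawtooth carrier
robot = vocode(speech, saw, np.geomspace(100, 6000, 9))

Transmitting only the slowly varying band envelopes, rather than the full waveform, is what makes the encoded signal so much smaller than the original speech.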

Main themes of Pink Noises by Tara Rodgers

This book is a collection of twenty-four interviews with women who are DJs, electronic musicians, and sound artists. The interviews investigate the artists' personal histories, their creative methods, and how issues of gender inform their work. As Rodgers writes, the project emerged out of technical interests, social connections, and political affinities in her trajectory as a musician and scholar. The interviews are organized thematically, because juxtapositions of genre and generation can reveal how sound and audio technologies connect otherwise divergent experiences. Sounds are points of departure to realms of personal history, cultural memory, and political struggle. The collection contributes the sounds and stories of some women to historical accounts which have thus far left them out. Yet its relationship to electronic music historiography is not to advocate an unattainable completeness in historical accounts, but to be concerned with how histories are contained and contested in movements of sound in the present. It is thus necessary to lay out a broad critique of gender issues across the multiple histories that electronic music inherits, including affiliations with militarism in the evolution of audio technologies, a logic of reproduction that operates in audio discourses and practices, and the politics of electronics manufacturing in a music culture that privileges planned obsolescence. Together these factors have informed electronic music histories by delimiting who and what counts in such matters as invention, production, and making noise.

Poème Électronique

Varèse's 1958 Poème électronique combined both synthetic and real-world sound sources, linking sound synthesis with musique concrète. The catch-all term electroacoustic (or electro-acoustic), whose true meaning is still debated today, seemed to cover all music whose origin, reproduction, and diffusion was not strictly acoustic from beginning to end. "If a turning point in the art of electronic music can be singled out, it began with the somber tolling of a cathedral bell during the opening moments of Poème électronique by Edgard Varèse (1883-1965). The work was composed using three synchronized tracks of magnetic tape and premiered on May 5, 1958 in the Philips Pavilion of the World's Fair in Brussels. Poème électronique was a short work, lasting only 8′ 8″. The music combined the familiar with the unfamiliar in an appealing way, and it did so without any formal structure or rhythm. It was a carefully constructed montage of sounds, including bells, machines, human voices, sirens, percussion instruments, and electronic tones, that were processed electronically and edited together moment by moment for dramatic effect. Poème électronique was a 'shock and awe' assault on musical culture." "What made Poème électronique a turning point was that it brought one era of electronic music to a close and opened another. Until this piece by Varèse, electronic music was largely produced and performed within the confines of institutions and academia. By contrast, Poème électronique was created expressly for public consumption and was heard by 500 people at a time, many times a day, in a pavilion designed especially for its performance."

Compression

When air molecules are pushed closer together

Additive synthesis

With the advent of computers and digitized audio data, it has become routine to perform Fourier analysis on existing sound, breaking it down into its component frequencies along with their specific amplitudes. It has also become commonplace to "reverse engineer" this process and create complex timbres by carefully mixing sine waves, a process called additive synthesis. Additive synthesis relies on many oscillators, each normally producing a sine wave (which contains only its fundamental frequency), with their outputs summed to build up a timbre from scratch. It is the antithesis of subtractive synthesis: sound is produced by summing numerous sine waves into composite timbres, with no filtering.
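A minimal sketch of the technique, assuming NumPy (the frequency, duration, and 1/n amplitude roll-off are illustrative choices, not from the entry): a sawtooth-like timbre built by summing harmonically related sine waves.

import numpy as np

SR = 44100           # sample rate in Hz
F0 = 220.0           # fundamental frequency in Hz
DURATION = 1.0       # seconds

t = np.arange(int(SR * DURATION)) / SR
tone = np.zeros_like(t)

# Each "oscillator" is one sine wave at an integer multiple of F0,
# with amplitude falling off as 1/n.
for n in range(1, 20):
    if n * F0 < SR / 2:                    # stay below the Nyquist limit
        tone += np.sin(2 * np.pi * n * F0 * t) / n

tone /= np.max(np.abs(tone))               # normalize to [-1, 1]

Changing the per-harmonic amplitudes (and, in richer implementations, giving each harmonic its own envelope) is how additive synthesis sculpts timbre without any filtering.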

Amplitude

Amplitude in sound refers to the degree of change in atmospheric pressure above or below equilibrium. The major contributing factor is the maximum speed of the individual molecules involved in the wave: greater maximum molecular velocities mean greater amplitudes of the sound wave. Amplitude is the objective measurement of the degree of change (positive or negative) in atmospheric pressure (the compression and rarefaction of air molecules) caused by sound waves. Sounds with greater amplitude produce greater swings in atmospheric pressure, from high pressure to low pressure to the ambient pressure present before the sound was produced (equilibrium). Amplitude is a measurement of the magnitude of displacement (or maximum disturbance) of a medium from its resting state. Amplitude, power, and intensity are correlative components of sound over time, area, and distance.
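As a hedged aside, standard acoustics convention (not stated in this entry) reports pressure amplitude on the decibel scale relative to the 20 \mu Pa reference:

L_p = 20 \log_{10}\!\left(\frac{p_{\mathrm{rms}}}{p_0}\right)\ \text{dB}, \qquad p_0 = 20\ \mu\text{Pa}

For example, an RMS pressure deviation of 0.02 Pa corresponds to 20 \log_{10}(0.02 / 0.00002) = 60 dB.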

RCA Mark II

The first major fully integrated synthesizer of great significance is usually credited as the Columbia-Princeton RCA Mark II Electronic Music Synthesizer (affectionately called Victor), completed in 1955. Costing $250,000 from a Rockefeller grant (a fortune in today's dollars), with almost 2,000 vacuum tubes and a length of 20 feet, the RCA used a player-piano-type punched paper roll to control the function of the various fixed modules: pitch, timbre, envelope, filter, etc. The keyboard punched holes in a pianola-type paper roll to determine pitch, timbre, volume, and envelope for each note. Despite the apparent crudeness of this input device, the paper roll technique allowed for complex compositions. The paper roll had four columns of holes for each parameter, giving a range of sixteen settings for each aspect of the sound. The paper roll moved at 10 cm/sec, making a maximum of 240 notes per minute. Longer notes were composed of individual holes, with a mechanism that sustained the note through to the last hole. Attack times were variable from 1 ms to 2 sec, and decay times from 4 ms to 19 sec. On the Mark II, high- and low-pass filtering was added, along with noise, glissando, vibrato, and resonance, giving a cumulative total of millions of possible settings.
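Reading those figures as a worked check (an inference from the entry's numbers, not from RCA documentation): four binary hole columns per parameter give

2^4 = 16 \ \text{settings per parameter},

and a roll moving at 10 cm/sec that tops out at 240 notes per minute, i.e. 4 notes per second, implies a minimum spacing of 10 / 4 = 2.5 cm between note rows.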

Sound synthesis

The production of sound that originates electronically from scratch, by either analog or digital circuitry (hence "synthetically"), as opposed to sound whose origins derive from recorded or sampled real-world sonic events.

