COMM 130 Quiz #2


Sound Variations

Quantity of Sound
  o Signal-to-Noise Ratio
Quality of Sound
  o Reverberation
  o Presence
Direction of Sound
  o Moving Sound
Recording Clean Sound
  o Recording Formats
    ♣ Analog Formats
    ♣ Digital Formats
    ♣ Digital Audio Recorders

QUANTITY OF SOUND
• Quantity of sound refers to its amplitude, which the ear perceives as loudness or volume.
• A sound wave's amplitude constantly adjusts to the strength of the vibrations being produced.
• To be heard, sound waves must be strong enough to be registered by the ear.
• To be recorded, the amplitude of the wave must be strong enough to vibrate the transducers in the microphone, creating the electronic signal or voltage.
• The amplitude of recorded sound can be measured by different types of meters, the most common being *VU meters* and *LED meters*.
• A volume unit (VU) meter measures and displays the amount of voltage passing through it.
• This voltage corresponds to the strength of the sound wave being captured by the microphone.
• VU is a measurement of signal strength.
• Most VU meters measure amplitude in two ways, in volume units (from -20 to +3 VU) and as a percentage of modulation. The values are displayed on a mechanical meter with a needle.
• Zero VU is equivalent to 100% and represents the strongest sound desirable.
• Light-emitting diode (LED) meters have a row or column of lights that illuminate in proportion to the strength of the signal.
• The lights change color (usually from green to red) when the signal becomes too strong.
• When *monitoring* a captured sound, the needle will move or the lights will illuminate in response to the varying loudness of the sound.
• A properly recorded analog audio signal should *peak* at 0 dB or 100% (the highest point).
• To be heard, a signal needs to be above 0%, at approximately 20% of modulation.
• If the signal exceeds 100% (or 0 dB), you risk the possibility of *over-modulation* and subsequent distortion.
• Digital recording methods are much more sensitive to overmodulation than analog ones, so maximum levels in digital recording are generally kept to -10 dB to -12 dB.
• An audio recorder or a mixing console inputs multiple audio signals, outputs a single, mixed signal, and allows you to control the levels of the incoming signals to ensure high-quality recording.
• You can preset the levels before recording or mixing, and if the levels change during recording, you can *ride the gain* or adjust them as you go.

SIGNAL-TO-NOISE RATIO
• The *signal-to-noise* ratio is the comparison between the sound you want and the noise you don't want (see the sketch below).
• The term, used strictly, applies to the difference between the voltage of the audio signal and any electrical interference with that signal.
• A high signal-to-noise ratio indicates clean sound.
• The signal-to-noise concept, though, is also useful in comparing different sound sources within your signal.
• A meter reading tells you what your overall volume level is, but it doesn't tell you if you're capturing the particular sound that you want.
• You might want to hear the voice of an interview subject on the street with the sounds of the city low in the background, but you might be getting loud traffic sounds with a barely perceptible voice.
• Maintaining a good signal-to-noise ratio depends on proper microphone choice and placement.
• A lavalier mic used a few inches from the subject, a handheld cardioid held close to the subject's mouth, or a very accurately aimed shotgun a foot or so away from the subject could all give you a clearly recorded voice.
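To tie the meter readings and the signal-to-noise idea to numbers, here is a minimal Python sketch. It is not from the quiz text: it uses the standard 20 * log10 amplitude-ratio definition of the decibel, and the voice, traffic, and headroom values are hypothetical.

```python
import math

def db_from_amplitude(amplitude, reference=1.0):
    """Convert an amplitude ratio to decibels (20 * log10 of the ratio)."""
    return 20 * math.log10(amplitude / reference)

def signal_to_noise_db(signal_amplitude, noise_amplitude):
    """Signal-to-noise ratio in dB: higher numbers mean cleaner sound."""
    return db_from_amplitude(signal_amplitude / noise_amplitude)

# Hypothetical levels: a voice ten times stronger than the background traffic.
voice, traffic = 0.5, 0.05
print(f"S/N ratio: {signal_to_noise_db(voice, traffic):.1f} dB")   # 20.0 dB

# Digital headroom: peaks kept at -12 dB below full scale instead of 0 dB.
full_scale = 1.0
peak = full_scale * 10 ** (-12 / 20)
print(f"-12 dB peak amplitude: {peak:.3f}")                        # ~0.251
```

The second calculation shows why the text recommends keeping digital peaks at -10 to -12 dB: it leaves roughly three quarters of the scale as safety margin against over-modulation.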
• Generally, you want to record your primary audio sources as a strong foreground sound with your ambient sound well in the background.
• More *ambience*, or the background sound of the location, can always be added in postproduction.
• To determine that you're actually getting the sound you want, you need to listen to, or monitor, the sound.
• A signal can be split and sent to multiple locations, such as a recording or mixing device and a headset jack or speaker, so you can hear what's being captured.
• Monitors also have volume level controls that you can adjust for comfort, but changing the level of the monitor does not affect your recording or mixing volume. That needs to be adjusted separately according to the VU meter.

QUALITY OF SOUND
• When we speak of a sound wave of a particular amplitude and frequency, we mean a single wave at a particular volume and pitch, but sound is really more complex.
• A piano playing a musical note sounds very different from another musical instrument playing that same note or a voice singing that note.
• The *timbre* or tone quality of each sound is different.
• Each sound is actually made up of a group of frequencies or *harmonics* that distinguish one sound from another.
• The shape of the sound wave also makes it distinctive. This is called the *sound envelope*.
• How quickly does a sound wave peak at its maximum amplitude (attack)? How quickly does it back off from that peak (initial decay)? How long does the sound last (sustain) before it drops out of hearing (decay)?
• Differences in sound envelope contribute to varying sound quality.

REVERBERATION
• Sound waves (like all energy waves) can be absorbed or reflected by different surfaces.
• The walls of a room, for example, will absorb some sound and reflect some.
• A hard surface like tile will reflect more sound waves, whereas fabric drapes will absorb more waves.
• The multiple sound reflections off all the surrounding surfaces create the effect of *reverberation* (or reverb), which is somewhat like an echo but not as distinct.
• A dead space has little to no reverb. A live room has a lot.
• Reverb can be added to a recorded sound signal electronically to change the sound quality (see the sketch below).
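The note above that reverb can be added electronically can be illustrated with the simplest possible approach: mixing in delayed, decaying copies of the signal. This is a minimal sketch, not a production reverb algorithm; the delay, decay, and repeat values are hypothetical, and real digital reverbs are far more elaborate.

```python
def add_reverb(samples, sample_rate=44100, delay_ms=80, decay=0.4, repeats=4):
    """Crudely simulate reverberation by mixing in progressively quieter,
    progressively later copies of the signal (a stack of simple echoes)."""
    delay_samples = int(sample_rate * delay_ms / 1000)
    out = list(samples) + [0.0] * delay_samples * repeats
    for n in range(1, repeats + 1):
        gain = decay ** n                      # each reflection is weaker
        offset = delay_samples * n             # ...and arrives later
        for i, s in enumerate(samples):
            out[i + offset] += s * gain
    return out

dry = [1.0] + [0.0] * 10                        # a single impulse ("clap")
wet = add_reverb(dry, sample_rate=100, delay_ms=30, repeats=3)
print([round(x, 2) for x in wet])               # the clap followed by fading echoes
```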
PRESENCE
• The term *sound presence* describes the subjective closeness the listener feels to the sound.
• A recorded voice with a great deal of presence sounds close to your ear with little background noise.
• It has a sense of intimacy.
• A recorded voice with little presence sounds distant, with a good deal of air and background sound mixed in.
• Presence has little to do with volume but can be manipulated by choice of microphone and placement of that mic in relation to the source of the sound.
• A lavaliere clipped to the talent's chest or a handheld mic with excellent frequency range held close to the talent's mouth will deliver great sound presence.
• A shotgun at a distance of 5 feet will have less presence, even if properly aimed with a strong level.
• You usually want to match sound presence with shot size, if shooting visuals.
• A close-up matches strong presence. A long shot should sound farther away.

DIRECTION OF SOUND
• Our ears hear sound from all around us. We have *binaural hearing*.
• The *sound perspective*, or perception of placement of the sound in space, comes from many clues.
• We can tell which side of us a sound is coming from - it reaches one ear before the other and is louder in that ear.
• We can tell how close a sound is by its volume and its presence.
• Sound waves bouncing off the walls reflect back to our ears, giving us a sense of placement and of the size of the room.
• Recorded sound is different. A single microphone records one input of sound. In one-channel, or monaural, playback, all the sound comes to our ears from one direction.
• *Stereo* (short for stereophonic) recording - done by using two microphones, each recording the sound from slightly different locations (somewhat like your two ears) - can then be played back through a right and a left speaker.
• Although it provides an effect that has more perspective than monaural playback, stereo still does not provide the 360-degree effect of the human ear.
• *Surround sound* comes closer, using four or more speakers placed around the listening area.
• Different tracks of sound from a multitrack mix can be output to different speakers, creating a specific sound perspective.
• Sound can be panned (or moved) from one speaker to another (see the panning sketch below).
• An effect even closer to human hearing is possible with special binaural microphones and headset playback.
• This involves using a set of small, omnidirectional microphones that are placed in the outer ears of the recordist or a dummy head.
• When played back through headphones, a realistic sound perspective is heard.

MOVING SOUND
• Sound can move through the space of the frame as well.
• By shifting the presence (the perception of proximity) of the audio, the sound seems to be moving closer to or farther from the listener.
• An increase in presence of the foreground sound is usually accompanied by a decrease in volume of the background audio.
• With visual media, sound movement is usually matched by visual movement.
• If a shot is tighter (and therefore the viewer feels closer to the subject), the presence should increase as well.
• Matching sound presence to shot size and establishing a sense of where the sound is coming from creates a realistic sound perspective.
• Sound perspective is achieved naturally by the use of a boom microphone.
• As the shot gets wider, the microphone needs to be farther from the subject to avoid being seen in the frame.
• A tighter shot allows a closer microphone, resulting in more presence.
• This holds true whether the change in shot size is the result of subject movement, camera movement, or editing.
• Stereo, as mentioned, is a method of recording, mixing, and playing back the audio on two separate channels or tracks.
• Stereo speakers are used to the right and left of the listener (and also the screen, if used with images).
• By mixing the audio so it pans from one channel to the other, the sound moves from one side of the listener to the other, creating movement within the audio frame that can correspond with movement in the visual frame.
• To record for stereo, at least two microphones, facing different directions, must be placed far enough away from each other to avoid causing interference.
• Surround sound takes this idea further.
• Although different systems are in use, surround sound employs multiple speakers - front, side, and back - to place the listener within a three-dimensional sound environment.
• Sound can travel in all directions, once again corresponding with the visual movement.
• Movie theaters and high-definition television (HDTV) are increasingly taking advantage of this technology.
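Panning, as described above, moves a sound between the left and right channels of a stereo mix. Here is a minimal sketch of one common approach, constant-power panning; the gain law is standard practice, but the function name and the position values are made up for illustration.

```python
import math

def pan(sample, position):
    """Split a mono sample into left/right gains using a constant-power pan.
    position: -1.0 = hard left, 0.0 = center, +1.0 = hard right."""
    angle = (position + 1) * math.pi / 4       # map [-1, 1] onto [0, pi/2]
    left = sample * math.cos(angle)
    right = sample * math.sin(angle)
    return left, right

# Sweep a sound from the left speaker to the right over five steps.
for pos in (-1.0, -0.5, 0.0, 0.5, 1.0):
    l, r = pan(1.0, pos)
    print(f"pan {pos:+.1f}: L={l:.2f}  R={r:.2f}")
```

The cosine/sine pairing keeps the combined acoustic power roughly constant as the sound moves, so the pan does not seem to get louder or quieter in the middle.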
RECORDING CLEAN SOUND
• There are many tricks to recording good sound.
• Generally, you want to record the cleanest sound possible (high signal level, low noise level), leaving any special effects, such as reverb or additional ambient sound, until postproduction.
• Sound recording should always be monitored by listening through good-quality headphones.
• A test recording should be done and played back to catch any interference caused by bad electronics.
• Interference can also occur when a microphone cable runs parallel to an electrical cord. (Run them perpendicular to each other.)
• Local radio signals can sometimes be picked up in your recording when ungrounded connections or wiring act as an antenna. (Change the mic cables, or unplug from the wall AC circuit and run off batteries.)
• Recording studios are constructed to minimize unwanted noise and reverb.
• Anywhere else you record audio, you'll be faced with the challenge of controlling ambient sound.
• Sometimes, ambient sound is desirable.
• Every space has its own unique ambience or *room tone*.
• The room tone helps to establish a sense of place in your audio recording.
• If your audio is going to be edited, you should always record some extra room tone with no one talking or moving to fill in between pieces of audio.
• However, ambience can overpower your audio and needs to be managed.
• Hums from air conditioners, refrigerators, or fluorescent lights can create unpleasant background noise.
• They should be turned off, if possible, or the noise at least should be reduced by pointing the microphone away from the source of the hum and as close to the desired source as you can.
• Clean sound recording often requires patience.
• If you are recording in a situation you can control, such as a scripted scene or narration, you can wait for the airplane to fly over or the truck to drive away.
• An uncontrolled situation requires careful monitoring, careful mic placement, and the realization that your audio might be unusable during the sound of the siren.
• The handling, placement, and quality of your microphone are key to clean recording.
• Better mics have low *impedance* (as opposed to high impedance) - that is, they have less resistance to the signal and therefore create less noise and interference.
• Better mics also tend to have a greater frequency response and are sensitive to a greater range of low to high frequencies.
• The desired proximity of the mic to the source of sound depends on the design of the mic.
• Some microphones, such as handhelds, can be a couple of inches from a speaker and get a clean signal that is free of pops and sibilance ("s" sounds).
• A lavaliere is best used approximately 6 inches from the speaker.
• A shotgun is designed to be used at a distance of at least a foot, which is handy when you can't get close to the sound source or want to create the effect of a faraway voice with minimal presence.
• If you are handholding the microphone or using it on a boom or fishpole, careful handling is important to avoid interference.

RECORDING FORMATS
• Sound can be recorded onto many different devices.

ANALOG FORMATS
• Although digital audio formats dominate the field today, analog equipment is still used and even preferred by some who find digital recording "too clean."
• The first audiotape was open reel, which is still in use today, in both analog and digital formats.
• A supply reel holds the raw tape. A takeup reel spools up the tape after it is recorded.
• The tape travels through an erase head, a record head, and a playback head.
• The erase head makes sure the magnetic particles are neutrally aligned, the record head aligns the particles with the incoming signal, and the playback head allows the signal to be monitored during recording and after playback.
• Reel-to-reel tape can be 1/4 inch, 1/2 inch, 1 inch, or 2 inches wide.
• The 1/4-inch size is more portable and is used for field recording.
• The wider tape formats are used in recording studios because they allow room for more tracks of audio.
• Varying the tape speed during the recording process affects the quality of sound. A faster speed allows more information to be recorded, and subsequently better frequency response and a higher signal-to-noise ratio, but obviously allows less recording time on a tape.
• Analog audiotape has been put into different housings as well.
• The audio cartridge is a self-contained device that loops tape through and can be easily inserted into and removed from a playback machine.
• The consumer version (eight-track tape) has been obsolete for many years.
• Audio cartridges, or carts, of short duration are still sometimes used for promos, commercial spots, and sound effects in radio and TV stations because of their ability to recue themselves, but they have largely been replaced by digital files, which can be easily cued.
• Analog audiocassettes house 0.15-inch tape in a convenient form and have had wide use over the years in both professional and consumer formats.
• However, they too have become rare. Their sound quality is quite low compared to the other options.

DIGITAL FORMATS
• Many digital audio formats are in use today. Each seems to have its own advantages and therefore a niche in the market.
• The sampling rate and bit depth of a digital recording determine the recorded sound quality.
• The common sampling rates for audio are 32 (digital broadcast standard), 44.1, 48, and 96 kHz.
• Bit depths are commonly 16 or 24 bits, with 24 bits being better.
• Digital audiotape, like digital videotape, is somewhat of a hybrid.
• The audio signal is sampled and stored as bits, thus preventing digital audiotape from experiencing the noise and generational loss inherent in analog processing.
• Tape, though, is a linear format, so digital tape does not have the random access capability associated with digital technology.
• You still need to play or fast-forward the tape from the beginning to get to the end.
• Digital audiotape is available in a variety of open-reel formats.
• Some have stationary heads, like analog recorders, that require fast tape speeds.
• Some have rotary heads, which spin and increase the ability of the recording head to encode information on the tape. This is called helical scanning and is used by videotape recorders as well.
• Consequently, tape speed can be much slower, allowing more recording on a tape.
• Digital audiotape (DAT) was widely used for quality sound recording in both the field and the studio but has been largely replaced by tapeless formats.
• Disc-based formats have the advantage over tape of random access.
• Also, the recorded information is a bit more stable and less susceptible to wear and damage.
• Audio compact discs (CDs) were introduced in the 1980s and became the dominant audio distribution form in the 1990s, replacing phonograph records, eight-track tapes, and analog cassettes.
• CDs are losing favor to digital music players such as iPods.
• Regular CDs allow playback only.
• Recordable CDs (CD-Rs) allow recording, but only once. You cannot rerecord.
• Rewritable CDs (CD-RWs) allow CD use for mixing and editing because you can go in and make changes by rerecording over a section of the disc.
• However, the different CD formats are not completely compatible.
• CD-RWs must be played in their own drives or a special CD-ROM drive, not on a standard CD player.
• Digital versatile discs (DVDs) are sometimes used in audio production, in addition to video, because of their much expanded storage capacity.
• Mini discs (MDs) are similar to CD-RWs - digital quality and random access - but use a magnetic recording system, somewhat similar to tape.
• MDs were introduced as a consumer format and have two different disc types.
• One is for prerecorded audio, and one is for recording.
• High quality and portability made MDs a popular professional format, but one that is losing ground to digital audio recorders.

DIGITAL AUDIO RECORDERS
• Digital audio recorders that record straight to computer memory are quickly replacing other digital audio devices.
• The recording can be to a hard drive or to removable *flash memory* (a form of solid-state storage such as a Secure Digital [SD] card or a CompactFlash card).
• The capacity of flash memory continues to grow while the prices come down, but at this writing, a hard drive gives you more storage for less money.
• Flash memory has no moving parts and is, therefore, less susceptible to damage from movement.
• This generally makes flash memory a better choice for portable devices.
• It is also faster and more easily erased and reformatted.
• Both types allow downloading to a PC.
• Some digital devices are designed to be handheld and have built-in microphones.
• High-quality units designed for film and video production record timecode to allow accurate synchronization of sound and image.

Sound/Image Combinations

The relationship between sound and image can exist in several ways.
• *Synchronous sound* is sound that matches the action and is recorded at the same time as the picture.
  o The character's lips are moving while we hear her words. The door shuts, and we hear the slam.
  o Synchronous sound is recorded as the image is acquired; it is the sound of what we're seeing.
  o Recording clean synchronous sound can be a challenge on location because unwanted noise or too much ambient sound may be present.
  o However, when the sound and image match on screen as they do in reality, it contributes to a sense of authenticity.
  o ADR would be an example of postsynchronized sound.
• *Nonsynchronous sound* is any sound that was not recorded with the image and does not match the action.
  o Nonsynchronous sound can be wild sound, which is audio recorded in the field but not simultaneously with the picture.
  o It can also be music, narration, dialogue happening off-screen, sound effects, or any combination.
  o Nonsynch sound can be descriptive; for instance, it may accompany a shot of a couple silently drinking coffee while we hear conversations from unseen patrons and the clattering of dishes back in the kitchen.
  o The sound/image relationship is realistic, but the sound was likely recorded in a different time and place.
  o A less realistic use of nonsynchronous sound would be created by the same shot with a soundtrack of the couple arguing the night before with a crying child in the background.
• Related to but somewhat different is the distinction between *diegetic* and *nondiegetic* sound.
  o Diegetic sound is part of the story - sound that characters in the story can hear.
  o Diegetic sound can be synchronous or nonsynchronous off-screen sound.
  o The term is often used to describe the relationship between the music and the story.
  o For example, music coming from a car radio on the screen or played by a piano player in the corner of the bar where the scene takes place is diegetic music.
  o Music that exists only on the soundtrack and is not heard by the characters on screen is nondiegetic.
  o Voiceover narration is nondiegetic also - the audience hears it, but the characters do not.
  o For example, we see a young boy riding his bike through a 1980s suburban neighborhood, but we hear the character as an adult talking about the lost innocence of youth.
• The relationship between sound and image can shift from scene to scene.
  o We hear the voice of a schoolteacher talking about the Revolutionary War while we see exterior shots of a one-room schoolhouse.
  o We then cut to inside the school and see the teacher addressing her students.
  o The narration is diegetic because it is part of the world of the story, but it started as off-screen nonsynchronous sound and became synchronous.
  o In another example, music might at first seem to be a nondiegetic score that does not exist in the world of the characters. A young woman sits on the beach as we hear a melancholy tune. The camera then zooms out to reveal her car parked next to her with the radio playing. The song ends, and the disc jockey speaks. She can hear the music. It is diegetic after all.
• The ability to layer sound and image, shifting the dynamic between them, is an integral part of media art.
  o Sound and image can match, sound can contradict image, or the juxtaposition of sound and image can create meaning neither could on its own.

sampled

In the digital recording process, the wave is measured or *sampled*, and the information is converted to mathematical units or *quantized*. • The information is stored as a series of ones and zeroes that describe the characteristics of a wave. • The more often the wave is sampled (its *sampling rate*), the more fully the wave is reproduced.
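A minimal sketch of the sampling and quantizing steps described above, assuming a pure sine-wave input; the tone, rate, and bit depth are just example values (a 1 kHz tone at 44.1 kHz, 16-bit, which happens to match CD audio).

```python
import math

def sample_and_quantize(freq_hz, sampling_rate, bit_depth, duration_s=0.001):
    """Sample a sine wave at the given rate, then quantize each sample
    to the number of levels the bit depth allows."""
    levels = 2 ** bit_depth
    samples = []
    for n in range(int(sampling_rate * duration_s)):
        t = n / sampling_rate
        value = math.sin(2 * math.pi * freq_hz * t)         # continuous wave, -1..1
        step = round((value + 1) / 2 * (levels - 1))         # nearest quantized level
        samples.append(step)
    return samples

# A 1 kHz tone sampled at 44.1 kHz with 16-bit depth.
print(sample_and_quantize(1000, 44100, 16)[:5])
```

Raising the sampling rate captures the wave's shape more often; raising the bit depth gives each measurement more possible levels, so both improve how fully the wave is reproduced.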

trough

Lowest point of a wave. With a sound wave, the ______ represents the maximum release of air molecules.

binaural hearing

Ability of human ears to perceive differences in location, direction, and presence of sound.

unidirectional (microphone)

Classification of a microphone pickup pattern that includes a limited area in front of, to the sides of, and behind the microphone.

slate

Device used to synchronize the simultaneous shooting of film and recording of sound. Can be manual or digital.

timbre

Distinctive tonal quality of a sound.

Sound Recording for Different Media

Film Sound
Video Sound
Audio- and Computer-Based Media

FILM SOUND
• Film and film sound are usually captured at the same time (synchronous sound) but use separate pieces of equipment to do the capturing.
• Shooting film is a mechanical and chemical process that takes place in the camera, whereas audio recording is electronic.
• In order for the sound and image to be precisely synchronized, the audio recorder must work at the exact same speed as the camera.
• Even slight differences can result in out-of-synch sound, such as the sound of a voice or a door slam that the audience hears just before or just after the image.
• When portable 16-mm filming and synch sound capabilities were first developed in the 1960s, camera/audio recorder rigs were connected by a cable to make sure the motor of the camera ran at the same speed as that of the recorder.
• Later, oscillating crystals in the camera and recorder regulated the speed of the recorder to match that of the camera and made the cables unnecessary (crystal synch).
• Now that digital audio recording and transfer of film to video for digital nonlinear editing are the norm for film production and editing, timecode synchronization is most common.
• Timecode is an electronic signal recorded onto tape that identifies each frame of the recording, or the audio equivalent.
• Whether synchronized by crystal synch or timecode, there needs to be a way to match up the shot of film with the piece of synchronous audio that goes with it. This is done by slating.
• A *slate*, or clapper, is the recognizable rectangular object with the hinged top (called clapsticks), which is snapped shut right before the director yells, "Action."
• Slates are made to be written on with chalk or dry-erase markers. The shot, take number, and date are then visually identified for each take.
• An electronic or "smart" slate has a timecode counter that can be aligned, or jam-synched, to match the timecode on the digital audio recorder. Slates without clapsticks are used to identify shots when recording video and audio with the same device.
• If the slate is used at the beginning of each shot, it's possible to match the frame of film with the point on the audio where the sound begins.
• If the film and sound were recorded in synch, they will stay synchronized after that beginning point is found.
• Each shot needs to be synched before editing can begin (see the timecode sketch below).
• Different types of audio recorders are used for film.
• For many years, the Nagra, a high-quality, rugged, analog open-reel recorder that uses 1/4-inch tape, was the standard for location shooting.
• As digital recording has come to dominate, digital multitrack recorders, DAT (digital audiotape) cassette recorders, and flash recorders are also widely used.
• Most digital recorders used for synch film production include timecode capabilities that provide acceptable synchronization.
• Once recorded, the sound must be transferred for editing.
• When editing the film itself (cutting apart shots and taping them together in the proper order), the sound is transferred to magnetic stock (or mag stock), a type of audiotape that is the same size as the film (16 mm or 35 mm).
• The shots of film are matched up with the correct pieces of sound and stay together through the editing process.
• This process has become quite rare.
• If editing is done digitally (now the norm), sound is captured directly from its original format into the editing software.
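Timecode labels each frame as hours:minutes:seconds:frames, so matching picture and sound amounts to arithmetic on those values. A minimal sketch of that idea, assuming non-drop-frame timecode and a 24 fps rate; the timecode values are hypothetical.

```python
def timecode_to_frames(timecode, fps=24):
    """Convert a non-drop-frame HH:MM:SS:FF timecode string to a frame count."""
    hours, minutes, seconds, frames = (int(part) for part in timecode.split(":"))
    return ((hours * 60 + minutes) * 60 + seconds) * fps + frames

def sync_offset(picture_tc, audio_tc, fps=24):
    """How many frames the audio must be shifted to line up with the picture."""
    return timecode_to_frames(picture_tc, fps) - timecode_to_frames(audio_tc, fps)

# The slate claps at these (hypothetical) timecodes on camera and recorder.
print(sync_offset("01:02:10:12", "01:02:08:00"))   # 60 frames at 24 fps
```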
• Once the editing process is complete, the sound must be physically put onto the film for nondigital theatrical distribution.
• Several methods are available for doing this, the most basic being an optical stripe on the film, in which sound is read by a light beam in the projector.
• In addition to the optical stripe, digital enhancements, such as Dolby, can be added to the film.
• Surround sound in movie theaters is often delivered on audio CDs that are synchronized with the film.

VIDEO SOUND
• Unlike the double-system method of recording sound with film, video uses a single-system method, where sound and image are recorded together onto the same material.
• The video signal is divided into separate sections for different pieces of information, including the video signal, timecode, and audio information.
• Different formats of video record the information in different configurations, but on any format there will be at least two channels of audio.
• When you record sound for video, you can usually direct your microphone signal or signals to record on either channel or both.
• Some consumer camcorders allow only one microphone input for the right and left stereo channels.
• If you don't have enough microphone inputs, you can input the mics into an audio mixer, which comes in field versions that can run off of battery power, and feed the output of the mixer into the camera.
• Mixers are also used with audio recorders for film. Make sure the mix is the way you want it when recording. Mixing is live editing. Once the different mic sources are mixed, you can't unmix them.
• Despite the fact that video sound is recorded with the image, it is not uncommon for people to record sound separately as well, on a flash or DAT recorder.
• It allows for more control in monitoring and setting levels - separating those functions from the camcorder operation.
• This is especially true with fiction projects on video, but it is true in nonfiction work as well, where there is a large enough crew and the desire for high production values.
• A slate like that used in film production can be used to allow synching of the sound and image in postproduction, but if the audio recorder has a timecode function, synching the footage is easier.
• When editing video, the sound and picture are generally digitized into the editing software together.
• Other tracks of sound, such as voiceover, music, and sound effects, can be added to the track or tracks of live audio.
• It is also possible to delete live tracks if you like.
• At the end of the editing process, all the tracks are mixed into a final soundtrack that accompanies the picture onto the release version.
• Television sound has traditionally had a reputation for poor quality.
• The limited frequency response of audio on analog videotape and the small mono speakers on most TV sets supported this reputation.
• However, with the advent of digital videotape, with its digital-quality audio recording capabilities, and the use of stereo speakers and surround sound in consumers' home entertainment systems, television sound can be produced and experienced at a high-quality level.

AUDIO- AND COMPUTER-BASED MEDIA
• As we have seen, producing and editing sound for film and video are increasingly taking place digitally.
• Once audio has been digitized into a sound file on a computer, it can be used for any digital media piece.
• Audio is becoming an important aspect of computer-based interactive programming and games.
• For audio to be usable in computer-based multimedia, it needs to be digital and in a usable format.
• The computer needs to have enough storage to hold the application, enough memory (RAM) to run it, and a fast enough processor to play it back smoothly without interruption.
• Techniques are available that help make it possible to distribute audio and multimedia via the Internet.
• One is compression, or making the size of the digital file smaller.
• Reducing the sampling rate, for example, from 48 kHz to 32 kHz, will also reduce file size, as will using mono sound as opposed to stereo, but both of these will reduce the quality (see the file-size sketch below).
• Streaming audio is an important technique for using audio and video over the Internet. It eliminates the need to download the entire file onto your computer before hearing or seeing it.
• Streaming allows a gradual download that can be accessed while it is playing.
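The file-size savings from lowering the sampling rate or dropping to mono follow directly from the uncompressed (PCM) size formula: rate x bit depth x channels x duration. A minimal sketch with example values; the function name is made up.

```python
def pcm_size_mb(sampling_rate, bit_depth, channels, seconds):
    """Uncompressed audio size: rate * depth * channels * time, in megabytes."""
    return sampling_rate * bit_depth * channels * seconds / 8 / 1_000_000

one_minute = 60
print(pcm_size_mb(48000, 16, 2, one_minute))   # ~11.5 MB: stereo at 48 kHz
print(pcm_size_mb(32000, 16, 1, one_minute))   # ~3.8 MB: mono at 32 kHz
```

Formats like MP3 then compress these numbers much further, at some cost in quality.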

harmonics

Group of similar frequencies that creates a specific sound.

handheld microphone

Microphone designed for use in proximity to subject. Can be held or placed on a stand.

radio frequency

Part of the electromagnetic spectrum that is used for wireless broadcasting communication.

frequency response

Sensitivity across a range of frequencies.

Understanding Sound

Sound Waves
The Ear

SOUND WAVES
• Sound is created by physical vibrations that set molecules in motion, creating sound waves that travel through the air.
• When a sound is made, the molecules in the air surrounding it are displaced and then pull back slightly, creating a pressure followed by a release.
• The maximum pressure is the *CREST* of the sound wave.
• The release is the *TROUGH* of the wave.
• The *VELOCITY* of sound remains constant at the same temperature and altitude.
• The strength and rate of the waves vary much more than velocity.
• The size of a sound wave is measured in terms of length and height.
• The measurement of one cycle of a wave is called its *WAVELENGTH*.
• A cycle measures a single vibration of the air, a compression and release of air molecules.
• Sound waves have longer wavelengths than those of visible light.
• Sound waves vary from a fraction of an inch in length to approximately 60 feet.
• The wavelength of a sound wave is related to its rate of *FREQUENCY*.
• *FREQUENCY* is the number of cycles that the sound wave travels in 1 second.
• Because the speed of sound waves stays constant at a given altitude and temperature, the length of the wave determines how many cycles travel through a given point in 1 second.
• *Wavelength* and *frequency* are closely aligned.
• The longer the wavelength is, the fewer cycles are able to pass through a point in a second and the lower the frequency.
• Frequency is measured in cycles per second (cps).
• The equation for determining wavelength or frequency is [wavelength = velocity / frequency] (see the worked example at the end of this section).
• Frequency determines the *PITCH* of the sound - how high or low it is.
• A human with excellent hearing can hear frequencies that range from 20 to 20,000 cps; most of us hear a bit less.
• A wave with a rate of 7,000 cps has a frequency of 7,000 hertz (Hz) or 7 kilohertz (kHz).
• The measure of air compression of a sound is called its *AMPLITUDE*, graphically represented by the height of the wave.
• These variations in pressure are perceived as variations of loudness.
• A loud sound produces a stronger vibration, causing more air molecules to be displaced.
• That larger vibration is represented by a taller sound wave.
• *Amplitude* is measured in a unit called a *DECIBEL (dB)*.
• A sound at 1 dB can hardly be heard. 150 dB and above will permanently damage your hearing.
• Once created, sound waves travel outward in all directions, like ripples when a rock is thrown into water. However, they don't travel equally in all directions.
• When you speak, you project stronger sound waves in front of you than behind you.
• Like all waves, sound waves get weaker, or *attenuate*, with distance.
• If a sound stays at a constant loudness and pitch, its wave also remains the same. Most sound, however, such as speech, music, and most environmental noise, contains variations of both frequency and amplitude.

THE EAR
• The human ear is made up of the outer, middle, and inner ear.
• The outer ear funnels the sound through the ear canal to the middle ear, in which the eardrum, or tympanic membrane, is located.
• The sound waves cause pressure on the eardrum that is then amplified by three small bones (ossicles) that act as a lever.
• The vibrations then travel to the cochlea, a fluid-filled coil that holds small hairs (cilia) that attach to nerve cells.
• Here the vibrations are converted into electrical signals that travel to the brain via the auditory nerve.
• The brain reads these impulses as sound in a process that is similar to how the brain reads the impulses from the optic nerve as sight.
• Research has shown that the whole sound is stored in short-term memory and that different parts of the brain are involved with interpreting different kinds of sounds, such as speech, music, and background noise.
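A worked example of the wavelength = velocity / frequency relationship above, assuming a speed of sound of roughly 1,130 feet per second (the approximate velocity in air at room temperature). It reproduces the earlier claim that audible wavelengths run from a fraction of an inch to roughly 60 feet.

```python
SPEED_OF_SOUND_FT_PER_S = 1130   # approximate velocity in air at room temperature

def wavelength_ft(frequency_hz):
    """wavelength = velocity / frequency"""
    return SPEED_OF_SOUND_FT_PER_S / frequency_hz

# The limits of human hearing, roughly 20 Hz to 20,000 Hz:
print(f"20 Hz     -> {wavelength_ft(20):.1f} ft")          # ~56 ft
print(f"20,000 Hz -> {wavelength_ft(20000) * 12:.2f} in")  # well under an inch
print(f"7 kHz     -> {wavelength_ft(7000) * 12:.2f} in")
```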

soundtrack

The complete audio portion of a media production that accompanies the picture, combining elements such as dialogue, narration, ambience, sound effects, and music.

velocity

Speed of a wave in relation to time.

room tone

The sound of a space or its ambient sound.

sound presence

Subjective sense of sound proximity. A sound that seems close to the listener has a lot of ______________.

sound design

Technical and aesthetic plan to combine sound elements to work together as a whole or in combination with image.

pickup pattern

The area around the microphone in which sound will be clearly reproduced.

electret

A common type of capacitance (condenser) microphone that has permanently charged plates and requires only a very small battery for preamplification.

waves

Undulations of traveling energy.

decibel (dB)

Unit of measurement for the amplitude (loudness) of a sound wave.

Elements of Sound Design

Dialogue
Narration
Ambience
ADR

• *Dialogue* is integral to most fiction pieces of media.
  o Conversations between characters make up a large part of many movies and are even more central to television drama and comedy.
  o With its radio parentage and smaller screen size, TV has always been more talk-based and character-driven than movies, which tend to be more action-based.
  o Dialogue is also an important element in some documentaries that observe subjects conversing with each other.
  o The statements made by a subject being questioned by an interviewer (who may or may not be heard) can be considered dialogue as well.
• *Narration* is similar to dialogue except that the narrator is speaking directly to the audience.
  o A narrator may appear either on camera or only on the soundtrack.
  o Narration that is only heard is called a *voiceover*.
  o Voiceover (VO) can be written and spoken in the third person by an unidentified voice that is generally accepted as an expert.
  o VO can also be in the first person, the voice of the maker or a participant in the story, identified explicitly or implicitly.
  o The narrator might introduce himself, or his identity might become clear through what he says or through voice recognition.
• *Ambience* is the sound of the space where the story takes place.
  o Ambient sound is part of the sound recorded on location.
  o Sometimes different or additional ambience is mixed into the *soundtrack* later.
• Sound effects are extra sounds that can either come from things seen on the screen or be mixed into the soundtrack.
  o Sound effects can replace the actual sound recorded for a screen event to increase the dramatic emphasis.
  o A punch performed on the set may not sound hard enough for the demands of the story, so a sound effect of a sledgehammer striking a cabbage is inserted.
• Sounds of unseen sources can be inserted to enhance the atmosphere or mood of the scene portrayed on the screen.
  o *Sound effects* can be birdcalls, lapping waves, footsteps, gunshots, clinking glasses, or screaming crowds.
  o Sound-effects libraries are sold with hundreds of sounds to choose from, but a *Foley* artist, or sound-effects creator, can produce sounds more tailored to a specific image.
• *Automatic dialogue replacement (ADR)* is a technique used especially in the production of fiction film or television.
  o Because it is often difficult to record high-quality sound on location, the soundtrack is essentially rebuilt from scratch.
  o Actors are brought into the studio for ADR, or looping, and they watch their image on a screen while they respeak their lines.
  o These cleanly recorded dialogue tracks are then mixed with ambience tracks and sound effects to recreate the original sound - only better.
• Music that is used as part of sound design is such a common and accepted part of media storytelling that often we don't even notice it.
  o Music sets the tone, triggers changes in mood, creates suspense, and lets us know when the story is going to end.
  o *Scoring* is the composition of music specifically for certain visuals.
  o Sometimes the music comes first and the visuals are created to illustrate or complement it, such as for a music video or a montage of Olympic moments.
• Ultimately, all of these sources will be combined, forming the *sound mix* (see the sketch below).
  o Creating a good sound mix is a true art.
  o The subtleties of variation in sound level and quality are very important to the success of a film, video, or other media project using sound.
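At its core, building the sound mix means summing the dialogue, ambience, effects, and music tracks at chosen levels. Here is a minimal sketch of that idea, not any particular mixing console's behavior; the sample values and gains are hypothetical.

```python
def mix_tracks(tracks, gains, limit=1.0):
    """Sum several tracks sample by sample, each scaled by its own gain,
    clipping anything that would exceed full scale (over-modulation)."""
    length = max(len(track) for track in tracks)
    mixed = []
    for i in range(length):
        total = sum(track[i] * gain
                    for track, gain in zip(tracks, gains) if i < len(track))
        mixed.append(max(-limit, min(limit, total)))
    return mixed

dialogue = [0.6, 0.7, 0.65, 0.7]    # hypothetical sample values
ambience = [0.3, 0.3, 0.35, 0.3]
music    = [0.5, 0.5, 0.5, 0.5]
print(mix_tracks([dialogue, ambience, music], gains=[1.0, 0.2, 0.4]))
```

The gains are where the art lives: dialogue kept dominant, ambience tucked underneath, music shaped to support the mood without overwhelming the voices.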

diegetic

Existing within the reality of the story.

voltage

Electric energy.

bit depth

Amount of digital information taken within each individual sample.

impedance

Amount of resistance to a signal.

capacitor

Another name for condenser mics

pitch

Aural manifestation of a sound wave directly proportional to its frequency.

scoring

Composition of music to complement visual elements and heighten mood in a media production.

dialogue

Lines spoken by subjects or actors that are part of the story.

Types of Audio Media

Radio
Podcasts
Music Recording
Alternative Audio

RADIO
• Radio is the transmission of audio signals through the air.
• The electronic signal is transduced into high-frequency radio waves that travel a distance and are received by an antenna.
• The antenna transduces them back to an electronic signal that can be transduced to sound waves by the radio speaker.
• *Broadcasting* is the transmitting of radio waves point to multipoint, whereby one transmitter reaches many receivers.
• Like television signals, radio can also be distributed through cable systems or via satellite.
• Within the past few years, most broadcasters have also begun to program simultaneously on the Internet.
• This involves converting the radio signal to digital information that can be sent through the phone lines or larger types of cable that carry Internet service (like a cable modem or ISDN line) and listened to on your home computer speaker.
• Because an audio signal requires much more digital information than does text or graphics, *streaming* is required.
• Streaming sends the audio information in a small, steady flow that can start to play while the rest of the information is still being sent, as opposed to one large piece of information that could take a long time to download to the computer.
• Radio is also now transmitted via satellite.
• Providers offer national programming to listeners for a monthly fee.
• As with satellite TV, this type of service often bypasses the local broadcaster and, therefore, local news and public affairs programming.
• In the Golden Age of radio broadcasting during the 1930s and '40s, there was a great variety of original programming, ranging from dramas to situation comedies to variety shows to news to music.
• When TV came along, virtually all of the dramatic and comedy programming migrated to TV, which left radio with news and music.
• Most radio programming is now prerecorded music with disc jockeys or hosted talk shows.
• News, sports, traffic, and weather punctuate both types of programming.
• Public radio offers some additional program types - commentaries and the occasional radio drama or acoustic art.
• Some programming is nationally syndicated.
• Increasingly fewer corporations own a larger and larger percentage of radio stations, but most radio is locally produced, which gives it a regional flavor missing from most other types of media today.

PODCASTS
• The term *podcast* gets its name from the Apple iPod and refers to a radio broadcast or audio blog that can be downloaded or streamed to a personal computer (see the feed example at the end of this section).
• It can be listened to from the computer or downloaded to an iPod or other portable media player.
• Often listeners subscribe to a podcast, enabled by a group of web formats called RSS that allow for automatic updating.
• Podcasts are created by large media outlets and individuals alike, ranging in topic from major news events to esoteric special interests.
• A recent search of newly created podcasts included rock band interviews, religious services, and a podcast created by a class of fifth graders.

MUSIC RECORDING
• Recorded music makes up a large portion of the audio-related media that we consume.
• The format we purchase and listen to has changed more than any other media form.
• Phonograph records gave way to eight-track cartridges, then cassettes, CDs, and now digital music, downloaded off the Internet and stored as compressed digital files on a tiny handheld digital storage device that can hold thousands of songs.
• Music downloaded off the Internet has been the subject of legal battles, pitting those who want to protect recording artists' and record companies' rights to profit from their music against those who claim the right to obtain it without charge.
• The courts have upheld the rights of artists and record companies, but as with all the technologies that allow easy reproduction of creative property, copyright is difficult to protect.
• Some artists have chosen to offer their music freely for download in the spirit of shared expression or as a marketing approach that treats downloads as a promotion for live concerts.
• MP3 is the dominant format that takes digital audio files (such as those recorded on an audio CD) and compresses the amount of digital information required without losing much audio quality.
• Music recording is integral to the sound design of movies and television, and music videos revolutionized TV programming and film and video editing styles.
• Most music recording is done under controlled conditions in studios and is heavily remixed and postproduced.
• Live recording and mixing are also done, albeit in less controlled environments.

ALTERNATIVE AUDIO
• Though music recording and radio programming hosted by disc jockeys dominate sound-only media, other programming types do exist.
• Documentaries based on interviews, field recordings of actualities (ambience and sound effects recorded in the field), and narration explore issues and sometimes obtain access to people and places that film or video cameras could never get.
• Radio dramas still challenge our imaginations to visualize action, setting, and characters just as they did in pretelevision days.
• Occasionally you can hear a documentary or drama on public or alternative radio.
• Sonic or acoustic artists use sound as a medium of expression like paint or light on the human body (see John Cage).
• Sonic art can stand alone, be performed in concert, or be broadcast on alternative radio outlets.
• Some sonic artists exhibit their sound art in installation form, creating listening experiences for museums, galleries, and public art spaces.
• Sound can also combine with dance, performance, or visual arts in a multimedia expression.
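The RSS subscription mechanism mentioned under PODCASTS is just an XML feed that points players at audio files. Here is a minimal sketch built with Python's standard xml.etree module; the show title and URL are made up, and real podcast feeds carry much more metadata (descriptions, publication dates, iTunes tags).

```python
import xml.etree.ElementTree as ET

def podcast_feed(show_title, episodes):
    """Build a minimal RSS 2.0 feed; podcast apps subscribe to feeds like this
    and download the audio files listed in each <enclosure>."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = show_title
    for episode_title, mp3_url in episodes:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = episode_title
        ET.SubElement(item, "enclosure", url=mp3_url, type="audio/mpeg")
    return ET.tostring(rss, encoding="unicode")

print(podcast_feed("Field Recording Notes",
                   [("Episode 1: Room Tone", "https://example.com/ep1.mp3")]))
```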

lavaliere (microphone)

Small microphone designed to attach to a subject.

nondiegetic

Sound or music that exists outside the reality of the story in a film or video production, such as background music or a voice-over narration.

ride the gain

To adjust a level while recording or taping.

attenuate

Weakening of an energy wave as it travels.

ambience

Natural sound of a location

crest

Highest point of a wave. With a sound wave, the _____ represents the maximum pressure of air molecules.

peak

Highest point of voltage within an audio signal

bidirectional (microphone)

Classification of a microphone pickup pattern that includes left and right sides of a microphone.

continuous wave

End result of the analog recording process based on the direct relationship between the original waves and the recorded signal.

narration

Lines spoken that describe or reflect on the action but are not a part of it.

reverberation

Multiple reflections of a sound off of all the surfaces surrounding its initial source.

voiceover

Narration for a film or video that can be written and spoken in either the first or third person.

automatic dialogue replacement (ADR)

A technique used most commonly in fiction films where inferior audio, recorded in the field, is replaced by having actors come into a studio, watch the film, listen to the field recording, and respeak their lines in a controlled environment. Also called looping.

podcast

A term derived from the Apple iPod referring to an audio production (radio program or audio blog) that can be downloaded or streamed to a personal computer.

cardioid (microphone)

Another name for a unidirectional microphone pickup pattern that describes the pickup's heart-shaped pattern of sensitivity.

ribbon microphone

Another type of magnetic microphone. Has a small strip instead of a metal coil. • Sometimes classified as dynamic mikes and sometimes considered a separate category. • Tend to be larger than other mikes and very fragile. Hence, they are not suited to field use but have a rich sound and excellent reproduction of lower frequencies, which makes them an excellent choice for speech (especially low-pitched voices).

hertz (Hz)

Basic unit of measurement of the frequency of sound.

omnidirectional (microphone)

Classification of microphone pick-up pattern that includes the entire surrounding area, eliminating only the area directly behind the microphone.

sound perspective

Concept of matching of sound presence and shot size and establishing a sense of where the sound source is coming from.

quantized

Conversion of a sampled signal to mathematical units.

transduction

Conversion of energy from one form to another.

Foley

Creation of sound effects as part of a sound design.

flash memory

DIGITAL AUDIO RECORDERS Digital audio recorders that record straight to computer memory are quickly replacing other digital audio devices. • The recording can be to a hard drive or to removable *flash memory* (a form of solid-state storage such as a Secure Digital [SD] card or a CompactFlash card). • The capacity of flash memory continues to grow while the prices come down, but at this writing, a hard drive gives you more storage for less money. • Flash memory has no moving parts and is, therefore, less susceptible to damage from movement. • This generally makes flash memory a better choice for portable devices. • It is also faster and more easily erased and reformatted.

sound mix

Dialogue Narration Ambience Automatic dialogue replacement (ADR) • Ultimately, all of these sources will be combined, forming the *_________*. o Creating a good _________ is a true art. o The subtleties of variation in sound level and quality are very important to the success of a film, video, or other media project using sound.

phantom power

Externally-provided power circuit used for pre-amplification in condenser microphones.

amplitude

Height of a wave. With sound waves, it determines loudness.

wavelength

Measurement of one full cycle of a wave, from crest to crest.

LED meter

Mechanism that displays the amount of voltage in an audio signal. The information is displayed by a series of light-emitting diodes (LEDs) on a metered scale.

VU meter

Mechanism that measures the amount of voltage in an audio signal. A needle indicates volume units or percentage of signal on a linear scale, or LEDs display the strength of the signal.

streaming (audio)

Media presentation technique that allows simultaneous viewing or listening while downloading rich media from the internet onto a computer.

stereo

Method of recording, mixing, and playing back audio on two separate channels or tracks.

shotgun microphone

Microphone with a narrow pick-up pattern (unidirectional) designed for recording audio at a distance from the source, usually as a method of removing the microphone from the frame of the image.

Reproducing Sound

Microphones
Using Microphones
Transducer
Pickup Patterns
Microphone Usage Types
Analog vs. Digital Recording
Audio Compression

REPRODUCING SOUND
• Reproducing sound for media production follows a process similar to the way sound travels in the human ear.
• Transmitting sound from the location where it is created to another where it is heard, or saving that sound for playback at a later time, requires another step.
• The sound waves created by the vibrating voice box of someone speaking, an instrument playing, or a knock on the door need to be converted from one form of energy (sound waves) into another (an electronic signal).
• The conversion of one form of energy into another is called *transduction*.
• The transducer for sound is a microphone.
• The microphone creates a signal that contains all the variations of amplitude (loudness) and frequency (pitch) produced by the sound source.
• The electronic signal can travel along a microphone cable to a loudspeaker, recording device, or mixing console.
• A mixing console allows you to input several microphones and other audio playback equipment and combines them into a mixed signal to be recorded for future playback, played back live, or transmitted via wires or *radio frequency* to another location or locations.
• When you play back a recorded signal, the process is reversed.
• The signal is transduced by a speaker into physical vibrations that travel to the ear to be received by the eardrum and perceived as sound by the brain.
• When a wireless microphone is used or sound is transmitted via radio, a second stage of transduction occurs in which the electronic signal is converted into radio-frequency waves.
• Radio waves have a much higher frequency than sound waves and are even higher than those of visible light.
• This characteristic allows radio-signal information to travel long distances without wires.
• Received by an antenna, the radio waves are converted back to an electronic signal and then to sound waves that the listener will hear.

MICROPHONES
• Microphones can be classified by their transducer, their *impedance* (the inherent opposition to the flow of the electrical current), the way they pick up sound, and how they are used.

o Using Microphones
• Knowing which type of microphone to use requires the consideration of many factors:
  ♣ What kind of sound are you reproducing (speech, singing, instrumental music, or background sound)?
  ♣ Where is that sound being recorded - a sound studio or a busy street?
  ♣ What else is going on? Is the speaker moving? Is a visual image also being captured?
  ♣ How will the reproduced or recorded sound be heard - through a small television speaker or through high-quality surround sound speakers?
  ♣ What kind of microphone can you afford? Quality and price vary.
  ♣ What is the production situation? Are you a one-person crew, recording sound and picture and perhaps conducting an interview all at the same time, or is this a high-budget production with many microphones available for use?

o Transducer
• All microphones convert sound vibrations into an electrical signal, but the transducing elements inside the microphone vary.
• When referring to transducing elements, most microphones you come across will be either *dynamic* or *condenser*.
• A *dynamic microphone* works on the principle of magnetism.
• A small metal coil is placed near a magnet, which generates energy called a magnetic field.
• When the coil receives the sound vibrations, it moves.
• Movement of the coil within the magnetic field creates electric energy, or *voltage*, that retains the information about the amplitude and frequency of the original sound wave.
• Another type of magnetic microphone is a *ribbon mike*, which has a small strip instead of a metal coil.
• Ribbon mikes are sometimes classified as dynamic mikes and sometimes considered a separate category.
• Condenser mikes, also called *capacitor* mikes, work a bit differently.
• They have two plates, one fixed and one moving, one with a positive charge and one with a negative charge.
• Voltage is created as a result of the proximity of the positive and negative charges.
• The moving plate moves toward and away from the fixed plate in response to the sound vibrations.
• The closer it moves, the stronger the voltage.
• The variation in the voltage matches the variation in the sound waves, and it is encoded in the electronic signal.
• The voltage created in a condenser or capacitance microphone is very weak, so it needs to be amplified before it leaves the microphone.
• This amplification is powered by a power voltage, which is different from the polar voltage created by the charged plates.
• The preamplification is accomplished by a battery or an externally provided circuit called *phantom power*.
• Phantom power is often supplied by the wiring in a sound or TV studio or can be present in the circuitry of an audio mixer, camcorder, or other device.
• A common type of capacitance microphone, called an *electret*, has permanently charged plates that require only a very small battery for preamplification.
• The different transducer types have some general characteristics, but microphone quality can vary greatly.
• When we compare one to the other, we are comparing microphones of similar quality.
• Generally, dynamic microphones are the most rugged and the least susceptible to damage if bounced around.
• This makes them an excellent choice for field mikes.
• Condenser microphones are a little more fragile and more susceptible to interference.
• They are better for music reproduction because they tend to have a wider *frequency response* than dynamic mics do.
• They also need preamplification, so they must be turned on and have appropriate power before use.
• Ribbon microphones tend to be larger than other mikes and very fragile. Hence, they are not suited to field use, but they have a rich sound and excellent reproduction of lower frequencies, which makes them an excellent choice for speech (especially low-pitched voices).
• These are generalizations. Some dynamic mics need preamplification. When used with a little bit of care, condenser mikes can perform very well in the field. Newer ribbon mikes are smaller and less susceptible to interference.
• Ultimately, you need to experiment with microphones to see which has the sound and performance that works for you.

PICKUP PATTERNS
• Sometimes you want a mic to pick up sound equally from all directions. Often you want the mic to favor a certain sound source over others.
• Microphones are designed with different *pickup patterns* to suit varying needs.
• *Pickup pattern* means the area around the microphone in which sound will be clearly reproduced.
• The three main classifications are *omnidirectional*, *bidirectional*, and *unidirectional* (see the pickup-pattern sketch at the end of this section).
• An *omnidirectional mic* has sound sensitivity all around the microphone, except for right behind it.
• Omnidirectional microphones are good for recording ambient sound (the overall sound of a particular location).
• They are also fine for picking up the sound of one speaker if that speaker can be placed very close to the microphone. Generally, you don't want the speaker too close, but omnidirectional mics are less sensitive to unwanted vocal noise than are other types of microphones.
• *Bidirectional* microphones are sensitive in two directions - directly in front of and directly behind the mic.
• *Unidirectional* microphones are sensitive mainly in one direction, but it's not quite that simple. Even if a mic is designed to receive sound from one position (directly in front of the mic), it will inevitably have some degree of sensitivity along the sides of the mic and behind the mic as well.
• Pickup will be strongest and clearest in front of the microphone.
• This is the desired *on-axis* range for quality recording.
• Sound picked up from the side or back of the microphone will be weaker and relatively muffled, but it is still present. This degree of side sensitivity can vary.
• *Unidirectional* mics are also called *cardioid* mics.
• Cardioid describes the heart-shaped pattern of sensitivity that unidirectional mics have.
• Sometimes the heart is quite wide. Mikes that are characterized as simply cardioid have a wide area of sound pickup in front of the mic and a good bit of side sensitivity.
• A mic with less side sensitivity and a narrower pickup pattern is known as a supercardioid.
• Hypercardioid mics have even less side sensitivity and an even narrower pickup area.
• Unlike a cardioid mic, supercardioid and hypercardioid mics also have an area of sensitivity directly behind them (see the sketch at the end of this section).
• Unidirectional microphones are used to record sound that emphasizes a particular source. Someone is speaking. You want to hear some background sound, but you want to hear the speaker clearly above the background.
• Your ability to do that is based on three considerations: the pickup pattern, the distance between the mic and the speaker, and the relative volume of the background noise compared to that of the speaker.
• The louder the background sound is compared to the speaker, the closer the mic needs to be to the speaker so that the speaker's sound will be favored. Many cardioid mics are designed to be used close to the source of sound.
• If the microphone cannot be placed close to the speaker, you need to make sure the angle of sound sensitivity is narrowed and the side sensitivity is lessened.
• You do this by using a supercardioid or hypercardioid mic.
• Such mics are designed to be used anywhere from 1 foot to approximately 15 feet from the source of sound.
• They will work at a distance because the narrow pickup range and reduced side sensitivity screen out much of the unwanted sound from the background.
• However, you do need to ensure that the mic is accurately pointed at the source of sound, keeping the speaker in the on-axis range.
• You may have to use a mic at such a distance to keep it out of camera range and therefore out of the shot, which makes unidirectional mics a common choice for film and video use.
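The trade-off between side sensitivity and rear pickup can be seen in a simple first-order model of these patterns. The following is a minimal Python sketch (an illustration, not from the text): it assumes the common textbook formula sensitivity(theta) = a + (1 - a) * cos(theta), where a is 1.0 for omnidirectional, 0.5 for cardioid, roughly 0.37 for supercardioid, and roughly 0.25 for hypercardioid; those coefficient values are conventional approximations, not figures from this chapter.

# Minimal sketch (not from the text): first-order polar-pattern model
# sensitivity(theta) = a + (1 - a) * cos(theta)
# a = 1.0 omnidirectional, 0.5 cardioid, ~0.37 supercardioid, ~0.25 hypercardioid
# (coefficient values are conventional approximations)
import math

PATTERNS = {
    "omnidirectional": 1.0,
    "cardioid": 0.5,
    "supercardioid": 0.37,
    "hypercardioid": 0.25,
}

def sensitivity(pattern: str, angle_deg: float) -> float:
    """Relative sensitivity (1.0 = on-axis) at a given angle off-axis."""
    a = PATTERNS[pattern]
    return abs(a + (1 - a) * math.cos(math.radians(angle_deg)))

for name in PATTERNS:
    side = sensitivity(name, 90)    # sensitivity at the side of the mic
    rear = sensitivity(name, 180)   # sensitivity directly behind the mic
    print(f"{name:16s} side={side:.2f}  rear={rear:.2f}")

Running it shows the narrower patterns picking up less from the sides (0.37 and 0.25 versus 0.50 for a plain cardioid at 90 degrees) while gaining a small rear lobe (0.26 and 0.50 at 180 degrees), which is why supercardioid and hypercardioid mics must be aimed carefully to keep the subject on-axis.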
MICROPHONE USAGE TYPES
• Microphones are also characterized by their usage. Some common types include *handheld mics*, *lavaliers*, and *shotguns*.
• *Handheld mics* are just that - designed to be held in your hand.
• They can also be attached to desk stands or floor stands.
• They can be condenser, dynamic, or even ribbon.
• A ribbon handheld is usually used on a stand to avoid interference from a moving hand.
• Many handheld mics are made to be used close to the speaker's mouth and are widely used by singers, narrators, and field reporters.
• The advantage of handheld mics is the level of control.
• For example, a talk show host uses a handheld mic when standing out in the audience, and as long as she keeps hold of the mic, she can choose who speaks and for how long.
• A reporter in the field can turn an omnidirectional or cardioid handheld away from himself and capture the sound of live events. He can turn the mic to himself and, by placing the mic close to his mouth, have his voice heard clearly above any background noise.
• Unfortunately, handhelds have a limited range of pickup and can be rather obtrusive.
• Because they are either omnidirectional or cardioid, they cannot be used successfully more than a foot or two from a primary sound source.
• They need either to be used for picking up environmental sound from a distance or to be kept very close to the source.
• Having a microphone in your face is not a problem for professionals, but it can be intimidating for interview subjects.
• Also, if images are being captured with sound, the microphone will likely be in the frame with the primary sound source.
• This is acceptable in a news report, talk show, or other presentational-type program, but it is not desirable in a representational piece such as a fiction film.
• *Lavaliers*, or personal mics, are small, unobtrusive mics that are designed to clip onto a person's clothing.
• They are generally condenser mics, which require a battery, so the small mic is connected by a wire to a power pack that can be clipped to a belt or tucked in a pocket.
• Lavaliers, sometimes called lavs, can be wired or wireless.
• A wireless lav has a radio-frequency (RF) transmitter that sends the signal to a receiver, which converts the RF back to an electronic signal to be transmitted or recorded.
• Lavaliers are usually omnidirectional but are intended to pick up one speaker and little ambient noise.
• This works as long as the lav is clipped close to the speaker's mouth (approximately 6 inches away).
• Omnidirectional pickup is an advantage if the speaker turns his or her head. The lav should be placed on the side toward which the talent is speaking, however, to ensure the cleanest recording.
• A lav will appear in the picture, but because of its small size and position, it does not draw much attention.
• Lavaliers work well for interview subjects because they can capture a clear signal of the voice without being intimidating.
• However, they are susceptible to wind noise and the sound of clothing brushing against them.
• A wired lav limits the subject's movement.
• A wireless lav allows more freedom of movement, but it can pick up other services that also use RF (e.g., radio stations and pagers, especially in urban areas).
• *Shotgun* microphones are so named because of their gun-barrel-like appearance and their need to be properly aimed in order to be useful.
• They are designed for use at a distance from the primary source of sound - from 1 foot to perhaps 15 feet away. However, the farther a shotgun mic is used from the primary sound source, the more ambient sound will be heard.
• They are not used often for audio-only productions, except when it is impossible to get close to the sound source.
• They are very useful, however, in film and video when you don't wish to see the microphone in the shot.
• Seeing the microphone in the frame creates a visual distraction, especially in a fiction piece where you want the viewer to get lost in the story.
• (Such reminders to the viewer of the production process can be acceptable in nonfiction.)
• Shotguns can be dynamic but are usually of the condenser type, which gives them a good frequency response and requires preamplification.
• They generally have a supercardioid or a hypercardioid pickup pattern.
• The narrow range of sensitivity is necessary in order to pick up a primary source of sound from a distance while minimizing other sounds in the environment.
• Because the range is narrow, it is important to make sure the microphone is on the proper axis to record the sound you want.
• Shotguns can be mounted in several ways.
• You don't want to hold them because even slight hand movements can cause unwanted noise.
• Pistol grips can be used.
• Placing the shotgun on a boom or handheld boom pole allows closer placement to the subject while still keeping it out of the frame. Cardioid handheld mics are also put on booms when a larger pickup area is desired.
• You only want to do this if there is little background noise, because the cardioid will pick up more of it.
• Many video cameras have mounts for shotgun mikes.
• This can be convenient if you are a one-person production crew, recording both sound and picture, but the camera is rarely the optimal position for good sound recording.
• Additionally, the mic is apt to pick up some camera noise.

ANALOG vs. DIGITAL RECORDING
• Recording audio involves electricity and magnetism.
• The microphone converts sound waves into an electronic signal that contains all the variations of the sound.
• When that electricity comes in contact with the magnetic particles on the surface of the audiotape, the particles rearrange themselves to mirror the information encoded in the electronic signal.
• When played back, the pattern of magnetic particles on the tape creates an electronic signal that matches the original signal created by the microphone. A loudspeaker converts that signal back to sound waves.
• Recording of an audio signal is done through both analog and digital means. Although digital methods of audio recording and postproduction are standard practice in the media production field, analog recording still takes place and is preferred by some.
• Each method has its own advantages and disadvantages.
• In analog recording, the reproduction of the sound wave is a one-to-one transfer.
• All of the variations of amplitude and frequency are included in the transfer. In the analog recording process, that direct relationship between the sound waves and the recorded signal results in the recording of a *continuous wave*.
• This may be an advantage of analog recording, resulting in a richness and naturalness of sound that digital recording lacks.
• However, a disadvantage of the analog recording process is its susceptibility to the addition of interference or noise (unwanted sound), such as machine hiss.
• That noise gets worse if you make a recording from the recording, such as dubbing or copying a tape, a situation called *generational loss*. Each generation (i.e., each copy of a copy) worsens the degradation.
• In the digital recording process, the wave is measured, or *sampled*, and the information is converted to mathematical units, or *quantized*.
• The information is stored as a series of ones and zeroes that describe the characteristics of the wave.
• The more often the wave is sampled (its *sampling rate*), the more fully the wave is reproduced. The more detailed the measurement of each sample (its *bit depth*), the more accurate the re-creation.
• *Bit depth* refers to how many pieces of information are measured for each sample.
• The sampling rate refers to how many samples are taken each second.
• Sampling should be done at a rate at least twice the highest frequency of the sound wave for high-quality recording, resulting in common audio sampling rates of 44.1 or 48 kHz (more than twice the upper limit of human hearing, which is about 20 kHz).
• A sampling rate of 44,000 samples per second and a bit depth of 16 would result in 44,000 x 16, or 704,000, bits of information being sampled each second.
• That's enough for high-quality production.
• Digital recording avoids the discernible noise of analog recording when the bit depth is high enough (16 bits).
• In analog recording, noise comes along with the signal in the recording process.
• In digital recording, just the signal is sampled and quantized, so, virtually, only the signal is re-created. The result is a cleaner sound.
• The digital recording process is essentially free from generational loss, but only if the process is digital all the way through.
• This is because you're not actually copying one recording from another; instead, you're using the same digital measurements to re-create the signal again.
• An unlimited number of recordings can be made from the original data with no degradation of quality.
• Digital recording has its own pitfalls, though.
• Dropouts, resulting in a pop or brief gap in sound, can occur from imperfections in a digital tape.
• Also, the digital recording process is far more susceptible to distortion caused by overmodulation.
• As with visual images, digital audio is often compressed to reduce file sizes, and degradation of sound can occur when the audio is compressed.

AUDIO COMPRESSION
• Digital audio creates files that are larger than text files but smaller than video files.
• Compression is used to reduce the size of digital audio files.
• Compression takes out unneeded information and, by doing so, reduces the file size.
• One minute of uncompressed CD-quality stereo audio has a file size of about 10 megabytes, which is the same as approximately 1,000 pages of text.
• The file size of digital information has always been an issue.
• Initially, trying to save large files on a personal computer's hard drive was a problem. This is no longer the case.
• Hard drives are hundreds of gigabytes or terabytes in size.
• One tiny flash drive holds several gigabytes, and most personal computers have built-in DVD burners.
• Storage capabilities just keep increasing.
• The increased availability of broadband internet service, as opposed to connecting through a telephone line, means that downloading or streaming audio (and video) files is less often a problem.
• Some compression reduces quality (*lossy*); some does not (*lossless*).
• Common methods of audio compression include RealAudio and a variety of Moving Picture Experts Group (MPEG) formats, including the popular MP3. MP3 is able to produce digital-quality sound with a small file size by removing audio information that is outside the range of human hearing.
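To tie the sampling-rate and bit-depth arithmetic above to the file sizes discussed under audio compression, here is a minimal Python sketch (an illustration, not from the text). It assumes the standard CD parameters of 44,100 samples per second, 16 bits per sample, and two channels; the helper name pcm_size_bytes is hypothetical, not an established API.

# Minimal sketch (not from the text): uncompressed PCM audio data rate and size.
# Data rate (bits per second) = sampling rate x bit depth x number of channels.

def pcm_size_bytes(sample_rate_hz: int, bit_depth: int, channels: int, seconds: float) -> float:
    """Size in bytes of uncompressed audio with the given parameters."""
    bits_per_second = sample_rate_hz * bit_depth * channels
    return bits_per_second * seconds / 8  # 8 bits per byte

# The single-channel example from the notes: 44,000 samples/s x 16 bits = 704,000 bits/s.
print(44_000 * 16)  # 704000

# CD-quality stereo: 44,100 Hz, 16-bit, 2 channels, for 1 minute.
one_minute = pcm_size_bytes(44_100, 16, 2, 60)
print(f"{one_minute / 1_000_000:.1f} MB")  # ~10.6 MB, roughly the 10 MB per minute cited above

For comparison, an MP3 encoded at a typical 128 kilobits per second needs roughly 1 megabyte for that same minute, which is the kind of saving lossy compression provides.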

frequency

Number of cycles that a wave completes in one second.

sampling rate

Number of digital samples taken per second.

overmodulation

Point at which the peak of an audio signal exceeds the maximum safe recording range and risks the possibility of distortion.

monitoring

Process of listening to an audio signal during recording to ensure proper levels and sound quality.

on-axis

Proper location of the subject within the pick-up pattern of a microphone for optimum audio recording quality.

signal-to-noise ratio

Relative proportions of desired information and interference.
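In practice, the ratio is usually expressed in decibels. Below is a minimal Python sketch (an illustration, not from the text) using the standard formula for voltage-like quantities, SNR in dB = 20 * log10(signal / noise).

# Minimal sketch (not from the text): signal-to-noise ratio in decibels
# for voltage-like quantities: SNR_dB = 20 * log10(V_signal / V_noise).
import math

def snr_db(signal_voltage: float, noise_voltage: float) -> float:
    """Signal-to-noise ratio in decibels; higher values mean cleaner sound."""
    return 20 * math.log10(signal_voltage / noise_voltage)

print(round(snr_db(1.0, 0.01), 1))  # 40.0 dB -- relatively clean recording
print(round(snr_db(1.0, 0.5), 1))   # 6.0 dB  -- noise nearly as loud as the signal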

generational loss

Result of an analog-to-analog signal transfer (or "dub"), which amplifies the noise within the original recorded sound or image and causes a subsequent reduction in quality.

nonsynchronous sound

Sound that is recorded separately from the film or video image.

synchronous sound

Sound that matches the action - recorded as the film or video image is acquired.

sound effects

Sounds created to emphasize or add to the action.

broadcasting

Transmission of radio waves from one point of origination to an unlimited number of receivers over a large area.

dynamic (microphone)

Type of microphone that uses a coil and magnet to produce an electric audio signal.

condenser (microphone)

Type of microphone that uses two charged metal plates (one moving, one fixed) to create an audio signal.

surround sound

• Although different systems are in use, ______________ employs multiple speakers - front, side, and back - to place the listener within a three-dimensional sound environment. • Sound can travel in all directions, once again corresponding with the visual movement. • Movie theaters and high-definition television (HDTV) are increasingly taking advantage of this technology.

sound envelope

• The shape of the sound wave also makes it distinctive. This is called the *______________*. • How quickly does a sound wave peak at its maximum amplitude (attack)? How quickly does it back off from that peak (initial decay)? How long does the sound last (sustain) before it drops out of hearing (decay)? • Differences in ______________ contribute to varying sound quality.
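The four stages named here can be sketched numerically. Below is a minimal Python illustration (not from the text) of a piecewise-linear envelope built from attack, initial-decay, sustain, and decay stages; the specific durations and levels are made-up example values.

# Minimal sketch (not from the text): a piecewise-linear sound envelope with the
# four stages named above - attack, initial decay, sustain, decay.
# Durations (seconds) and levels (0..1) are made-up example values.

def envelope(t: float, attack=0.01, initial_decay=0.05, sustain=0.4,
             decay=0.2, peak=1.0, sustain_level=0.7) -> float:
    """Amplitude of the envelope at time t (seconds after the sound starts)."""
    if t < attack:                        # rise to maximum amplitude
        return peak * t / attack
    t -= attack
    if t < initial_decay:                 # back off from the peak
        return peak + (sustain_level - peak) * t / initial_decay
    t -= initial_decay
    if t < sustain:                       # hold while the sound lasts
        return sustain_level
    t -= sustain
    if t < decay:                         # drop out of hearing
        return sustain_level * (1 - t / decay)
    return 0.0

for ms in (5, 30, 200, 600):
    print(f"{ms:4d} ms -> amplitude {envelope(ms / 1000):.2f}")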

