RTVF 2210 Intro to RTVF Production- Audio Section

Introduction to Radio, Television and Film Production

Instructor Name: Sharie Vance

Office Location: RM 225

Email address:

Office hours: As needed

Section Objective

This section is designed to familiarize the student with basic audio theory, the use of audio equipment, and production techniques for effective applications. Digital techniques will be utilized. A major portion of class time will be devoted to “hands on” demonstration. Therefore, attendance is extremely important to enable you to complete class assignments.

Section Learning Goals

Upon completion of this section, students will be able to:

  • Operate studio audio equipment and software for recording
  • Edit digital audio and produce airworthy content
  • Identify equipment needed to complete a given project
  • Explain the history and technological advances in audio equipment

Section Content

Reading Assignments.

Readings will come from the worktext.

Material discussed and distributed in class.

Includes expansion of the text and material not included in the text. Anything discussed in class, including handouts, anecdotes, topics brought up in class by classmates, as well as explanations about activities and project demonstrations may be on the test.


There will be one test at the end of this section. Additionally, audio material will be included in the class final exam. Pop quizzes may also be administered during the section. If you are late for class and a pop quiz is in progress or has already been given, your grade for that pop quiz will be zero. Also, there are no make-up provisions for pop quizzes. You must attend class to take a pop quiz.

IF YOU MISS THE SECTION EXAM, YOU MUST CONTACT THE INSTRUCTOR THE DAY OF THE EXAM TO ARRANGE A MAKE-UP EXAM. A death in the family OR a bona fide documented acute medical situation is required. If you arrive late to the exam, you will only be permitted to take it if no one has finished the exam.

Audio Projects.

A large part of this course section will involve doing assigned projects. Projects will be graded on their individual merits, but before they can be accepted for grading, they must meet certain production format standards that will be described in class. Projects not meeting production format standards will be returned to you without a grade, and will receive a one-half letter grade deduction upon being resubmitted. If the resubmission is still not in the correct format, the grade for the project will be zero.


If you receive less than a passing grade on the first project, you may make the changes suggested on your evaluation sheet and resubmit it for further evaluation. You must resubmit a given project within one week of that project being returned to you.

The highest grade that will be awarded for a resubmitted project will be a numerical grade of 70. (Resubmission is not allowed for the second project.)


Audio Section Grade Calculation Table

Elements        Grade    x   % Value   =   Value
Project 1       _____    x     .25     =   _____
Project 2       _____    x     .25     =   _____
Lab Test        _____    x     .15     =   _____
Pop Quiz Avg    _____    x     .10     =   _____
Section Exam    _____    x     .25     =   _____
Total Grade                            =   _____
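The table works out to a simple weighted sum: each element's grade times its percentage value, added together. A minimal sketch in Python, using the weights above with hypothetical sample scores (the scores are illustrative only, not from the syllabus):

```python
# Audio section grade calculation using the weights from the table above.
weights = {
    "Project 1": 0.25,
    "Project 2": 0.25,
    "Lab Test": 0.15,
    "Pop Quiz Avg": 0.10,
    "Section Exam": 0.25,
}

# Hypothetical sample scores out of 100, for illustration only.
scores = {
    "Project 1": 85,
    "Project 2": 90,
    "Lab Test": 80,
    "Pop Quiz Avg": 75,
    "Section Exam": 88,
}

# Weighted section grade, before any attendance deductions.
total = sum(scores[k] * w for k, w in weights.items())
print(round(total, 2))  # 85.25
```

Note that unexcused-absence deductions (10 points each) come off this total afterward.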


Roll will be taken in class, and each unexcused absence will subtract 10 points from your audio section final grade. You will be marked absent if you are more than 5 minutes late for class.

Please be aware that notification is hereby made in this syllabus that the audio portion of this course may involve potentially hazardous activities, the nature of which include working with exposure to electrically powered equipment. Accordingly, the Department of Radio, Television & Film has slated this course within category 2 (courses in which students are exposed to some significant hazards but are not likely to suffer serious bodily harm).

PLEASE READ: ACADEMIC DISHONESTY, including but not limited to cheating and plagiarism. Please refer to the University of North Texas Undergraduate Catalog detailing matters of academic dishonesty. This is brought forth here to state that each student must do their own work, including that on individual projects.

Audio Notes

Bring your own headphones. You will need closed ear (over the ear) headphones. Headphones are not available for checkout from the lab monitor.

The Console:

  • Remember that the monitoring level has nothing to do with the level of your recording.
  • The monitors are muted when you are using the microphone to prevent feedback. Thus, headphones are required when recording your voice.

For most applications:

  • All modules should be routed to Program 1.
  • Program 1 should be selected for the control room monitor.
  • The “A/B Select” button should be in the “A” position for all modules. That is, the buttons should not be illuminated.




DAY 1: (DATE) ______

General overview of studio, reservation procedures and project assignments. General description of the equipment and basic concepts relating to its use (signal flow, mixing, etc.). Be prepared to review the chapter The Nature of Sound and Recording on pages A-9 through A-22.

Discussion of reasons for editing and editing techniques.

 Audio Project #1 (digital editing assignment) is introduced.

DAY 2: (DATE) ______

Introduction to studio recording techniques. Digital and analog editing are demonstrated and the editing assignment (Audio Project #1) is reviewed.

DAY 3: (DATE) ______

Discussion of studio recording, microphone use, and mixing techniques.

On-air radio applications, film sound, and audio for television.

 Audio project #2 (Spot Production) is introduced.

 Audio Project #1 (digital editing) is due.

DAY 4: (DATE) ______

Lab Proficiency Test.

DAY 5: (DATE) ______

Remaining discussion of film sound is completed.

Audio Project #2 (Spot Production) is due.

DAY 6: (DATE) ______

Audio Section Exam over basic audio procedures.


Project 1 has two parts, both to be submitted in the designated folder at turnin.rtvf.unt.edu. Each part of the project will be labeled.

This part of the project will be labeled:

Proj1A Last Name First Name

Part A:

Digital Editing

This exercise has four parts. (Most of this material has been adapted from "Techniques of Magnetic Recording" by Joel Tall, chief tape editor for CBS.)

The very first sound should be the "This..." of the first exercise. Each subsequent exercise should be separated by five seconds of silence.

A. Edit this sequence to sound as good as possible.

"er...This...er...exercise in editing (cough) excuse me...is to give you experience in splice...er...editing in the digital domain." The final product should read: "This exercise in editing is to give you experience in editing in the digital domain."

B. PACE. Whenever possible, cut from sound to sound. Don't cut in the middle of "quiet" spots unless it can't be avoided.

"John, my big brother, is here in town." The phrase "my big brother" is to be cut out. Edit so that it will read "John's here in town," NOT "John (pause) is here in town." If the word "John" was accented too clearly, which would indicate that a word with a consonant was to follow, it might be better to edit the "i" of "is" out and make it sound like a contraction, i.e., "John's here in town."

C. CUTTING WITHIN SOUND. In the sentence, "Editing, according to the rules we are following, is not difficult," the obvious way to eliminate the phrase "according to the rules we are following" would be to cut from just before "according" to just before "is." A better way would be to cut in the middle of "editing," before the "ing," and after "follow" in "following." Edit the sentence to read "Editing is not difficult."

D. The technique in this exercise is used often, especially where a speaker mispronounces a word and corrects himself abruptly. In this exercise, the normal manner of editing does not work out well, for when the mispronounced, or garbled, word is cut out, we are left with a heavily accented word, but with no indication of why it was heavily accented. By cutting within sound we edit from the good part of the mispronounced word to the unaccented part of the corrected word.

"The President returned to Washling--WASHington by train."

Cut from the middle of the "sh" sound in "Washling--" to the middle of the "sh" sound in "WASHington." The result is a natural "Washington" with normal accent. The final edit would read "The President returned to Washington by train."

This part of the project will be labeled:

Proj1B Last Name First Name

Part B:

Digital Editing

Edit the exercise so it flows in a conversational, "airable" form. Edit out the mistakes so that a transcription taken from your finished product would read as follows:

"Editing is a skill used extensively in the broadcast industry. It’s used to remove fluffs, to get the program timing right, and for the convenience of assembly. To edit digital audio, you need a computer loaded with an audio editing software program and a soundcard. Within the editing software, you can use the mouse and the keyboard to highlight audio for deletion or for cutting and pasting to another location. Always make certain when editing news audio, called “actualities,” that you don’t take out words that will alter the meaning of the statement. To do so is highly unethical, and could lead to legal action being taken."

The project will be labeled:

Proj2 Last Name First Name


Creative Commercial Production

Incorporate voices, sound effects, and music to produce one thirty-second commercial, promo, or public service announcement. The spot must have a music bed with a definite beginning and ending and include at least one appropriate sound effect.

The spot must run between :28 and :32.

You may choose to be totally original and create your own scenarios for the commercial, or you may use the following scenarios as a guide.

Scenario 1: Worldwide Hi-Fi in Dallas is having a “Spring Price Break Sale" with 30 to 70 percent reductions on all items in their huge warehouse showroom. Worldwide Hi-Fi is known as the store with instant credit and the lowest prices in the free world!

Scenario 2: The Original Deep-dish Pizza Company is a new pizza chain in town. They feature over 57 toppings in any combination, two for one specials every Tuesday night, and free delivery. They also have on display--this week only--the world's largest anchovy!!!!

Scenario 3: Your favorite music performer or group is

appearing Saturday night at The American Airlines Center. The concert is the hottest ticket in town!

The Nature of Sound and Recording

1.1 The Sound Chain

On one level, the gathering, processing, editing, recording, and broadcasting of sound can be a very intimidating task. However, if you think of these tasks as parts in a chain, the sound chain, they become less intimidating and much more manageable.

In order for sound to be heard, there has to be someone to hear it. A chain has to exist. In the earliest days of humankind, the sound chain was quite simple: a sound existed and it was heard. As humankind evolved, new elements were introduced into the chain. We wanted to send sounds across great distances and to many people, so we introduced broadcasting into the sound chain. We also wanted to record sounds so that we could play them back or broadcast them at later dates. We introduced recording into the sound chain.

The Sound Source

The first element in the sound chain is the source of the sound. The means by which we gather and work sounds through the chain is the production process, and it always starts with the sound source. Sound is nothing more than a vibration. Think of your childhood and the games you played. Did you ever take a large blade of grass, hold it between your thumbs, press your lips to your thumbs and blow as hard as you could? Or did you play with kazoos? If you did, then you created a vibration. The blade of grass or the paper diaphragm in the kazoo did nothing more than vibrate under the pressure of your blowing. The vibration resulted in a sound, probably a loud screech or a buzzing sound.

The Human Ear

The last element in the sound chain is the human ear. The ear is a transducer. The ear transduces sound. It changes or converts sound into something (electrical impulses) that the human brain can understand.

The human ear is made up of three parts: the outer ear, the middle ear, and the inner ear. As sound reaches the ear, it is collected and directed to the auditory canal by the outer ear. The auditory canal channels the sound to the eardrum. The sound strikes the eardrum, forcing it to vibrate (much like the blade of grass in your fingers or the paper in the kazoo). As the eardrum moves, it creates vibrations in the middle ear. These vibrations are transmitted to the inner ear, which is a spiral tube filled with fluid. The vibrations in the middle ear create variations in the fluid of the inner ear. These variations excite auditory nerve endings called cilia. The cilia send the impulses to the brain. In short, the human ear has transduced, or converted, sound in its basic form into impulses that the brain can understand.

Transducers in the Chain

A transducer, then, is a device that converts or changes. In the sound chain, there are many transducers. A microphone is a transducer. It converts sound into a form of energy that can be recorded or transmitted. A recording device is a transducer. It converts energy from a microphone into a form that can be stored. A playback machine is a transducer. It converts stored information into a form that can be sent to a transmitter or to a speaker. A speaker is a transducer. It converts energy from a playback machine into sound. A transmitter is a transducer. It converts information into broadcast energy for transmission to receivers. A receiver is a transducer. It converts broadcast energy, through speakers, into sound that the human ear collects, gathers and processes.

1.2 The Sound Wave

Vibrations produced by a sound source must travel through space in order to be heard, or transduced. Sound travels through space in pressure waves. It helps to think of how a sound wave travels by imagining a stone dropping into a pool of water. After the impact, waves fan out over the water. From above, it looks like a series of concentric circles. From the side, however, it travels as a series of crests and troughs. The crests occur where most of the energy of the wave is concentrated, and the troughs occur where the energy is most diffused. In essence, all that happens is that molecules are moved. The sound wave is similar. As the sound source vibrates, air molecules are moved. A graphic representation of a sound wave is known as a sine wave.


The points at which the air molecules are concentrated or pushed together (the crests) are points of compression (areas of high pressure). The troughs (where molecules are pulled apart) are the points of rarefaction (areas of low pressure). The distance between one crest and the next is the wavelength. The number of cycles occurring within one second determines a sound's frequency, which is measured in cycles per second, or CPS. The human ear perceives frequency as pitch (how high or how low in frequency we hear a sound). Consequently, the more cycles per second produced, the higher the pitch. The pitch of a steam whistle on a train is higher than the rumbling of an earthmover. The difference is that the air escaping from the train's steam whistle is vibrating much faster than the vibrations caused by the earthmover's engine. The faster the vibration of the air, the higher the frequency and thus the higher the pitch. CPS is often expressed in Hertz, or Hz; one CPS is equal to one Hz. When expressing frequencies in the thousands and millions, additional designations are used. One thousand Hz is equal to 1 kilohertz (1 kHz). One million Hz is equal to 1 megahertz (1 MHz). One billion Hz is equal to 1 gigahertz (1 GHz).
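The unit conversions (Hz to kHz, MHz, or GHz) can be sketched as a small helper function. This is an illustrative sketch only; the name `format_frequency` is an assumption, not something used in the course:

```python
# Sketch: express a frequency in Hz using the prefixes described above.
def format_frequency(hz: float) -> str:
    """Return a human-readable frequency string (Hz, kHz, MHz, or GHz)."""
    for threshold, unit in ((1e9, "GHz"), (1e6, "MHz"), (1e3, "kHz")):
        if hz >= threshold:
            return f"{hz / threshold:g} {unit}"
    return f"{hz:g} Hz"

print(format_frequency(440))          # 440 Hz (concert A)
print(format_frequency(1000))         # 1 kHz
print(format_frequency(98_500_000))   # 98.5 MHz (an FM-band frequency)
```

One CPS equals one Hz, so the same prefixes apply whether the text speaks of CPS or Hz.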

The human ear is able to hear roughly between 20 Hz and 20,000 Hz (or 20 kHz). The lower frequencies, roughly between 10 Hz and 256 Hz, are the bass frequencies. These frequencies are associated with power and "fullness." The lower midrange frequencies lie between 256 Hz and 2,050 Hz. These are the frequencies that carry most of the fundamental tones of a sound. The upper midrange frequencies range between 2,050 Hz and 5,000 Hz. These higher frequencies are in large part responsible for the intelligibility and presence of sound. Most of the fundamental frequencies for speaking fall in the midrange category.

The treble frequencies fall between 5,000 Hz and 20,000 Hz. These frequencies establish the sparkle and clarity of a sound and give it presence.
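The band boundaries above can be summarized in a small classifier. This is a sketch based on the approximate ranges given in the text (not an established audio standard), using 20 Hz to 20 kHz as the limits of human hearing; the name `frequency_band` is an assumption:

```python
# Sketch: classify a frequency into the bands described in the text.
def frequency_band(hz: float) -> str:
    """Map a frequency in Hz to a band name, per the text's approximate ranges."""
    if hz < 20 or hz > 20_000:   # approximate limits of human hearing
        return "outside normal human hearing"
    if hz <= 256:
        return "bass"            # power and "fullness"
    if hz <= 2_050:
        return "lower midrange"  # most fundamental tones
    if hz <= 5_000:
        return "upper midrange"  # intelligibility and presence
    return "treble"              # sparkle and clarity

print(frequency_band(100))    # bass
print(frequency_band(1000))   # lower midrange
print(frequency_band(8000))   # treble
```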