Wednesday, June 2, 2010

TV PRODUCTION

TV Production
Overview


Television viewing has now reached an all-time high in the United States. A 2009 report by the Nielsen Company says the average American television viewer is watching more than 151 hours of television per month -- up from an average of 145 hours the previous year. Media Use and Today's Youth tells a more complete story.


So you really want to learn TV production?
You've come to the right place.

This Isn't A "Home Movies" Course
I'm going to assume you're serious about this; that you want to learn more than you'd need to know just to make home movies of your vacation, your little brother's birthday party, or your trip to Disneyland. The instruction manual that came with your camcorder should cover these needs.
Although most who use these modules study in class situations, many people go through these modules on an individual basis -- i.e., they work for government agencies, TV production facilities, or they just need a personal understanding of the concepts.
Thousands of students around the world are now using this award-winning course in television production to meet serious personal and professional goals.

Completing this course could mean an exciting career in broadcast television, ▲Internet webcasting, institutional videography, satellite programming, mobile video, and other areas -- including the advertising and public relations aspects of any of these. Video production now includes feature films -- the kind you see at your local theater.
I have trained thousands of people in television production and have worked professionally in the field for many years -- and I guarantee that, by the end of this course, you'll have a good understanding of the production process.
Although the modules emphasize studio and field production for broadcast television, what's presented will be applicable to a wide variety of audio- and video-based media. It's all pretty much the same once you know the basics.
Of course, it's desirable to have audio and video equipment to work with -- either personal equipment or equipment provided in a school or lab setting.
Some equipment may not be available to you. That's okay; it's important to understand the equipment and techniques that are part of larger production facilities.
For one thing, you may suddenly be confronted with an internship or job opportunity where this knowledge is essential. Or, you could easily get asked about some of these things in a job interview.
Camerapersons, writers, directors, producers, and even on-camera talent find that having a solid understanding of the tools and techniques of the entire process makes a major difference in the success of productions -- not to mention their careers.
In television production, as in most of today's high-tech areas, knowledge is power.
Enough of the sales pitch. Let's get down to business.

A Bird's Eye View of the
Production Process
Let's take a whirlwind tour of the production process. But unlike a whirlwind tour of Europe (if it's Tuesday, this must be Barcelona), we'll come back to these people and places later. For now, let's take a quick look at the production process from the standpoint of the key people.
We'll start by thinking big -- big productions, that is -- because many of these things can be scaled down, combined, or eliminated in smaller productions.

Who Does What and Why
This list is long, but have you noticed the lengthy credit lists for major films and TV programs?
The person in charge of launching the entire production is generally the producer. He or she comes up with the program concept, lays out the budget for the production, and makes the major decisions. This person is the team leader, the one who works with the writers, hires the director, decides on the key talent, and guides the general direction of the production.
In smaller productions, the producer may also take charge of the more mundane activities. And in small productions, the director may handle the producer's responsibilities. In this case, the combined job title becomes (want to guess?), ▲producer-director.
Some productions may also have an associate producer who sets up schedules for the talent and crew and who generally assists the producer.
On a major production, one of the producer's first jobs is to hire a writer to write the script (the document that tells everyone what to do and say). The script is like a written plan or blueprint for the production.
The producer will next consider the key talent for the production. In general, the talent includes actors, reporters, hosts, guests, and off-camera narrators -- anyone whose voice is heard or who appears on camera.
Sometimes talent is broken down into three sub-categories: actors (who portray other people in dramatic productions), performers (who appear on camera in nondramatic roles), and announcers (who generally don't appear on camera).
In a large production, the producer will hire the director.
The director is in charge of working out preproduction (before the production) details, coordinating the activities of the production staff and on-camera talent, working out camera and talent positions on the set, selecting the camera shots during production, and supervising postproduction (after production) work.
In other words, once the producer sets things in motion, the director is in charge of taking the script from the beginning to the very end of the production process.
Assisting a director in the control room is typically a technical director who operates the video switcher. (A rather elaborate version is shown on the right.)
The technical director, or TD, is also responsible for coordinating the technical aspects of the production.
One or more production assistants (PAs) may be hired to help the producer and director. Among other things, PAs keep notes on ongoing production needs and changes.
The lighting director (LD) designs the lighting plan, arranges for the lighting equipment, and sets up and checks the lighting.
As we'll see, lighting is a key element in the overall look of a production.
Some productions have a set designer who, along with the producer and director, designs the set and supervises its construction, painting, and installation.
The makeup person, with the help of cosmetics, hair spray, etc., sees that the talent look their best -- or their worst, if that's what the script calls for.
Makeup is just one of the areas where a link will take you to advanced information. (We'll discuss the meaning of the colored squares below).
It should be emphasized that specific responsibilities of production personnel will vary widely, depending on the production facility. In Europe, and in particular at the BBC (British Broadcasting Corporation) in London, these distinctions are drawn somewhat differently.
________________________________________
Before you move on to Part Two of this module, let me call your attention to some things:
First, you'll notice the Site Search / Key Terms link at the end of each module. This link is useful in finding terms and phrases anywhere on the site.
Links will also take you to about 100 associated files intended to add to the basic information presented in these modules. (The makeup link above is an example.)
For further readings on any topic click on the link at the bottom of each module for a bibliography of additional readings (the hardcopy type).
For important background information on the television medium, check out the series of modules starting here.
After you visit any of these links, you can either close the window that pops up or click on the back arrow at the top of your browser or the "close window" button to get back to the module you were reading.
These modules are available on independent Internet servers in the United States and Brazil. In case you get lost in cyberspace at some point, you might want to make a note of the following sites in the U.S. where these materials can be found:
• http://www.CyberCollege.com/tvp_ind.htm
• http://www.InternetCampus.com/tvp_ind.htm
If you find that one site bogs down -- we've all known the Internet to do that on occasion -- try the other site. All these sites carry the same TV Production and ▲Mass Media modules.
The "Quick Quiz" button at the end of each chapter takes you to a very short interactive matching game that acts as a review of some of the major concepts in the chapter (and checks to see if you really were awake while you were reading it!).

Green, Yellow, Blue, and Red Readings
And now to explain those little colored squares before most links.
A green square in front of a link indicates information that's important to what's being discussed. We cover this information in the interactive tests and puzzles. Linked information within these readings is not covered in the tests.
A yellow square indicates helpful background reading. This material is not included on the interactive tests, but instructors may include the readings on their own tests.
A blue square indicates technical information designed for advanced classes and professionals; again, this material may or may not be required by an instructor (assuming you are in a classroom setting).
A red square indicates external links with related information not included on the interactive tests -- but your instructor, of course, has the option of asking that you read this information. Please note that the links to these external sources should in no way be considered endorsements, and no compensation is received by CyberCollege or the InternetCampus for including these links. Unlike the links which go to information on this site, we have no control over the content of these external links.
( ▲ ) A black triangle indicates pop-up information directly related to the discussion. Just mouse-over the blue link that follows the symbol.
NOTE: If you find that some of the interactive features on this site don't work, you are probably using a very old browser or you have JavaScript disabled.
________________________________________
* Sites like YouTube have become a major force in today's entertainment, news, information, and politics.
During the run-up to the 2008 presidential election YouTube videos related to the candidates (including the "Obama Girl" on the right) were accessed 2.3 billion times.
________________________________________

Very much in contrast to the frivolity of the Obama girl is the dramatic example of how the power of video -- even a 40-second cell phone video -- can affect people around the world. This is detailed in this tragic account of a young woman named Neda.

Neda's fiancé took this photo shortly before her death on a street in Iran in June 2009. The link above will take you to a more detailed account of Neda and some of today's political realities.

Her story is an example of the power of video and how the major broadcast and cable news networks regularly run viewer-supplied footage from YouTube, Facebook, and similar Internet sites.

It also proves that one person can make a difference in the world -- if that person has a video camera and knows how to use it.



TV Production
Overview
Part II
Let's resume our list of the key people involved in TV production.
Major dramatic productions have a wardrobe person who sees that the actors have clothes appropriate to the story and script.
The audio director or audio technician arranges for the audio recording equipment, sets up and checks mics (microphones), monitors audio quality during the production, and then strikes (another production term meaning disassembles and, if necessary, removes) the audio recording equipment and accessories after the production is over. (Mic, strangely enough, is pronounced mike.)
The microphone boom/grip operator watches rehearsals and decides on the proper mics and their placement for each scene. During an on-location (out-of-the-studio) shoot, this person may need strong arms to hold the mic boom over the talent for long periods of time.
The video recorder operator arranges video recording equipment and accessories, sets up video recordings, performs recording checks, and monitors video quality.
In dramatic productions, the continuity secretary (CS) carefully makes notes on scene and continuity details as each scene is shot to ensure that these details remain consistent among takes and scenes.
As we will see, this is a much more important job than you might think, especially in single-camera, on-location production. Once production concerns are taken care of, the continuity secretary is responsible for releasing the actors after each scene or segment is shot.
We're almost done with our list. Are you still with us?
The ▲CG operator (electronic character generator operator) programs (designs/types in) opening titles, subtitles, and closing credits into a computer-based device that inserts the text over the video.
Camera operators do more than just operate cameras. They typically help set up the cameras and ensure their technical quality, and they work with the director, lighting director, and audio technician in blocking (setting up) and shooting each shot.
On a field (out-of-the-studio, or on-location) production, they may also coordinate camera equipment pickup and delivery.
Depending on the production, there may be a floor manager or stage manager who's responsible for coordinating activities on the set. One or more floor persons, or stagehands, may assist him or her.
After shooting is completed, the editors use the video and audio recordings to blend the segments together. Technicians add music and audio effects to create the final product.
The importance of editing to the success of a production is far greater than most people realize. As we will see, an editor can make or break a production.
This finishes the list of people and what they do. We'll revisit these as we go along, so don't worry if you don't remember them all at this point.
Now for the production itself.

The Three Production Phases
The production process is commonly broken down into preproduction, production, and postproduction, which some people roughly characterize as "before, during, and after."
The Preproduction Phase
There is a saying in TV production:
The most important phase of production is preproduction.

The importance of this is often more fully appreciated after things get pretty well messed up during a production and the production people look back and wish they had adhered to this axiom from the start.
In preproduction the basic ideas and approaches of the production are developed and set in motion. It is in this phase that the production can be set on a proper course or misdirected (messed up) to such an extent that no amount of time, talent, or editing expertise can save it.
The Prime Directive
"Trekkies" know that Star Trek has its prime directive. So does TV production:
________________________________________

Hit the target audience.
________________________________________
In order for the program to be successful, you must keep in mind throughout each production phase the needs, interests, and general background of the target audience (the audience your production is designed to reach).
In order for your program to have value and a lasting effect, it must in some way affect the audience emotionally.
Doing this assumes a knowledge of both the prime directive and the target audience, and it ends up being a key to your professional success.

More on that later.
During preproduction, not only are key talent and production members selected, but all the major elements are planned. Since things such as scenic design, lighting, and audio are interrelated, they must be carefully coordinated in a series of production meetings.
Once all the basic elements are in place, rehearsals can start.
A simple on-location segment may involve only a quick check of talent positions so that camera moves, audio, and lighting can be checked.
A complex dramatic production may require many days of rehearsals. These generally start with a table reading or dry rehearsal, where the talent, along with key production personnel, sit around a table and read through the script. Often, script changes take place at this point.
Finally, there's a dress rehearsal. Here, the talent dresses in the appropriate wardrobe, and all production elements are in place. This is the final opportunity for production personnel to solve whatever production problems remain.
The Production Phase
The production phase is where everything comes together (we can hope) in a kind of final performance.
Productions can be broadcast either live or recorded. With the exception of news shows, sports remotes, and some special-event broadcasts, productions are typically recorded for later broadcast or distribution.
Recording the show or program segment provides an opportunity to fix problems by either making changes during the editing phase or stopping the recording and redoing a segment.
And, Finally, the Postproduction Phase
Tasks, such as striking (taking down) sets, dismantling and packing equipment, handling final financial obligations, and evaluating the effect of the program, are part of the postproduction phase.
Even though postproduction includes all of these after-the-production jobs, most people associate postproduction with editing.
As computer-controlled editing techniques and postproduction ▲ visual effects (VFX) have become more sophisticated, editing has gone far beyond the original concept of simply joining segments in a desired order. Editing is now a major focus of production creativity.
Armed with the latest digital effects, the editing phase can add much in the way of razzmatazz to a production. In fact, it's pretty easy to become enthralled with the special effect capabilities of your equipment.
But, then there is this...

Confusing the Medium With the Message
As fun as all the razzmatazz effects might be to play with, you should consider all this high-tech stuff merely a tool for a greater purpose: the effective communication of ideas and information.
If that sounds a bit academic and stuffy, you might want to look at things from a broader timeline.
If you think about it, today's latest high-tech effects will look pretty lame a few years from now. (Think of the visual effects in some early films.)
It's only the ideas and feelings that have a chance of enduring.
How many times have you seen a movie and forgotten about it almost as soon as you left the theater? In contrast, some movies seem to "stick with you," and you may think about them for days or even weeks.
As we noted, average adults spend more than 150 hours each month watching television. Today, the average U.S. home has more ▲TV sets than people.
The medium you are learning to control can be used either to provide audiences with time-wasting, mindless drivel...
...or with ideas that can make a positive difference in the overall scheme of things. (And, as you may have noticed, there is a definite need in the world for people who can make a positive difference.)
How would you rather have your work and life remembered?
________________________________________

Before You Continue - Some Important Notes
1. First, note that links with a green square in front of them signify required readings. This material (but not any links within the readings) is covered on the interactive tests, the Word Squares, the interactive crosswords, and the interactive Quick Quizzes at the end of the modules. (In other words, don't skip them!)
These linked readings will add perspective and a greater understanding of television's role, impact, and responsibility. George Lucas, one of the most revered film and video innovators of our time, has repeatedly pointed out that to be successful we must go beyond simply knowing how to do things.
It's very foolish to learn the how without the why.
-George Lucas, award-winning writer, producer, and director of the Star Wars films and a leading innovator in film and video.

The linked readings provide a bit of the why.
You can use some of the links, such as the discussion on alleged TV news bias, to promote thought, discussion, and healthy debate within a classroom. There are similar topics for debate -- often rather lively debate! -- in the CyberCollege Forum.
• The interactive tests are not "a piece of cake"; they require a thorough understanding of the modules. To make things more challenging, right answers are worth two points, but one point is deducted from your final score for every wrong answer or skipped question.
2. Television is a visual medium, so we'll occasionally include photos that do not directly tie in with the discussion, but illustrate the power of television to communicate ideas and feelings.
Here is an example.
3. Some optional supplemental readings, starting here, will provide perspective on the impact of TV on society. These are not "green dot required readings," but they may be required by a classroom instructor.
4. To test your understanding of this first module, click on the Interactive Test link below. These interactive tests are not designed to be "a piece of cake." Many questions demand a thorough understanding of the material and some serious thinking about what it all means.
5. For those of you who like to solve crossword puzzles, there are interactive crossword puzzles over key terms and concepts. You can find the links to the module's puzzle at the bottom of the page.
A full index of these puzzles can be found here. Hint: if you get stuck, you can use the search option link at the bottom of each page to find key terms.
6. The bottom of each module contains a number of repeating links. They will take you directly to various resources and key pages: a site search, revision information, the CyberCollege Forum, moving to the next module, etc.
7. Next, there are the interactive Quick Quizzes. (Note the link below.) You can use your mouse to capture and move the answer blocks around to match them up with phrases on the left. These are a very quick review of the basic terms in each module. The Quick Quizzes, like many of the features on this site, require a Java-enabled browser.
8. If you can't get near a computer (especially right before you review for a test) the modules are available in a mobile format -- the kind you need for your cell phone, PDA, iPhone, or BlackBerry-type device.
9. Here is another quote: If you want to eliminate as many mistakes in your own life as possible, study the mistakes of others. Having worked in the radio and television fields for a few decades (and having made my share of mistakes), I talk about these things in an ongoing personal blog.
10. Finally, this required reading shows how a country, with lots of help from the broadcast media, was able to topple a corrupt dictatorship.
________________________________________


Module 2


Program Proposals
And Treatments
Now that you know who does what and you have an overview of the basic production process, let's move on to the actual process of doing a TV production.
Even though you may have a clear idea in your head about what you want to get across in a production, unless you can clearly communicate that idea to the people who can help you launch your production, that's just where your idea will stay -- in your head.
These people include the producer, director, production crew, sponsor, and, most importantly, your audience.
So where do you start?

Writing the Program Proposal or Treatment
The first step in a complex production is to write a clear and succinct summary of your ideas.
We refer to this summary as a treatment in dramatic productions and a program proposal in nondramatic productions.
A sample program proposal for a local TV station is illustrated here.
Often, just the process of putting things down on paper allows you to better organize and clarify your ideas.
This step often reveals weaknesses and gaps you should address before it's too late (or before you're asked about some embarrassing details you hadn't thought of).

Get Agreement on Your Proposal
Getting the go-ahead on a proposal affords everyone a bit of insurance. Once everyone agrees on the treatment or program proposal, it's difficult for someone to say later, "This isn't what we agreed on."
This is especially important in large production facilities and television networks, where a variety of people will be involved in program development.
A simple program proposal may be just a couple of pages or, in the case of a feature-length dramatic production, a treatment can run 60 pages or more.
This is as good a place as any to mention the importance of writing.
Yes, I know, you've heard that since you were in fourth grade.
There may even be some people out there who decided to go into TV (rather than print journalism, for example) because they thought they might be able to escape having to learn how to write.
Sorry.
Although it's a visual medium, TV is still based on the written word. When you get down to it, your ability to write and effectively communicate your ideas ends up being the most important criterion for success.
Unless you want to stick with the very basic jobs in TV, you have to face this reality -- and the sooner the better.
Interestingly, most producers (the people in charge, remember?) arrived at their jobs by first being writers.
Wouldn't you rather end up being someone who makes the major decisions (and is paid accordingly)?
Okay, back to treatments and program proposals.
Although we write them as an aid in presenting and getting agreement on the focus and direction of the production, they are also used to interest key people in supporting the production -- especially financial backers.

See That Your Proposal Engages the
Audience's Interest and Imagination
A program proposal or treatment should cover the essence of the production; or, in the case of a dramatic production, the basic story line.
Dramatic treatments also include the locations and talent required, as well as the key scenes.
In nondramatic program proposals the basic production needs and approximate times of the segments are included.
Anyone reading a program proposal or treatment should be able to get a clear idea of the entire production.
If disagreement exists on the program concept, it's much easier to change things at this stage than after the complete script is written.
Brief instructions on writing a treatment can be found here.
Finally, the treatment or program proposal must engage the interest of readers and go a long way toward convincing them of the probable success of the production -- which we'll cover in Module 3.
Required Reading For This Module
________________________________________


Module 3



Capturing
And Holding
Viewer Attention

It would be difficult to think of any business that's more competitive than TV broadcasting. The average viewer in the United States has dozens of TV channels from which to choose.
Each year, the TV industry spends millions of dollars trying to make successful new TV shows. And each year most of these attempts don't even make it to air (broadcast).

First, Get Their Attention!
The success of a TV show (and, therefore, your own professional success) will depend in large measure on your ability to effectively capture and hold an audience.
And, once you do, you'd better have something interesting to communicate or they'll quickly go elsewhere -- either tuning to another channel or just mentally tuning you out.
"But," you say, "I don't want to worry about all that; I just want to make TV shows that interest me."
That's great, but who's going to pay for them?

Reality 101
Let's take a quick look at our Reality 101 course notes.
TV productions cost a lot of money, especially today. To cite just one example, in 1966 the budget for each full episode of Star Trek was $100,000. In 2003, each episode of Enterprise, which is similar in form, cost about $100,000 per minute to produce. Today, the cost would be much higher.
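To get a rough sense of scale, here is a back-of-the-envelope comparison; the figure of about 43 minutes of actual program content in an "hour-long" commercial episode is an illustrative assumption, not a number from the paragraph above:

\[
\$100{,}000\ \text{per minute} \times 43\ \text{minutes} \approx \$4.3\ \text{million per episode}
\]

Even ignoring four decades of inflation, that's more than 40 times the entire budget of a 1966 Star Trek episode.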
Before people put up that kind of money, they have to believe there will be some kind of return on their investment.
Depending on the type of production, that return may be to communicate a corporate message effectively, to get viewers to understand a series of concepts or, in the case of commercial television, to generate profits by selling products.


Hit the Target (Audience)
As we've noted, we use the term target audience to indicate the specific segment of a potential audience we're "aiming at."
Regardless of the type of production, you must start with a clear understanding of the needs and interests of your specific target audience.
Advertisers spend millions of dollars determining these things.
Depending on the products they want to sell, advertisers will have certain ▲demographic preferences.
For designer jeans, for example, the target audience would be fairly affluent teenagers. The same advertisers wouldn't be interested in sponsoring reruns of Murder, She Wrote, which appeals primarily to an older audience.
By the way, the principles of determining the needs and interests of your target audience also apply to something as simple as producing a video for your class. If only an instructor will be evaluating your video, you'll probably take a different approach than if it's intended for a graduation party. In either case, meeting the needs of your target audience is the key to success.
Let's look at just a few of the issues involved.

Using Audience-Engaging Techniques
Audiences primarily react emotionally to program content.
Although people may want to believe they're being completely logical in evaluating a program, their underlying emotional reaction most influences their evaluation. Even a logical, educational presentation evokes -- for better or worse -- an emotional response.
This is a key concept, which Benjamin Franklin (a noted persuader) put this way:
If you would persuade, you must appeal to interest rather than intellect.
-Benjamin Franklin

What types of production content are most apt to engage our interest and affect us emotionally?
First, we have an interest in other people, especially in "experiencing the experiences" of other people.
We're interested in people who lead interesting (romantic, dangerous, wretched, or engrossingly spiritual) lives.
Part of this involves gaining new insights and being exposed to new points of view. This includes learning new things.
Here's something else to keep in mind.
Viewers like content that reinforces their existing attitudes and, right or wrong, they tend to react against ideas that run contrary to their beliefs.
Production people, therefore, must be careful in presenting ideas that blatantly challenge widely held beliefs.
The trick is to know how far you can go without alienating an audience.
To cite a rather extreme example, a number of years ago an East Coast TV station did an exposé on a local police chief. An undercover reporter (one of my former students, in fact) put a camera in a lunch box and filmed the police chief clearly taking a bribe.
When the segment was broadcast, there was negative reaction — against the TV station.
It seems the police chief was popular with many influential people in the community and having the truth presented in this way challenged their commitment to this individual. This reaction on the part of many viewers was justified by cries of entrapment, a liberal media bias against a law-and-order official, etc.
This wasn't the first time a messenger was blamed for the message.
The same negative anti-media reaction took place when former U.S. President Richard Nixon was forced to resign from office for engaging in illegal activities while in the White House. To see how this came about, rent the Academy Award winning feature-length film, All the President's Men. The film represents an important piece of U.S. history presented in a dramatic and even exciting way. It also illustrates how two tenacious reporters faced down major high-level opposition to expose wrongdoing.
Eventually, this U.S. president had to resign. The reporters involved kept the identity of "Deep Throat," the inside informer involved, secret for several decades.
If a democracy is to be successful, the news media have a social responsibility to bring truth to light --
even though that truth may be unpopular.


Audiences also like to hear about things that are new and that generate excitement.
This is why mystery, sex, fear, violence, and horror do so well at the box office.
It also explains why we see so many car chases, explosions, and general instances of mayhem in our films and TV programs.
Such things stir our adrenaline and involve us emotionally. In short, they hold our attention.
This, of course, brings up the possibility of exploitation, presenting things that appeal to elements of human nature that -- how shall we say this -- aren't the most positive.
Sometimes a rather blurry line exists between honestly presenting ideas and stories, and unduly emphasizing elements such as sex and violence just for the sake of grabbing and holding an audience.
Beyond a certain point, audiences will sense they're being exploited and manipulated, and resent it.
And, keep in mind, the content of a production, good or bad, tends to rub off on the reputations of those who produce it -- and even on the sponsors who support it.
With this general background on programming elements that appeal to audiences, we'll next turn to the production sequence.
But first, here's some required reading for this section.
Note the first of the Word Squares below.
________________________________________



Module 4




The Production Sequence



Let's think big -- as in a big production.
Following are 15 basic steps required for an elaborate television production. Once you get a feel for the entire process, you can scale things down for any sized production.

Identify the Purpose of the Production
1. This is the most important step: Clearly identify the production's goals and purposes.
If there is no clear agreement on the goals and purposes of a production, it will be impossible to evaluate its success. (How will you know if you've arrived at your destination, if you didn't know where you were going in the first place?)
Is the purpose to instruct, inform, or entertain -- or maybe to generate feelings of pride or express a social, religious, or political need? Is the real purpose to create a desire in the audience to take some action?
Let's be honest. The primary goal of most broadcasting is simply to hold the interest of an audience through the intervening commercials. Even PBS (Public Broadcasting Service), which used to be commercial free, now runs "mini-commercials" for their corporate underwriters.
Most productions have more than one goal and we'll elaborate on some of these later.

Analyze Your Target Audience
2. Next, identify and analyze your target audience.
Based on such things as age, sex, socioeconomic status, and educational level, program content preferences will differ.
These preferences are also different in various regions of the United States (e.g., North, South, urban, rural).
As we've noted, we refer to audience characteristics as demographics.
We can see regional demographic variations in part by differences in local programming in various areas of the country -- and sometimes by the films and network programming that local stations decide not to air.
Sex and violence are chief among these content issues -- and both show a positive relationship to ratings.

Identify Demographics to Determine the
Acceptability of Content
Knowing your audience, of course, is crucial to success -- and not understanding it is at the base of many failures. Let's look at some examples.
Generally speaking -- and, of course, there are many exceptions -- when it comes to sexual themes, people living in Northern urban areas of the United States tend to be more tolerant than people who have a rural background and live in the South.
Education is also related. Research shows that, generally, the more educated the audience, the less they object to sexual themes.
Interestingly, it appears that this relationship seems to be the reverse when it comes to violence: More educated audiences are less tolerant of violence in the media.
Here are some examples from recent programming decisions.
Knowing that more than 41,000 women die of breast cancer in the United States each year, a female program manager of a TV station in the South decided to run a PSA (public service announcement) on the importance of doing breast self-examinations.
Even though the PSA ran late at night and may have seemed rather bland in its approach, there was immediate negative reaction on the part of some people who thought the subject matter was too personal to be broadcast. Because of viewer complaints, the station canceled the PSA.
In 2006, a similar controversy broke out over a vaccination that can prevent cervical cancer, which kills more than 3,500 women in the United States each year. Some social conservatives oppose both its use and the dissemination of pro-vaccination information. Although the vaccine reportedly could save thousands of lives, cervical cancer is associated with sexual activity, and these people feel the vaccination could encourage premarital sex.
With strong pressures on each side of controversial issues, broadcasters have to try to sort out fact from fiction while trying to stay sensitive to public attitudes.

What is and is not broadcast is largely determined by audience feedback and this varies greatly with demographics. For example, in contrast to the seemingly innocuous PSA on breast cancer we mentioned, some PBS stations have run programming with brief full frontal nudity late at night without appreciable reaction.
The difference? Demographics. The people most apt to complain weren't watching, and the people watching were least apt to complain.
You may have a compulsion to "just tell it like it is" and not be concerned about alienating your audience.
Time to review those Reality 101 notes. If you consistently disregard audience preferences and predispositions, you'll limit your future in TV production.
But what if you're not producing programming for broadcast or general distribution?
Compared to standard broadcast television, institutional television, which includes corporate and educational video, has different needs and expectations. But here too, predominating demographic characteristics, such as age, sex, and education, influence a production's form and content.
For example, to underestimate education or experience and inadvertently "talk down to" an audience insults them. To overestimate education or experience and talk over people's heads is just as bad. Either way, you lose.

Check Out Similar Productions
3. Check out similar productions from the past. If you're going to make mistakes, at least make new ones.
Ask yourself some questions: How will your proposed production differ from previous successful and unsuccessful efforts by others? Why did they work; or, maybe more importantly, why didn't they?
Of course, since production styles change rapidly, you need to take into consideration differences in time, locations, and audiences. This link will take you to more information on the success and failure of TV programs.

Determine the Basic Value
of Your Production
4. Next, determine the overall value of the production to a sponsor or underwriter. Obviously, underwriters and advertisers want something in return for their investment.
For this, you'll need to ask yourself some questions. First, what is the probable size of the audience? In determining this, you must know if your show will be a one-shot presentation or if you can recoup production expenses over time by presenting the show to other audiences.
Generally, the larger the audience the more marketable a production will be to an underwriter or advertiser.
At the same time, simple numbers don't tell the full story.
Let's say an advertiser has a product designed for young people -- athletic shoes or designer jeans. In this case, a production that draws a large percentage of this age group will be more valuable than a production that has a larger overall audience, but a lower percentage of young people.
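To illustrate with made-up numbers: suppose production A draws 10 million viewers, of whom 8 percent are teenagers, while production B draws only 4 million viewers, but 30 percent of them are teenagers.

\[
A:\ 10{,}000{,}000 \times 0.08 = 800{,}000\ \text{teen viewers}
\qquad
B:\ 4{,}000{,}000 \times 0.30 = 1{,}200{,}000\ \text{teen viewers}
\]

To the jeans advertiser, the "smaller" production B actually delivers more of the viewers it wants to reach.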
Broadcasters have canceled many TV series, not because they had a small audience, but because they had the wrong kind of audience (the wrong demographics).
You'll always want to balance the potential value of a production to an advertiser or underwriter with the projected cost of producing and presenting the production.
If the costs exceed the benefits, you have a problem!
In commercial television, the return on investment is generally in the form of increased sales and profits. But it may take other forms, such as the expected moral, political, spiritual, or public relations benefit derived from the program.

Develop a Treatment
Or Production Proposal
5. Next, put it down on paper. The steps involved span the interval from the initial proposal to the final shooting script. You'll recall that earlier we talked about treatments and program proposals (written summaries of what you propose to do).
After the program proposal or treatment is approved, the next step is to write and submit a full script. It will be at this point that any remaining research on the content will be commissioned.
For example, if the script calls for someone watching TV in a 1960s period piece (a production that takes place during a specific historic era), you should check on the television shows broadcast at that time. (Would we see an episode of Law & Order on a TV screen during a documentary on Elvis Presley?)
The first version of a script is often followed by numerous revised versions.
Throughout the rewriting process, a number of story conferences or script conferences typically take place.
During these sessions, audience appeal, pace, potential problems with special-interest groups, and so on are discussed.
If it's an institutional production, you'll review the production's goals and pose questions about the most effective ways to present ideas. If the director is on board at this time, he or she should be part of these conferences.
Finally, a script version emerges that is (we can hope) more or less acceptable to everyone. Even this version, however, will probably not be final. In many instances, scene revisions continue right up to the time the scenes are shot.
Typically, in a dramatic film production each new script version is issued on a different color paper so that the cast and crew won't confuse them with earlier versions.
Depending on the production, you may want to develop a storyboard.
A storyboard consists of drawings of key scenes with corresponding notes on elements such as dialogue, sound effects, and music. (Note the simple storyboard on the right.)
Today, high-budget film and video productions create sophisticated storyboards with software supplied by companies such as Zebra Development.

Develop a Production Schedule
6. Next, draw up a tentative schedule. Generally, broadcast or distribution deadlines will dictate the production schedule (the written timetable listing the time allotted for each production step).
Not planning things out carefully might cause you to miss a critical deadline, rendering the production useless.

Select Key Production Personnel
7. Bring on board the remaining above-the-line production personnel. In addition to the producer and writer, above-the-line personnel include the production manager, director and, in general, key creative team members. Below-the-line personnel, generally assigned later, include the technical staff.

Decide On Locations
8. If you're not shooting in the studio, decide on key locations.
In a major production you will hire a location scout or location manager to find and coordinate the use of the locations suggested by the script.
Although it might be easier to shoot in a TV studio, it's been shown that audiences like the authenticity of "real" locations, especially in dramatic productions.
Most major cities encourage TV and film production and maintain film commissions that supply photos and videotapes of interesting shooting locations in their area. They'll also provide information on usage fees and the names of people to contact.
It's often necessary to make changes in the on-location settings. For instance, rooms may have to be repainted or redecorated and visible signs changed.

Decide On Talent, Wardrobe and Sets
9. Next, you'll want to make some decisions on talent, wardrobe (costuming) and sets.
Depending on the type of production, auditions may take place at this point as part of the casting process (selecting people for the various roles).
Once completed, you'll negotiate and sign contracts.
If you're lucky enough to afford well-known actors, you'll probably have decided on them early in the preproduction process.
Once you decide on the talent, you can begin wardrobe selection. Wardrobe choices are suggested by the script, coordinated with the look of the sets and locations, and ultimately approved by the director.
After a set designer is hired, he or she will review the script, possibly do ▲research, and then discuss initial ideas with the director.
Once there's agreement, sketches of the sets can be made for final approval before actual set construction starts -- if there is any construction. Today, many sets exist only in computers and the actors are ▲electronically inserted into them. If this is the case, the set sketches will be given to a computer artist.
You can then schedule rehearsals, from initial table readings to the final dress rehearsal.
Even though personnel may not have finished the sets at this point, the actors can start reading through the script with the director to establish pace, emphasis, and basic blocking (the positioning of sets, furniture, cameras, and actors).
Once the sets are finished, the final blocking and dress rehearsals can get underway.


Decide on the Remaining
Production Personnel
10. Make decisions on the remaining staff and production needs. At this point you can arrange for key technical personnel, equipment, and facilities. This includes the rental of both equipment and production facilities.
Next, arrange transportation, catering (food and refreshment trucks) and on-location accommodations (for any overnight stays).
Unions, which may or may not be involved, often set minimum standards for transportation, as well as the quality of meals and accommodations. Union contracts also cover job descriptions, specific crew responsibilities and working hours, including graduated pay increases for overtime hours.

Obtain Permits, Insurance, and Clearances
11. In major U.S. cities and in many foreign countries it's not possible just to go to the location of your choice, set up your tripod, and start filming. Except for spot news and short documentary segments, you must arrange access permits, licenses, security bonds, and insurance policies.
Many semipublic interior locations, such as shopping malls, require filming permits. (Yes, these things do get complicated!)
Depending on the nature of the production, liability insurance and security bonds may be necessary because accidents can happen that can be directly or indirectly attributed to the production.
In some locations, the controlling agency will limit exterior production to certain areas and specific hours. In a street scene where traffic will be affected you'll need to arrange for special police.
We also include in this category a wide variety of clearances, ranging from permission to use prerecorded music to reserving satellite time to transmit the production back to a studio. If you can't obtain clearance, you need time to explore alternatives.
Are you beginning to see why the list of credits in films and TV programs is so long?

Select Video Inserts,
Still Photos, and Graphics
12. Arrange to shoot or acquire videotape and film inserts, still photos, and graphics.
To reduce production costs check out existing stock footage in film and tape libraries around the country. This is generally background footage, such as general exterior scenes of an area that will be edited into the production.
One example of a stock footage source is Film & Video Stock Shots in North Hollywood, California. (As we mentioned in Module 1, we have no control over the content of these external links, and they should in no way be considered endorsements.)
If suitable footage is not available or does not meet the needs of the production, you may need to hire a second unit to produce needed segments.
Second unit work is production done away from the main location by a separate production crew and generally does not involve the principal, on-camera talent.
If part of a dramatic production calls for shots of a specific building in Cleveland, for example, a second unit can shoot the necessary exteriors in Cleveland while the primary unit works on interior shots in Southern California where the actors live. When the shots are edited together it will appear that the interior shots belong to the building in Cleveland.
You will want to begin to make decisions on music at this point, including working out copyright clearances and royalties for music and visual inserts. We'll discuss these in more detail later.

Begin Rehearsals and Shooting
13. Start rehearsing and shooting. Depending on the type of production, rehearsals may take place either minutes or days before the actual shooting.
Productions shot live-on-tape (without stopping, except for major problems -- whether recorded on videotape or another medium) will need to be completely rehearsed before recording starts. This includes early walk-through rehearsals, camera rehearsals, and one or more dress rehearsals.
Productions that are shot single-camera, film-style (to be covered later) are rehearsed and recorded one scene at a time.

Begin Editing Phase
14. After shooting is completed, the producer, director, and video recording editor review the footage and start to make editing decisions. This ▲has typically been done in two phases: on-line and off-line.
Briefly, in off-line editing, copies of the original taped footage that contain time-code number references are used to develop a kind of blueprint for final editing. In on-line editing, the original footage itself is used.
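To make the "blueprint" idea a bit more concrete, here is a minimal sketch of how one off-line editing decision might be written down as data. The Edit structure and its field names are invented for illustration; they are not the format of any real editing system:

from dataclasses import dataclass

@dataclass
class Edit:
    """One off-line decision: which part of which source recording
    goes where in the finished program (all names are hypothetical)."""
    source_tape: str   # label of the original recording
    source_in: str     # time code where the segment starts
    source_out: str    # time code where the segment ends
    program_in: str    # where the segment lands in the final cut

# A tiny off-line "blueprint"; the on-line session would later re-create
# these cuts from the original, full-quality footage.
edit_list = [
    Edit("TAPE-01", "00:01:16:12", "00:01:22:00", "00:00:00:00"),
    Edit("TAPE-03", "00:14:05:20", "00:14:09:10", "00:00:05:18"),
]

for e in edit_list:
    print(f"{e.source_tape}: {e.source_in} -> {e.source_out} at {e.program_in}")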
During the final editing phase, sound sweetening (enhancing), color balancing, and visual effects are added.
Because editing is so important to the creative process, we're going to devote several chapters to the subject.
If all these terms and procedures sound a bit intimidating right now, don't worry; we'll explain them in more detail later.

Do Postproduction Follow-Up
15. Although most of the production crew will conclude their work by the time production wraps (finishes), some follow-up work generally needs to be completed.
This includes totaling up financial statements, paying the final bills, and determining the production's success (or failure). Ratings indicate success levels in broadcast television. In institutional television, success may be determined by testing, program evaluations, and viewer feedback.
Speaking of ratings -- those numbers often spell the life and death of TV programs. You can check your understanding of these things here.
In the next module, we'll turn our attention to that important "blueprint" for the entire TV production, the script.
________________________________________


________________________________________
Module 5



The Script --
The Key Element
In Productions

With the basic overview of the production process out of the way, we can look at the key element in the process: the script.
There are semi-scripted shows and fully scripted shows.
In the first category are interviews, discussions, ad-lib programs, and many demonstration and variety shows. These scripts resemble a basic outline, with only the segments and basic times listed.
Although scripts for a semi-scripted show may be comparatively easy to write (since there's very little to write!), this type of show puts pressure on the director and talent to figure things out as they go and to try to bring things together "on the fly."
Much in contrast, scripts for fully scripted shows list the complete audio and video for every minute. In a fully scripted show, the overall content, balance, pace, and timing can be figured out before the production starts so that surprises can be minimized. (Notice we didn't say eliminated).

The Concrete-to-Abstract Continuum
Documentary and hard news pieces should be reasonably concrete. That is, they should present information clearly, minimizing the possibility for misunderstanding.
In fact, the better you are at clearly explaining things, the more successful you'll be.
A concrete news script is quite different in approach and structure from the script for a feature story, soft news piece, music video, or dramatic production. In the latter cases, it's often desirable not to be too concrete -- in order to allow room for personal interpretation.
Let's look at two examples.
An instructional video on the operation of a software program should be as explicit as possible. Given the nature of computers and computer programs, you should present information in a clear, systematic fashion.
Although you'll want to present the material in a creative, interesting, and possibly even humorous way, the challenge is in having all audience members acquire the same clear idea of a specific sequence of operational procedures. If most of the audience can successfully operate the program afterward, you're successful; if they can't, you're not.
In contrast to this concrete type of production there are, for example, feature pieces on Jazzercise or new fashions.
Given the fact that the audience has undoubtedly seen scores of television segments on fashion, the first challenge is to approach the segment in a fresh, creative, attention-getting way.
Unlike software programs or stereo components, fashions are not sold based on technical specifications. Because they appeal largely to the ego and emotions, we're less interested in communicating facts than in generating excitement, i.e., creating a positive emotional response.
Likewise, a soft news piece on exercise should not emphasize facts as much as action. Its approach should be more abstract. Instead of facts, its purpose is to communicate something of the feelings surrounding exercise and those that go along with having a slim, trim, fit body.

Hold Their Interest
In scripting content, a logical and linear sequence is the most natural approach, especially when information must be presented in a precise, step-by-step fashion. Recall the instructional computer piece we cited.
Often, however, it's not desirable to use a structured, linear presentation. In fact, the latter can get a bit predictable and boring.
In dramatic productions, the techniques of using flashbacks (momentarily cutting back to earlier events) or presenting ▲ parallel stories (two or more stories running at the same time) can add variety and stimulate interest.
Whatever you do, be certain to present the materials in a way that will hold the attention and interest of your audience. You can do this by:

• engaging the audience's emotions

• presenting your ideas in fresh, succinct, clear, and creative ways

• making your viewers care about the subject matter

• using aural and visual variety
While visualizing your scenes, if you discover spots that don't seem as if they would hold viewer attention, make changes.
Remember, if you lose your audience, you've compromised the whole purpose of your effort.
Spicing Up Interviews
For better or worse, interviews serve as the mainstay of many, if not most, nondramatic productions. Because of this and the difficulty involved in making interviews interesting, they require special attention. (Later, we'll talk about interviewing techniques.)
Even though "talking heads" can get pretty boring, the credibility of an authority or the authenticity of the person directly involved in the story is generally better than a narrator presenting the same information.
However, except for rather intense and emotional subject matter, keep in mind that once viewers have seen what someone looks like during an interview, you will probably want to enhance interest and pace in your piece during the editing phase by cutting in B-roll (related supplementary) footage.
B-roll footage consists of shots of people, objects or places referred to in the basic interview footage -- the A-roll.
At the same time, don't let the B-roll footage distract from what's being said.
In television, "A-rolls" and "B-rolls" refer to rolls or reels of videotape. At the same time, other recording media are now replacing videotape.
Although audio and video technology changes rapidly, in this case and many others we tend to stick with the original (and often outdated) terms to describe things. Recall that in England the TV control room is still called "the gallery" -- a term that harks back to a physical arrangement that hasn't been used since about 1940.
Whenever you plan an interview, plan for supplemental, B-roll footage. Sometimes you won't know what this will be until after the interview, so you need to keep your production options open.
In postproduction, you'll need to specify exact points in the interview (the A-roll) where the B-roll footage will go. Simply trying to describe points in scenes for edits can be difficult and open the door to errors -- not to mention require a lot of words. The only way to specify precise audio and video edit points is to use time-code numbers.
Time code, sometimes called ▲SMPTE/EBU time code after the organizations that adopted it, refers to the eight-digit numbers that identify the exact hours, minutes, seconds, and frames in a video.
These numbers specify points on video recordings to within 1/30th of a second -- a level of accuracy important for a tightly edited show.
Note the time-code numbers in the picture on the left. In this case, we read them as 0 hours, 1 minute, 16 seconds, and 12 frames. We'll go into time codes more in the audio and video editing sections.
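To see how the time-code arithmetic works, here is a minimal Python sketch. It assumes non-drop-frame time code at a nominal 30 frames per second; drop-frame time code, which we won't cover here, complicates the math slightly.

# Minimal sketch: convert HH:MM:SS:FF time code to a frame count and back.
# Assumes a nominal 30 frames per second (non-drop-frame NTSC).
FPS = 30

def timecode_to_frames(hours, minutes, seconds, frames):
    """Return the total number of frames from the start of the recording."""
    return ((hours * 60 + minutes) * 60 + seconds) * FPS + frames

def frames_to_timecode(total_frames):
    """Return (hours, minutes, seconds, frames) for a given frame count."""
    frames = total_frames % FPS
    total_seconds = total_frames // FPS
    return (total_seconds // 3600, (total_seconds // 60) % 60,
            total_seconds % 60, frames)

# The example above: 0 hours, 1 minute, 16 seconds, 12 frames.
print(timecode_to_frames(0, 1, 16, 12))   # 2292 frames into the recording
print(frames_to_timecode(2292))           # (0, 1, 16, 12)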

Assembling the Segments
Documentary writers who prefer a systematic approach (and have the luxury of time) start by typing -- or having typed -- a transcript of the interviews on a computer, complete with time-code references. This is especially valuable if they need to break up numerous lengthy interviews and rearrange them in a topical sequence.
Once on computer disk, writers can do word or phrase searches and quickly locate key words or topics in the interview segments.
Most word processing programs allow two or more windows on the screen.
Using this approach you can search and review the interview transcript in one window while writing the script in the other. Thus, you can easily condense, rearrange, and assemble the segments directly on the computer screen to provide the most logical and interesting flow.
If time-code numbers are included with the video segments, you should make a note of the time codes on the script as you go along in case you later need to change anything.
In some instances you may be able to "run" video and audio sequences on the computer and see the results as you proceed.
Whenever it's necessary to explain or amplify points or establish bridges between interview segments, you can write narration. An announcer will generally read this over B-roll footage.
In writing the script be alert at every moment to use the most effective means of getting your ideas across.
Ask yourself which technique(s) will best illustrate your point: narration, a short clip from an interview, an electronically animated sequence, a graph, or a still photo?
Some sophisticated editing programs have speech recognition capabilities, which means they can search for spoken words or phrases in video footage.

As you pull the elements together, think of yourself as watching the show; try to visualize exactly what's going on at each moment. Great composers can hear each instrument in their heads as they write music. In the same way, effective scriptwriters visualize scenes as they write their scripts.
In establishing the pace of the production, eliminate long, slow periods and even long fast-moving periods. Either will tire an audience.
Except for a short, fast-paced montage (a rapid succession of images), keep shot segments to at least two seconds in length. Conversely, only a scene with plenty of action or intensity will be able to hold an audience for more than a minute.
Remember, engage your audience quickly and leave them with a positive impression at the end. In between, keep interest from drifting by varying pace, emotional content, and presentation style.
Not always an easy assignment.
Outside Readings: Since these modules deal with computers, digital cameras and TV news, we have assembled a listing of sources of news for each of these topics.

These articles are updated each day to provide the very latest information in each of these areas. Click on: Latest on PCs, Macs, Digital Cameras, Plus, Up-to-the-Minute News and Information.

There is also a readily available link to these articles on the TV production index page.


________________________________________


Module 6



Scriptwriting
Guidelines

Can a contractor build an office building without being able to understand the architect's blueprints? Not likely.
In somewhat the same way, key production personnel must be able to understand scripts, especially the nuances in good dramatic scripts, before they can translate them into productions.
A comprehensive guide to scriptwriting is beyond the scope of this course. However, when you complete this module, you should understand the basic elements of scripts and even have a good start on writing one. (Remember: the most traveled route to producing is through writing.)

"Excuse Me, Mr. Brinkley..."
Many years ago, while dining in a Miami restaurant, a TV production student of mine saw David Brinkley, one of the most experienced and respected network anchorpersons of all time.*
The student strode boldly up to Mr. Brinkley, introduced himself as an aspiring TV journalist, and asked:
"Mr. Brinkley, what advice could you give me to be successful in broadcast journalism?"

David Brinkley, who won more awards in news than any radio or TV newscaster in history, put down his fork, thought for a moment, and said, "Three things: Learn to write. Learn to write. And learn to write."
Although you can learn the basics of writing here or in a good book, you can become a good writer only by writing.
Doing lots of writing.
Most successful writers spend years writing before they start "getting it right" -- at least right enough to start making money consistently.
In a sense, initial failures aren't failures at all; they're a prerequisite for success.
Thomas Edison said, "Genius is one percent inspiration and ninety-nine percent perspiration."
By another definition, a genius is a talented person who has done all his or her homework. These modules constitute the prerequisite homework involved in success.
Keep in mind that writing for the electronic media is not the same as writing for print. Those who write for print enjoy some advantages their broadcast counterparts don't have.
For example, a reader can go back and reread a sentence. If a sentence isn't understood in a TV production, however, the meaning is lost -- or worse, the listener is distracted while figuring out what was said.
With the written word, such things as chapter divisions, paragraphs, subheadings, italics, and boldface type guide the reader. And the spelling of sound-alike words can indicate their meaning.
Things are different when you write for the ear.
In order to deliver narration in a conversational style, you don't always follow standard rules of punctuation. Ellipses...three dots...are commonly used to designate pauses. Often, complete sentences aren't used...just as they aren't used in normal conversation. In broadcast writing an extra helping of commas provides clues to phrasing.
Although such usage is sometimes inconsistent with proper written form and your English 101 teacher may not approve, the overriding consideration in writing narration is clarity. This entails making it easy for an announcer to read, and making it easy for an audience to understand.
The way we perceive verbal information also complicates things.
When we read, we see words in groups or thought patterns. This helps us grasp the meaning.
But, when we listen, information is delivered one word at a time.
To make sense out of a sentence we must retain the first words in memory while adding all subsequent words, until the sentence or thought is complete.
If the sentence is too complex or takes too long to unfold, meaning is missed or confused.
Of course, through proper phrasing and word emphasis a narrator can go a long way toward ensuring understanding. This gives the spoken word a major advantage over the written word.

Broadcast Style
Writers write video scripts in broadcast style. With allowance for sentence variety, video scripts use short, concise, direct sentences.
You should also be aware of ▲some common mistakes, such as the difference between further and farther and less than and fewer than.
Of course, the English language is constantly changing. Things which were deemed "wrong" at one point can eventually come into regular use and become accepted. (For example, in the preceding sentence "which" should actually be "that," but this is another case where things have been changing.)
"Close proximity" is becoming accepted, even though proximity means close, so it's actually redundant.
"There are less concerns about good grammar in advertising" should be "fewer concerns." Fewer relates to things you can count; less to things you can't.
"Whom," even when correctly used in speech, now sounds stilted. "Irregardless" can be found in a couple of dictionaries -- even though it's not seen as acceptable.
________________________________________
In writing your scripts, remember that the active voice is preferred over the passive voice, nouns and verbs over adjectives, and specific words over general ones.
Facts must be taut, verbs strong and active;
a script should crackle.

Avoid dependent clauses at the beginning of sentences. Attribution should come at the beginning of sentences ("According to the Surgeon General...") rather than at the end, which is common in newspaper writing. In broadcast style, we want to know from the beginning who's doing the "saying."
The classic reference on clarity and simplicity in writing is Strunk and White's little 70-page book, The Elements of Style. Even many seasoned journalists keep it handy.
A recent book on punctuation is Lynne Truss' and Bonnie Timmons' Eats, Shoots & Leaves. Who would believe an instructional book on a mundane subject like punctuation could make the New York Times best-seller list? But as the saying goes, "It's not what you say, but how you say it" -- something that's especially important in writing scripts.

Ten Newswriting Guidelines
With a bit of help from Deborah Potter of RTNDF, the Radio and Television News Directors Foundation, here are ten guidelines for writing news:
1. While making sure you bring the most interesting and surprising elements to the forefront of your story, don't give away everything right at the beginning.
Maintain interest by spreading these "nuggets" throughout the story. And try not to let the lead-in to the story steal the thunder from what follows.
2. Use the active voice: subject, verb, and object.
3. Remember that nouns and verbs are stronger than adjectives and adverbs. Don't tell viewers what they should be feeling by using adjectives, especially shopworn adjectives, such as "tragic," "amazing," and "stunning." If the story's facts don't make such things obvious, you might want to examine your approach.
4. Avoid jargon; use well-known terms. For example, your audience probably won't know what ENG and B-roll mean.
5. Include defining details, such as the make of the car and the type of trees being cut down.
6. Write (tell!) the story as if you were trying to catch the interest of a friend. Try mentally to follow up on the phrases, "Guess what...," or "This may be hard to believe, but...."
7. After you write something, set it aside for at least ten minutes and concentrate on something else. Then go back and review the story with a fresh perspective.
At that point it may be easier to catch and eliminate unnecessary words and phrases.
8. Read the story aloud (not under your breath).
Rewrite:
• sentences that are too long
• tongue-twisting or awkward phrases
• phrases that could be taken two ways
• long titles ("The 18-year-old, College Park Central High School sophomore...")
9. Don't rely on the sound track to tell the story or explain the video. The basic idea should be obvious from the video. At the same time, the audio and video should complement and strengthen each other. (See the section below.)
10. Screen the complete audio and video story (package) as a "doubting Thomas." Have you made statements that could legitimately be challenged? Your clearly stated and verified facts should silence any rational critic.

Correlate Audio and Video
Keep in mind the basic guideline of correlating (relating) audio and video because viewers are accustomed to having what they see on the screen relate to what they hear -- generally in the form of dialogue or narration. (Note that the intentionally long and complex sentence you just read would not be appropriate in broadcast style.)
If viewers see one thing and hear another, things get confusing.
Even though you want audio and video to relate, watch out for the "see Dick run" approach where the audio states the obvious. If you can clearly see what's happening on the screen, this can get downright annoying.
Although radio drama had to slip many things into the dialogue to tip off the listeners to what they couldn't see ("Emma, why are you staring out the window?"), this is hardly the case with TV, where you can see what's taking place.
The trick is to write slightly off the pictures. This means that, while you don't describe the pictures, your words aren't so far removed from what is being seen that you split viewer attention. This technique involves a delicate balancing act.

Information Overload
With more than one hundred TV channels available to viewers in some areas and millions of pages of information available on the Internet, to name just two sources of information, one of today's biggest problems is information overload.
In TV production the goal is not just to unload information on viewers. To be successful you must engage your audience and clearly communicate selected information in a manner that will both enlighten and possibly even entertain.
We can absorb only a limited amount of information at a time. The average viewer has preconceptions and internal and external distractions that get in the way.
If a script is packed with too many facts, or if the information is not clearly presented, the viewer will become confused, lost, and frustrated.

Lost vs. Bored
Not only is the amount of information you communicate important, but also the rate at which it's presented.
In information-centered productions, give the viewer a chance to process each idea before moving on to the next.
If you move too rapidly, you'll lose your audience; too slowly, and you'll bore them.
The best approach in presenting crucial information in an instructional production is first to signal the viewer that something important is coming.
Next, present the information as simply and clearly as possible.
Then, reinforce key points by repeating them in a different way -- or with an illustration or two.
Here are seven general rules to remember in writing for television. Some of these apply to instructional productions, some to dramatic productions, and some to both.
• Assume a conversational tone by using short sentences and an informal, approachable style.

• Engage your audience emotionally; make them care about both the people and content of your production.

• Provide adequate logical structure; let viewers know where you're going, which concepts are key, and when you're going to change the subject.

• After making an important point, expound on it; illustrate it.

• Don't try to pack too many facts into one program.

• Give your audience a chance to digest one concept before jumping to another.

• Pace your presentation according to the ability of your target audience to grasp the concepts.

Video Grammar
Some people say that, unlike writing, video and film production don't have standardized grammar (e.g., conventions or structure).
Although video has abandoned much of the grammar established by early filmmaking, even in this MTV, YouTube era we can use various techniques to add structure to formal productions.
In dramatic productions, lap-dissolves (when two video sources overlap for a few seconds during the transition from one to the other) often signal a change in time or place.
Fade-ins and fade-outs, which apply to both audio and video, can be likened to the beginning and end of book chapters. A fade-out consists of a two- or three-second transition from a full signal to black and silence. A fade-in is the reverse.
Fade-ins and fade-outs often signal a major change or division in a production, such as a major passage of time. (But "often" is a long way from "always.")
▲Traditionally, teleplays (television plays) and screenplays (film scripts) start with a fade-in and close with a fade-out.
Although we don't focus on dramatic film scripts here, one film writer-producer told us about a saying he has taped over his desk: 'It's the story, stupid.'


Script Terms and Abbreviations
A number of terms and abbreviations are used in scriptwriting. Some describe camera movements.
When the entire camera is moved toward or away from the subject, it's referred to as a dolly.
A zoom, an optical version of a dolly, achieves somewhat the same effect. A script notation might say, "Camera zooms in for close-up of John" or "Camera zooms out to show John is not alone."
A lateral move is a truck. Note the illustration on the left.
Some terms designate shots.
Cuts or takes are instant transitions from one video source to another. In grammatical terms, shots can be likened to sentences where each shot is a visual statement.
A cover shot or establishing shot is designated on a script as a "wide shot" (WS) or "long shot" (LS). Occasionally, the abbreviations XLS for extreme long shot or VLS for very long shot are used. All of these can give the audience a basic orientation to the geography of a scene (i.e., who is standing where), after which you'll cut to closer shots.
On small screen devices or in the relatively low-resolution medium of standard-definition television (SDTV), this type of shot is visually weak because important details aren't easy to see. Film and HDTV (high-definition television) don't have quite the same problem.
Cover or establishing shots should be held only long enough to orient viewers to the relationship between major scene elements. (How close is the burning shed to the house?) Thereafter, they can be momentarily used as reminders or updates on scene changes as reestablishing shots.
TV scripts are usually divided into audio and video columns, with shot designations in the left video column.
So that you can see how some of these things come together, here are some sample scripts.
Simple video script
Dramatic film/video script format
Commercial script
News script
Television and film scripts are available on the Internet for study. (See the section on Internet Resources at the end of this module.)
You'll find the following shot designations relating to people:
An LS (long shot) or FS (full shot) is a shot from the top of the head to the feet.
An MS (medium shot) is normally a shot from the waist up. (To save space, we've used a vertical rather than a horizontal format in this illustration.)
An MCU (medium close-up) is a shot cropped between the shoulders and the belt line, rather than at the waist.
A relatively straight-on CU (close-up) is the most desirable for interviews. Changing facial expressions, which are important to understanding a conversation, can easily be seen.
XCUs are extreme close-ups. This type of shot is reserved for dramatic impact. The XCU may show just the eyes of an individual. With objects, an XCU is often necessary to reveal important detail.
A two-shot or three-shot (2-S or 3-S) designates a shot of two or three people in one scene.
The term subjective shot indicates that the audience (camera) will see what the character sees. It often indicates a handheld camera that follows a subject by walking or running. Subjective camera shots can add drama and frenzy to chase scenes.
We sometimes indicate camera angles, such as bird's eye view, high angle, eye level, and low angle on scripts.
A canted shot or Dutch angle shot (note photo on left) is tilted 25 to 45 degrees to one side, causing horizontal lines to run up or down hill.
Although scriptwriters occasionally feel it necessary to indicate camera shots and angles on a script, this is an area that's best left to the director to decide.
Even so, in dramatic scripts you may see the following terms:
• camera finds: the camera moves in on a particular portion of a scene
• camera goes with: the camera moves with a person or object
• reverse angle: a near 180-degree shift in camera position
• shot widens: signals a zoom or dolly back.
We use a number of other abbreviations:
• EXT and INT: exterior and interior settings

• SOT (sound-on-tape): The voice, music, or background sound is from the audio track of a videotape.

• SOF (sound-on-film): This is not much used anymore. Even if a production starts out on film, it's converted into a video recording before being "rolled into" a production.

• VTR: videotape, videotape recording. Video and audiotape have now been largely replaced by computer disks and solid-state memory.

• VO (voice over): narration heard at higher volume than music or background sound

• OSV (off-screen voice): voice from a person not visible to the audience
• MIC: microphone (pronounced "mike")

• POV (point of view). Dramatic scripts may indicate that a shot will be seen from the point of view of a particular actor.

• OS (over-the-shoulder shot): The picture shows the back of a person's head and possibly one shoulder with the main subject in the distance facing the camera. This is also designated as O/S and X/S.
• ANNCR: announcer
• KEY: electronic overlay of titles, credits or other video sources over background video

• SFX or F/X (special effects/visual effects): audio special effects (audio FX) or video special effects; altering normal audio and video, generally to achieve some dramatic effect
With this basic background, we'll turn to some "bottom line" considerations in the next module.
________________________________________
*After a 50-year career in broadcast news, David Brinkley died in June 2003, a few weeks before his 83rd birthday. He and his TV news co-anchor, Chet Huntley, are credited with establishing the popularity and credibility of TV news in the United States.

Mr. Brinkley had to give up covering presidential candidates because he was so recognizable that when he accompanied the candidate, more people would gather around him than the candidate.

Noted for his sage observations, he once pointed out that history provides many examples of generals seizing power and putting journalists in jail. But it provides no examples of reporters seizing power and putting generals in jail.
________________________________________

Internet Resources
A free, comprehensive computer scriptwriting program is available here. You can also find free demo programs of scriptwriting and general production software on the Internet at bcsoftware and screenplay, among other places.
The site offering the widely used Final Draft scriptwriting software also has a forum where scriptwriters and aspiring scriptwriters can register and exchange ideas and information.
You can find many writing tools for both professional and aspiring writers at The Writers' Store in Los Angeles.
________________________________________

________________________________________
Module 7



Costing Out
A Production

Although you may have come up with a truly great idea for a script -- one you're certain will make you famous! -- unless you can raise the money to get it produced it'll remain simply that: a great idea.
So the first question is what will it cost to produce?
Even if you have no interest in producing, the better your grasp of this issue, the better your chance of success.
And keep in mind that no production company will commit to a production without a reasonable idea of how much it will cost.
We call this process costing out a production.
Traditionally, we think of expenses as falling into two broad areas: above-the-line and below-the-line.

Above-the-Line and Below-the-Line
Although the "line" blurs at times, above-the-line expenses generally relate to the performing and producing elements: talent, script, music, and others.
Below-the-line elements refer to two broad areas:
• the physical elements: sets, props, make-up, wardrobe, graphics, transportation, production equipment, studio facilities, and editing
• the technical personnel: stage manager, engineering personnel, video recording operators, audio operators, and general labor
To cost out a major production accurately, you can go beyond the above-the-line and below-the-line designations and divide production into at least 15 categories:
1. preproduction costs
2. location scouting and related travel expenses
3. studio rental
4. sets and set construction
5. on-location expenses
6. equipment rental
7. video recording and duplication
8. production crew costs
9. producer, director, writer, creative fees
10. on-camera talent costs
11. insurance, shooting permits, contingencies, etc.
12. on-line and off-line editing
13. advertising, promotion, and publicity
14. research and follow-up
15. materials, supplies, and miscellaneous expenses
Smaller productions, of course, will not involve all of these categories.
You can list these categories in a column on the left side of a computer spreadsheet program, such as Microsoft Excel.
Under each category you can then add items and their costs. You can then add corresponding formulas that will automatically generate totals for each category, as well as for the grand total.
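If you'd rather not use a spreadsheet, the same category and grand totals are easy to generate with a few lines of code. The sketch below is only an illustration -- the line items and dollar amounts are invented, and a real budget would include far more detail.

# Illustrative sketch: tally a production budget by category.
# The categories follow the list above; the line items and amounts are made up.
budget = {
    "Preproduction costs": {"script breakdown": 1200, "storyboards": 800},
    "Equipment rental": {"camera package": 2500, "lighting kit": 900},
    "Production crew costs": {"camera operator": 1800, "audio operator": 1200},
}

grand_total = 0
for category, items in budget.items():
    category_total = sum(items.values())
    grand_total += category_total
    print(f"{category}: ${category_total:,}")

print(f"Grand total: ${grand_total:,}")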

Renting vs. Buying Equipment
Note that one of the categories covers equipment rental. Except for studio equipment that's used every day, it's often more economical to rent equipment rather than buy it.
First, production equipment becomes outdated quickly. At more than $70,000 for a top-notch video camera, you might assume you'll recoup the cost through several years' use. If you pay cash for a $70,000 camera and use it five years, the cost breaks down to $14,000 a year, plus repair and maintenance expenses.
Even though the camera might still be reliable after five years or more, compared to the newer models it will be outdated. It may even be difficult to find parts.
Several different production facilities can use equipment available for rent, however. This means the rental company can write off the initial investment on their taxes more quickly, making it possible to replace the equipment with newer models.
Even for consumer grade equipment, the rental cost (which may be only $50 a day) might make sense if you'll use it for just a few days.
Second, the rental company, rather than the production facility, is responsible for repair, maintenance, and updating. If equipment breaks down during a shoot (production), rental companies will typically replace it within a few hours.
Third, renting provides an income tax advantage. When equipment is purchased, it must be depreciated (written off on income tax) over a number of years. But sometimes this time span exceeds the practical usefulness of the equipment. This may mean that the production facility will need to sell the used equipment in order to recoup some of their initial investment. (Companies often donate their equipment to schools for a tax write-off.)
If you rent non-studio equipment, however, you can write it off immediately as a production expense.
Although rules governing income taxes change regularly, deducting the cost of rental equipment can represent a quicker, simpler -- and in many cases greater -- tax deduction.
Finally, when you rent equipment, you increase the opportunities to obtain equipment that will meet the specific needs of your production. Purchasing equipment can generate pressure to use it, even though at times other makes and models might be better suited to your needs.
Again, in each of these examples, we're talking about equipment that you wouldn't use every day.
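If you want to put rough numbers on the rent-or-buy decision, a back-of-the-envelope comparison like the one below can help. The purchase price and useful life are the figures used above; the daily rental rate and the number of shooting days are made-up examples, so substitute your own quotes.

# Rough rent-vs-buy comparison. Purchase figures are from the example above;
# the rental rate and shooting days are hypothetical placeholders.
purchase_price = 70_000          # top-end camera, paid in cash
useful_years = 5
cost_per_year_owned = purchase_price / useful_years    # $14,000 a year, before repairs

daily_rental = 500               # hypothetical professional rental rate
shooting_days_per_year = 20      # hypothetical usage
rental_per_year = daily_rental * shooting_days_per_year

print(f"Owning: ${cost_per_year_owned:,.0f}/year   Renting: ${rental_per_year:,.0f}/year")
# With only 20 shooting days a year, renting ($10,000) beats owning ($14,000).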

Approaches to
Attributing Costs
Once you figure out the cost of a production, you may need to justify it, either in terms of cost-effectiveness or expected results.
There are three bases on which to measure cost effectiveness:
• cost per minute
• cost per viewer
• cost vs. measured benefits

Cost Per Minute
Cost per minute is relatively easy to determine; simply divide the final production cost by the duration of the finished product. For example, if a 30-minute production costs $120,000, the cost per minute is $4,000.

Cost Per Viewer
Cost per viewer is also relatively simple to figure out; divide the total production cost by the actual or anticipated audience.
In the field of advertising, CPM (or cost-per-thousand) is a common measure. If 100,000 people see a show that costs $5,000 to produce, the CPM is $50. On a cost-per-viewer basis, this comes out to be only five cents a person.
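Here is a quick sketch of the arithmetic behind these first two measures, using the example figures above:

# Cost-effectiveness arithmetic from the examples above.
production_cost = 120_000
program_minutes = 30
print(production_cost / program_minutes)       # $4,000 per minute

production_cost = 5_000
viewers = 100_000
print(production_cost / viewers)               # $0.05 (five cents) per viewer
print(production_cost / (viewers / 1_000))     # CPM: $50 per thousand viewers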

Cost Per Measured Results
Cost per measured results is the most difficult to determine. Here, we must measure production costs against intended results.
Suppose that after airing one 60-second commercial we'll sell 300,000 packages of razor blades at a resulting profit of $100,000. If we spent a million dollars producing and airing the commercial, we would have to question whether it was a good investment.
But, advertisers air most ads more than once. (Sometimes endlessly, it seems!)
If the cost of TV time is $10,000 and we sell 300,000 packages of razor blades after each airing, we will soon show a profit.
All of these "measured results" are easily determined by a calculator.

Return on Investment
Things may not be this simple, however.
What if we also run ads in newspapers and on radio, and we have huge, colorful displays in stores? Then it becomes difficult to determine the cost-effectiveness of each medium, and the question becomes, which approaches are paying off and which aren't?
And there can be another issue. We can count razor blades, but it may be more difficult to determine the returns on other "products."
For example, it's very difficult to determine the effectiveness of programming on altering human behavior and attitudes.
How do you quantify the return on investment of public service announcements designed to get viewers to stop smoking, "buckle up for safety," or preserve clean air and water?
Even if we conduct before-and-after surveys to measure changes in public awareness, it can be almost impossible to factor out the influence of the host of other voices the public may encounter on that issue.
Apart from in-depth interviews with viewers, we may have to rely largely on "the record."
If we know a series of 60-second TV spots increases razor blade sales by 300,000, we might assume a 60-second PSA (public service announcement) would also have some influence on smoking, buckling seat belts, and preserving clean air and water. The question is how many people modified their behavior as a direct result of your PSA?
This is important for nonprofits and other organizations to know in order to determine the best use of their informational and educational dollars.
With some of the major preproduction concerns covered, our next step is to become familiar with the tools of production.
To understand these, we'll need to start with the basics of the medium itself.
________________________________________


________________________________________
© 1996 - 2010, All Rights Reserved.
Use limited to direct, unmodified access from CyberCollege® or the InternetCampus®.




Module 8



How the Imaging
Process Works

Why do you need to know how the film and TV imaging process works?
For one thing, knowledge is power, and the more you know about the process, the easier it will be to use the tools creatively. Plus, you'll be able to solve most of the inevitable problems that crop up during TV production.
Let's start at the beginning with...
Fields and Frames
Ironically, both "motion" pictures and TV are based solidly on an illusion. "Motion" as such does not exist in the actual TV and motion picture images. The illusion is created when a rapid sequence of still images are presented.
The discovery of this illusion is tied to a famous $25,000 bet made in 1877. For decades, an argument had raged over whether a racehorse ever has all four hooves off the ground at the same time.
In an effort to settle the issue, Leland Stanford, founder of Stanford University, set up an experiment in which photographer Eadweard Muybridge took a rapid sequence of photos of a running horse. (And, yes, they found that for brief moments a racehorse does have all four feet off the ground at the same time.)
However, this experiment established something even more important. It illustrated that, if a sequence of still pictures is presented at a rate of about 16 or more per-second, the individual pictures blend together and give the impression of a continuous, uninterrupted image.
If a series of still photos, such as the eleven shown here, is presented in rapid succession, it creates the appearance of continuous motion.

You can see in the sequence of images above that the individual pictures vary slightly to reflect changes over time.
In the circular illustration on the right we've slowed down the timing of the images. Here, you can see more clearly how a sequence of still images can create an illusion of movement.
We see a more primitive version of this in the "moving" lights of a theater marquee or "moving" arrow of a neon sign suggesting passersby come in and buy something.
Although early silent films used basic frame (picture) rates of 16 and 18 per second, when sound was introduced the rate increased to 24 frames per-second.
This was necessary primarily to meet the quality needs of the sound track. To reduce flicker, today's motion picture projectors use a two-bladed shutter that projects each frame twice, giving an effective rate of 48 frames per-second. (Some projectors flash each frame three times.)
Unlike broadcast television, with its frame rates of 25 and 30 per second, film has for decades maintained a worldwide 24 frame-per-second standard.

The NTSC (National Television System Committee) system of television used in the United States, Canada, Japan, Mexico, and a few other countries reproduces pictures (frames) at a rate of approximately 30 per-second. (We'll take up the new ATSC digital broadcast standard in the next module.)
Of course, the 30 frame-per-second rate presents a bit of a problem in converting film to TV (mathematically, 24 doesn't go into 30 very well), but we'll worry about that later.
A motion picture camera records a completely formed still picture on each frame of film, just like the still pictures on a roll of film in a 35mm camera. It's just that the motion picture camera takes the individual pictures at a rate of 24 per-second.
Things are different in TV. In a video camera, hundreds of horizontal lines make up each frame.
Thousands of points of brightness and color information exist along each of these lines. This information is electronically discerned in the TV camera and then reproduced on a TV display in a left-to-right, top-to-bottom scanning sequence.
This sequence is similar to the movement of your eyes as you read.

Interlaced Scanning
Originally, to reduce variations in flicker and brightness during the scanning process, as well as to solve some technical limitations, the scanning process was divided into two halves.
The odd-numbered lines were scanned first and then the even-numbered lines were interleaved between these lines to create a complete picture. Not surprisingly, we refer to this process as interleaved or interlaced scanning.



In this greatly enlarged TV image, we've colored the odd lines green and the even lines yellow.
When we remove these colors, we can see how they combine to create the black and white video picture on the right. (Later, we'll describe a color TV picture, which is a bit more complex.)
Each of these half-frame passes (either all of the odd- or all of the even-numbered lines, or the green or the yellow lines in the illustration) is a field. The completed (two-field) picture is a frame.
After scanning the complete picture (frame), the process starts again. But, if the subject matter in the scene changes with time, the next frame will reflect that slight change.
Human perception fuses together these slight changes between successive pictures, giving the illusion of continuous, uninterrupted motion.
The interleaved approach, although necessary before recent advances in technology, results in minor picture "artifacts," or distortions in the picture, including variations in color.
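Before moving on, here is a toy Python sketch of the interlace idea -- odd-numbered lines form one field, even-numbered lines form the other, and the two interleave to make a complete frame. The line count is tiny just to keep the output readable.

# Toy illustration of interlaced scanning.
TOTAL_LINES = 10                  # a real SDTV frame has hundreds of lines

field_1 = [n for n in range(1, TOTAL_LINES + 1) if n % 2 == 1]   # odd lines
field_2 = [n for n in range(1, TOTAL_LINES + 1) if n % 2 == 0]   # even lines

frame = sorted(field_1 + field_2)  # the completed, two-field frame
print(field_1)    # [1, 3, 5, 7, 9]
print(field_2)    # [2, 4, 6, 8, 10]
print(frame)      # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]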

Progressive Scanning
After several decades of using the interlaced approach, today more and more video systems (including computer monitors and new flat-screen TV sets) use a progressive or non-interlaced scanning approach.
With this approach, the fields (odd and even lines) are combined and reproduced together in a 1-2-3 sequence, rather than an odd (1-3-5) and even (2-4-6) interlaced sequence.
Progressive scanning has a number of advantages, including greater clarity and the ability to interface more easily with computer-based video equipment. But it adds greater technical demands on the TV system.
As we'll see in the next module, the specifications for digital and high-definition television (DTV, HDTV) allow for ▲both progressive and interlaced scanning.

The Camera's Imaging Device
The lens of a television camera forms an image on a light-sensitive target inside the camera in the same way a motion picture camera forms an image on film.
But instead of film, television cameras commonly use a solid-state, light-sensitive receptor called a CCD (charge-coupled device) or, more commonly today, a CMOS (complementary metal oxide semiconductor) sensor. Both of these "chips" are able to detect brightness differences at different points throughout the image area.
The chip's target area (the small rectangular area near the center of this photo) contains from hundreds-of-thousands to millions of pixel (picture element) points. Each point can electrically respond to the amount of light focused on its surface.
A very small section of a chip is represented below -- enlarged thousands of times. The individual pixels are shown in blue.
Differences in image brightness detected at each of these points on the surface of the chip change that light into electric voltages.
Electronics within the camera scanning system regularly check each pixel area to determine the amount of light falling on its surface.
This sequential information is directed to an output amplifier along the path shown by the red arrows.
This information readout results in constantly changing field and frame information. (We'll cover this process, especially as it relates to color information, in more detail in Module 15.)
In a sense, your TV receiver reverses this process. The pixel-point voltages generated in a camera are changed back into light, which we see as an image on our TV screens.

Analog and Digital Signals
Electronic signals -- as they originate in microphones and cameras -- are analog (or analogue) in form.
This means the equipment detects signals in terms of continuous variations in relative strength or amplitude.
In audio, this translates into volume or loudness; in video, it's the brightness of the picture.
As illustrated above, we can change these analog signals (on the left) into digital data (on the right). The latter is computer zeros and ones (0s and 1s, or binary computer code). The digital signal is then sent to subsequent electronic equipment.
Backing up a bit, we need to explain how the analog-to-digital process works. The top part of the illustration below shows how an analog signal can rise and fall over time to reflect changes in the original audio or video source.
In order to change an analog signal to digital, that wave pattern is sampled at a high rate of speed. The amplitude at each of those sampled moments (shown in blue-green on the left) is converted into a number equivalent.
These numbers are simply the combinations of the 0s and 1s used in computer language.
Since we are dealing with numerical quantities, this conversion process is appropriately called quantizing.
Once the information is converted into numbers, we can do some interesting things (generally, visual effects) by adding, subtracting, multiplying, and dividing the numbers.
The faster all this is done, the better the audio and video quality. But this means that as the quality increases the technical requirements become more demanding.
Thus, we are frequently dealing with the difference between high-quality equipment that can handle ultra high-speed data rates and lower-level (less expensive) consumer equipment that relies on a reduced sampling rate. This, in part, answers the question about why some video recorders cost $300 and others cost $100,000.
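As a rough sketch of the sampling-and-quantizing idea, the snippet below samples one cycle of a sine wave and quantizes each sample to a small number of levels. The sample rate and bit depth are arbitrary, chosen only to keep the output short; real audio and video converters sample far faster and with far more levels.

import math

# Rough sketch of analog-to-digital conversion: sample a waveform at regular
# intervals, then quantize each sample to one of a fixed set of levels.
SAMPLE_RATE = 8          # samples per cycle (tiny, just for readability)
BITS = 4                 # bit depth; 4 bits = 16 possible levels
LEVELS = 2 ** BITS

for i in range(SAMPLE_RATE):
    analog = math.sin(2 * math.pi * i / SAMPLE_RATE)      # value between -1 and 1
    quantized = round((analog + 1) / 2 * (LEVELS - 1))    # integer 0..15
    print(f"sample {i}: analog {analog:+.3f} -> level {quantized} "
          f"({quantized:0{BITS}b})")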

What's the Advantage of Digital Data?
Compared to the digital signal, an analog signal would seem to be the most accurate and ideal representation of the original signal.
While this may initially be true, the problem arises in the need for constant amplification and re-amplification of the signal throughout every stage of the audio and video process.
Whenever an analog signal is reproduced or amplified, noise is inevitably introduced, which degrades the signal.
In audio, this can take the form of a hissing sound; in video, it appears as a subtle background "snow" effect. This is exaggerated in the photo below.
By converting the original analog signal into digital form, we can eliminate this noise buildup, even though the signal is amplified or "copied" dozens of times.
Because digital signals are limited to the form of 0s and 1s, no "in-between" information (spurious noise) can creep in to degrade the signal.
We'll delve more deeply into some of these issues when we focus on digital audio.
Today's digital audio and video equipment has borrowed heavily from developments in computer technology -- so heavily, in fact, that the two areas are now merging.
Satellite services such as DISH and Direct TV make use of digital receivers that are, in effect, specialized computers. And you probably listen to music recorded on a pocket-sized device capable of storing several hours of digitized music.
We discuss some of the advantages of digital electronics in video production here.
In the next module, we'll look at world television standards.
________________________________________


________________________________________
Module 9-A
Part I


World Television
Standards and
DTV/HDTV

Fifty years ago it didn't matter much that there were a dozen or so incompatible systems of television in the world. Distance was a great insulator.
But times have changed.
Today, satellites link every country with television, and the Internet provides video, audio, and the written word to virtually anyone anywhere who has a computer and telephone line.
Now, incompatible broadcast standards, not to mention scores of different languages, represent a barrier to world-wide communication and understanding.
Dictators like it that way, as do others who fear that the free flow of information will undermine their views and threaten their control.
This is why many countries ban outside films and broadcasts, and spend millions of dollars each year to try to keep out "undesirable" information. (An example that few people in the United States know about is the jamming that has taken place on international short wave. This is discussed here.)
Most of the rest of us -- especially those who live in democracies -- feel a free flow of information is essential not only to progress, but also to dissolve barriers of misunderstanding between peoples.
Films, TV, and the Internet -- and especially e-mail between individuals in different countries -- all show that, despite conflicting politics and religions, people the world over have pretty much the same hopes, fears, and dreams.
Touching on these basic human similarities is a major goal of TV production.

Plus, films and TV programs represent two of the major exports of the United States. In fact, many productions don't begin to show a profit until they go into international distribution.
Incompatible Broadcast Standards
A TV program produced in one country can't automatically be viewed in many other countries without converting it to a different technical standard. These technical differences relate to both incompatibilities in equipment and in the approach to broadcasting the audio and video signals.
Some 14 different SDTV (standard definition) broadcast TV standards have been used at different times throughout the world. They can be reduced to three primary groups:
• NTSC (National Television System Committee)
• SECAM (Sequential Color and Memory)
• PAL (Phase Alternating Line)
Within these there are four major differences:
• the total number of horizontal lines in the picture (525 or 625 for standard-definition TV, or SDTV, and 1,125 or 1,250 for high-definition TV, or HDTV)

• whether the transmission rate is 25 or 30 frames (complete pictures) per-second

• the broadcast channel width (data bandwidth of the signal)

• whether an AM or FM signal is used for transmitting audio and video
Historically, the number of lines used in standard broadcast TV has ranged from the United Kingdom's 405-line system to France's 819-line system. The phase-out of both these systems left us with the 525 and 625 standards for SDTV.
You might think all this is a bit technical, but hang in there. It's relevant to what you need to know -- especially with the international exchange of programming being a growing factor in the field's economic viability. The quick matching game at the end of the module will tell you how much has sunk in.
Aspect Ratios
Although the number of scanning lines may have varied, until recently all television systems had a 4:3 aspect ratio. The aspect ratio is the width-height proportion of the picture.
The 4:3 ratio (note red box in the photo on the right) was consistent with motion pictures that predated the wide-screen aspect ratios used in CinemaScope, VistaVision, and Panavision. When the HDTV standard was introduced, it adopted a wider (16:9) aspect ratio, much like those wide-screen film formats.
In the picture here, the wider area (just inside the blue borders) represents the 16:9 ratio used in HDTV. Compared to the 4:3 ratio, this aspect ratio conforms to the wider perspective of normal human vision.

The NTSC Broadcast Standard
Before we take up the new ATSC (Advanced Television Systems Committee) digital broadcast standard, we'll take a quick look back at the systems that preceded it and that are still used in countries that have not made the transition to digital TV.
For almost 50 years the United States used the NTSC (National Television System Committee) 525-line, 30 frame-per-second system. It was developed in 1941 as the broadcast standard for black-and-white (monochrome) television.
By 1953, an NTSC color standard had been finalized. (Note the January 1954 issue of Popular Mechanics announcing the arrival of color TV in the United States.)
We refer to the NTSC system of television as a 525-line, 60-field system because, as we saw in Module 8, the 30 frames consist of 60 fields.
The NTSC's 60-field system originally based its timing cycle on the 60 Hz (hertz, or cycles per second) electrical power used in the countries that adopted it.
Since other countries in the world use a 50 Hz electrical system, they developed systems of television based on 50 fields per-second.
By then the basic NTSC standard was more than 50 years old, and many technical improvements had come along during that half-century. Digital TV standards, which we'll cover later in this module, take advantage of many new technical capabilities and provide major improvements over the original NTSC standard.

The PAL and SECAM Television Systems
Prior to the introduction of digital broadcasting, more than half of all countries used one of two 625-line, 25-frame systems: SECAM (Sequential Color and Memory) or PAL (Phase Alternating Line).
The extra 100 lines in SECAM and most PAL systems add significant detail and clarity to the video picture, but the 50 fields per second (compared to 60 fields in the NTSC system) meant that the viewer could sometimes notice a slight flicker.
Even so, the 25 frames-per-second (fps) standard is very close to the international film standard of 24 fps. Therefore, we can easily convert the 24-fps film standard to the PAL and SECAM video systems. (Slightly speeding up film to 25 fps is hard to notice.)
With the 30 frame-per-second NTSC video standard, converting film to video is more difficult. The 24 frame-per-second film rate had to be converted to 30 frames per second, and vice versa. This took a bit of fancy footwork, as explained here. This link also explains how we convert NTSC video to PAL and SECAM video and vice versa.
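One common way of bridging the 24-to-30 gap is the so-called 2:3 (or 3:2) pulldown, in which successive film frames are alternately held for two and then three video fields. The snippet below is only a simplified illustration of that cadence, not a description of any particular converter.

# Simplified sketch of 2:3 pulldown: 4 film frames (24 fps) become
# 10 video fields, i.e., 5 video frames (30 fps, 60 fields per second).
film_frames = ["A", "B", "C", "D"]     # repeating group; one second of film has 6 groups
cadence = [2, 3, 2, 3]                 # fields each film frame is held for

video_fields = []
for frame, repeats in zip(film_frames, cadence):
    video_fields.extend([frame] * repeats)

print(video_fields)             # ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D']
print(len(video_fields) // 2)   # 5 video frames for every 4 film frames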
In the next module we'll take up the new DTV broadcast standards.
________________________________________

(Click on "more" for the second half of this module.)
Part II

Digital
Broadcasting




The World Moves to DTV
By 2010 about half of the world's major countries had converted to digital broadcasting. Digital TV uses a more efficient transmission technology allowing for improved picture and sound quality. In addition, digital signals provide more programming options through the use of multiple digital subchannels (channels of information within the basic broadcast signal).
Compared to analog signals, digital broadcast signals react differently to interference. Common problems with over-the-air analog television include ghosting of images (seeing multiple faint images at the same time; note photo), noise or "snow" because of a weak signal, etc.
Changes in analog signal reception result from factors such as a poor or misdirected antenna and changing weather conditions. But even under these conditions an analog signal may still be viewable and you may still hear the sound.
With digital television, the audio and video must be synchronized digitally, so reception of the digital signal must be very nearly complete. The nature of digital TV results in a perfect picture initially, until the receiving equipment starts picking up interference or the signal is too weak to decode.
With poor reception, some digital receivers will show "blocky" or garbled video with significant damage; other receivers may go directly from a perfect picture to no picture at all. This phenomenon is known as the digital cliff effect.
The first country to make a complete switch to digital over-the-air (terrestrial) broadcasting was Luxembourg, in 2006. Shortly thereafter, the Netherlands made the switch. Finland, Andorra, Sweden and Switzerland followed in 2007.
In June 2009, all major broadcast stations in the United States switched to DTV. We say "major" because some lower power TV stations were allowed to stay with the NTSC analog standard for a period of time.
Some countries don't plan a complete analog-to-digital transition until around 2020.
There are two basic international standards for digital broadcasting: the ATSC (Advanced Television Systems Committee) standard adopted by the United States and Canada, and the DVB-T (Digital Video Broadcasting – Terrestrial) system used in most of the rest of the world.
Although the ATSC approach has weaknesses -- most notably a limited ability to hold up under mobile reception conditions -- it includes important features such as 5.1-channel surround sound using the Dolby Digital AC-3 format. The reduced bandwidth requirements of lower-resolution images allow up to six standard-definition subchannels or datacasting channels within a single 6 MHz TV channel. How these will be developed and used remains to be seen.
The table below summarizes the difference between the analog and digital broadcast systems.
Standard           SDTV (Analog)                   HDTV (Digital)
Total lines        525                             1,125
Active lines       480-486 (maximum visible)       1,080 (maximum visible)
Sound              Two channels (stereo)           5.1 channels (surround sound)
Max resolution     720 x 486                       1920 x 1080
As you can see, the ATSC standard is capable of 16:9 images up to 1920 by 1080 pixels -- roughly six times the display resolution of the earlier analog standard (a quick check of that arithmetic follows the list below). In addition, many different image formats are supported. These include:
• Standard definition—480i (interlaced), to maintain compatibility with existing NTSC sets
• Enhanced definition—480p, (progressive), about the same quality as current DVDs
• High definition—720p
• High definition—1080i (the highest definition currently being broadcast)
• High definition—1080p (only used by a few cable operators)
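As a quick check on that resolution comparison, here is a minimal sketch using the nominal maximum grid sizes from the table above:

# Pixel-count comparison using the nominal grid sizes discussed above.
sdtv_pixels = 720 * 486          # roughly 350,000 pixel points
hdtv_pixels = 1920 * 1080        # roughly 2.1 million pixel points

print(sdtv_pixels, hdtv_pixels)
print(round(hdtv_pixels / sdtv_pixels, 1))   # about 6 times the pixel count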
We'll illustrate the difference in clarity between SDTV and HDTV in Part Two of this module, and we'll explain surround sound and 5.1 audio in Module 42.
It was thought that the move to digital and the "sudden" loss of all major NTSC television stations in the U.S. would be met with widespread viewer consternation. In fact, TV stations braced themselves for an avalanche of unhappy viewers demanding to know what happened to their TV stations.
This did not happen, for four reasons. First, TV stations had spent months on a major educational campaign about the switch. Second, most viewers were receiving the stations by cable or satellite, which were not affected. Third, new TV sets had for some time been equipped to handle ATSC signals. Finally, the government went so far as to issue vouchers to help pay for set-top converter boxes that let existing over-the-air NTSC receivers display the new over-the-air ATSC signals.



Compare the screen enlargements shown here that represent HDTV and the standard NTSC systems.
When projected on a 16 x 9-foot screen and observed from normal viewing distance, the picture detail in good (1,080p) HDTV systems appears to equal or better that attained by projected 35mm motion picture film.
The enlarged illustrations on the left show the relative pixel detail of SDTV and HDTV. (The illustrations assume a 40-inch TV screen.)
SDTV produces an image with about 200,000 pixel (picture) points. HDTV increases that by a factor of about 10 to two million pixels.
________________________________________
In the graph on the right, the taller the red bar, the sharper the picture. Note that the interlaced (i) and progressive (p) approaches to scanning result in a significant difference in apparent picture sharpness (measured in terms of discerned pixel points of detail).
All other things being equal, the difference in perceived picture sharpness centers on the number of (visible) scanning lines, which here ranges from SDTV's 480 lines to HDTV's 1,080 lines.
Although the 1080p system delivers the sharpest images, the approach is so technically demanding that it can only be distributed by non-broadcast systems. However, it can be converted to film and projected in a theater without most patrons ▲realizing they're seeing video.
________________________________________
We often make comparisons between video and film quality. But video and film are inherently different media, and the question of their relative "quality" (a word that can mean many things to many people) has been the subject of lively debate. Both sides claim their medium is superior.
When we compare film and video media in a broadcast application, the differences between video and film are based more on differences in their traditional production approaches than on inherent differences between the media.

We discuss the relative advantages of film and video and the differences between their quality and costs in more detail here.

Converting Wide-Screen Formats
Production facilities make the conversion of 16:9 HDTV/DTV images to the standard 4:3 aspect ratio in the same way they convert wide-screen films to NTSC television. (We'll cover in-set conversion approaches later.)
Three approaches are used:
First, the conversion can involve cutting off the sides of the 16:9 image to the narrower 4:3 size. We refer to this as an edge crop or 4:3 center cut.
If we shoot the original HDTV/DTV (or wide-screen film) with the narrower 4:3 cutoff area in mind, losing the information at the sides of the picture should not be an issue. (This is the area on each side of the red box in the photo below, which, as noted, is referred to as a center-cut of the full 16:9 raster.)
We refer to the procedure of keeping essential subject matter out of the cutoff areas as shoot-and-protect.
Second, the entire production can go through a process called pan-and-scan. In this case a technician reviews every scene and programs a computer-controlled imaging device to electronically pan the 4:3 window back and forth over the larger, wide-screen format. The red arrows suggest this panning movement.
In this picture, cutting off the sides would not be an issue; but what if you had the two parrots talking (??) to each other from the far sides of the screen?
Finally, if the full HDTV/DTV frame contains important visual information (as in the case of written material extending to the edges of the screen), panning-and-scanning will not work.
In this case, a letterbox approach can be used, as shown here.
But you can see the problem. The result is blank areas at the top and bottom of the frame. Often, we reserve the letterbox approach for the opening titles and closing credits of a production, and pan-and-scan is used for the remainder.
Because some directors feel that pan-and-scan introduces pans that are artificial and not motivated by the action (or by the composition they originally intended), they may insist that their work be displayed using letterbox conversion.
Originally, producers feared that audiences would object to the black areas at the top and bottom of the letterbox frame. (More than one person who rented a film (video) in the letterbox format brought it back to the video store complaining that something was wrong with the tape.) Today, however, viewers accept this format.
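To get a feel for what the edge-crop and letterbox approaches actually do to the picture, here's a minimal Python sketch; the 1920 x 1080 and 640 x 480 frame sizes are simply convenient examples.

```python
# Edge crop (4:3 center cut): how much of a 16:9 frame survives?
hd_w, hd_h = 1920, 1080
crop_w = round(hd_h * 4 / 3)   # width of the 4:3 window -> 1440
lost = hd_w - crop_w           # pixels discarded at the sides -> 480
print(crop_w, lost, f"{lost / hd_w:.0%} of the width is cut away")   # 25%

# Letterbox: fit the full 16:9 image inside a 4:3 raster
sd_w, sd_h = 640, 480
img_h = round(sd_w * 9 / 16)   # height of the 16:9 image -> 360
bars = sd_h - img_h            # total black area, top plus bottom -> 120
print(img_h, bars // 2, "lines of black at top and bottom")          # 60 each
```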
There is another way of handling the 16:9 to 4:3 aspect ratio difference -- especially for titles and credits. You've probably seen the opening or closing of a film on television horizontally "squeezed" in. We refer to this optical technique as anamorphic conversion.



The effect is especially noticeable when people are part of the scene -- people who, as a result, suddenly become rather thin. (Not that all actors would complain!) Compare the two images above. Note how the bird in the squeezed 4:3 ratio on the right seems to be thinner than the bird on the left.
Another way of visualizing the major SDTV-to-HDTV and HDTV-to-SDTV conversion approaches is illustrated here.

SDTV to HDTV In-Set Conversion Approaches
HDTV receivers can also (roughly speaking) convert between the SDTV (4:3) and HDTV (16:9) aspect ratios. Manufacturers build three options into many HDTV receivers:
• Zoom - Proportionally expands SDTV horizontally and vertically to fill the 16:9 screen. This eliminates the unused blank areas we would normally see at the edges of the picture, but it also crops off some of the SDTV picture in the process.
• Stretch - Expands SDTV horizontally to fill the 16:9 screen. This makes objects a bit wider than they would normally be.
• Combined zoom/stretch - A hybrid of the zoom and stretch modes that minimizes the cropping effect of the zoom mode and the image distortion of the stretch mode.
Clearly, all these approaches leave something to be desired, so today savvy producers originate productions in the 16:9 wide-screen format using the "shoot-and-protect" approach we've discussed.
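As a rough sketch of why the stretch mode above makes things look wider, the distortion is just the ratio of the two aspect ratios:

```python
# Stretch mode: a 4:3 image is expanded horizontally to fill a 16:9 screen.
stretch = (16 / 9) / (4 / 3)
print(f"horizontal stretch factor: {stretch:.2f}")  # about 1.33 -- objects appear roughly a third wider
```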

Digital Cinema
In November 2000, moviegoers saw the film Bounce in both film and high-definition video.
Satellite facilities distributed the video version to digitally equipped theaters, which used high-definition video projectors. The difference between the film and video versions was difficult for audiences to discern.
Since 2000, there have been major improvements in the video projection process. By 2007, the images from the best video projectors were sharper than those of 35mm film projectors.
Film crews shot Star Wars: Attack of the Clones -- which more than 90 theaters around the world projected in its digital form -- entirely on 24p video (which we covered earlier). Whereas film and processing would have cost several million dollars, the cost of videotape for this production was only about $15,000.
More and more "films" intended for theaters are being shot with high-definition video.
After elements such as special effects, editing, and color correction are completed, the technician converts the final product to 35mm motion picture film for distribution to theaters.

Why don't theaters junk their 35mm projectors and switch to video projectors? It's a matter of cost. They already have film projectors and for the most part see no need to invest thousands of dollars to convert to video -- especially when audiences probably won't notice the difference.
A major step toward video projection in theaters was taken with the release of the 3-D motion picture, Beowulf. The "film" was also seen as representing a major step forward in digital animation. Beowulf is based on a famous Old English epic poem about a warrior who fights terrorizing monsters -- designed to be all the more scary in 3-D.
Despite the limited number of theaters equipped with 3-D video projectors and the fact that patrons had to wear special glasses, this film topped the box office when it was released in late 2007.
But the all-time box office record was set in late 2009 and early 2010, when the 3-D "film" Avatar quickly became the highest-grossing film in history. Many theaters used video projectors for this production.
Each year, the motion picture industry spends almost a billion dollars duplicating films and distributing them to theaters around the U.S. and the world. Films have limited life; they collect dirt and scratches and soon must be replaced.
Video can cut the billion-dollar figure to a fraction of this amount by using a central satellite location to uplink theatrical releases to theaters as they're needed.
Plus, pirating (creating and selling illegal copies) is a constant problem, costing the motion picture industry billions of dollars in lost revenue. Pirating feature films is far more difficult when they're encrypted and either sent directly to theaters via satellite, or, more commonly, delivered to theaters on a high-capacity disk drive or a recording medium such as videotape. We discuss the issue of pirating in more detail here.
In addition to cost savings, digital cinema offers production advantages.
We can immediately play back and evaluate a scene we shoot in video -- even while the actors and production personnel are still in position. With film the hours of delay involved in processing and preparing film "rushes" (rough prints for quick screening) make this impossible.
Today, however, most film directors use video assist, or shooting on film and simultaneously viewing and recording scenes on video. This means they can play back and evaluate their work as they go along.
Finally, not only are postproduction costs far less with video, but special effects are much more easily and inexpensively produced.
The chart below indicates the expected growth of theaters moving to some form of digital "film" projection.
[Chart: Percentage of U.S. digital theaters, growing from about 3 percent in 2005 to a projected 70 percent in 2011.]
Today, most audiences can't tell the difference between professional film and video projection systems. Traditional "Hollywood thinking" has long opposed production with video equipment for "serious, professional work."
However, today, the cost savings for video production alone, not to mention video's many production, post-production and distribution advantages, make the move to video for both production and theater presentation inevitable.
The key differences between film and video are discussed here.

In addition to showing feature films, theaters with digital projectors can provide patrons with other entertainment, such as live concerts, Broadway shows, sporting events and productions aimed at special audiences.
Digital theaters can operate with fewer employees, representing a considerable cost savings over time. Offsetting this savings, however, is the initial investment for digital projectors and the associated computer -- an estimated $60,000 to $120,000 per theater screen.

Is 3-D Production Finally
Going to Catch On?
Over the years, three-dimensional (3-D) movies and TV programs often tried, but failed, to catch on with the general public. However, new technology such as HDTV, digital video projectors, Blu-ray discs, and 3-D cable networks, plus award-winning films such as Avatar, which most people saw in 3-D, have changed things.
In anticipation of a move to 3-D production, the 2010 National Association of Broadcasters convention -- where new technology is typically introduced -- featured a wide array of 3-D production equipment.
3-D has the potential to revitalize the industry. Watching something in high-def makes you feel like you're there; watching something in 3-D HD makes you feel like you can reach out and touch what's there
-Phil Swann of TVPredictions.com.


This link will take you to more information on 3-D film and video production.
________________________________________
You can find information on film revenues, top grossing films and the future of motion pictures here.
For a more detailed look at the various DTV and high-definition standards in the United States, including those for digital cinema, click here.
In the next module, we'll begin discussing audio and video equipment, starting with a key part of a video camera: the lens.
________________________________________

________________________________________
Module 10



Lenses: The Basics

Most people don't think much about a camera's lens, apart from protecting it from the elements and occasionally cleaning it.
However, variables associated with camera lenses have a major effect on how a viewer will see subject matter. The cameraperson who understands this commands a significant amount of creative power.
To start our investigation of this "power," let's look at some basic information about lenses -- starting with the most basic of all lens attributes: focal length.
The focal length of a lens affects the appearance of subject matter in several ways.

Lens Focal Length
We define focal length as the distance from the optical center of the lens to the focal plane (target or "chip") of the video camera when the lens is focused at infinity.
We consider any object in the far distance to be at infinity. On a camera lens the symbol ∞ indicates infinity.
Since the lens-to-target distance for most lenses increases when we focus the lens on anything closer than infinity (see second illustration), we specify infinity as the standard for focal length measurement.
Focal length is generally measured in millimeters. In the case of lenses with fixed focal lengths, we can talk about a 10mm lens, a 20mm lens, a 100mm lens, etc. As we will see, this designation tells a lot about how the lens will reproduce subject matter.

________________________________________


Zoom and Prime Lenses
Zoom lenses came into common use in the early 1960s. Before then, TV cameras used lenses of different focal lengths mounted on a turret on the front of the camera, as shown on the right. The cameraperson rotated each lens into position and focused it when the camera was not on the air.
Today, most video cameras use zoom lenses. Unlike the four lenses shown here, which operate at only one focal length, the effective focal length of a zoom lens can be continuously varied. This typically means that the lens can go from a wide-angle to a telephoto perspective.
To make this possible, zoom lenses use numerous glass elements, each of which is precisely ground, polished, and positioned. The space between these elements changes as the lens is zoomed in and out. (Note cutaway view on the right below.)

With prime lenses, the focal length of the lens cannot be varied. It might seem that we would be taking a step backwards to use a prime lens or a lens that operates at only one focal length.
Not necessarily. Some professional videographers and directors of photography -- especially those who have their roots in film -- feel prime lenses are more predictable in their results. (Of course, it also depends on what you're used to using!)
Prime lenses also come in more specialized forms -- for example, super wide angle, super telephoto, and super fast (i.e., transmitting more light).
However, for normal work, zoom lenses are much easier and faster to use. The latest HDTV zoom lenses are extremely sharp -- almost as sharp as the best prime lenses.

Angle of View
Angle of view is directly associated with lens focal length. The longer the focal length (in millimeters), the narrower the angle of view (in degrees).
You can see this relationship by studying the drawing on the left, which shows angles of view for different prime lenses.
A telephoto lens (or a zoom lens operating at maximum focal length) has a narrow angle of view. Although there is no exact definition for a "telephoto" designation, we would consider the angles at the top of the drawing from about 3 to 10 degrees in the telephoto range.
The bottom of the drawing (from about 45 to 90 degrees) represents the wide-angle range.
The normal angle of view range lies between telephoto and wide angle.
With the camera in the same position, a short focal length lens creates a wide view, and a long focal length creates an enlarged image in the camera. The two images below, shot from the same position, demonstrate this.



Put another way, when you double the focal length of a lens, you double the size of an image on the target; and, as you would assume, the reverse is also true.
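If you want to see the relationship in numbers, here's a minimal sketch using the standard thin-lens approximation; the 9.6mm target width is an assumed example, since it varies from camera to camera.

```python
import math

def angle_of_view(target_width_mm, focal_length_mm):
    """Horizontal angle of view, in degrees, for a given target width and focal length."""
    return math.degrees(2 * math.atan(target_width_mm / (2 * focal_length_mm)))

target = 9.6  # assumed horizontal target width in mm
for f in (10, 20, 40, 80):
    print(f"{f}mm lens: {angle_of_view(target, f):.1f} degrees")

# Each doubling of focal length roughly halves the angle of view
# and doubles the size of the subject on the target.
```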
Another issue in using different focal length lenses at different distances is the relative amount of background area you'll include in the picture.
The drawing below shows the major differences for telephoto, normal, and wide-angle lenses (in this case 70mm, 20mm, 10mm, and 5mm lenses). Although the subject remains in the same place, note the differences in the background area covered with each lens focal length.


A Zoom vs. a Dolly
Another way to alter the area that the camera sees is to move (dolly) the camera toward or away from a subject. Although it might seem this would produce the same effect as zooming the lens in and out, that's not quite true.
When you zoom, you optically enlarge smaller and smaller parts of the picture to fill the screen. When you dolly a camera you physically move the entire camera toward or away from subject matter. The latter is how you would see the central and surrounding subject matter if you were to walk toward or away from it.
Some directors, especially in motion pictures, prefer the more natural effect of a dolly, even though it's much harder to achieve smoothly.

Zoom Ratio
Zoom ratio is used to define the focal length range for a zoom lens. If the maximum range through which a particular lens can be zoomed is 10mm to 100mm, it's said to have a 10:1 (ten-to-one) zoom ratio (10 times the minimum focal length of 10mm equals 100mm).
That may tell you something significant, but it doesn't tell you the minimum and maximum focal lengths of the lens. A 10:1 zoom lens could have a range of 10 to 100mm or of 100 to 1,000mm, and the difference would be quite dramatic.
To solve this problem, we refer to the first zoom lens as a 10 X 10 (ten-by-ten) and the second as a 100 X 10. The first number represents the minimum focal length and the second number the multiplier. So a 12 X 20 zoom lens has a minimum focal length of 12mm and a maximum focal length of 240mm.
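Here's a minimal sketch of that naming convention; the three lenses are just the examples mentioned above.

```python
# Zoom lens naming: first number = minimum focal length (mm), second = multiplier.
def zoom_range(min_focal_mm, multiplier):
    return min_focal_mm, min_focal_mm * multiplier

examples = {"10 X 10": (10, 10), "100 X 10": (100, 10), "12 X 20": (12, 20)}
for name, (fmin, mult) in examples.items():
    lo, hi = zoom_range(fmin, mult)
    print(f"{name}: {lo}mm to {hi}mm ({mult}:1 zoom ratio)")
```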
The zoom lenses on most handheld field cameras have ratios in the range of 10:1 to 30:1. The photos below show the effect of zooming from a wide-angle position to a telephoto view with a 30:1 zoom lens.

Although one manufacturer makes a zoom lens with a 200:1 zoom ratio (the lens costs much more than the camera), the ratio used for network sports is generally 70:1 or less.
A camera with a 70:1 zoom lens could zoom out and get a wide-shot of a football field during a game and then zoom in to fill the screen with a football sitting in the middle of the field.

Motorized Zoom Lenses
Originally, the cameraperson manually zoomed a lens in and out by push rods and hand cranks. Today built-in, variable-speed electric motors do a much smoother and more controlled job. We refer to these electric zooms as servo-controlled zooms.
Although servo-controlled lenses can provide a smooth zoom at varying speeds, directors often prefer manually controlled zoom lenses for sports coverage, because the camera operator can adjust them much faster between shots. This can make the difference between getting to a new shot in time to see the critical action -- or missing it.
Supplementary Lenses
Although most videographers work within the limits of the lens supplied with their cameras, it's possible to modify the focal length of most lenses (both zoom and prime lenses) by adding a positive or negative supplementary lens. These generally go in front of the lens. Supplementary lenses, as illustrated here, can increase or decrease the basic focal length and coverage area of lenses.
Thus far, we've assumed that varying the focal length of a lens simply affects how close the subject matter seems to be from the camera. That's true, but we will see in the next section that focal length also affects the subject matter in a number of other important and even dramatic ways.
________________________________________

Module 11


Lenses: Distance, Speed,
and Perspective

Lens focal length differences affect more than just the size of the image on the camera's target -- or in the case of a motion picture camera, the film. Also affected are:
• the apparent distance between objects in the scene
• the apparent speed of objects moving toward or away from the camera
• the relative size of objects at different distances
Compressing Distance
A long focal length lens coupled with great camera-to-subject distance appears to reduce the distance between objects in front of the lens.
The drawing on the right illustrates differences in the camera-to-subject distance and the photos below show the dramatic difference this distance makes in the appearance of the background.



Camera distance = 1 meter (approx. 3 feet), with a wide-angle lens

Camera distance = 30 meters (approx. 100 feet), with a telephoto lens
The woman remained in the same place for both of these photos. But the fountain in the background of the photo on the right appears to be much closer to her.
However, the only distance that changed between these photos is the camera-to-subject (woman) distance.
To compensate for this difference and keep the size of the woman about the same in each picture, the photographer used different lens focal lengths: a wide-angle lens with a short focal length for the first photo and a telephoto with a long focal length for the second.
Contrary to widely held beliefs, the spatial relationship differences between objects in a scene that seem to accompany wide-angle and telephoto lenses (or zoom lenses used in the wide-angle or telephoto position) are not primarily a function of lens focal length, but camera-to-subject distance.

This gets a bit tricky to follow.
In the setting above, if we used the wide-angle lens while standing at the distance used for the telephoto picture on the right (30 meters), the woman would obviously end up being rather small in the setting. But let's assume we enlarged that section of the image to make the woman equal in size to her image in the telephoto shot.
The result (although probably grainy and blurry due to great enlargement) would have about the same fountain-to-woman distance perspective as the photo on the right.
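Here's a minimal sketch of the geometry involved. Image size on the target is roughly proportional to focal length divided by subject distance; the 10mm and 300mm focal lengths and the assumed 10-meter woman-to-fountain distance below are illustrative values only.

```python
# Relative image size on the target ~ focal_length / distance (thin-lens approximation).
def relative_size(focal_length_mm, distance_m):
    return focal_length_mm / (distance_m * 1000)

# Assumed example: a 10mm lens at 1 meter vs. a 300mm lens at 30 meters.
wide = relative_size(10, 1)
tele = relative_size(300, 30)
print(wide, tele)  # both 0.01 -- the woman is rendered the same size in each shot

# Now assume the fountain stands 10 meters behind the woman:
print(relative_size(10, 1 + 10) / wide)    # about 0.09 -- tiny, so it looks far away
print(relative_size(300, 30 + 10) / tele)  # about 0.75 -- nearly as large as the woman
```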
Although you may think this is much ado about nothing, it becomes important in understanding the effects of zoom lenses on subject matter -- not to mention in legal cases involving "wandering road signs."

The Case of the
Wandering Road Signs
A group opposed to the addition of more billboards along a highway reportedly launched a court case a number of years ago -- a noble goal, unless you happen to be in the advertising business.
Advertisers defended the construction of new signs by saying the existing ones had been placed far enough apart that new ones would not create a cluttered appearance.
The judge asked for photographs. Both sides employed photographers who understood the effect of subject-to-camera distance on spatial relationships.
As luck would have it, the photographers stood in the same place to take their photos.
One of the photographers -- hired by the citizen group to show the close distance between the existing signs -- backed up a great distance and used a long lens, thus compressing the distance between billboards and making them appear crowded together. (Note photo above.)
The photographer representing the advertisers, however, moved in close to the first sign and used a wide-angle lens. That made all the signs appear to be far apart. (No sign clutter here!) This is similar to the apparent distance between the woman and the fountain in the photo on the left above.
Seeing the dramatic difference between the photographs (and possibly believing "the camera never lies"), the judge reportedly assumed fraud had taken place and disallowed all photographic evidence!
Now you know more about these things than the judge did.

Changes in the Apparent Speed of Objects
In addition to affecting the apparent distance between objects, changes in camera-to-subject distance and changes in lens focal length influence the apparent speed of objects moving toward or away from the camera.
Moving away from the subject matter and using a long focal length lens (or a zoom lens used at its maximum focal length), slows down the apparent speed of objects moving toward or away from the camera.
Filmmakers often use this technique to good effect. For instance, in The Graduate, Dustin Hoffman runs down a street toward a church to try to stop a wedding. The camera with a very long focal length lens conveys what he's feeling: although he's running as fast as he can, it seems as if he's hardly moving. Both he and the audience fear he won't make it to the church on time to save the girl he loves, thus, increasing the dramatic tension in the story.
Conversely, moving close to the subject matter with a wide-angle lens increases (exaggerates) the apparent speed of objects moving toward or away from the camera.
You can easily visualize why. If you were standing on a distant hilltop watching someone run around a track or, perhaps, traffic on a distant roadway, they would seem to be hardly moving. It would be like watching with a long focal length lens. But if you stood right next to the track or roadway (your eyes' wide-angle perspective), the person or traffic would seem to whiz by.

Perspective Changes
The use of a wide-angle lens combined with a limited camera-to-subject distance creates a type of perspective distortion.
If a videographer uses a short focal length lens shooting a tall building from street level, the parallel lines along the sides of the building appear to converge toward the top. (Note the photo on the left.) At this comparatively close distance, the building also appears to be leaning backward.
Compare the photo taken with a wide-angle lens with the photo on the right taken at a much greater distance with a normal focal length lens.




You get even more distortion when you use an extreme wide-angle lens and get very close to subjects. (Note the two photos above.) The solution -- assuming this is not the effect you want -- is to move back and use the lens at a normal-to-telephoto setting.
Here's another example of perspective distortion.
Note the convergence of lines in the photo of the video switcher on the right.
A close camera distance coupled with a wide-angle lens setting makes the rows in the foreground look much farther apart than those in the background.
Again, you can eliminate this type of distortion by moving the camera back and using a longer focal length lens.

What's Normal?
Psychologists have long debated what's "normal" in human behavior. But what's normal in terms of lenses and their focal length comes down to a simple measurement.
First you need to know that the human eye has a focal length of about 25mm (approximately one inch) and covers a horizontal area of about 25 degrees. Since we're used to seeing the world in this perspective, this 25-degree angle represents a "normal" perspective for film and TV cameras.
With cameras, however, "normal" also depends on the area of the camera's target or film. The larger the target area, the longer the lens focal length needs to be to cover it.
Still photographers have a good rule of thumb.
They consider a 50mm lens normal with a 35mm still camera, because this is the approximate diagonal distance from one corner of the film to the other.
Using the same rule, we can define the normal focal length for a video camera as the distance from one corner of the target area to the opposite corner, as shown here.
If the diagonal distance on the target of a video camera is 20mm, then a lens used at 20mm on that camera will provide a normal angle of view under normal viewing conditions.
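Here's a minimal sketch of that rule of thumb; the 16 x 12mm target is an assumed example chosen to give the 20mm diagonal mentioned above.

```python
import math

def normal_focal_length(target_width_mm, target_height_mm):
    """'Normal' focal length is roughly the diagonal of the camera target (rule of thumb)."""
    return math.hypot(target_width_mm, target_height_mm)

# Assumed example: a 16mm x 12mm (4:3) target has a 20mm diagonal,
# so a lens set to about 20mm gives a "normal" angle of view.
print(round(normal_focal_length(16, 12)))  # -> 20

# The same rule applied to a 35mm still frame (36mm x 24mm):
print(round(normal_focal_length(36, 24)))  # -> 43, commonly rounded to the familiar 50mm "normal" lens
```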
Now, if we could just quantify normal human behavior as easily.
________________________________________


Module 12



F-Stops and Creative
Focus Techniques

Cats and owls can see in dim light better than we can, in part, because the lenses of their eyes allow in more light. We could say the "speed" of the lenses in their eyes is greater or better than ours.
We define lens speed as the maximum amount of light that can pass through the lens to end up on the target.
However, it's generally not desirable to transmit the maximum amount of light through the lens, so we need a way of governing the amount.
Like the pupil of an eye automatically adjusting to varying light levels, the iris of the camera lens controls the amount of light passing through the lens.
Under very low light conditions, the pupils of our eyes open up almost completely to allow in maximum light. Conversely, in bright sunlight the pupil contracts in an effort to avoid overloading the light-sensitive rods and cones in the back of the eye.



In the same way, the amount of light falling on the light-sensitive target of a TV camera must be controlled with the aid of an iris in the middle of the lens (shown above on the left).
Too much light will overexpose and wash out the picture; too little will cause the loss of detail in the darker areas.
We can smoothly adjust an iris from a very small to a large opening. We refer to the specific numerical points throughout this range as f-stops.
The "f" stands for factor. An f-stop is the ratio between the lens opening and the lens focal length. More specifically, the f-stop equals the focal length divided by the size of the lens opening.
________________________________________
f-stop = focal length / lens opening
________________________________________
This math explains the strange set of numbers used for f-stop designations, as well as the fact that the smaller the f-stop number the more light the lens transmits.
That's worth repeating: the smaller the f-stop number the more light the lens transmits.
________________________________________
Thus:
1.4, 2.0, 2.8, 4.0, 5.6, 8, 11, 16, 22
<== more light less light ==>
________________________________________
Occasionally, we see other f-stops, such as f/1.2, f/3.5, and f/4.5. These are mid-point settings between whole f-stops, and on some lenses they represent the maximum aperture (speed) of the lens.
The figure at the right compares f-stop sizes.
We've noted that the speed of a lens is equal to its maximum (wide-open) f-stop. Here, f/1.4 is the speed of the lens.
Opening the iris one f-stop (from f/22 to f/16, for example) represents a 100 percent increase in the light passing through the lens. Conversely, "stopping down" the lens one stop (from f/16 to f/22, for example) cuts the light by 50 percent.
Put another way, when you open up one stop, you double the light going through the lens; when you stop down one stop, you cut the amount of light going through the lens in half.
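To make the arithmetic concrete, here's a minimal sketch; the 50mm focal length is just an assumed example.

```python
# f-stop = focal length / diameter of the lens opening,
# and the light transmitted is proportional to 1 / f-stop**2.
focal_length = 50.0  # assumed example, in mm
stops = [1.4, 2.0, 2.8, 4.0, 5.6, 8, 11, 16, 22]

for f in stops:
    diameter = focal_length / f        # iris opening in mm
    relative_light = (1.4 / f) ** 2    # light relative to wide open (f/1.4)
    print(f"f/{f}: opening {diameter:.1f}mm, {relative_light:.3f} of maximum light")

# Each full stop changes the f-number by a factor of about 1.4 (the square root of 2),
# which doubles or halves the light passing through the lens.
```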
So how will you use this knowledge?
Once you understand this f-stop range, you'll know which way to adjust a lens iris to compensate for a picture that is either too light or too dark -- a major issue in video quality.

Cameras with automatic exposure controls use a small electric motor to automatically open or close the iris in response to varying light conditions.
Makers of professional cameras print f-stop settings on the lens barrels and sometimes in viewfinder displays. (Note the f-stop settings in this photo.) It's important for professionals to understand and be able to work with the f-stop concept.
Not wanting to trouble unsophisticated consumers with such things as f-stops, manufacturers of consumer cameras don't show the numbers, and exposure adjustments are automatic. However, depending on circumstances, the camera may not set the iris at the best setting.
In this photo, automatic exposure adjustment has not provided the best video. In a scene that contains areas brighter than the main subject matter -- in this case, the window -- automatic circuitry will generally result in dark (underexposed) video and muted colors.
As we will see, savvy videographers who are stuck with this automatic feature on a camera need to know how to "influence" or override the automatic exposure. Not only can that result in better image exposure, but it can also provide control over such things as depth of field (discussed below).
This problem repeatedly shows up in amateur videos and the work of beginning videography students. In future modules we'll cover different approaches to solving this problem.

Depth of Field
We define depth of field as the range of distance in front of the camera that's in sharp focus.
Theoretically, if we focus a camera at a specific distance, only objects at that exact distance will be what we might consider completely sharp, and objects in front of and behind that point will be, to varying degrees, blurry.
In actuality, areas in front of and behind the point of focus may be acceptably sharp. The term acceptably sharp is subjective. A picture doesn't abruptly become unacceptably blurry at a certain point. The transition from sharp to out of focus is gradual.
For practical purposes, we've reached the limits of sharpness when details become objectionably indistinct. This will vary with the medium.
The range of what is acceptably sharp in standard NTSC television (SDTV) is greater than that of HDTV. In the latter case, the superior clarity of the medium more readily reveals sharpness problems.

Depth of Field and F-stops
The larger the f-stop number (that is, the smaller the iris opening and the less light let in), the greater the depth of field.
Therefore, the depth of field of a lens we set at f/11 is greater than the same lens set at f/5.6, and depth of field at f/5.6 will be greater than at f/2.8.
Except for extreme close-ups, depth of field extends approximately one-third of the way in front of the point of focus and two-thirds behind it.
The drawing on the right illustrates this range.
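Here's a minimal sketch of that one-third/two-thirds rule of thumb; the focus distance and total depth of field are assumed examples, since the actual numbers depend on the lens, the f-stop, and the format.

```python
# Rule of thumb: depth of field extends about 1/3 in front of the point of focus
# and about 2/3 behind it (except for extreme close-ups).
def dof_limits(focus_distance_m, total_dof_m):
    near = focus_distance_m - total_dof_m / 3
    far = focus_distance_m + 2 * total_dof_m / 3
    return near, far

# Assumed example: focused at 4 meters with 3 meters of total depth of field.
print(dof_limits(4.0, 3.0))  # -> (3.0, 6.0): acceptably sharp from about 3m to 6m
```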

Depth of Field
And Focal Length
Although it's commonly (and erroneously) stated that depth of field depends on lens focal length, this is not the case. The reason, which is rather technical, was explained most recently in a January 2009 article in Videography. You can find more information on this general issue here.
Although depth of field appears to be related to lens focal length, it's only an apparent relationship.
As long as the same image size is maintained on the target, all lenses of similar design set at a specific f-stop will have about the same depth of field, regardless of focal length.
A wide-angle lens appears to have greater depth of field than a telephoto lens because sharpness problems in the image created by the wide-angle lens are compressed and therefore not as apparent.
If you enlarge a section of image area from the wide-angle shot -- a section exactly equal to the image area created by the telephoto lens -- you'll find that the depth of field is about the same.
If the same subject size on the film is maintained, depth of field is relatively unchanged no matter what the [lens] focal length.
Popular Photography test report, 1994


Why all the fuss about this seeming technicality? Because an understanding of the concept can save you from unpleasant surprises in video production.
Let's pursue this a bit.
Wide-angle lenses (or zoom lenses used at wide-angle positions) are good at hiding a lack of sharpness, so they're a good choice when accurate focus is an issue.
Of course, when you use a wide-angle lens setting, you may need to move much closer to the subject to keep the same size image. But by moving in, you've lost the sharpness advantage you seemingly gained by using the wide-angle lens in the first place.
With a telephoto lens (or zoom lens used at a telephoto setting), focus must be much more precise. In fact, when zoomed in fully at maximum focal length, the area of acceptable sharpness may be as little as 20mm (less than an inch), especially with a wide aperture (low f-stop number).
This can represent either a major problem or a creative tool.
In the latter case, it can force the viewer to concentrate on a specific object or area of a scene. (Our eyes tend to avoid unclear areas of a picture, and they're drawn to sharply focused areas.)
In the case of this picture the photographer backed up and used a telephoto setting so that the foreground and background areas were thrown out of focus. This is also called selective focus, which we'll talk about in more detail later.

Focusing a Lens
The following discussion assumes you are using a camera with a manual focus control, or, in the case of a camera with automatic focus, that you can turn off this feature.
It might seem that focusing a lens is a simple process of just "getting things clear." True, but a few things complicate the issue.
It's probably obvious at this point that you should focus a zoom lens after first zooming in to a close shot (at maximum focal length). Since focusing errors are most apparent at maximum focal length, focusing will be easier and more accurate. Once focused, you can zoom back the lens to whatever focal length you need.
If the scene includes a person, you'll want to focus on the catch light or gleam in one eye for two reasons: a person's eyes are normally the first place we look, and this small, bright spot is easy to focus on.
Note the extreme close-up of the woman's eye in the camera viewfinder in the photo on the right.
If you don't first zoom in to focus, but try to focus while holding a wide shot, you'll inevitably find that the picture goes out of focus when you later zoom in. (Zooming in suddenly magnifies a focus error that wasn't noticeable before.)

Focusing HDTV Cameras
Complicating the focus issue is the fact that HDTV cameras will readily reveal focus errors. But the level of focal accuracy needed for high-resolution video isn't easy to evaluate in most camera viewfinders -- especially camcorders with small viewfinders.
One solution to this is to loop the video output of the HDTV camera through a high-resolution TV monitor and use this image to focus. Another is to use some type of electro-mechanical focus assist. We'll discuss this when we talk about auto-focus lenses.

Selective Focus
One of the important creative tools available to a videographer or cinematographer is a technique we touched on earlier: selective focus -- making certain some things are in focus and others aren't.
This technique effectively directs attention toward things that are important and away from things that can be distracting and that should be de-emphasized or hidden.
Selective focus is widely used in film and is associated with the so-called "film look" that many people find desirable.
Consider the scene on the left. By throwing the building and the newspaper out of focus, the woman stands out clearly in the photo, and is not lost in a confusion of distracting elements.
If the scene is brightly lit, as this one is, you may have to use a high shutter speed or even a light-reducing neutral density filter -- both of which will enable you to open the iris without overexposing the video. (More about these techniques later.)
Plus, as we mentioned earlier, backing up and using a telephoto lens or zoom lens setting can add to the selective focus effect.

Follow Focus
In video production, a moving subject may move outside the limits of depth of field unless the cinematographer can smoothly refocus the lens.
Professionals know which way to turn the focus control to keep a moving subject in sharp focus. Nonprofessionals often throw a slightly blurry image totally out of focus for a few seconds by first turning the focus adjustment the wrong way.
The technique of follow focus is used to refocus the camera to accommodate subject movement. Don't confuse this with --

Rack Focus
Rack focus is similar to selective focus, except the camera operator changes focus during the scene to shift viewer attention from one part to another.



In the photo on the left above, the woman (in focus) is sleeping. When the phone rings, the focus shifts to the phone (on the right).
As she picks up the phone and starts to talk, the focus shifts (racks) back again to bring her into focus.
To use this technique, you need to rehearse your focus shifts so that you can manually rotate the lens focus control from one predetermined point to another. Some videographers temporarily mark the points on the lens barrel with a grease pencil. After locking down the camera on a tripod, they can then shift from one predetermined point to another as needed.

Auto-focus Lenses
With most camcorders, you can turn auto-focus on and off. For the following discussion, we'll assume the auto-focus is turned on.
Auto-focus can help in following moving subjects. However, you will encounter problems unless you fully understand how it works.
Most auto-focus devices assume that the area you want in sharp focus is in the center of the picture. The auto-focus area (the area the camera will automatically focus on) is in the green rectangle in this photo.
Remember the rack focus sequence discussed above? Since the area you want to focus on does not remain in the center of the frame, auto-focus would not be useful.
Note in the photo below that the center area is correctly focused (thanks to auto-focus), but the main subject is blurry. Of course, the goal was the opposite.
To make this scene work with auto-focus, you could pan or tilt the camera to bring the main subject into the auto-focus area, but this would change the composition in a way that you may not want.
Some camcorders allow you to center the subject matter in the auto-focus zone and then lock the auto-focus on that area. Once that's done you can reframe the scene for the best composition.
One camcorder attempts to track the photographer's eye movement in the viewfinder and shift focus accordingly. When you (as photographer) look at the woman in this case, the camera would focus on her -- but then as soon as you looked at the building in the background, the camera would shift focus to that point.

Auto-Focus Problems
Auto-focus systems have other weaknesses. Reflections and flat areas with no detail can fool most of them. Most also have trouble determining accurate focus when you're shooting through such things as glass and wire fences.
Finally, auto-focus devices -- especially under low light -- can keep readjusting or searching for focus as you shoot, which can be distracting.
For all these reasons, professional videographers typically turn off auto-focus and rely on their own focusing ability. The only exception may be a chaotic situation in which there is no time to keep moving the subject matter into focus manually.

HDTV Focus-Assist Schemes
As we've noted, focus errors not discernible in SDTV can be obvious in images from high-resolution digital (HDTV) cameras. As we've also noted, small HDTV camcorder viewfinders make critical focusing difficult.
Some lens manufacturers are experimenting with electronic "focus-assist approaches" for HDTV lenses. There are various approaches and at this point it's too early to tell how practical they might be in day-to-day HDTV production.

The Macro Lens Setting
Most zoom lenses have a macro setting that enables the lens to attain sharp focus on an object only a few inches, or even a few millimeters from the front of the lens.
Although lenses differ, to reach the macro position on many zoom lenses, the photographer pushes a button or lever on the barrel of the lens to allow the zoom adjustment to travel beyond its normal stopping point.



Many newer lenses are continuous focus lenses. You can smoothly and continuously adjust these internal focus lenses from infinity to a few inches without manually shifting the lens into macro mode.
Videographers often forget about the macro capability, but it offers many dramatic possibilities. For example, a flower, stamp, or portion of a drawing or snapshot can fill the TV screen.
A tripod or camera mount is a must when using the macro setting. Not only is depth of field normally limited to just a few millimeters, but unintentional camera movement is greatly exaggerated.
________________________________________

________________________________________
Module 13-1

Part I



Filters and Lens
Attachments

Lens Shades
In the same way we shade our eyes from strong lighting to see clearly, the videographer must shield the camera lens from direct light.
Even if strong light striking the lens does not create the obvious evidence of lens flare shown here, it may reduce the contrast of the image.
Assuming you can't easily change your camera position, you'll need to block the offending light with a lens shade (also called a lens hood) or shield the lens in some other way. Since most lens flare problems are apparent in the camera viewfinder, you can observe and check the effect of a lens shade as you work.
The lens shade shown on the left is often used with prime or fixed focal length lenses. Things get a bit more complicated with zoom lenses because their angle of view changes.
You can improvise a lens shade "on the fly" by using dull black paper and masking tape -- or even by simply shielding the lens with your hand: zoom the lens to the desired point and shade it as you would your eyes. Just be sure to check the edges of the image in the viewfinder to make sure you can't see your hand!
In addition to lens shades, a number of other attachments, such as filters, fit over the front of a camera lens.

Filters
Two classifications of filters are used in television production: glass or gel filters and post-production or electronic filters.
Glass Filters
Glass filters consist of a transparent, colored gel sandwiched between two precisely ground and sometimes coated pieces of glass.
The filter can be the type that screws over the end of the camera lens (as shown here) or is inserted into a filter wheel behind the camera lens.
A type of filter that's much cheaper than glass is the gel, which is a small square or rectangular sheet of optic plastic used in front of the lens in conjunction with the matte box. (See below.)
Post-Production Filters
The use of post-production filters (post filtration) takes place after scenes are shot. Although these electronic filters typically have the same names as the familiar glass or gelatin filters, they often have a slightly different effect.
Tiffen's Dfx 2.0 software and special effect filters -- some 1,000 of them -- represent one example of post-production filters. They are used as plug-ins for programs, such as Apple's Final Cut Pro, Aperture, Avid, Adobe's After Effects, and Photoshop.
Post filtration not only provides a greater range of effects, but, unlike optical filtration, the effects can be readily reversed and modified during editing. At the same time, there are effects that are better achieved with glass and gelatin filters.
Ultraviolet Filters
News photographers often put an ultraviolet filter (UV filter) over the camera lens to protect it from the often adverse conditions encountered in ENG (electronic newsgathering) work. It's considerably cheaper to replace a damaged filter than a lens. Protection of this type is particularly important when the camera is used in high winds where dirt or sleet can be blown into the lens.
Video cameras tend to be sensitive to ultraviolet light, which can add a kind of haze to some scenes. Because UV filters screen out ultraviolet light while not appreciably affecting colors, many videographers keep an ultraviolet filter permanently over the lens to protect it. (Camera lenses are often more expensive than the camera itself.)

Using Filters to Create Major Color Shifts
Although optical and electronic camera adjustments are responsible for general color correction in a video camera, you may sometimes want to introduce a strong, dominant color into a scene.
For example, when one scene called for a segment shot in a photographic darkroom, the camera operator simulated a red darkroom safelight by placing a dark red glass filter over the camera lens. (A safelight is a lamp with a filter that screens out rays that will expose photographic paper. Darkrooms switched to yellow-green filters decades ago, but since audiences still associate red filters with darkrooms, directors feel they must continue to support the myth.)
If the camera has an internal white balance sensor, you must white balance the camera before placing the filter over the lens. Otherwise, the white balance system will try to cancel out the effect of the colored filter.

Neutral Density Filters
Under some bright conditions, you may want to reduce the amount of light passing through a lens without stopping down the iris. As we've noted, keeping the iris at a low number (opened up to a large degree) makes selective focus possible.
Although using a higher shutter speed is normally the best solution in these cases (we'll get to that later), using a neutral density or ND filter will achieve the same result. A neutral density filter is a gray filter that reduces light by one or more f-stops without affecting color.
Professional video cameras normally have one or more neutral density filters included in their internal filter wheels. To select a filter, you simply rotate it into position behind the lens. The table below shows ND filter grades and the amount of light they subtract.
0.3 ND filter* 1 f-stop
0.6 ND filter 2 f-stops
0.9 ND filter 3 f-stops
1.2 ND filter 4 f-stops
________________________________________
*Although these numbers represent the official designations, many of today's video cameras use fractions such as 1/8th and 1/64th to represent levels of light reduction.
________________________________________
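The grades in the table above are optical densities, so you can work out the light loss directly; here's a minimal sketch.

```python
import math

# An ND filter grade is an optical density: transmission = 10 ** (-density).
# One f-stop is a factor of 2, so stops = density / log10(2), or roughly density / 0.3.
for density in (0.3, 0.6, 0.9, 1.2):
    transmission = 10 ** (-density)
    stops = density / math.log10(2)
    print(f"{density} ND: passes {transmission:.1%} of the light, about {stops:.1f} stops")
```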

Module 13-2

Part II



Filters and Lens Attachments


Polarizing Filters
You're probably familiar with polarizing sunglasses that reduce reflections and cut down glare. Unlike sunglasses, however, most professional polarizing filters can be continuously adjusted, so they can go much further in their effect.
Polarizing filters can:
• reduce reflections and glare
• deepen the color of blue skies
• penetrate haze
• saturate (intensify) colors
Note the difference in the two photos below.

Once you understand a polarizing filter's many applications, it can become one of your most valuable filters. As noted, you can often adjust the degree of polarization. This is done by rotating the double glass elements in the filter.
To eliminate objectionable surface reflections when doing critical copy work, such as photographing paintings with a shiny surface, you can use large polarizing filters over the lights as well as the camera lens. This is one of the areas where post-filtration can't match the effect of optical filters.

Contrast Control Filters
Although the best of the latest generation of professional video cameras is capable of capturing contrast or brightness ranges up to 700:1, most home television sets and viewing conditions limit that range to about 30:1. This means the brightest element in a scene can't be more than 30 times brighter than the darkest element -- with any hope of seeing the detail in each. (Digital/HDTV receivers do considerably better, but until everyone has a digital set, we must play it safe.)
"Real world scenes" often contain collections of elements that exceed the 30:1 brightness range. Although in the studio we might be able to control this with lighting, things become a bit more challenging outside. For critical exterior scenes, the videographer must often consider ways to reduce the brightness range. One way is with a contrast control filter.
Look at the scene on the left below, taken in a setting with contrasty lighting. The use of a contrast control (or low contrast or contrast reduction) filter resulted in the image on the right.

There are three types of these filters: low contrast, soft contrast, and the Tiffen Ultra Contrast.

Filters for "The Film Look"
Compared to film, some people feel digital video can look a bit harsh, overly sharp, and even brassy. Studies have shown that audiences have gotten used to and seem to prefer the slightly softer and grainy effect of film -- leading some post-production houses to electronically add these things to video.
Since some directors of photography (DPs) feel it's better to add these qualities as the video is shot, this link provides more information on achieving "the film look" with optical filters.
At the same time, others feel video is a unique medium that should not try to take on the characteristics of film.

Day-For-Night
A common special effect, especially in the days of black-and-white film and television, was the night scene shot in broad daylight -- a so-called day-for-night. (In those days, film stocks and video cameras were not very sensitive to light, and you couldn't shoot at night.)
With black-and-white film or video, you could place a deep red filter over the lens to turn blue skies dark, even black. (A red filter subtracts blue.) That, together with three or four f-stops of underexposure, completed the illusion.
Although not quite as easy to pull off in color today, you can simulate the effect by underexposing the camera by at least two f-stops and either using a blue filter or creating a bluish effect when you white balance your camera. (We cover this in a section called "lying to your camera" in Module 18.)
A careful control of lighting and avoiding the sky in scenes adds to the effect. Embellishments you can add during post-production make the night-time effect even more convincing.
With the sensitivity of professional cameras now down to one foot-candle (a few lux), "night-for-night" scenes are now possible. Whatever approach you use, experiment using a high quality color monitor as a reference.

Color Conversion Filters
Color conversion filters correct the sizable difference in color temperature between incandescent light and sunlight -- a shift of about 2,000K. Although the differences in color temperature among light sources will make more sense after we examine it in a later module, we need to at least mention it in this section on filters.
Even though professional cameras take care of minor color balancing electronically, colored filters are best for major shifts, such as the difference between indoor and outdoor lighting.
Two series of filters have long been widely used in motion picture production: the Wratten #80 series, which are blue and convert incandescent light to the color temperature of sunlight, and the Wratten #85 series, which are amber and convert daylight to the color temperature of tungsten light.
Since video cameras are optimized for one color temperature, videographers will generally use these filters to make the necessary "ballpark" adjustment. The rest is done electronically with camera color balancing.

Filters For Fluorescent Light
Some lighting sources are difficult to correct. A prime example and one that videographers frequently run into is fluorescent light. These lights are everywhere, of course, and they can be a problem.
Although in recent years camera manufacturers have tried to compensate for the greenish cast that fluorescent lights can create, when it comes to such things as getting true-to-life skin tones (and assuming you can't turn off the lights and set up your own incandescent lights), you may need to experiment with a fluorescent light filter.
We say "experiment" because dozens of fluorescent tubes exist, each with different color characteristics. But one characteristic all standard fluorescent lamps have is a "broken spectrum" or gaps in the range of colors they emit. The eye can more or less "smooth over" these gaps when it views things firsthand, but film and video cameras have problems.
Some other sources of light are even worse -- in particular the metal halide lights often used in gymnasiums and for street lighting. We discuss this in more detail in the lighting module on color temperature. Although the public may accept these lighting aberrations in news and documentary footage, it's a different story when it comes to most commercials and dramas.
As we will see, some color-balanced fluorescent lamps are not a problem, because manufacturers design them specifically for TV and film work. But don't expect to find them in schools, offices, or boardrooms.

Special Effect Filters
Although scores of special effect filters are available, we'll highlight four of the most popular: star filters, starburst filters, diffusion or soft focus filters, and fog filters.
Star Filters - You've undoubtedly seen scenes in which "fingers of light" project out from the sides of shiny objects -- especially bright lights. The camera operator creates this effect with a glass star filter that has a microscopic grid of crossing parallel lines cut into its surface.
Notice in the picture on the right that the four-point star filter also slightly softens and diffuses the image.
Star filters can produce four-, five-, six-, or eight-point stars, depending on the lines engraved on the surface of the glass. The star effect varies with the f-stop used.
A starburst filter (on the left, below) adds color to the diverging rays. Both star filters and starburst filters slightly reduce the overall sharpness of the image, which may or may not be desirable.



Soft Focus and Diffusion Filters - To create a dreamy, soft focus effect, you can use a soft focus filter or a diffusion filter (on the right above). These filters, available in various levels of intensity, were often used in early cinema to hide aging signs in actors. (Some stars even wrote this requirement into their contracts.)
You can achieve a similar effect by shooting through either a fine screen wire placed close to the lens or a single thickness of nylon stocking. The f-stop you choose will greatly affect the level of diffusion. It's important to white balance your camera with these items in place.
Fog Filters - You can add a certain amount of "atmosphere" to dramatic locations by suggesting a foggy morning or evening. Without relying on nature or artificial fog machines, fog filters can create somewhat the same effect. (Note the photo on the right.)

General Considerations
In Using Filters
Using a filter with a video camera raises the black level of the video slightly. Because it creates a slight graying effect, it's advisable to readjust camera setup or black level (either automatically or manually) whenever a filter is used.
Unlike electronic visual effects that an editor creates during postproduction, the optical effects a cinematographer creates while recording a scene can't be undone. To reduce the chance of unpleasant surprises, you need to carefully check the results with the help of a high quality color monitor as you shoot.

Camera Filter Wheels
As we've noted, professional video cameras have filter wheels behind their lenses that can hold a number of filters. You can rotate individual filters on each wheel into the lens light path as needed.
Note the two filter wheels in the photo on the right. One is labeled 1 through 4 and the other A through D. Two filters can be used at once. For example, 2-B would be a 1/4 ND (neutral density) filter, along with a 3,200K (standard incandescent light) color correction filter.
Filter wheels might also contain the following:
• a fluorescent light filter, which reduces the blue-green effect

• one or more special effect filters, including the star filter

• an opaque lens cap, which blocks all light going through the lens
Although the filters shown are located behind the lens, to be most effective you must mount some filters, such as polarizing filters, in front of the lens.

Matte Boxes
A matte box is a device mounted on the front of the camera that acts both as an adjustable lens hood and a way of holding square or rectangular gelatin filters. These are much cheaper than the round, glass filters.
Matte boxes can also hold small cutout patterns or masks. For example, you could use a keyhole-shaped pattern cut from a piece of cardboard to give the illusion of shooting through a keyhole (although, unlike earlier days, we can now see through very few keyholes).
Most of the effects that matte boxes formerly created can now be more easily and predictably achieved electronically with a special-effects generator.

Periscope Lens
A "bug's eye" view of subject matter is possible with a periscope/probe system.
This low angle is useful when actors are electronically keyed into realistic or fantasy miniature models. We can enhance the effect with the wide-angle views of the four lenses that come with the system.



In the photo on the right, the camera operator uses a lens probe to film a miniature prehistoric setting that will later come to life in a full-scale effect. Although this is a film camera, it has a video viewfinder to provide immediate feedback on the image captured on film. (Note the video monitor.)
In the next section, we conclude the discussion of lenses and lens attachments.
________________________________________

________________________________________
Module 14
Updated: 04/20/2010




Lenses: Some
Final Elements


You may recall that the inside of a lens -- a zoom lens in particular -- is packed with many glass elements. Each of those glass elements reflects some of the light hitting it, reducing the amount of light that can go through the lens.
Even if each element reflected only five percent of the light hitting its surface, which is not unusual for uncoated glass, only a small fraction of the light would make it through to the camera. This, of course, would largely defeat the purpose of the lens. Fortunately, there is a solution.
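To see how quickly those losses add up, here is a rough back-of-the-envelope sketch in Python. The 5 percent figure and the 20-plus element count come from the discussion here; the 0.5 percent coated-surface figure is simply an illustrative assumption, not a manufacturer's specification.

# Rough estimate of light lost to surface reflections in a multi-element lens.
# Assumes 20 elements, each with 2 air-glass surfaces (illustrative numbers).
elements = 20
surfaces = elements * 2

uncoated = (1 - 0.05) ** surfaces    # 5% reflection per uncoated surface
coated = (1 - 0.005) ** surfaces     # assume roughly 0.5% per coated surface

print(f"Uncoated elements: {uncoated:.1%} of the light gets through")  # about 12.9%
print(f"Coated elements:   {coated:.1%} of the light gets through")    # about 81.8%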

Lens Coatings
To reduce the problem of internal reflections the surface of each element is covered with a micro-thin, antireflection coating. This lens coating typically gives the elements a light blue appearance and significantly reduces the amount of light lost due to surface reflections.
This means that in a zoom lens, such as the one shown here, the front and back of each of the more than twenty glass elements will have antireflection coatings.
Although lens coatings are much more resilient than they used to be, they're still relatively easy to permanently scratch. One or more bad scratches on a lens diminishes both sharpness and image contrast.
Because of the way lenses are manufactured, it's generally less costly to replace the lens than to try to repair it.
Since it's easy for an object to come in contact with a camera lens, remember to use a lens cap when you're transporting the camera and, in fact, anytime you're not using it.
A lens cap not only guards against scratching, but also keeps off dirt and fingerprints, which can also reduce sharpness and contrast.
Some lens caps are made of white translucent plastic designed to replace the white cards used to white balance a camera. If you point the capped lens at the dominant light source and push the white balance button, the camera will white balance on the color of the light coming through the lens cap.
Although this is a quick way to color balance a camera, as we'll later see, it's not as accurate as zooming in on a carefully positioned white card.
Cleaning Lenses
Small quantities of dust on a lens will not appreciably affect image quality, but fingerprints and oily smudges are a different matter. Not only do they reduce image sharpness, but if not promptly removed, the acids in fingerprints can permanently etch themselves into some lens coatings.
However, each time you clean the lens, you increase the risk that tiny abrasive particles picked up by the cleaning tissue will create microscopic scratches in the coating. For this reason, you should not just routinely clean your lens; do so only when you see dirt or dust on its surface.
To clean a lens, first remove any surface dirt by blowing it off with an ear syringe or by brushing it off with a clean camel's hair (extremely soft) brush.
If this doesn't remove the dirt, dampen a lens tissue with lens cleaner, and very gently rub the lens in a circular motion. Turn or roll the tissue slightly to avoid rubbing any dirt over the lens surface.
Never drip lens cleaner directly on a lens. It can easily seep behind lens elements and create a major problem. And don't clean a lens with silicone-treated lens tissues or the silicone-impregnated cloths commonly sold for cleaning eyeglasses. The residue may permanently discolor the coating.

Condensation On the Lens
Condensation and raindrops on a lens can distort or even totally obscure an image.
When a camera moves from a cool to a warm area, the lens frequently fogs up. This can be a major problem in cold climates.
Even though you wipe moisture off the lens, the lens may continue to fog up until its temperature equals the surrounding air.
Condensation can also take place within a camcorder and cause major problems. For this reason, many camcorders have a dew indicator that detects moisture or condensation and shuts down the unit until the moisture evaporates. A message such as "dew" will typically display in the viewfinder. To reduce the effect of condensation when bringing a camcorder in from the cold, you should allow thirty minutes or so for the camcorder to reach room temperature.
By the way, laptop computers can have the same problem -- especially if they are stored in the trunk of a car overnight in freezing temperatures and then brought into a warm room. There is no "dew indicator," of course; they may just refuse to boot. As in the case of camcorders, allowing the unit to slowly warm up for thirty minutes or so in warm, dry air should fix the problem.
Rain Jackets
Although manufacturers discourage use of video cameras in rain, snow, and wind-driven sand or dust, news stories often have to be shot under such conditions.
Camera rain jackets, such as the one shown on the right, cover all but the viewfinder and the very end of the camera lens.
Or, in an emergency, you can use a plastic garbage bag. Just cut holes for the lens and viewfinder, and then use rubber bands to secure the plastic around each. Basic camera controls should be operational through the plastic bag. This is much easier if the bag is transparent.
Most camcorders contain many delicate moving parts, and just a bit of dirt, sand, or moisture in the wrong place can put the unit out of commission.

Shot Boxes
In studio work you'll often use a set sequence of shots on a regular basis. Wide-shots, two-shots, and one-shots in a newscast are good examples.
Shot boxes are electronic lens controls that memorize a series of zoom lens positions, complete with zoom speeds and focus settings.
Note the series of white buttons shown here. The camera operator can program each button for a particular shot. This approach adds speed and consistency to studio work.
Today, TV stations are using robotic cameras that don't require an attending camera person. In this case these settings are memorized by a camera control unit in the TV control room.

Image Stabilizers
In 1962, a lens mechanism was introduced that compensated (within limits) for camera vibration and unintentional camera movement. Called an image stabilizer, the first model was a gyroscopically controlled mechanism that resisted short, fast movements by shifting lens elements in the opposite direction.
Things have advanced significantly since then, and today the simplest approach, digital stabilization, is totally electronic -- it "floats" an active picture frame within a slightly larger one.
As the camera moves, the smaller frame shifts within the larger target area in an attempt to compensate for the movement.
If, for example, the camera moves slightly to the right, the digital frame will electronically move in the opposite direction, canceling the movement on the camera's target. Many consumer grade camcorders use this approach.
Although this electronic image stabilization approach has seen some major technical improvements in recent years, the reduction in the size of the usable target image area still results in a slight loss of image resolution and clarity.
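For readers who like to see the idea in code, here is a minimal sketch (Python with NumPy) of the "floating frame" concept. It assumes the camera's movement has already been detected and expressed in pixels; real stabilizers derive that from image analysis or motion sensors, and the function name and margin value here are hypothetical.

import numpy as np

def stabilize_crop(frame, motion_x, motion_y, margin=32):
    # Return a smaller "active" frame cropped from the full frame,
    # shifted opposite to the detected camera movement (within the margin).
    h, w = frame.shape[:2]
    dx = int(np.clip(-motion_x, -margin, margin))
    dy = int(np.clip(-motion_y, -margin, margin))
    top, left = margin + dy, margin + dx
    return frame[top:h - 2 * margin + top, left:w - 2 * margin + left]

# Example: the camera jitters 5 pixels right and 3 pixels down,
# so the crop window shifts left and up to cancel the movement.
full_frame = np.zeros((480, 720), dtype=np.uint8)
active = stabilize_crop(full_frame, motion_x=5, motion_y=3)
print(active.shape)   # (416, 656) -- note the lost border, hence the resolution penalty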
Professional videographers prefer optical image stabilization.
Optical image stabilization uses two parallel, floating optical surfaces within the lens that act as a kind of flexible prism.
These optical surfaces electronically detect the camera's movement, and the voltage that's generated as a result changes the configuration of the prism. This alters the angle of light passing through the prism and shifts the image on the target in the opposite direction. Since the full target image is used, no loss of image quality occurs.
As you might assume, this approach is more complex and costly, which is why you don't see it on consumer-grade camcorders.
With all types of stabilizers the camera operator must learn "to compensate for the compensation." In panning from left-to-right, for example, a short delay occurs as the camera tries to compensate for the move. But once beyond a certain point, the stabilizer can't compensate for the movement and the image starts to move as intended.
At the end of the pan, however, the image may continue to move for a moment until the system comes back into balance. This means the camera operator may have to end the pan a moment early and allow the camera to complete the move.
Today, many "high-end" image stabilizers use sophisticated fiber optic servo devices. This technology can cancel vibration from a helicopter or a moving vehicle.


The GyroCam helicopter mount shown on the left above not only compensates for vibration, but can also be completely controlled (e.g., pan, tilt, zoom, iris) from within the helicopter. Pilots use this type of device to follow fugitives and car chases on the ground.
The best image stabilizers today are so sensitive that they can cancel out the camera movement induced by the heartbeat of the camera operator.


Lens Mounts
Many types of video cameras, especially consumer-type cameras, have zoom lenses permanently mounted to the camera body, and the lens can't be removed. Some video cameras, however, allow you to change lenses to meet specific needs. With these, you can either unscrew the lens (in the case of C-mount lenses) or turn a locking ring (in the case of the bayonet mounts).
C-Mounts
With a camera using a C-mount, the lens screws into a finely threaded cylinder about 25mm in diameter.
The C-mount was the first type of lens mount used with small video cameras because it takes advantage of a wide array of 16mm motion picture camera lenses.
Today, it's primarily industrial video cameras, including closed-circuit surveillance cameras, that use C-mount lenses.
Bayonet Mounts
Most professional video cameras use some type of bayonet mount. It's easier to use than the C-mount, because you can remove the lens without going through many rotations.
B4 Lens Mounts
Professional video cameras with a 2/3-inch or 1/2-inch chip (imaging device) commonly use a B4 lens mount.
35mm Lens Mounts
The primary consumer camcorder that uses interchangeable lenses is the Canon XL type. It uses a bayonet mounting system that accepts Canon's extensive array of 35mm still camera lenses. Another manufacturer makes a video camera adapter for lenses designed for Nikon cameras. The HDTV video cameras that look like 35mm still cameras can use similar adapters. We'll cover these in a future module.

Three Categories of Video Camera Lenses
We can classify the lenses used on video cameras into three categories:
• Studio/field lenses are completely enclosed in a metal housing that includes the focus and zoom motors, as well as sensors for the external controls.
• ENG/EFP camera lenses are lightweight and have the controls mounted on the lens. They also feature a macro, or extreme close-up mode, and often a 2X focal length extender that doubles the effective focal length at all zoom settings.

• Electronic cinematography lenses are available in zoom or prime and are designed to accept film camera accessories. They typically have large focus, iris, and zoom scales and incorporate both motorized and manual controls.

Rather than f-stops, the iris settings on these lenses are often calibrated in the similar, but somewhat more accurate, T-stops.

T-stops are based on the actual light transmission of the lens at various openings and not simply on the iris opening diameter formula. Because different lenses vary in light transmission -- even at the same f-stop -- T-stop settings are more consistently accurate when used with different lenses. (The "T" in T-stop is capitalized, but not the "f" in f-stop.)
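A short Python sketch of the relationship, using illustrative numbers (the 80 percent transmission figure is an assumption for the example, not taken from any particular lens):

import math

def t_stop(f_stop, transmittance):
    # T-stop = f-stop corrected for the fraction of light the lens actually passes.
    return f_stop / math.sqrt(transmittance)

# A lens set to f/2.8 that passes only 80% of the light behaves like roughly T3.1.
print(round(t_stop(2.8, 0.80), 1))   # 3.1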

In the next module we'll explore the fundamentals of color.
________________________________________


Module 15
Updated: 03/29/2010




Principles of
Television Color

Knowledge of the physics of color will add to the effectiveness of your work and help eliminate production problems. In fact, it will help you with everything from white-balancing a video camera to color-coordinating your wardrobe.
First, note from the illustration below that visible light represents only a small portion of the electromagnetic spectrum.
This is a spectrum of energy that starts with low frequency radio waves, moves through VHF-TV, FM radio, UHF-TV (which now includes the new digital TV band of frequencies), all the way through x-rays.

The visible light portion of the electromagnetic spectrum consists of all the colors of the rainbow (as shown in the enlarged segment above), which combine to produce white light.
The fact that white light consists of all colors of light added together can be demonstrated with the help of a prism.
If you project white light through a prism, as illustrated below, the light will be expanded to show the individual color components within the light.

The opposite is also true: if you add all of the basic colors of light together, you can create white light. Keeping these concepts in mind gives you the key to the additive color television process.
Before we get further into the additive color process -- a process that's basic to color television -- we need to take a look at a process that's probably better understood: the subtractive color process.

Subtractive Color
The color of an object is determined by the colors of light it absorbs and the colors of light it reflects.
When white light falls on a red object, the object appears red because its surface subtracts (absorbs) all colors of light except red.
The light that is absorbed (subtracted) is transformed into heat. This explains why a black object, which absorbs all of the colors of light hitting it, gets much hotter in sunlight than a white object that reflects all colors.
When the primary subtractive colors of cyan, yellow and magenta pigments are mixed together, the result is black -- or, because of impurities in the pigments, a dark shade of something resembling mud.
To solve the "mud" problem, sophisticated color printing processes use CMYK, with "K" standing for black.
With either process, essentially all color is absorbed where the inks or pigments overlap, as seen in this illustration.
Also note what happens when you mix pairs of the primary subtractive pigments (yellow, cyan, and magenta). You can see that yellow and cyan produce green, magenta and cyan produce blue, etc.
If you take a few minutes to drag around the colored squares in this additive and subtractive color demonstration you can see in a very clear way how the primary colors interact.

________________________________________
When a colored filter or gel is placed over a camera lens or light, the same type of color subtraction takes place.
For example, a pure red filter placed over a camera lens will absorb all colors of light except red.
Many people erroneously assume that the red filter simply "turns all of the light red," which, as you can see, is not the case.

Additive Color
Thus far we have been talking about the subtractive color process -- the effect of mixing paints or pigments that in various ways absorb or subtract colors of light.
When colored lights are mixed (added) together, the result is additive rather than subtractive. Thus, when the additive primaries (red, green and blue light) are mixed together, the result is white.
This can easily be demonstrated with three slide projectors.
Let's assume that a colored filter is placed over each of the three projector lenses -- one red, one green, and one blue.
When all three primary colors overlap (are added together) on a white screen, the result is white light. Note in this illustration that the overlap of two primary colors (for example, red and green) creates a secondary color (in this case, yellow). Again, you can clearly see this in the additive color part of the interactive color demonstration.
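If you prefer to see additive mixing as numbers, here is a small Python sketch that treats each colored light as a red-green-blue triple on the usual 0-255 scale (the helper function is purely illustrative):

def add_light(*colors):
    # Additively mix colored lights given as (R, G, B) values from 0 to 255.
    return tuple(min(255, sum(channel)) for channel in zip(*colors))

RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)

print(add_light(RED, GREEN))        # (255, 255, 0)   -> yellow
print(add_light(RED, BLUE))         # (255, 0, 255)   -> magenta
print(add_light(GREEN, BLUE))       # (0, 255, 255)   -> cyan
print(add_light(RED, GREEN, BLUE))  # (255, 255, 255) -> white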
The standard color wheel is the key to understanding many issues in color television.
Red, green and blue are TV's primary colors, and yellow, magenta, and cyan are considered secondary colors. If you take the time to memorize the color wheel on the left, you will find it useful in many areas -- not just TV.
If any two colors exactly opposite each other on the color wheel are mixed, the result is white. Note that instead of canceling each other as they did with subtractive colors, these complementary colors combine for an additive effect. (One definition of complementary is "to make whole.")
Objects with colors that are opposite each other on the color wheel tend to exaggerate each other when seen together. For example, blue objects will look 'bluer' when placed next to yellow objects and reds will look 'redder' when placed next to cyan.


It may be obvious at this point that by combining the proper mixture of red, green and blue light, any color of the rainbow can be produced. Therefore, in color television only three colors (red, green and blue) are needed to produce a full range of colors in a color TV picture.
In essence, the color TV process is based on the process of separating (in the camera) and then combining (in a TV set) different proportions of red, green and blue.
Although this explanation has long sufficed for a basic understanding of the process, technically, things go beyond this. For a far more in-depth explanation, click here.

Simultaneous Contrast
Question: Which of the small rectangles in the center of these illustrations is the lighter shade of blue?
Answer: They are exactly the same. It's the brightness of the surrounding color that can make the square on the left appear lighter.
According to the concept of simultaneous contrast the way we perceive the brightness of an object depends on its background.
In television production this concept can be especially important in commercials for wearing apparel where certain colors and shades are critical to harmonizing accessories, or where an advertiser wants to promote subtle colors that are "in vogue."
Not understanding simultaneous contrast can lead to some unpleasant surprises. One TV director was doing a food commercial for tuna fish and, unfortunately, saw fit to put the tuna on a magenta-colored plate. This made the tuna fish look green -- not exactly an appetizing appearance for this kind of product.
Clearly, a little knowledge can save you problems and embarrassment.

Three-Chip Video Cameras
Let's use our knowledge of color to understand how a three-chip video camera works. (You will recall that we covered chips and CCDs in Module 8.)
The full-color image "seen" by a professional video camera goes through a beam-splitter (on the right half of the drawing) that separates the full-color picture into its red, green and blue components.
Note, for example, that all red light in a color scene is split off by a color-selecting mirror and directed to one of the three ▲CCDs.
In the same way, all of the blue light in the original picture is directed to the blue receptor. The green light is allowed to pass through to the CCD at the back of the prism block.
Thus, what was a full-color picture has now been separated into the percentages of red, green and blue light contained in the original scene.
All CCD and other camera imaging devices are basically "color blind"; they just respond to the amount of light focused on their surfaces.
The red, green and blue information from a full-color picture is shown below. When the appropriate color is added to each of the three "black and white" images (the first three illustrations), and combined, you get the full-color result shown in the final picture.

Red Channel
Blue Channel

Green Channel
The Three Colors Combined

Note that the red laser light is detected primarily by the red channel and that the blue-green laser housing (bottom-right of each picture) is detected primarily by the blue and green color channels.
Few colors are "pure"; most contain some white light. Thus, they are normally detected to some degree by more than one color channel. Note that the white shirt is detected equally by all three color channels.
This takes care of color; but how does a color camera detect pure black or white?
You can probably guess.
Since white is the presence of all colors, the camera's chips or imaging devices respond to pure white as the simultaneous presence of all three colors. Black is simply the absence of all three colors.

One-Chip Color Cameras
Although professional cameras use three chips, it's possible (and less expensive) to use one chip with an overlay of millions of tiny colored filters as shown here.
A greatly enlarged section of ▲another type of mosaic color filter is shown in this pop-up illustration.
After the color image from the lens strikes the points on these mosaic filters that respond to red, blue, and green light, an electronic circuit separates the three colors and sends them on their way as three separate electronic signals.
It is also possible to make a mosaic camera filter that responds to only two colors of light, with the third color added through extrapolation. (See "A Little Simple Algebra" below.)
Although mosaic filters make possible smaller and less expensive camcorders, this approach sacrifices resolution (image sharpness) and light sensitivity.

How the Eye Sees Color
You might assume from the above that in color television "white" would simply result from an equal mix of the primary colors.
Unfortunately, it's not that simple. (Alas, few things in life are as simple as they first seem!) For one thing, the human eye does not see all colors with equal brightness.
The eye is much more sensitive to yellowish-green light than to either blue or red light. Due to the greater sensitivity of the eye to the green-to-orange portion of the color spectrum, an equal percentage mix of red, green and blue colored light will not appear white.
Because of this, and because of the limitations and variations of the color phosphors used in different TV screens, the actual color mix used in color television to produce white ends up being an unequal mix of red, blue and green.

A Little Simple Algebra
In the equation A + B + C = 100, if the values of "A" and "B" are known, it's easy to figure out "C."
In the same way, it's not always necessary to know the values of all three primary colors, only two. Thus, color cameras can be made that have only two chips.
Assuming that for a particular TV system 59 percent green, 30 percent red, and 11 percent blue equals white (a common mix), the camera would only need to "know" two of the three colors when it was white-balanced.
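Here's that bit of algebra as a short Python sketch, using the illustrative 59/30/11 white mix from the paragraph above (the function is hypothetical and greatly simplified compared to what a real camera does):

WHITE_MIX = {"green": 0.59, "red": 0.30, "blue": 0.11}   # fractions that add to 1.0

def missing_channel(known):
    # Given two measured channel fractions of a white reference, infer the third.
    (name,) = set(WHITE_MIX) - set(known)
    return name, round(1.0 - sum(known.values()), 2)

# A two-chip camera measuring only green and red while white balancing:
print(missing_channel({"green": 0.59, "red": 0.30}))   # ('blue', 0.11)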

Component and Composite Video
Although using all three colors throughout the TV process may be the most consistently accurate way of reproducing color, the requirement of three separate color signals at every stage of the process can be technically demanding.
Using even more complex math, it's possible to reduce the three color signals into a single signal. When the three color signals are combined into one, we refer to it as composite video. (Definitions for composite include "merged" or "combined.")
Unfortunately, in the process of combining the three signals subtle interactions take place between the colors (color bleeding); plus, there is a general loss of video quality.
Although these problems may not be noticeable to the untrained eye, they become progressively worse (and noticeable) when the video is copied. Since this is what happens in the editing process, editing composite analog video can be a problem.
One solution is to keep the color signals separate and to use high-quality (and expensive) digital video equipment. Unfortunately, this can put camera and recording equipment out of the price range of most consumers. So, we are looking at a trade-off between price and quality. We'll revisit this issue when we discuss the various video recording processes, starting in Module 46.
________________________________________
________________________________________

________________________________________

Module 16-1
Updated: 04/09/2010

Part I


Maintaining Video Quality

Today's video equipment includes circuitry that can automatically adjust audio and video levels. However, manufacturers program these automatic controls to maintain only the most basic technical parameters, which often is not consistent with the best possible results.
Thus, to consider yourself a pro (a professional who produces consistently good results), it's essential to understand the elements in this module.
In monitoring and controlling picture quality, two pieces of equipment are necessary.
• A waveform monitor, which graphically displays and measures the brightness or luma level of the video. (In video, the more accurate term luma is now replacing luminance.)
• A vectorscope, which measures relative color (chroma) information.
Although these are generally separate instruments, in some cases you can display both on a single TV monitor or the screen of a computer-based editing system.
In this module, we'll cover some of the most basic elements of the waveform monitor and vectorscope (things every professional videographer should know about), and we'll stay away from their technical dimensions.
The Waveform Monitor
In critical professional video work we use waveform monitors as scenes are taped. Note the waveform display on the right.
During editing, the device is used to monitor and maintain video quality and scene-to-scene consistency.
By looping the video signal from a camera through a waveform monitor, the resulting electronic graph shows critical elements of the camera's video.

What you see will tell you a lot about video quality and provide information needed to fix many problems.
Let's see how this works.
The photograph on the left contains tonal values from full black to bright white. It gives a normal waveform pattern such as the one shown above.
The bottom of the waveform scale (marked "black level" above) represents the dark areas of the picture, and the white areas appear at the top (marked "white level").
Based on units established by the Institute of Radio Engineers, a scale alongside the waveform monitor starts at around -40 IRE units (at the very bottom) and goes to about +120 (at the very top).
Ideally, video levels for an average picture should be somewhat evenly distributed between 7.5 (where "black" should start) and 100 (where "white" should end) -- as illustrated in the waveform above.
A grayscale pattern picked up by a camera should distinctly reproduce the various divisions across the scale.
Ideally, with a properly adjusted computer monitor (set to 256 colors or more), you should see ▲16 divisions in this gray scale.
Waveform monitors, together with light meters (which we'll discuss in the modules on lighting), are your primary tools in ensuring proper camera exposure and good video quality. In this regard, it's helpful to know that one f-stop in a light meter translates into 20 IRE units on a waveform monitor.
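Treating that one-stop-per-20-IRE figure as a rule of thumb, a quick Python sketch shows how you might estimate an iris correction (the function and the example numbers are illustrative only):

IRE_PER_STOP = 20   # rule of thumb: one f-stop is roughly 20 IRE units

def iris_correction(peak_ire, target_ire=100):
    # Approximate f-stops to open (+) or close (-) the iris to reach the target peak.
    return (target_ire - peak_ire) / IRE_PER_STOP

print(iris_correction(60))    #  2.0 -> open up about two stops
print(iris_correction(120))   # -1.0 -> close down about one stop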
In TV, as in life, things are not always the way they are supposed to be; so let's look at some problem areas.
________________________________________
Camera underexposure (insufficient light on the target) results in low video levels (a dark picture). On a waveform monitor this is immediately obvious, because the peak video level may come up to only 50 or so on the waveform monitor scale.
You can normally fix this by opening the lens iris one or more f-stops.
If you initially leave the video at a low level and then raise or boost it later in the video recording or transmission process, the resulting picture may look grainy because of video noise, as shown here (in a somewhat exaggerated form). This is why you need to make sure things are right to start with.

________________________________________

If the target of the camera is significantly overexposed (too much light), the waveform monitor will show a video signal significantly above 100. Left uncorrected, this will cause significant distortion in the video picture.
Under these conditions, some camera circuits clip off the white level as shown above. Note that detail has been lost in the white areas.
On a waveform monitor, the result would be similar to what you see on the right. (Two identical fields typically display on a waveform monitor, but to simplify things we'll show just one in these drawings.) On the gray scale below, you can also see the loss of detail in the white areas. You can fix this problem by bringing down the video level, generally by closing down the iris -- moving it to a higher f-stop number.

________________________________________

Another problem is compressed blacks.
In this case, the resulting video will be dark, without any detail in the dark areas.
A gray scale would show a loss of separation between the divisions on the right side of the scale, as shown below.


You can fix this problem by raising the black level setting of the video equipment, opening up the camera iris, or a combination of both.
In Part II of this module, we'll look at the issue that causes the most problems for video quality: brightness.
________________________________________


Module 16-2
Updated: 03/29/2010
Part II

Maintaining
Video Quality

A final concern, and a frequent cause of poor video quality, is subject matter that exceeds the brightness range capabilities of camcorders.
A video camera is only capable of reproducing a limited range of brightness -- something you have to constantly keep in mind when bright lights, windows, white walls, etc., appear in a scene.
A range in brightness that exceeds ▲about 30:1 (with some major picture elements 30 times brighter than others) will cause problems.
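Since each f-stop represents a doubling of light, a quick calculation puts that 30:1 ratio in photographic terms (a rough sketch, assuming the 30:1 figure above):

import math

contrast_ratio = 30
print(round(math.log2(contrast_ratio), 1))   # 4.9 -- roughly a five-stop brightness range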
Rather than "clip off" the offending areas with a resulting loss of detail in the light areas of the picture (as shown earlier), many video circuits will automatically bring down the entire video level so that it will all fit into the standard (limited) range.

Note in the waveform above that all the video is within the 7.5 to 100 range, but that "spikes" (caused by light reflections from the waterfall) take up more than half of the range. As a result, the rest of the video ends up in a small (and rather restricted) area.
In the photo on the right above the middle-to-dark range of the video is compressed into a small area. The result: a dark picture. If a person were standing in this picture, their skin tones would be much darker than normal.
Now let's compare the resulting gray scales. On the left is a gray scale with a normal range; below is one that illustrates the problem discussed above.
The problem of exceeding the brightness range of the video system (and a resulting compression of the gray scale) is one that you often see in amateur videos.
Note that in the photo on the left the brightness range of this scene greatly exceeds the capability of the video system. This is caused primarily by the bright sky in the background.
Relying on the camera's automatic exposure setting results in a complete loss of detail in the horse.
Although this example represents extremely difficult subject matter -- a situation that would be best avoided, if at all possible -- note on the right how the picture can be significantly improved if the camera's iris is manually opened up three or more f-stops. (Of course detail in the sky disappears, but we'll assume you are more interested in the primary object in the scene, the horse.)
Can you have it both ways? Possibly, at least with some professional cameras.
A knowledgeable engineer may be able to adjust the brightness response curve of the camera to bring the bright areas into the basic picture. However, doing so will distort the gray scale, which may objectionably alter the rendering of the other subject matter.
As we will see when we look at the subject of lighting, adding light to the dark areas or darkening the bright areas represents a better way to solve this problem.
Most automatic cameras, like the ones that gave us the "black" horse above with no detail, give you the option of turning off automatic exposure and adjusting the iris manually.
If you can't do that, remember that the camera's backlight control will provide you with some control in scenes that have bright subject matter such as windows or bright backgrounds. Keep in mind that even someone wearing a white or yellow shirt will often cause problems.

________________________________________
Before we leave the discussion of the waveform monitor, we need to mention a few other things.
First is the information displayed below the black level (the 7.5 ▲IRE point) on the waveform monitor.
In this "blacker-than-black" area there are some important timing signals referred to as sync, a term that is short for synchronizing pulses. These are the high-speed timing pulses that keep all video equipment "in lock step."
These pulses dictate the precise point where the electronic beam starts and stops while scanning each line, field, and frame. In fact, without these timing pulses, electronic chaos would instantly break out between pieces of video equipment -- you would have no picture at all.
A sync generator is used to supply a common timing pulse for all equipment that must work in unity within a production facility.
On a waveform monitor the bottom line in the sync should be at -40 (the very bottom of the waveform scale) and the top of the sync signal should go up to the baseline, or the 0 point on the scale.
Too much sync and the black level of the video will be pushed too high (graying out the picture); too little and the black level will cut into the sync, and the picture will roll and break up.
In monitoring video levels we are primarily interested in the range of luminance (visible picture information) that extends from 7.5 (the darkest black) to 100 (maximum white) on a waveform monitor.
If the video (white level) significantly exceeds 100, there will be a loss of detail in the lighter area of the picture. Faces in particular will look washed out. A signal well beyond 100 will also result in technical problems.
Conversely, skin tones that are in the lowest part of the waveform range will be so dark as to have no detail. Properly exposed faces generally fall in the +50 to +80 range.
To keep this from getting too technical, we've sidestepped an issue here that has implications for TV graphics. This information fills in a bit of that gap.
Now we get to the second quality monitoring device.
The Vectorscope
The eye sees color very subjectively, so when it comes to making accurate judgments about color, our eyes can be easily fooled.
Thus, we need a reliable way of judging the accuracy of color, as well as for setting up our equipment to accurately reproduce colors.
The device that does this is called a vectorscope and it's commonly seen in TV control rooms and as part of computer editing systems.
We'll skip the technical stuff involved in this, and just concentrate on six little boxes marked R, G, B, Mg, Cy and Yl on the face of the vectorscope.
As you might suspect, these stand for red, green, blue, magenta, cyan and yellow, the primary and secondary colors used in color TV.
When a camera or any piece of video equipment is reproducing color bars (shown below on the right), the primary (red, green and blue) and secondary (magenta, cyan and yellow) colors should appear in their marked boxes on a vectorscope.
Without a vectorscope you can often balance the colors fairly accurately by simply making sure the yellow bar is really yellow. In fact, by adjusting yellow correctly, the other colors will often move into place.
But "often" isn't "always."
If primary or secondary color bars wander significantly out of their assigned vectorscope areas, there are problems. Sometimes things are easy to fix (like a simple twist of the phase adjustment or hue knob); sometimes they're not, and you will have to call in an engineer.
In addition to hues (colors), the vectorscope also shows the amplitude or saturation (purity) of each color. Color saturation, which is measured in percentages, is indicated by how far out from the center of the circle the color is displayed. The further out, the more saturated (pure) the color is.
The SMPTE (Society of Motion Picture and Television Engineers) test pattern above is for television in the 4:3 aspect ratio. The SMPTE test pattern for the 16:9 HDTV television system is shown below.

Since professional nonlinear editing systems have both vectorscopes and waveform monitor screens, you can keep a constant eye on quality and make scene-to-scene adjustments as necessary.
________________________________________
Of course all of these quality measures have to be displayed accurately on a TV monitor in order to be verified, so it's important to be able to trust your video monitor.
This link describes the eight steps involved in setting up a video monitor to display accurate color and contrast.
________________________________________
The zone system that many Directors of Photography and professional still photographers use to ensure accurate tonal renditions can also be applied to video production. This is discussed here.
________________________________________
Module 17-1
Updated: 05/24/2010



Part I

Cameras:
The Basics


With all that has gone before as background, we can now turn to the first in a series of modules on the camera and its associated equipment.

Camera Imaging Devices
The very heart of a video camera is its imaging device. The first TV cameras used rather large tubes, as shown on the left.
Some early color cameras had four of these tubes (for red, blue, green, and luminance), which explains why early color TV cameras weighed more than 200 kilograms (500 pounds) and had to be hauled around in trucks.

An example of one of these cameras, which was used in broadcasting in the 1950s, is shown next to the woman on the right. Note how it compares to one of the latest pocket-sized cameras (complete with a video recorder) shown in the inset at the bottom of the photo.
The latter camera, and in fact most of today's video cameras, use an imaging chip, such as the CCD shown on the left. Many cameras have now moved to a CMOS chip, but at this point in the discussion the distinction is not that important.
The most common chip sizes are 1/2 inch and 2/3 inch (the size of the little box shown near the center of the CCD chip above).
The 1/2 inch chip has a diagonal surface distance of 8 mm (less than a third of an inch), while the 2/3 inch has a diagonal of 11 mm (less than a half of an inch).

Video Resolution
Video resolution is a measure of the ability of a video camera to reproduce fine detail.
The higher the resolution -- the more distinct lines in a given space that the camera can discern -- the sharper the picture will look. We'll take a closer look at how this is measured in a moment.
The standard NTSC broadcast TV system can potentially produce a picture resolution equal to about 300 lines of horizontal resolution. (This is after it goes through the broadcast process. What you see in a TV control room is generally much higher.)
CATV, DVD, HDTV, and digital satellite TV transmissions go beyond 400 lines of resolution. Note that the number of lines of resolution (sharpness) is different than the total number of horizontal scanning lines, which is 525 or 625 in SDTV.
Three- to four-hundred lines of resolution equal what viewers with 20/20 vision can see when they watch a TV screen at a normal viewing distance.
"Normal" in this case translates into a viewing distance of about eight times the height of the TV picture. So, if the TV screen were 40 cm (16 inches) high, a so-called 25-inch (64-centimeter) picture, the normal viewing distance would be about 3 meters (10 feet).
HDTV/DTV, with its significantly higher resolution, makes possible both larger screens and comparatively close viewing distances.
Although most SDTV sets in homes are capable of only 300 or so lines of resolution, TV cameras are capable of much higher resolution -- up to 1,000 lines or more.
And so this question arises: Why bother with high resolution in cameras with their added costs when the home TV set can't reproduce this level of sharpness?
Answer: As in most aspects of TV production, the better quality you can start out with the better the quality will be for the TV viewer -- even with all the broadcast-related losses.

Determining Resolution
Charts that contain squares or wedges of lines on a light background can indicate the limits of sharpness.
Within a particular area of one of these resolution charts there are lines that converge. The illustration on the left was taken from the full test pattern image shown above. (Note the red rectangle in the full test pattern above.)
Numbers such as 200, 300, etc., appear on the chart next to the corresponding line densities. By exactly filling the camera viewfinder with the resolution chart and observing the point on the chart where the lines appear to blur together and lose definition, we can (in a general way) establish the limits of resolution.
High-quality, standard-definition NTSC cameras can resolve about 900 lines; HDTV/DTV cameras well over 1,000 -- well off the chart shown here.

Color Resolution
The resolution we've been discussing is based on the sharpness of the black and white (luma or luminance) component of the TV image.
It was discovered early in the experiments with color TV that the human eye perceives detail primarily in terms of differences in brightness and not in terms of color (chroma) information.
When NTSC color television was developed an ingenious and highly complex system of adding a lower-resolution color signal to the existing black-and-white signal was devised. Using this system, color information can be added to the existing monochrome signal without having to greatly expand the information carrying capacity of the original black-and-white signal.

Minimum Light Levels for Cameras
Television cameras require a certain level of light (target exposure) to produce good-quality video. This light level is measured in lux or foot-candles. The latter unit is used in the United States and lux is used in other countries.
A foot-candle is a measure of light intensity from a candle at a distance of one foot (under very specific conditions). A lux is the metric counterpart: one lumen (a measure of light output) spread over one square meter.
Since we'll refer to both lux and foot-candles throughout these modules, you'll need to know that a foot-candle is equal to about 10 lux. (Actually it's 10.76, but 10 is generally close enough, and it's much easier to use in conversions.)
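A quick conversion sketch in Python, using both the rounded and the more precise figures given above:

LUX_PER_FOOT_CANDLE = 10.76   # close enough to 10 for most rough conversions

def to_lux(foot_candles, precise=False):
    return foot_candles * (LUX_PER_FOOT_CANDLE if precise else 10)

print(to_lux(75))                           # 750 -- the studio level mentioned below
print(round(to_lux(75, precise=True), 1))   # 807.0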
Most professional video cameras require a light level of at least 75 foot-candles (750 lux) to produce the best quality video. However, some will produce marginally acceptable video under a few lux of light.
With consumer-type camcorders you will find advertising literature claiming that a particular camera is capable of shooting pictures under less than one lux of light. (The light falling on a subject from a 60-watt light bulb 3 meters [10 feet] away is about 10 lux.)
However, if you have ever tried this with a consumer-type camera, you know that you can't expect much in the way of impressive video quality.
Although an EIA standard is in place in the United States to specify minimum quality standards for light levels, adherence to this standard is not mandatory. Since manufacturers know that consumers want cameras that shoot under low light levels, they are reluctant to use the EIA standard and risk looking inferior to a competitor who is not adhering to it.
Suffice it to say, if you are in the market for a camera and you don't see the EIA standard specified, you need to check out any low-light level claims. By just zooming in on the darkest corner of the room and observing details in the darkest areas, you can make a rough comparison of the light sensitivity of different cameras.
At low light levels the iris of a camera must be wide open (at the lowest f-stop number) to allow in the maximum amount of light. As the light level increases in a scene the iris of the lens must be stopped down (changed to a higher f-stop number) to maintain the same level of exposure on the camera target.
Under low light conditions video can quickly start to look dark with a complete loss of detail in the shadow areas. To help compensate, professional cameras have built-in, multi-position video gain switches that can amplify the video signal in steps, typically from 3 up to about 28 decibels (dB).
But, the greater the video gain boost, the greater the loss in picture quality. Specifically, video noise increases and color clarity diminishes.
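If you're curious what those dB steps mean in practice, here's a small sketch (it assumes the usual 20-times-log10 convention for signal amplitude; the specific gain steps are just examples):

def gain_factor(db):
    # Voltage gain factor for a given dB boost.
    return 10 ** (db / 20)

for db in (3, 6, 12, 28):
    print(db, "dB ->", round(gain_factor(db), 1), "x boost")
# 6 dB roughly doubles the signal -- about one extra f-stop of apparent sensitivity,
# traded for added video noise.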
Still and motion picture cameramen who are used to working with EI/ISO (film speed) exposure indexes may want to determine the light sensitivity of their video cameras. This information is often not available from camera manufacturers, but can be determined in this three-step process.

Night Vision Modules
For situations that require video under very low light, night vision modules are available that use electronic light multipliers to amplify the light going through a lens. (Note photo on right.)
The most refined of these can produce clear video at night using only the light from stars (a light level of about 1/100,000 lux).
Under conditions of "no light" most of these modules emit their own invisible infrared illumination, which is then translated into a visible image.
In recent years camera operators covering news have found night vision devices useful in covering nighttime stories where any type of artificial lighting would call attention to the camera and adversely affect the story being covered.
At this point we'll take up a new topic, and one that will be continued into the next module.

Camera Mounts and
Hand-Held Camera Shots
Although a tripod may be a hassle to carry and set up, the results can be worth the effort -- especially when displayed on HDTV screens where camera movement on static scenes can make an audience a bit "seasick."
Hand-holding cameras for a period of time can get tiring. After trying to hold a camera steady for some time, the inevitable fatigue translates into progressively less steady shots.
In addition, camcorders with CMOS chips (which is getting to be most of them) can exhibit an unsettling electronic "jello" image effect when cameras are hand-held.
The traditional exceptions to using a tripod are in news and sports where you must be mobile enough to follow moving subjects, documentary style production where shots are brief and rapid, and subjective camera shots that simulate what a moving subject is seeing.
That having been said, in recent years some commercials and many episodic TV dramatic productions have been routinely shot with handheld cameras. For one thing, it saves production setup time, which means money.
It is also claimed by some that "tertiary movement" (camera movement) holds viewer attention. This is especially true when the subject matter, itself, is relatively static.
Today, not only do you routinely see hand-held camera shots (held by professional camera operators) but frequent tracking, jib (crane) and zoom shots.
The award-winning film, Traffic, released in 2001, had handheld shots designed to impart a documentary feel to some of the scenes. In the action scenes in the 2007 film, The Bourne Ultimatum, hand-held shots were used to convey periods of total frenzy. Although The Bourne Ultimatum was an excellent film, reviewers widely complained about the long, frenzied, hand-held scenes.
In the hands of a professional director of photography this effect can work; however, when less experienced videographers attempt to handhold a camera (especially while zooming, panning and tilting) the effect can look amateurish and even make viewers a bit ▲nauseous.
If you examine most exemplary mainstream films and video productions, you will almost always find solid, steady shots -- the kind that are only possible with a solid camera support.

Camera Pan Heads
On most tripods the pan and tilt head (which attaches the camera to the tripod) is not meant to be used for smooth panning and tilting while shooting -- only to reposition and lock the camera into position between takes.
And, this may be just as well, given the fact that a cut from one scene to another is faster and generally better than panning, tilting or zooming to new subject matter.
Even so, pans and tilts are commonly seen -- especially for following action, for revealing the relationship between objects in a scene, etc. Therefore, many tripods have heads designed to smooth out pan and tilt movements.
There are many types, but the most-used type is the fluid head shown here. It provides an adjustable resistance to pans and tilts -- just enough to smooth out the process.

Bean Bags
A simple camera "mount" that works in many situations is the beanbag. The photo on the left shows one on the doorframe of a car.
The "beans" inside are small round soft plastic balls that can assume the shape of the surface the bag sits on. The top of the bag can adjust to the bottom of a camcorder, providing a degree of camera stability.
When used on accommodating surfaces, bean bags can represent a quick approach to getting shots.

Wireless Camera Modules
Although camera operators doing "live" broadcasts from the field used to have to be "hard wired" to a production truck, today's cameras can be equipped with an RF (radio frequency) transmitter.
The camera signal is transmitted to the production truck where it appears on a monitor just like any other video source.
One is shown here on the back of a camcorder.
These units are commonly used in award programs, allowing camera operators to roam freely through the aisles to get shots of audience members without the problem of trailing and hazardous camera cables.
________________________________________

(Click on "more" for the second half of this section.)
________________________________________
More

Module 17-2
Updated: 05/22/2010



Part II

Cameras:
The Basics




Basic Camera Moves
In Module 6, we introduced the basic camera moves. As you'll recall, we refer to moving (rolling) the entire camera toward or away from the subject as a dolly ("dolly in" for a close shot or "dolly back" for a wide-shot).
A lateral move (rolling the camera to the left or right on the pedestal) is trucking, as in "truck left" or "truck right."
And, finally, you'll recall that a zoom optically achieves somewhat the same effect as a dolly, but without moving the entire camera.


The photo on the right above shows a typical rocker switch (next to a camera lens) that controls the direction and speed of a zoom.

Studio Camera Mounts
In the studio the entire camera assembly is mounted on a pedestal or dolly (shown here) so that the operator can smoothly roll it around on the floor. The three wheels in the base of the pedestal can be turned using the steering ring.
The camera is directly attached to a pan head, which enables the pan and tilt (horizontal and vertical) camera movements to be adjusted.
Controls on the pan head allow the camera either to move freely, to be locked into position, or to offer controlled resistance to facilitate smooth pans and tilts.
Although the camera may weigh more than 100 pounds (45kg), internal counter-weights allow an operator to easily raise and lower the camera when the telescoping column in the center is unlocked.
The photo above shows some of the other key parts of a manually controlled studio camera pedestal. Most TV production facilities now use robotic cameras that are remotely controlled from the TV control room. (See below.)



A simpler camera support is the collapsible dolly shown on the left. This type of mount is used for remote productions and in some small studios.
Unlike the elaborate studio pedestal that can be smoothly rolled across a studio floor (even while the camera is on the air), the wheels on small dollies are intended to move the camera from place to place between shots.


Robotic Camera Mounts
Camera operators have disappeared at many, if not most, production facilities -- replaced by remotely controlled, robotic camera systems. (Note photo.)
From the TV control room, technicians can adjust the pan, tilt, zoom and focus, and even remotely dolly and truck these cameras around the studio.
Although robotic cameras are not desirable for unpredictable or fast-moving subject matter, for programs such as newscasts and interviews (where operating cameras can get pretty boring anyway) they significantly reduce production expenses.


Innovative Camera Mounts

The Segway HT Platform
The Segway platform (below on the left) can move over a smooth surface while automatically maintaining balance on its two wheels. To initiate a smooth, stabilized change of direction with one Segway model, the rider gently pulls the steering handle forward, back, left, or right.



The "Follow-Me" Camera Mount
As news departments strive to reduce expenses, station managers look for ways to cut down on one of the most expensive items in their budgets -- personnel. The little invention shown above on the right can (with varying degrees of success) eliminate a camera operator.
A reporter or on-camera person wears a belt-pack transmitter, and the receivers on the extended arms on either side of the camera can pan the camera to keep them in the frame.

Remember, throughout these modules we're introducing you to equipment that you could encounter on a job or internship, and not the kind of equipment that's typical for schools and training facilities. (See Footnote)


Camera Jibs
A device that's come into wide use in the last decade is the camera jib, essentially a long, highly maneuverable boom or crane-like device with a mounted camera at the end. You frequently see them in action swinging overhead at concerts and major events.



The operator and controls for the jib are shown above on the right. Note the two video monitors (one for camera output and one for program video) and the heavy weights that help balance the weight of the camera and crane.
A jib allows sweeping camera movements from ground level to nine meters (thirty feet) or more in the air. This is another concept we'll revisit in more detail later.
For more mobile camera work outside the studio, handheld camera supports allow significant mobility while still offering fairly steady camera shots.
The most famous of these is the Steadicam® (shown on the right), which is used with both film and video cameras.
The camera is mounted on a flexible arm that uses a series of spring balances to hold its position. A camera operator can walk, run, and even dash up a flight of stairs and still get a reasonably steady shot.
In addition to being costly, these units are heavy and require an experienced operator.
For smaller cameras, such as the one shown below, Steadicam JR® and similar units can provide smooth camera moves at a fraction of the cost and weight. The separate viewfinder (at the bottom of the picture) allows the unit to be held away from the body, where it won't be inadvertently bumped.
With a bit of practice an operator can walk in front of or behind a moving subject without undue camera movement.
Walking around with a full cup of coffee in your hand is good practice for using one of these. When you can go up and down stairs without spilling the coffee, you'll probably do a good job with one of the smaller Steadicam™-type units.

Camera Tracks and "Copters"
For elaborate productions, installing camera tracks allows the camera to more smoothly follow talent and move through a scene. Although a camera operator can ride with the camera (as shown below), some cameras are remotely controlled.


Looking like a giant mosquito with a TV camera in its nose, the miniature (four-foot long) helicopter shown above can provide aerial views of various sporting events. A ground observer remotely controls the entire unit, and the unit's omnidirectional microwave relays the video to the production van.
We'll look at specific cameras and their features in later modules, but before we do we need to look at some key elements in camera operations. We'll start with color balancing cameras, a topic in the next module.
________________________________________
* A student who was about halfway through my course told me that on her first day at her internship, she and a group of interns from different area schools confronted some equipment she had learned about in my class. She was the only one in the group who understood it. She was offered a job right then. And, no, I'm not making this up!
________________________________________


Module 18
Updated: 05/22/2010




Color Balancing
Cameras


Except possibly for Martians (who at this point are of unknown complexion), having green skin tones signals a technical problem.
Consumer-type cameras typically have automatic white balance circuitry that continuously monitors the video and attempts to keep colors true.
Although a difference exists between white balancing and color balancing, we often use the terms interchangeably. Technically, you white balance on a white card, and then you may need to make subtle color balance changes to match cameras, especially on skin tones.
In white balancing, a sensor on or within the camera averages the light in the scene and automatically adjusts the camera's internal color circuitry to zero out any generalized color bias. The assumption is that when all colors and light sources in the scene are averaged, the result will be a neutral (light) gray or white (i.e., all colors will "zero out".)
As with all automatic circuitry, however, automatic white balance is based on certain assumptions that may or may not be valid.
A problem arises if there are strong, dominant colors in the scene or (with some cameras) if different light sources illuminate the camera and subject matter.
Automatic white balance circuitry will work reasonably well under the proper conditions, and for the typical videographer with simple equipment, this is certainly better than nothing.
But in the professional realm where consistent color balance is expected, automatic circuitry cannot be relied on to always produce accurate color. In this case, no substitute exists for a knowledgeable camera operator equipped with a white card or piece of white paper. (This has to be the cheapest technical aid in the entire video field!)

White Balancing On a White Card
Since we know from our earlier discussions that red, green, and blue must be present in certain proportions to create white, it's relatively easy to white balance a professional camera to produce accurate color.
With the camera zoomed in full frame on a pure white card, the operator pushes a white balance button and the camera's chroma channels will automatically adjust to produce pure white. The camera in effect says, "Okay, if you say that's white, I'll balance my electronics so that it will be white."
Focus is not critical, but you must place the card full frame within the dominant light source of the scene.
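To make the idea concrete, here is a minimal sketch in Python of what white balancing on a card amounts to mathematically. It is an illustration only, not any camera's actual firmware: the camera averages the red, green, and blue values it reads off the card and scales its color channels so the three come out equal.

def white_balance_gains(card_rgb):
    # Given the average (R, G, B) the camera reads off the white card,
    # return per-channel gains that make the card render as neutral white.
    # Hypothetical illustration of the math, not a real camera's circuitry.
    r, g, b = card_rgb
    reference = g  # green is commonly used as the reference channel
    return (reference / r, 1.0, reference / b)

# A card shot under warm (reddish) light might read strong in red:
gains = white_balance_gains((240, 200, 160))
print(gains)  # red gain below 1.0, blue gain above 1.0 -- the cast is neutralized
# "Lying to the camera" with a blue card works the same way: the circuit
# pulls blue down and pushes red up, warming the whole scene.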
This illustration shows color balance that is too reddish, normal, and too blue (if your computer monitor correctly shows these differences).
When the dominant light source in a scene changes in any way, you must again white balance your camera.
Going from sunlight to shadow will necessitate white balancing the camera again, as will moving from outside to inside. When shooting outside, even the passing of a few hours will result in a slight color shift in illumination.
If you do not white or color balance your camera, you risk scene-to-scene color changes. This is especially noticeable with skin tones in multiple-camera productions.

Lying to Your Camera
You can also "lie to the camera" during the white balancing process to create interesting effects.
White balancing the camera on a blue card can create a warm red color bias in a scene; color balancing on a yellow card will create a blue effect (below).
In an effort to compensate for colors presented as "white," the camera's white balance circuitry will push the camera's color balance toward the complement (opposite) of the color on the card.
Note the different effect in these two photos.
Although an editor can electronically try to alter white balance in postproduction, starting out with proper color balance at the camera is always best. Otherwise, it may not be possible to perfectly match sequential scenes during editing.
Sometimes directors will want to skew color balance during production to create certain effects. For example, in the award-winning film Traffic, director Steven Soderbergh gave different locations specific color tones, suggesting different feelings. He gave scenes in Washington, D.C. cold blue tones, and made scenes in the San Diego area warm with gold overtones.
Often, we see commercials skewed strongly toward blue or yellow-gold. As we will note in an upcoming module on composition, colors can suggest moods.

Black Level and Black Balance
Professional video cameras also have black level and black balance adjustments. These are typically set by capping the lens (so that no light enters) and allowing automatic circuitry to appropriately balance the three colors for optimum black.

Color Balancing Multiple Cameras
Color balancing a single camera is relatively easy, especially since the editor can often fix minor problems in postproduction.
The problem comes when you have to match multiple cameras — either in the studio or in the field. If you don't get everything just right, you may see an annoying shift in color, brightness, contrast, or sharpness as you switch from one camera to another.
The camera's internal digital signal processor (DSP) controls camera setup adjustments.
Some studio and field cameras are designed to use a "smart card." About the size of a credit card, it records all the parameters of the first camera you (carefully) set up. Then, when you insert the card into successive cameras, it adjusts them to conform to the first camera's parameters.
Sometimes it's necessary to store these settings for use later or even to send them to another location where a different crew is doing segments for the same production.
You can check camera color match by focusing two cameras on the same subject and doing a split-screen -- putting the two images side-by-side on a single TV monitor, as shown here.
Assuming that the camera on the left is correct, then all other cameras (placed on the right) can be adjusted to match it. In this example note that the camera on the right has too much magenta.
To match cameras for skin tones (generally the critical part of a scene), you can use a mannequin, a large color photo, or a real, live person.
If the split screen approach isn't available, you can quickly switch from one camera to another while viewing the results on a single, high quality monitor.
As we've noted, if you are after a particular "film look," you can even duplicate different film stocks (types of motion picture film) by manipulating the video camera's gamma curve (gray scale response).
You can also create sophisticated film effects, such as fogging and push or pull processing (over- or underdeveloping the film). Unlike film, however, you can see the effect immediately.
Studio engineers use a central CCU (camera control unit) or DSP to adjust all studio cameras from a central location. (Note photo on the right.)
DSP or CCU adjustments include iris, which controls the video gain or brightness; pedestal or black level; the subcarrier phase or SC control, which is similar to the hue control on your TV; and the gamma curve or the relative response to the various tones from white to black.
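As a rough illustration of what a gamma adjustment does (a sketch only, not any particular camera's DSP), a gamma curve remaps the gray-scale response between black and white. The Python snippet below uses illustrative gamma values; the function name is hypothetical.

def apply_gamma(level, gamma):
    # Remap a normalized pixel level (0.0 = black, 1.0 = white) through a gamma curve.
    # gamma < 1 lifts the midtones; gamma > 1 darkens them.
    return level ** gamma

for level in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(level, round(apply_gamma(level, 0.45), 3), round(apply_gamma(level, 2.2), 3))
# Black and white stay put; only the tones in between shift, which is why
# gamma is described as the camera's gray-scale (tonal) response.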
You'll find a more in-depth look at the various camera settings here.

Color Reproduction Is Subjective
Even though you are apt to notice undesired color variations between cameras, overall, color perception is quite subjective. In fact, when it comes to judging color, the human eye can be easily fooled.
To explain part of this issue, we'll look at the two primary standards of illumination: sunlight and incandescent light.
Sunlight contains a roughly equal mixture of all colors of light. The color of light is measured in kelvins (K). On the Kelvin scale, the lower the color temperature, the redder the light; and, as you might assume, the higher the color temperature, the bluer the light.
Compared to sunlight, with a color temperature of about 5,500K, the light from a standard 100-watt light bulb is only about 2,800K. The light from standard portable lights used in video field production measures 3,200K. (We'll discuss the color temperature of light in more detail in the chapter on lighting.)
For now, we can see this difference by looking at the photo on the right. The woman is lit on our right side by sunlight coming through a window and from the left side by standard indoor (incandescent) light.
Through a process called approximate color constancy, the human eye automatically adjusts to color temperature changes in the 2,800 to 5,500K range.
Daylight color temperature varies, depending on location, time of day, and other factors, so normal daylight color temperature is considered to be between 5,400 and 6,000K.
If you look at a piece of white paper in sunlight, you should have no trouble verifying it's white.
When you take the same piece of white paper inside under the illumination of a normal incandescent light, it still looks white. By any scientific measure, however, the paper seen under a standard light bulb is now reflecting much more yellow light. A yellow (2,800 to 3,200K) light falling on a white object creates a yellowish object.
But your mind says, "I know that paper is white." And so (through approximate color constancy), you unconsciously adjust your internal color balance to make the paper seem white.
In so doing, you're able to shift other colors slightly so that you perceive them in their proper perspective also.
Although we make such color corrections for "real-world scenes," we tend not to make them when viewing television or color photos. In the latter case, we generally have a color standard within our view (for example, sunlight or an artificial light source) that influences our perception.
Since we know human color perception is quite subjective, it's important to use some objective measure or standard to white balance and color balance video equipment accurately and consistently. That measuring instrument, which was introduced earlier, is the vectorscope.
Good Color vs. Real Color
You might assume that television viewers want to see colors reproduced as accurately and faithfully as possible. Not necessarily. Studies show that people generally prefer their TV colors more saturated (exaggerated) than in "real life."
Color saturation preferences even differ among countries. Compared to European viewers, U.S. viewers seem to prefer to see skin tones "healthier" than they actually are -- as well as grass greener and the sky bluer.
In terms of the vectorscope, this preference does not mean that hues are inaccurate, only that they are stronger and more saturated.
________________________________________


Module 19
Updated: 03/29/2010



Creative Control
Using Shutter Speeds

In addition to the focus, iris, and color balance adjustments on camcorders, most video cameras have an adjustment for shutter speed.
Knowing how to use shutter speed is another example of an important creative control that can separate the amateurs from the professionals.
Unlike the shutters used in still cameras, the shutter used in most video cameras is not mechanical. Chip camera "shutter" speeds simply represent the time that the light-induced charge is allowed to electronically build in the imaging chip before being discharged.
With speeds as high as 1/12,000 second in some consumer camcorders, almost any movement can be "frozen" without blur or smear -- speeding cars, golf balls, or hockey pucks.
If you think of dividing a second into 12,000 parts and then allowing the image to be exposed for only one of those 12,000 intervals you get an idea of just how fast this is.
When we move to these speeds we can see things in new ways. For example, an ultrahigh speed exposure (achieved with special equipment) was used to show this bullet going through an apple.


Shutter Speeds and Resulting Exposure
When a video camera is set to a "normal" shutter speed of 1/60th second (the time it takes to scan one video field in the NTSC standard), the electronic sampling is done for the maximum time allowed by the field rate of the TV system. This represents the maximum exposure possible with normal sampling.
In this graph the bars represent total exposure at various shutter speeds.
In very low light conditions the shutter speed on some video cameras can be slowed down to allow much more light to register on the chip (the bars below 1/60th second in the graph).
Although this results in much brighter video, if there is any action, a jumpy, stroboscopic effect will be obvious.
If there is a need to stop (freeze) action, shutter speeds faster than "normal" can be selected. Most professional video cameras offer a series of shutter speeds from 1/60 second (normal) to 1/2,000th second. Many go beyond this to 1/5,000th, 1/10,000th, and, as we've noted, even 1/12,000th second.
At the same time, note in the graph above how this reduces the total light on the chip (exposure). To compensate, the iris of the lens must be opened up.
The higher speeds (1/1,000th and above) make possible clear slow-motion playbacks and freeze-frame still images, such as the one we see here.

Shutter Speeds
And F-Stops
Just as in traditional still photography, with video cameras there is a direct relationship between shutter speeds and f-stops. Each of the combinations in the table below represents the same exposure (total light on the chip).
________________________________________
Typical Relationship Between
CCD Shutter Speed and Exposure

CCD shutter speed      Corresponding f-stop (or T-stop)
Normal (1/60)          f/16
1/100                  f/11
1/250                  f/8
1/500                  f/5.6
1/1,000                f/4.0
1/2,000                f/2.8
1/4,000                f/2.0
1/8,000                f/1.4
1/10,000               f/1.2
________________________________________
You will note from the table above that each time the shutter speed is doubled the lens must be opened up one f-stop to provide the same net exposure. (The increased shutter speed cuts the exposure time in half, but opening the iris one f-stop lets twice as much light through the lens to compensate.)
The combinations shown in the above table are for a scene with a specific amount of light. A different scene may call for an exposure of 1/100 at f/4 instead of f/11, for example, and then all of the other shutter speed and f-stop relationships will shift accordingly.
The variable we are holding constant in this example is the light sensitivity of the camera. However, keep in mind that this can also be changed on some cameras with a video gain boost control.
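Here is a small Python sketch of the arithmetic behind the table above, assuming (as the table does) a fixed amount of scene light and fixed camera sensitivity. Each full f-stop is a factor of the square root of two in f-number and a factor of two in light, so halving the exposure time must be matched by one stop more aperture. The function name and sample values are illustrative.

import math

def equivalent_f_stop(base_shutter, base_f_stop, new_shutter):
    # Return the f-stop that keeps total exposure constant when the shutter
    # speed changes. Shutter speeds are exposure times in seconds.
    stops_faster = math.log2(base_shutter / new_shutter)
    return base_f_stop / math.sqrt(2) ** stops_faster

print(round(equivalent_f_stop(1/60, 16, 1/1000), 1))  # ~3.9 -- the table's nominal f/4 (published series use rounded values)
print(round(equivalent_f_stop(1/100, 4, 1/400), 1))   # 2.0 -- a different scene: f/4 at 1/100 becomes f/2 at 1/400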
These f-stop and shutter speed numbers may seem confusing at first, but once you get them in mind, they will serve you well in video, still photography, and even motion picture work. It's all the same, and it has remained essentially unchanged worldwide for well over a century. (And, in case you need it, you will recall that earlier we explained how to determine the EI [exposure index] sensitivity of a video camera.)

Shutter Speed and Stroboscopic Effects
A stroboscopic effect (where you see a rapid sequence of discrete images associated with movement) can occur in video cameras with very high (above 1/250th second) and very low (below 1/60th second) shutter speeds.

Low Shutter Speeds
As we've noted, it's possible for some video cameras to use exposure rates below the normal 1/60th second. This allows the effect of the light to build in the chip beyond the normal scanning interval.
But the catch is that in this process some of the normal fields and frames must be omitted (ignored) at regular intervals.
If no movement is involved, the loss of frames will go unnoticed. However, with movement the loss of frames results in a discontinuity in action and a jerky, stroboscopic effect.
Note the separate images in the photo on the left and the jerky motion in the reel on the right.
Besides the (somewhat questionable) special effect this provides, there are occasions — primarily very low light news and documentary situations — where imperfect video is better than no video at all.

High Shutter Speeds
Now let's consider the other end of the shutter speed range. With shutter speed intervals shorter than 1/250th second, action tends to be cleanly frozen into crisp, sharp, still images.
Without the slight blur that helps smooth out the transition between successive frames, we may notice a subtle stroboscopic effect when we view rapid action. Even so, the overall effect is to make images clearer, especially for slow-motion playbacks.
Video cameras dedicated to slow motion (slo-mo) applications can be speeded up to between 100 and 200 frames per second with correspondingly high shutter speeds. By slowly playing back footage taken at this speed you can study sharp, discrete slices of the action.
To see the difference that shutter speeds can make in stopping action, study the following sequence of roller coaster photos. The first was taken at 1/30 second, the second at 1/100th second, the third at 1/500th second, and the final photo at 1/1,000 second.



A final note on shutter speeds.
When taping under fluorescent lights, it's advisable to stick to a 1/60th second (normal) shutter speed. Using a faster shutter speed typically results in a flickering effect in the video as the chip's exposure interval interacts with the normal flicker of fluorescent lights.

Variable Frame Rate Cameras
As we noted in Module 8, the normal frame rate for video cameras is 30 or 25 frames per second, depending on the country. However, the latest generation of professional video cameras can be overcranked or undercranked.
The "cranking" term originated with film back in the early 1900s when motion picture cameras had to be cranked by hand. (One of the earliest motion picture cameras is shown on the left.)
Since film projectors were supposed to be operated at a set speed, if the film camera was overcranked and the frame rate was speeded up, the action was effectively slowed down when the film was later projected at normal speed.
In the process of speeding up the frame rate, the shutter speed of the camera was also shortened, requiring either more light or a wider f-stop. (Recall our earlier discussion on shutter speeds and exposure.)
If the motion picture camera was undercranked, the frame rate was slowed down. This speeded up the action on playback (and, as you would assume, increased the exposure on the film).
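A quick Python sketch of the relationship (frame rates here are illustrative; substitute your own system's rates): the apparent speed of the action on playback is simply the playback frame rate divided by the capture frame rate.

def playback_speed_factor(capture_fps, playback_fps=30):
    # How fast the action appears on playback relative to real life.
    # Below 1.0 means slow motion (overcranked); above 1.0 means sped up (undercranked).
    return playback_fps / capture_fps

print(playback_speed_factor(60))  # 0.5 -- overcranked: action plays at half speed
print(playback_speed_factor(15))  # 2.0 -- undercranked: action plays at double speed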
Even when electric motors replaced cranks, film cameras retained the ability to speed up or slow down frame rates, and they typically retained the terms "overcranked" and "undercranked." (Film personnel seem to hang on to their terms through the years, even as new technology comes along, whereas video personnel seem to be willing to -- or maybe have to -- adjust to new terms on a regular basis.)
Although we often see obvious slow-motion effects and even scenes that are speeded up for effect, sometimes the intent is more subtle.
For example, a scene may be slightly undercranked (resulting in a slight increase in action speed) to add intensity and frenzy to a scene. Or, the camera may be overcranked to slightly slow down action and add a subtle fluid motion to a scene. These are just a couple more creative tools.
________________________________________

________________________________________
Module 20
Updated: 03/29/2010




The Camera
Viewfinder

We are gradually sneaking up on the operation of the total video camera. But, before we can really use one like a professional, there are a few more things we need to cover, starting with --

Viewfinder Types
The viewfinder of a camcorder can be a CRT (a tube-type display like those used in the original TV sets) or a flat-panel (LCD) display similar to those in laptop computers.
Some HDTV camcorders now use LCoS (liquid crystal on silicon) viewfinder displays instead of LCD. Precise focusing is critical in high-definition video and the sharper LCoS images aid in seeing focus problems.
Unlike studio cameras that typically use at least seven-inch displays, the viewfinders for camcorders must be much smaller. They typically use a miniature video screen viewed through a magnifying eyepiece.
Since camcorder viewfinder images are rather small for high-definition needs, various focus assist devices are available. One technique is the electronic magnification of a small area of the image that can be used to tweak camera focus.

Accommodating Left
And Right-Eyed People
With cameras that use side-mounted optical viewfinders, the viewfinder can often be flipped from one side of the camera to the other for operators who prefer to use their left or right eyes.
When the viewfinder is flipped in this way the image ends up being upside-down, unless a reversal switch is flipped. (This also explains why an image might inexplicably be upside down when you first look in a viewfinder.)
Holding your eye to one of these viewfinders for a long period of time can be quite fatiguing.
Cameras employing flat panel viewfinders (which you can view from a distance) can help. This type of viewfinder (pictured here) is also an aid in shooting at very low or high angles.
Flat panel viewfinders can also be used to compose shots that you, yourself, want to be in. You can simply mount the camera on a tripod and (on many cameras) turn the viewfinder around so you can see it.
The main disadvantage of the flat panel display is that the images lose contrast and brightness when viewed in bright light. This can make the camera hard to focus.
Once you get used to their operation, viewfinder goggles that resemble virtual reality goggles allow even greater flexibility. This type of viewfinder can be used to supplement a standard side-mounted viewfinder.
Since the viewfinder is connected to the camera by a long cable, you can easily hold the camera over your head, as shown here, place it flat on the ground, or even shoot backwards with the camera mounted on your shoulder.
For critical, professional work the best "viewfinder" is an external monitor, preferably, a bright, high-resolution color monitor. Even though this type of standalone monitor requires extra power and limits your mobility, it's the only accurate way of checking subtle lighting effects and critically evaluating things such as depth of field.

Camera Safe Areas
Because of overscanning and other types of image loss between the camera and the home receiver, an area around the sides of the TV camera image is cut off.

To compensate for this, directors must assume that about ten percent of the viewfinder picture may not be visible on home receivers.
This area (framed by the red lines in the photo) is referred to by various names including safe area and essential area.
Some directors confine all written material to an "even safer" area, the safe title area (the area inside the blue frame).
Although flat-panel TV displays don't exhibit overscanning as much as TV sets that use picture tubes, it's still a good idea not to place important information (such as writing) in the outer edges of the TV frame.
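If you need actual coordinates rather than an eyeballed margin, the safe areas are easy to compute. The Python sketch below assumes a 1920 x 1080 frame and the commonly cited figures of roughly 90 percent of the frame for the safe (action) area and 80 percent for the safe title area; your facility's standards may differ.

def centered_region(width, height, fraction):
    # Return (x, y, w, h) of a centered rectangle covering the given fraction
    # of each dimension -- a rough model of safe areas, not a broadcast standard.
    w, h = int(width * fraction), int(height * fraction)
    return ((width - w) // 2, (height - h) // 2, w, h)

frame = (1920, 1080)
print(centered_region(*frame, 0.90))  # safe (action) area: about 10% trimmed from the edges
print(centered_region(*frame, 0.80))  # safe title area: keep graphics and text inside this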

Shoot-and-Protect
As we noted in Module 9, HDTV/DTV uses the 16:9 aspect ratio shown above, and standard TV (SDTV) a narrower 4:3 aspect ratio.
Most producers are now shooting their shows in the 16:9 format. But, since a large percent of home viewers still have sets with 4:3 aspect ratios, productions need to be shot so they can be used in either format.

As we've previously noted, the term shoot-and-protect refers to shooting scenes in 16:9 while "protecting" the 4:3 area -- making sure that it still contains all the essential information. To do this, a 4:3 grid (shown in red here) can be superimposed over the 16:9 viewfinder image.
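The protected 4:3 region of a 16:9 frame is just a center cut, as this minimal Python sketch shows (the 1920 x 1080 frame size is an assumption for the example):

def four_by_three_protect(width_169, height_169):
    # Return (x_offset, width) of the centered 4:3 band inside a 16:9 frame.
    # Essential action and graphics should stay inside this band ("shoot-and-protect").
    protected_width = int(height_169 * 4 / 3)
    return ((width_169 - protected_width) // 2, protected_width)

print(four_by_three_protect(1920, 1080))  # (240, 1440): about 240 pixels on each side may be lost on a 4:3 set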

Adjusting the Viewfinder Image
Viewfinders need to accurately represent the nature and quality of the video coming from the camera. Although flat-screen viewfinders generally remain accurate, viewfinders based on miniature picture tubes (CRTs) can drift, resulting in an inaccurate picture.
Because the image in a camera's viewfinder is actually the image from a miniature TV screen, it's subject to brightness and contrast variations. In addition, with tube-type viewfinders there may also be an electrical focus problem and the occasional lack of proper image centering.
Adjusting the viewfinder image does not affect the video coming from the camera itself; but adjustments to the camera video will affect the viewfinder image.
To make sure that the contrast and brightness of the viewfinder are set correctly, the camera's built-in, electronically generated color bars (if they are available in the camera you are using) can be switched on and checked in the viewfinder.
The viewfinder brightness and contrast controls can then be adjusted until a full, continuous range of tones from pure white to solid black is visible.
If the camera doesn't have a built-in test pattern, the quality of the camera video should first be verified with the help of a test pattern and a reliable external video monitor before the viewfinder controls are adjusted.
Checking Viewfinder Accuracy
The next step is to check viewfinder alignment to see that the area shown exactly corresponds to what the camera is "seeing."
Although flat panel viewfinders normally remain stable over time, the frame area accuracy of a tube-type (CRT) camera viewfinder can drift to a point of not accurately showing the output of the camera.
This is relatively easy to check.
First, a video monitor must be used that has itself been perfectly aligned with the help of a test pattern. The output of the camera in question is then hooked up to the monitor and the camera is focused on a test pattern so that the outermost edges of the test pattern just fill the viewfinder image.
Any discrepancy between the viewfinder image and the monitor image should be obvious. Viewfinder alignment may have to be adjusted with the help of an engineer or technician.
Occasionally the electrical focus will also drift out of adjustment on a tube-type viewfinder. This will make optical focusing difficult until it is corrected, generally with the help of a test pattern and an engineer. (Since there are very high voltages within CRT housings, these adjustments should be left to someone familiar with these matters.)
Wearing glasses while using a side-mounted CRT camera viewfinder can present problems -- especially in seeing all four corners of the image at the same time.
Therefore, many side-mounted eyepiece-type viewfinders have a control in the eyepiece to correct for variations in eyesight. This is referred to as diopter correction. If adjustable correction isn't built in, eyepieces can sometimes be purchased for the viewfinder that can eliminate the need for basic types of eyeglasses.

Status Indicators -- Viewfinder Variety
To help you keep track of everything you need to know while shooting, video camera manufacturers have added an array of status indicators to viewfinders. (And you thought only things like designer jeans were status indicators!)
First, there are miniature colored lights around the edges of the video image. Red, yellow, and green are common colors. Sometimes they even blink to get your attention.
Next, are the indicators that are superimposed on the viewfinder video. Boxes, bars, and lines are common configurations.
Some of the viewfinder messages may be superimposed over the image in plain English (or the language of your choice). For example, "Tape remaining: 2 min."
Finally, some camcorders have small speakers built into the sides that announce (again, in the language of your choice) such things as "low battery," or "remaining recording time: five minutes."
Because every manufacturer uses a slightly different approach, you need to study the camera guide to determine what a camera is trying to tell you. The time spent becoming familiar with the meaning of these indicators will more than pay for itself in avoiding disappointments and failures.
Viewfinder status indicators can include the following:
• a tally light indicating that the camera is recording or "on the air"
• a low battery warning
• minutes of tape remaining
• color or white balancing may be needed
• low light; insufficient exposure
• low-light boost (gain selector switch) circuit in operation
• indoor/outdoor filter in place
• zoom lens setting indicating how much further you can zoom in or out
• auto/manual iris status
• f-stop setting
• shutter speed setting
• audio level meter
• remaining tape (or recording medium) time
• a zebra pattern for setting maximum video levels
• superimposed masks for the safe area and the 4:3 and 16:9 aspect ratios
• the presence of customized camera setup profiles to accommodate specific types of subject matter or desired image effects
• camera warm-up diagnostics
In the next module we'll take up camera prompters.
________________________________________
________________________________________


Module 21
Updated: 03/30/2010



Camera Prompters


People who work in front of the camera use various prompting methods to aid in their on-camera delivery.
Most prompters (often referred to as TelePrompTers or Teleprompters after the original manufacturer) rely on a reflected image of a script that's visible in a half-silvered or two-way mirror in front of the camera lens.
The side view of a camera prompter illustrates how this works. The image from the video monitor (displaying the text to be read) is reflected by a half-silvered mirror mounted at a 45-degree angle to the lens.
The image of the text as seen by the prompter camera is electronically reversed left-to-right so that the mirror image will appear correct.

Because the mirror is only half-silvered, it ends up being a two-way mirror. First, it reflects the image from the video monitor screen, allowing the talent to read the text. Note the photo on the right.
Second, being semitransparent, the mirror allows much of the light from the scene being photographed to pass through its surface and go into the camera lens.
When the talent looks at the prompter mirror to read the text, it appears as if they are looking right at the camera lens, and, therefore, at the audience.
In order not to give the appearance of constantly staring into the camera lens, most on-camera people using prompters periodically glance at their scripts, sometimes as a way of emphasizing facts and figures. (Plus, having a paper script is always a good idea in case something goes wrong with the prompter.)
Some on-camera people prefer large poster board cue cards with the script written out with a bold black marker. This approach has definite limitations.
Not only does the use of cue cards require the aid of an extra person (a card puller), but also the talent must constantly look slightly off to the side of the camera to see the cards. Plus, since the cards can't be reused, the approach ends up being rather expensive.
Many news reporters working in the field simply rely on handheld note cards or a small notebook containing names, figures and basic facts. They typically memorize their opening and closing on-camera comments and then speak from notes, or even read a fully written script, while continuing with off-camera narration.
A few field reporters have mastered the technique of fully writing out the script, recording it, and then playing it back in a small earphone while simultaneously repeating their own words on camera. Although this technique demands practice, concentration, and reliable audio playback procedures, once mastered, it can result in highly effective on-camera delivery.
Even so, a camera prompter (Teleprompter) is the most relied upon form of prompting, especially for long on-camera segments. There are two types of camera prompters: hard copy and soft copy.

Hard Copy Prompters
The first type of on-camera prompter to be used, what became known as a hard copy prompter, used long rolls of paper (see photo below) or clear plastic.
When paper is used, the on-camera script is first typed in large letters in short (typically, two to four-word) lines. The paper is attached to two motor driven rollers and the image is picked up by a video camera (at the top of the photo) and displayed on a video monitor, as previously illustrated.
The script has to be scrolled at a carefully controlled speed while the talent reads the text. By means of a handheld control, either a prompter operator or the talent themselves regulate the speed of the prompter.
Hard copy prompters have now largely been replaced by --


Soft Copy Prompters
Soft copy prompters display the output of a computer, much the same as the computer monitor displays the text you are reading right now. This approach has several advantages.
First, because the text is a direct, electronically generated image, it's sharp and easy to read. Revisions are easy to make without the legibility problems associated with crossing out words or phrases on paper and penciling in last-minute corrections.
Once the script is entered into the computer it can be electronically reformatted and displayed in a standard prompter format -- narrow lines with large bold letters as shown below.
If a color video prompter monitor is used, the text can be color-keyed to set off the words of different speakers, or special instructions to the talent that are not meant to be read aloud. The following are some possible formats.
-- (JOHN OC) --
THIS IS A SAMPLE OF TEXT JOHN WOULD READ ON CAMERA FROM A TELEPROMPTER.
--(MARY VO)--
THIS IS A SAMPLE OF THE
NARRATION MARY WOULD
READ OVER A VIDEO SOURCE.
OC means on camera; VO means voice over associated video.
--------- (JOHN OC) --------
THIS IS A SAMPLE OF TEXT JOHN WOULD READ ON CAMERA.
---------- (VO) -----------
THIS IS A SAMPLE OF
NARRATION READ
OVER A VIDEO SOURCE.
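As a rough illustration of the reformatting step mentioned above (a sketch only; real prompter software offers far more control over fonts, colors, and cueing), a script can be broken into the short, all-caps lines a prompter display expects. The Python function name and the three-words-per-line figure are illustrative assumptions.

def to_prompter_lines(script, words_per_line=3):
    # Break a script into short lines of a few words each, in capital letters,
    # roughly the way soft copy prompters reformat text.
    words = script.upper().split()
    return [" ".join(words[i:i + words_per_line])
            for i in range(0, len(words), words_per_line)]

for line in to_prompter_lines("This is a sample of text John would read on camera."):
    print(line)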

Issues in Using Prompters
When using cue cards or any type of on-camera prompting device there is always the issue of the compromise involved in the camera-to-subject distance.
If the camera is placed close to the talent (making it easy for them to read the prompter), the constant left-to-right reading movement of their eyes can be distracting to an audience.
Moving the camera back and zooming in reduces this problem by narrowing the left-to-right motion of the eyes; but, at the same time, the extra distance makes the prompter harder to read.
The solution is to work with the talent to arrive at an acceptable compromise, and then hold to the agreed upon camera distances throughout productions.

Three Recent Innovations
Three innovations have recently been introduced in prompters.
To get away from the computer and cables that are normally part of a teleprompter system, one approach is to transfer the script from a word processor to a USB flash drive for uploading into a special, self-contained prompter. In the process the text is automatically reformatted to the needs of the prompter display. On-camera talent uses a wireless hand control to start and stop the prompter and adjust its speed.
The next innovation is voice activated prompting. The unit recognizes speech and moves the text accordingly. With this system the talent doesn't have to worry about controlling the prompter as they speak.
Finally, there are now high-intensity prompter displays that can be used outside in direct sunlight. Previously, under bright light conditions prompter displays were often hard to read or the text would even wash out completely.
________________________________________


Module 22
Updated: 03/30/2010



Composition:
Setting the Scene

Have you ever wondered why certain paintings endure for centuries and become priceless, while others end up at garage sales?
Art critics agree that the difference hinges on an elusive element called artistic talent.
Although talent is hard to define, we do know that it goes far beyond a familiarity with the basic elements of the medium — in this case paint, brushes and canvas — to an ability to use the medium to create an emotional experience in the viewer.
In video production an understanding of cameras, lenses, etc., is fundamental. But those who never get beyond this basic understanding, as essential as that might be, never distinguish themselves. At best, they will be considered good technicians.
We can make an analogy to musical performances. There are many people who can "get all the notes right." But, if the performance lacks heartfelt interpretation (emotion), we feel that something is missing, especially if we have an opportunity to hear someone who can interpret and "put themselves into" the same music.
Only after you master the basic tools of the medium and are able to go beyond them to express your ideas in creative and even artistic ways will your work be considered praiseworthy -- even exemplary.


Form vs. Content
A scene can be well exposed, in sharp focus, have perfect color balance, and be well lit (i.e., have good form) and still be empty of emotional meaning and impact (be void of meaningful content).
If a scene in a production is striking, dramatic, or humorous, we will tend to overlook minor technical weaknesses. This leads us to the following:
Content takes precedence over form.

In other words, the intended message of a production is more important than things such as technical excellence or flashy embellishments.
At the same time, significant technical problems — poor sound, a jittery camera, or a lens going in and out of focus — will quickly divert attention away from the message: the content.
When production elements call attention to themselves, either because they are poor or because they are ostentatious, attention is shifted away from content. This is especially true in dramatic television.
If the content is predictable or somewhat pedestrian in nature, a director may try to hold audience attention by deluging viewers with visual effects. This practice is common in some music videos, where there is competition to come up with ever-more-bizarre and far-out effects.
TV series such as CSI use visual effects to embellish content; but the major emphasis is on the story line and, of course, the "chemistry" between principal characters.
In a series such as Friends, one of the most popular sitcoms of all time, content alone carries the series, and there is almost never a need for visual effects. (Friends aired its last episode in May 2004, but reruns will undoubtedly be broadcast for many years.)

A Director Directs Attention
Although we generally assume that the term "director" refers to the person's role in directing (steering) the work of production personnel, the term actually has a more important meaning: one who directs the attention of viewers.
In this role the director moves from form into content and centers on skillfully and creatively using the tools of the medium to regularly direct the audience's attention to critical aspects of the message.
In a sense, the director is a kind of "tour guide" for viewers.

Insert Shots and Cutaways
A director will use an insert shot to call attention to something significant within the basic scene. This shot -- generally a close-up -- highlights details of something that may not have been apparent in the wider shot. Note photos below.

Good tour guides also help people understand things by adding significant information along the route. Good directors do the same. This could be considered a cutaway shot — cutting away from the central scene to bring in related material.
For example, while covering a parade, a director might cut away to a shot of a baby sleeping peacefully in a stroller. Or a sequence showing buyers in a busy marketplace in the Philippines might cut away to a shot of a child watching it all as shown in the photo on the right.

Enhancing the Message
A major role for production tools is to enhance, amplify, or explain the message.
Music is a production tool when it enhances the atmosphere, tips us off to danger, or sets the mood for romance.
As we will see, lighting can suggest a cheerful atmosphere or a dark, dim, and seedy environment. Sets and props can do the same; plus, in a dramatic production they can tell us a great deal about characters — even before we meet them.
An example of this is an atmosphere introduction, a technique where a director tips us off to important things about characters by introducing us first to their surroundings.
Contrast the setting shown here with starting a dramatic production with a slow pan across a bright, immaculate, airy penthouse garnished with ultramodern furniture and paintings. What does each say about the people involved?
There is a saying in videography and film:
Never just say it if you can show it.

Let's say you are doing a documentary on air pollution. You could talk about how bad things are, or you could simply cut to a scene like this.
Since what people see on TV typically carries much more of an impact than what they hear, you are much better off showing things rather than talking about them.
In a sense, all of the things we've been discussing can be included in the general term, composition (the elements that comprise a scene). However, for the remainder of this section we'll concentrate on a narrower and more traditional definition of the term.

Defining Composition
Composition can be defined as the orderly arrangement of elements in a scene which, when taken as a whole, conveys intent and meaning. (How's that for a genuine textbook-type definition?)
Television production involves both static composition and dynamic composition.
Static composition covers the content of fixed images, such as paintings or still photos.
Dynamic composition goes a step further and takes into consideration the effect of time — moment-to-moment change. This change can be within a single shot (including camera or talent moves), or it can apply to the sequence of scenes created through editing.
By studying the most enduring and aesthetically pleasing paintings over the centuries, as well as the most effective film and video scenes during the past 50 years, certain artistic principles emerge.
Why not take an afternoon and go to a traditional art gallery and see if you can draw some conclusions for yourself, or study some dramatic videos (movies) that have won awards for cinematography.
Studying the work of those who have achieved recognition in art and cinematography can be both beneficial and enjoyable.


Guidelines, Not Rules
Even though the principles that have emerged for good composition seem rather clear, they should always be considered guidelines and not rules.
Composition is an art and not a science.
If composition were totally a science, it could be dictated by a fixed set of rules and would end up being rigid and predictable, without room for creativity.
Since composition is in part an art, the guidelines can occasionally be broken. But when they are it's generally by someone who understands the principles and recognizes how, in the interest of greater impact, they can be successfully transcended in specific instances.
When most individuals break the guidelines, it's because they are not "visually savvy." The results speak loud and clear: weak, confusing and amateurish-looking work.
With all this as a background in the next module we'll look at some specific guidelines for composition.
________________________________________



________________________________________
Module 23
Updated: 03/30/2010

Part I

Elements of Composition


The next series of modules will address 15 guidelines on composition, starting with the most important of all —

Clearly Establish Your Objectives
1. First, clearly establish your objectives and hold to them throughout the production. Your objectives in doing a production may be anything from creating an experience of pure escapism to doing a treatise on spiritual enlightenment.

Few people would start writing a sentence without any idea of what they wanted to say. Visual statements are no different.
Good writers, producers, directors, and editors know the purpose of each and every shot.
Before you decide to include any shot, be able to justify its purpose in the overall message or goal of the production.

"I couldn't resist it, it was such a pretty shot," is not a legitimate reason for including an extraneous scene in a production — no matter how pretty or interesting it is. It will either slow down the pace of the production or confuse your audience by suggesting that the shot carries some special meaning that they need to keep in mind -- or it will do both.

Slow = Boring
And speaking of slowing things down, "slow" is commonly associated with "boring" — excuse enough to switch the channel to try to find something more engaging. And, with dozens of TV channels to choose from, there's real competition for viewer attention.
• If information is presented either too slowly or at a level that is beneath an audience, the production will be perceived as being boring.

• If it is presented too quickly or in too abstract a fashion, the audience can become lost and frustrated.
In either case they will probably quickly consider other options.

The speed at which ideas are presented in productions has increased dramatically in recent years.
We can clearly see this in long-running TV series. Compare specific soap operas (afternoon dramas) of five years ago to the same series being done today. In order to stay competitive (i.e., hold an audience) these programs now feature exotic locations, faster cutting, greater and more frequent emotional swings, faster-moving and richer story lines, and...
...those two ingredients that are always relied upon to increase the flow of adrenaline: regular dips into violence (or the threat of violence) and sex (or at least the possibility of sex).

In novels authors used to spend many pages elaborately setting scenes. Now readers are apt to say, "Enough! Get to the point!"

As a university professor who has been teaching television production for a few decades, I can attest to the fact that the vast majority of video projects I see are too long. Shots are held long after the point is made. In fact, a good editor could cut most of these projects or productions down by at least half and in the process make them more effective and interesting.
This brings us to an important maxim:
If in doubt, leave it out.

"But," the question is often asked, "Isn't good production always good production, no matter how much time passes?"

From a commercial perspective the answer is "no."

Most of yesterday's classic films are rather boring to today's audiences. Among other things, they simply move too slowly.
Citizen Kane is considered by many film historians to be this country's greatest film. In terms of production techniques it was far ahead of its time. But, now, after a few decades, its production techniques are so behind the times that it's difficult to get a group of average people to sit through this film.

TV writers used to be content following a single dramatic idea (plot) for an entire show. Today, dramatic television typically consists of parallel stories and numerous plots and subplots intricately woven together.

Depicting Emotional States
Videographers and filmmakers find it challenging to effectively convey emotional states.
For example, quick, seemingly unrelated scenes of stalled city traffic, lines of people pushing through subway turnstiles and shots of people jamming escalators might be important in establishing a frenzied state of mind in a character trying to cope with life in the city. But a close-up of "a darling little girl sitting on a bench" in this sequence would not only leave the audience wondering what her role was, but it would probably mislead them into believing that there is a relationship between her and the central story line.
Viewers assume that every shot, gesture, and word of dialogue in a production is there to further the central idea. Thus, each shot you use should contribute to the story or idea you are trying to convey.

Strive for a Feeling of Unity
2. Strive for a feeling of unity. If a good film or prize-winning photo is studied, it's generally evident that the elements in the shot have been selected or arranged so they "pull together" to support the basic idea.
When the elements of a shot combine to support a basic visual statement, the shot is said to have unity.
The concept of unity applies to such things as lighting, color, wardrobes, sets, and settings.
For example, you might decide to use muted colors throughout a production to create a certain feeling or atmosphere. Or, you may want to create an overall atmosphere by using low-key lighting together with settings that contain earthy colors and predominant textures.

By deciding on certain appropriate themes such as these, you can create a consistent feeling or look that will give your production or segments within your production unity.

Compose Around A
Single Center of Interest
3. The third guideline applies to individual scenes: compose scenes around a single center of interest.
Multiple centers of interest may work in three-ring circuses where viewers are able to fully shift their interest from one event to another. But competing centers of interest within a single visual frame weaken, divide, and confuse meaning.
Think of each shot as a statement.

An effective written statement should be cast around a central idea and be swept clean of anything that does not support, explain, or in some way add to that idea.
Consider this "sentence": "Man speaking on phone, strange painting on the wall, coat rack behind his head, interesting brass bookends on desk, sound of motorcycle going by, woman moving in background...."
Although we would laugh at such a "sentence," some videographers create visual statements (shots) that include such unrelated and confusing elements.
We are not suggesting that you eliminate everything except the center of interest, just whatever does not in some way support (or at least, does not detract from) the central idea being presented.
A scene may, in fact, be cluttered with objects and people, as, for example, an establishing shot of a person working in a busy newsroom.
But each of the things should fit in and belong, and nothing should "upstage" the intended center of interest.

A master (wide) shot of an authentic interior of an 18th-century farmhouse may include dozens of objects. But each of the objects should add to the overall statement: "18th-century farmhouse." Just make sure you put these supporting elements in a secondary position.

The viewer has a limited time — generally only a few seconds — to understand the content and meaning of a shot. If some basic meaning isn't obvious before the shot is changed, the viewer will miss the point. (Recall that one of the definitions of a "director" is one who "directs attention.")

Selective Focus to the Rescue
Part of the "film look" that many people like centers on selective focus, covered in an earlier module.
Early film stocks were not highly sensitive to light and lenses had to be used at relatively wide apertures (f-stops) to attain sufficient exposure.
This was fortunate in a way. By focusing on the key element in each shot and throwing those in front and behind that area out of focus, audiences were immediately led to the scene's center of interest and not distracted by anything else.
Even with today's high-speed film emulsions directors of photography often strive to retain the selective focus effect by shooting under low light levels and using wide lens apertures.
The same principles that have worked so well in film can also be used in video.
Note how foreground and background elements here have been thrown out of focus so that attention will center on the young woman.
This level of image control takes extra planning with today's highly sensitive video cameras because the auto-iris circuit can adjust the f-stop to an aperture that brings both the foreground and background into focus.
To make use of the creative control inherent in selective focus, high shutter speeds, neutral density filters, or lighting control must be used.

Where There Is Light...
The eye is drawn to the brighter areas of a scene.
This means that the prudent use of lighting can be a composition tool, in this case to emphasize important scenic elements and to de-emphasize others. We'll see more examples of this in the modules on lighting.

Shifting the Center of Interest
In static composition scenes maintain a primary center of interest; in dynamic composition centers of interest can change with time.
Movement can be used to shift attention. Although our eye may be dwelling on the scene's center of interest, it will quickly be drawn to movement in a secondary area of the picture. Someone entering the scene is an example.
As we noted in an earlier module, we can also force the audience to shift their attention through the technique of rack focus, or changing the focus of the lens from one object to another.

Observe Proper Subject Placement
4. The fourth general guideline for composition is: observe proper subject placement.

In gun-sight fashion most weekend snapshooters feel they have to place the center of interest — be it Uncle Henry or the Eiffel tower — squarely in the center of the frame.
This generally weakens the composition of the scene.
Rule of Thirds
Except possibly for people looking directly at the camera, it's often best to place the center of interest near one of the points indicated by the rule of thirds.
In the rule of thirds the total image area is divided vertically and horizontally into three equal sections.
Although it's often desirable to place the center of interest somewhere along the two horizontal and two vertical lines, generally composition is even stronger if the center of interest falls near one of the four cross-points illustrated in the photo on the right below.
A few still cameras even have the rule of thirds guidelines visible in their viewfinders.
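If you want to overlay your own rule-of-thirds guides (some editing and graphics tools allow this), the lines and cross-points are simple to compute. The Python sketch below assumes a 1920 x 1080 frame; the function name is hypothetical.

def rule_of_thirds_points(width, height):
    # Return the four rule-of-thirds cross-points for a frame of the given size.
    xs = (width / 3, 2 * width / 3)
    ys = (height / 3, 2 * height / 3)
    return [(round(x), round(y)) for x in xs for y in ys]

print(rule_of_thirds_points(1920, 1080))
# [(640, 360), (640, 720), (1280, 360), (1280, 720)] -- strong positions for a center of interest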



Note that both photos above have centers of interest consistent with the rule of thirds.
Here are two more examples.



But, remember, we are speaking of a rule of thirds, not law of thirds. The rule of thirds is only a guideline — something that should be considered while composing a scene. Although composition is often stronger using the rule of thirds, many scenes (see below) "work" that do not follow this guideline.


Horizontal and Vertical Lines
Weekend snapshooters also typically go to some effort to make sure that horizon lines are perfectly centered in the middle of the frame. This also weakens composition by splitting the frame into two equal halves.

According to the rule of thirds, horizon lines should be either in the upper third or the lower third of the frame.
In the same way, vertical lines shouldn't divide the frame into two equal parts. From the rule of thirds we can see that it's generally best to place a dominant vertical line either one-third or two-thirds of the way across the frame.
It's generally also a good idea to break up or intersect dominant, unbroken lines with some scenic element. Otherwise, the scene may seem divided.
A horizon can be broken by an object in the foreground. Often, this can be done by simply moving the camera slightly. A vertical line can be interrupted by something as simple as a tree branch.
Although the horizon line is in the center of the frame in this picture, the masts of the boats break it up and keep it from dividing the frame in half.
Even so, when possible, it's generally more desirable to follow the rule of thirds and put the horizon line in the top third or lower third of a frame.

Leading the Subject
Generally, when a subject is moving in a particular direction, space is provided at the side of the frame for the subject(s) to "move into." This is referred to as leading the subject. In a close-up (see below on the right) we might refer to it as "looking room."



Note in the photo on the left above that space is allowed for the subjects to "walk into." In the photo on the right above "looking space" is provided on the left side of the frame.
________________________________________
The required reading for this module relates to an important social issue: television production and violence.

Module 24
Updated: 03/30/2010
Part II



Composition


In this module we'll cover composition guidelines 5-10, starting with —

Maintaining Tonal Balance
5. The tone (brightness and darkness) of objects in a scene suggests weight. For example, against a medium background dark objects seem heavier than light objects. (Note the illustration here.)
Once you realize that brightness influences mass, you can begin to "feel" the visual weight of objects within a scene and strive for balance.
Note, for example, the tonal balance in the photo at the beginning of this module.

Balance Mass
6. Somewhat related to this is the sixth guideline: balance mass.
Just as a room would seem out of balance if all of the furniture were piled up on one side, a scene must be balanced to be aesthetically pleasing.
Regardless of their actual physical weight, large objects in a scene seem heavier than small ones. By objectively viewing the elements in a scene, you can learn to see their "perceptual weight."
To do this it helps to imagine a fulcrum or balance point at the bottom center of each of your shots.
Several things can be done to try to balance a shot: the camera can be panned to the left or right, a new camera angle can be selected, or the lens can be zoomed in or out to include and exclude objects. Seldom will objects actually have to be moved around.

Create a Pattern of Meaning
7. The seventh guideline for effective composition is: use a combination of scenic elements to create meaning.
Most people are familiar with the inkblot tests used by psychiatrists. Presented with a "meaningless" collection of shapes and forms, an individual draws from his or her background and thoughts and projects meaning onto the abstract images. ("That looks like a father scolding his son," or "That looks like a school being crushed by a bulldozer.")
In the same way, if a variety of objects appear in a still photo or video scene, we — possibly even unconsciously — try to make sense out of why they are there and what they represent.
We assume that things don't just come together by accident. Good directors take advantage of this tendency and pay careful attention to the specific elements included in a scene.
The most obvious example of this is the atmosphere introduction, which we discussed earlier, where a director opens on a scene full of clues about the central characters—long before we see them.
What would be suggested by opening a dramatic production with the shot on the right?
Elements in a shot may be bold and obvious, or they may be subtly designed to suggest almost subconscious meaning.
Film critics have spent many hours discussing the symbolism and subconscious levels of meaning in films by directors such as Federico Fellini. American films such as The Graduate and The Da Vinci Code contain telling and meaningful background elements that most people will not "catch" until they are pointed out.
While the director of a dramatic piece should be a master at creating illusions and emotional responses, the job in ENG (electronic news gathering) and documentary work is to clearly show things the way they are and let events and facts speak for themselves. This will be covered in Module 62.
However, this approach does not rule out striving for new and creative ways to present subject matter. Often, it's only by presenting the familiar in an entirely new way that an audience is awakened (or possibly reawakened) to its significance.

The Concrete and the Abstract
Whereas in news the object is to present images as completely and clearly as possible, a shot in a dramatic production might be intended to lead viewers toward intended meaning without being totally concrete. Savvy viewers want a bit of room to think and interpret on their own.
The phrase, "too much on the nose," is used in feature film writing to denote script dialogue or shots that have gone too far in "saying it all." To sophisticated audiences this can come across as overly simplistic.

In deciding just how far to go along the abstract-to-concrete continuum videographers must know their target audience.
Considering the economic realities of the marketplace, videographers—at least those who wish to be successful—don't have the luxury of blithely going along "doing their own thing" and not concerning themselves with their particular audience demographics.
Good composition is primarily effective visual communication, and the most effective communication takes place when a videographer understands an audience. This generally involves steering a middle path between being totally concrete and on the nose, and being so abstract that the target audience misses the intended message.

Including Multiple Levels of Meaning
Is it possible to have it both ways? Yes, sometimes. Films and television programs can be designed to have multiple levels of meaning.
Animated films such as Cars, Over the Hedge, Finding Nemo, Aladdin, The Lion King, and Shrek are examples. While the animated characters and the simple story line are entertaining children, the grown-ups pick up on the adult humor.
This, of course, makes it much easier for adults to sit through these "kids" films, and makes it more likely that they will take their kids to another such film.
Most movies and television programs strive for a broad-based appeal. If a writer (and director and editor) can "layer" a production with multiple levels of meaning and successfully provide something for everyone — admittedly, not an easy task — the production will have a much greater chance of success.

Using Lines
8. The eighth guideline for visual composition is: make use of lines.
The boundaries of objects in a shot normally consist of lines: straight, curved, vertical, horizontal, and diagonal.
Our eyes tend to travel along these lines as they move from one part of the frame to another.
Knowing this, it becomes the job of the videographer to use these lines to lead the attention of viewers to the parts of the frame they wish to emphasize.
When used in this way, lines are referred to as leading lines, because they are selected or arranged to lead the viewer's eyes into the frame, and generally to the scene's center of interest.
In addition to moving our eyes around the frame, lines can suggest meaning in themselves. Straight, vertical lines suggest dignity, strength, power, formality, height, and restriction.

Horizontal lines suggest stability and openness. Diagonal lines can impart a dynamic and exciting look. Curved lines suggest grace, beauty, elegance, movement, and sensuality.
The S-curve is particularly effective in gracefully leading the eye to a center of interest. (Note the photos above and on the right.)
In contrast to curved lines, sharp jagged lines connote violence or destruction, and broken lines suggest discontinuity.

Frame Central Subject Matter
9. The ninth guideline for effective composition is: frame the central subject matter.


By putting objects at one or more edges of the picture, a shot can be framed.
Framing a scene holds attention within the shot and keeps viewer attention from wandering or being distracted from the center of interest.
To cite a common example, a leaning tree branch at the top of a scenic shot breaks up a bright sky and acts as a visual barrier or "stop point" for the top of the frame.
Note in the photo here how framing a shot with foreground objects adds depth and dimension.


Make Use of Visual Perspective
10. The tenth guideline is: use the effect of visual perspective to enhance or support the scene's basic idea.
As noted previously, camera positions and lens focal length can alter both the apparent perspective in a shot and the apparent distance between objects.
A minimal camera-to-subject distance coupled with a short focal length lens (or a zoom lens at its widest position) exaggerates perspective.
In the case of this photo note that the parallel lines are wide apart in the foreground and converge on the center of interest. Selective focus is also used to good advantage.
By creatively controlling such things as lens focal lengths and camera distance, quite different impressions about a subject can be conveyed. You may recall that there were a number of examples in Module 11.
Additional examples of composition can be found here.
________________________________________
Module 25
Updated: 04/01/2010

Module 30
Updated: 04/20/2010




Lighting
Instruments


"Quartz" Lamps
Almost all incandescent lamps used in TV production are tungsten-halogen lamps (commonly called quartz lamps). They normally range from 500 to 2,000 watts.
This type of lamp is more efficient than the common light bulb type incandescent lamp, and it does not darken with age.
Quartz lamps get extremely hot, which makes ventilation important. Because of the great heat associated with tungsten-halogen lighting instruments, burnt fingers are a hazard.
Special care must be taken when these lamps are changed (in addition to unplugging the lights and letting them cool down) to make sure that oil from fingers is not deposited on the outer glass (quartz) envelope of the lamp. Because of the great heat associated with these lamps, any residue of this sort will create an area of concentrated heat that will cause the lamp to fail -- and they can be rather expensive to replace.
Care must also be taken not to subject these lamps to jolts while they are turned on, or the fragile internal element can break.
Tungsten-halogen lamps are used in several common types of lighting instruments including the type that has been used for decades, the Fresnel (pronounced fra-nell).

Fresnels
Although Fresnels used to be so bulky and heavy that they were confined to studios, recent versions are small enough to be packed away in lighting kits and used on location.
The Fresnel lens, invented by French physicist Augustin-Jean Fresnel, consists of concentric circles that both concentrate and slightly diffuse the light. Note the photo on the left below. The coherence (quality) of the resulting light represents an ideal blend between hard and soft. In the studio these lights are typically hung from a grid in the ceiling.



A C-clamp or pipe clamp (on the right, above) is used to attach the light to the studio's ceiling grid.
Because of the safety hazard a falling Fresnel light some 5 meters (about 16 feet) overhead represents, a safety chain or cable should always be used along with the C-clamp. These wrap around the grid pipe and will keep a heavy light from falling if the C-clamp fails.
The distance between the internal lamp and the Fresnel lens can be adjusted with this type of light to either spread out (flood), or concentrate (spot or pin) the light's beam. This adjustment provides a convenient control over the intensity of the light, as well as the coverage area.

LED Lights
In recent years LED (light-emitting diode) lamps have started being widely used in TV studios. A basic studio light is shown here.

LED lights have at least nine important advantages over other types of lighting elements.

1. They produce more light per watt than incandescent bulbs, not only reducing power costs but also making them useful on location and in battery-powered devices, such as camcorder lights.

2. They can emit light in a range of color temperatures without the use of color filters.

3. Unlike incandescent and fluorescent sources that often require an external reflector to collect and direct light, they can be designed to some degree to focus and direct light.

4. When dimming is required, LEDs do not change color as voltage is reduced.

5. Being solid state, they are difficult to damage. Fluorescent and incandescent bulbs are easily broken, especially if dropped.

6. They have a long life -- 35,000 to 100,000 hours. This is longer than fluorescent tubes and far longer than incandescent bulbs.

7. Unlike some types of lights, they light up and stabilize almost instantly.

8. They do not generate the amount of heat that many other lighting instruments do, reducing studio cooling costs.

9. On some types the color temperature can be readily shifted to accommodate indoor and outdoor color temperature needs.
Although these are important advantages, especially in this era's need to reduce energy consumption, LED lamps have some disadvantages -- most of which can be controlled or accommodated.

1. They are currently more expensive than more conventional lighting technologies. However, the initial cost can be made up over time in reduced energy and lamp replacement costs.

2. Performance and life depend on the temperature of the operating environment. High surrounding temperatures or heat build-up (if this is allowed) will reduce both.

3. They require stable voltages and electrical current, which can involve regulated power supplies.

4. Although not as pronounced as with fluorescent lamps, the color spectrum of some LEDs has energy spikes that can cause color distortion. Some white LED lights have a dip or a hole in the color spectrum that cannot be corrected with white balancing.
5. Finally, as with most lamps, the output of LED lamps will start to dim with age.

Scoops
Scoops produce a softer light than Fresnels. The incandescent (tungsten-halogen) lamps they normally use range from 500 to 2,000 watts.
Because there is no lens, the light is not projected any significant distance. As we will see, scoops are commonly used in the studio for fill light along with LED soft lights.
Note that this scoop shown here has a square filter frame attached to the front. Colored gels, diffusers, and scrims can be slid into this frame to change the light in various ways.

Ellipsoidal Spots
The ellipsoidal spot produces a hard, focused beam of light. Used with gels, these lights can project colored pools of light on a background.
Some ellipsoidal spots have slots at their optical midpoint that accept a "cookie" (cucalorus), a small metal pattern (shown in red in the middle of the drawing below) that can be used to project a wide variety of patterns on a background.






In some cases, a background pattern (see samples on the left) may be all you need in a medium shot or close-up to suggest a complete setting. For example, a colored stained glass pattern behind a person suggests that person is in a church.
Abstract patterns, or patterns suggesting the theme of a program, can also be used to break up what might otherwise be a blank background.
These can either be in the form of a cookie inside the light as indicated in the drawing above, or a large pattern mounted on a stand. When a coherent light source such as an ellipsoidal spot is directed at the pattern, a shadow of the pattern is projected on the background.
These large patterns are referred to as gobos, a term which stands for "go between."
Backgrounds, sets, and settings are discussed in this section.
Although Fresnels, scoops, and ellipsoidal spots are the most used types of studio lights, there are also several other types of lighting instruments including HMI lights. These are covered here.

Camera Lights
In ENG (electronic newsgathering) where quality is often secondary to getting a story, camera-mounted, LED, tungsten-halogen, or HMI lights (often called sun-guns) are sometimes used as a sole source of illumination.
These lights can be mounted on the top of the camera as shown here or held by an assistant.
Camera lights are typically powered by batteries -- often, the same batteries that power the camcorder.
The camera light shown here is a 24-watt HMI, a fixed output Frezzi fill light, with a full spectrum output sufficient to compete with sunlight for many applications.
Both tungsten-halogen (quartz) and HMI lamps are being replaced by LED units, which provide a softer light and consume much less power. The color temperature of some LED camera lights can be varied, which is important when they are used as a fill under different lighting conditions.
When used as the only source of light they provide the same (questionable) quality as the familiar single-flash-on-the-camera does in still photography. As a result of the straight-on angle involved, picture detail and depth are sacrificed.
Plus, because of the relationship between distance and light intensity, the detail and color of foreground objects often becomes washed out, and objects in the distance typically go completely dark. (Recall the problem with tonal mergers.) Consequently, camera lights are best used as a fill for a more dominant source of light.

Attachments to Lighting Instruments

Barn doors
From lighting instruments themselves we now turn to attachments that are used with these lights.
Adjustable black metal flaps called barn doors can be attached to some lights to mask off unwanted light and to keep it from spilling into areas where it's not needed.
While barn doors provide a soft cutoff (edge) to the perimeters of the light, flags provide a sharper, more defined cutoff point.

Flags
Flags consist of any type of opaque material that can block and sharply define the edges of the light source. They are often created and shaped, as needed, from double or triple layers of aluminum foil.
Flags are generally either clipped to stands or attached to the outer edges of barn doors. The further away they are from the light source, the more sharply defined the light cutoff will be.
Filter Frames
Filter frames are typically a part of the barn door attachment that slides over the front of lighting instruments. They can hold:
• one or more scrims to reduce light intensity
• one or more diffusers to soften the light, or
• a colored gel to alter the color of the light
Each of these simply slides into the filter frame, which attaches to the front of the lighting instrument.
Now that we know the basics of lamps and lighting instruments, we're ready to put them into use. In the next Module we'll start with the most important light, the key light.
________________________________________

________________________________________
Module 31
Updated: 04/16/2010



The Key
Light



It's impossible to make an actor or set look good from three opposing angles at once unless it's lit like a Wal-Mart.
--Robert McLachlan, Cinematographer, Bionic Woman, 2007

In typical lighting setups, lighting instruments serve four functions:
• key lights
• fill lights
• back lights
• background lights
The photo below was shot with so-called formula or three-point lighting.
Even though some lighting directors say there is no such thing as a "formula" for lighting, the formula we'll discuss will provide excellent results for most of your video work.
Later, we'll have a series of examples that shows this formula in action.
If you study this photo you may detect four light sources:
• one on the left (the key light)

• one on the right (a much dimmer fill light)

• one on the hair (a back light), and

• one on the background (a background light)
Note: black and white photos and movies are often preferred when studying lighting because lighting effects are more readily apparent without the dimension of color. By the way, in case you are wondering, we call this three-point lighting, even though it involves four lights. Since the background light is not on the subject, it doesn't count in three-point lighting.
The combination effect of these four lights (put in exactly the right place, at exactly the right intensity and with the right quality/coherence), creates an optimum over-all effect.
We'll start with the key light in this module and take up the other lights later.


Key Light Considerations
As the name implies, the key light is the main light.
The key light highlights the form, dimension
and surface detail of subject matter.

In terms of coherence or quality the key light should be in the middle of the hard-to-soft range. As you can see from some of the illustrations in these chapters, light that is either too hard or too soft is not desirable for most subject matter -- especially people. A "middle ground" is achieved with a Fresnel light.
In three-point (formula) lighting the key light is placed at an angle of between 30- and 45-degrees from either the left or the right of the camera.
In the photograph of the model at the start of the module the key light is on the left, just as it's shown in this drawing.
Forty-five degrees off to one side is best because, among other things, it brings out optimum texture and form (dimension) in the subject. For the sake of consistency, the 45-degree angle will be used throughout this discussion.
This brings us to the rule we'll need to keep in mind, especially if multiple cameras and camera angles are involved in the production:
________________________________________
Light for the close-up camera.
________________________________________
In multiple-camera dramatic productions you will have to confer with the director during the camera-blocking phase of preproduction to find out which cameras will be taking most of the close-ups of each person.
Does it matter if the key is on the right or the left? Possibly. There are four things you need to think about in making this decision.
• the person's best side. Put the key on this side. It will emphasize the positive and downplay the negative facial characteristics.

• follow source. Is there an apparent source of light in the setting, such as a window or nearby table lamp? If so, be sure to key from this direction.

• consistency. In most settings it will look a bit strange if two people are sitting next to each other and one is keyed from the left and one from the right.

• what's most practical. If there is a wall or obstruction on one side of the subject -- a possible problem when doing on-location shoots -- you will generally want to key from the side that will enable you to use a 45-degree angle.
One thing you don't want is to "put lights everywhere" in a frantic effort to wipe out every shadow from every conceivable camera angle. In a studio setting where there are multiple areas to light, you can end up with scores of lights. Three-point lighting for a close-up position will end up being 20-point lighting, which is the same as poor lighting.
In typical studio and on-location news programming the best lighting effect is often sacrificed in favor of rather flat, shadowless lighting, which is simpler, less demanding, and holds up over more camera angles.

It's not unusual for a large set in a major dramatic setting to require more than 100 lights -- but they are grouped to light specific areas. Unless basic lighting simplicity is preserved on the major close-up talent positions, things can end up in a mess, which brings us to another lighting guideline:
________________________________________
The simpler the design, the better the effect.
________________________________________
Among other things, the key light creates a catchlight in the eyes of subjects -- a (single) specular reflection in each eye that gives the eyes their "sparkle." Notice the single catchlight in the model's eyes.
When you "put lights everywhere," it not only results in a multitude of catchlights in the eyes, but it generally results in flat, lifeless lighting. Numerous lights hitting talent areas also create a confusing horde of shadows. Barn doors and flags can be a great help in keeping light out of unwanted areas.

The Key's Vertical Angle
We have established that the horizontal angle for the key light is approximately 45-degrees to the left or right of the subject in relation to the camera. One other key light angle should be considered: elevation.
As shown below, this angle is also commonly 45 degrees for the key light. We'll cover the other lights shown later.

Some lighting directors prefer to place the key right next to the camera, or at a vertical angle of less than 30 degrees. Sometimes in limited on-location conditions this may be unavoidable.
However, three problems result from reducing these angles:
• the full illusion of depth and form will be sacrificed (not especially desirable unless you want to create a flat effect with minimal surface detail)

• there is a risk of having shadows from the key light appear on the background directly behind the subject (where they are most objectionable)

• the talent is forced to look almost directly into a bright key light when they try to look at their camera, which can result in squinting, not to mention making a camera prompter difficult to read
Ideally, when the talent face their close-up camera they should see the key light 45-degrees off to one side of the camera at an elevation of about 45 degrees -- which is not unlike the effect we often see outside in sunlight.

In recent years there has been a move to flatter, softer lighting in non-dramatic productions. This gives on-air talent a more youthful appearance, is less demanding in terms of lighting expertise, and it allows the use of multiple camera angles without the fear of shadows.
But, as you can see in the photos below, flat lighting (on the left) comes at the expense of form and perceived dimension (on the right).


Even so, as we've mentioned, some lighting directors feel that relatively flat lighting has advantages for news and interviews. This effect is similar to what you see in the photo on the left above.


A commonly used lighting setup for this is shown on the left.

Note that color-balanced fluorescent or LED light banks are used for keys. Although no fill light is needed, the use of backlights behind the subjects is recommended. Because of the soft, diffused key lights, a background light may not be necessary. More on these lights a bit later.


Keys and Boom Mics
Returning to our formula approach to lighting, since the key light -- typically a Fresnel -- is the brightest light on the front of a subject, it's the one that will create the darkest shadows.
Shadows from boom mics (microphones suspended from long poles over the talent areas) can be minimized by positioning the boom parallel to (directly under) key lights.
By not placing talent too close to a background, the boom shadow will end up on the floor rather than creating distracting shadows on the background -- assuming you keep the key at the recommended height of 45 degrees.

The Sun As a Key
When shooting on location during the day, the sun will normally be your key light. However, direct sunlight from a clear sky results in deep, black shadow areas with a major loss of detail.
If the sun is directly overhead, a "high-noon effect" will be created, producing dark eye shadows. Put technically, in both instances you've grossly exceeded the contrast or brightness range of the video system.
Suffice it to say, direct sunlight, especially for close-ups, can look unflattering -- not only to the person in front of the camera, but also to your apparent mastery of production skills.
To get around the "high noon effect," it may be best to shoot sunlit, on-location productions in mid-morning or mid-afternoon when the sun is at an elevation of about 45 degrees.
If subjects can also be oriented so that the sun (the key light) ends up being 30 to 45 degrees off to one side of the camera, lighting will be best -- especially if a fill light (to be discussed in the next section) is used to slightly fill the shadows caused by the sun.
On an overcast day the diffused sunlight will provide a soft source of light.
If the diffused sunlight is coming from behind the subject, it can provide good back lighting, while the ambient light from the overcast sky furnishes soft front lighting.
With the proper level of cloud cover this can result in soft, flattering lighting, as shown in this illustration.
But there can be a problem.
Note the bright background in this photo. In camcorders with automatic exposure control this will result in underexposure (with unnaturally dark skin tones) unless the back light control is used to open up the iris two or three f-stops.
If the camera has a manual iris control, you have an even better option. You can manually open the iris while carefully observing the result in the viewfinder. (Recall that the module on quality control discussed this concept.)
The soft light effect in direct sunlight can be achieved with the help of a large translucent screen. A thin white sheet can sometimes be used, but for professional applications commercial versions, such as this Griffolyn screen, are available. Although this setting is in direct sunlight, the subjects sitting in the Jeep are softly lit.
________________________________________
Module 32
Updated: 04/01/2010



The Fill, Back and
Background Lights



We noted in the last module that the key light establishes the dimension, form, and surface detail of subject matter. Although the remaining lights have less important roles, they are nevertheless critical in creating an acceptable lighting effect.
The key light by itself — whether it's the sun in a clear sky or a focused quartz light in the studio — produces distracting shadows. (We'll see some examples later.) The purpose of the fill light is to partially (but not entirely) fill in the shadows created by the horizontal and vertical angles of the key light.

The Fill Light
Ideally, the fill light should be about 90 degrees away from the key light.
This means that if you draw lines from the key to the subject and then to the fill light, you'll create a right angle.
Although the fill can be positioned at any point from right beside the camera to 45 degrees away, it's safest to place the fill 45 degrees from the camera.
By lighting a full 90-degree area, an important margin of safety is created in case subjects unexpectedly move and camera angles have to be changed during the production.
Having to stop a production to change the position of lights can represent a time-consuming and costly delay — not to mention, making you a bit unpopular with the cast, crew and director.
Although the horizontal angle for the key should be about 45 degrees, the vertical angle of the fill is less critical.


Generally, the fill is placed just above the camera, as shown above, which means it ends up being slightly lower than the key. In this position it can easily do what it's intended to do: partially fill in the shadows created by the key light.
The height of the fill can be lowered from the grid to the proper angle by an extension rod (pipe) or by a counterbalanced extension device shown above on the right.
We've suggested that the fill light should be softer than the key. A soft light source is able to subtly fill in some of the key's shadows without creating a second catchlight in the eyes.
Note in the photo here how the shadow from the key on the cheek is only partially removed by the fill, creating a gradual rounding off of the key light on the cheek.
This key-fill difference provides much of the perception of three dimensions that's desirable in a medium that's basically limited to two dimensions.

Fill Light Options
A good choice for a studio fill light is a scoop, or a bank of color-balanced fluorescents.
When doing on-location work a portable quartz stand light can be used with a diffuser. The diffuser not only softens the fill light, but it can appropriately reduce its intensity. We'll cover the relative intensity of each of the lights in the next module.
Outside, when the sun is being used as a key, a reflector board can be positioned at about 90 degrees from the sun to reflect sunlight into the shadow areas.
Large white Styrofoam or foam core boards are often used in doing close-ups. A large, blank white artist's canvas in a wooden frame, available at most art supply stores, is being used here.
Although more expensive, folding silver reflectors, available at photo supply stores, are easier to transport and can reflect light much greater distances.




These photos illustrate a subject in harsh sunlight with and without a reflector fill.



If a key light puts out a wide beam of light, part of this light can be bounced off a reflector board to act as a fill.


The Back Light
At this point in formula lighting we've covered two of the three lights on the subject.
The third point is represented by the back light. The function of the back light is to separate the subject from the background by creating a subtle rim of light around the subject.




The back light, sometimes called a hair light, should be placed directly behind the subject in relation to the close-up camera.
From an overhead perspective you should be able to draw a straight line from the lens of the close-up camera, through the subject, directly to the back light. Note drawing above.
Although the elevation of the back light is often dictated by conditions, a 45 degree angle is most desirable.
If the back light is too low, it will be picked up by the camera in wide shots; if it's too high it will spill over the top of the subject's head, lighting up the tip of the nose, creating "the Rudolph effect," after a well-known reindeer.
Compared to the key, a smaller, lower-wattage instrument can be used for a back light for two reasons. First, back lights are often placed closer to the subject than the key light, and, second, with subjects confined to a limited area like a chair, the beams of most Fresnel lights can easily be "pinned down" (focused into a narrower beam) to intensify the beam.
By using only back lights with no front lighting a silhouette effect can be created. This can be used for dramatic effects or to hide someone's identity. (Note the photo of the woman reading the script.) In trying to successfully eliminate all front lighting — especially in an effort to hide someone's identity — watch out for reflected light from walls and the floor.


Outside the studio, the use of back light (generally in the form of sunlight) can add depth and separation to subject matter.
Note the effect of strong backlight in these photos.



At the same time, strong back light without adequate front light can create an exposure problem — unless you intentionally want to achieve a partial silhouette effect.
Remember, on many camcorders there is a back light control that's designed to compensate (to some degree) for this exposure problem. A careful balance between front light and back light can add a 3-D quality to scenes.
At this point you should study the effects of the various lights as shown here.

Background Lights
Background lights are used to illuminate the background area and add depth and separation between scene elements. (Remember that a back light is designed to light up the back of subjects and a background light is designed to light up the front of backgrounds.) The effect of the background light is shown below.


Once the background light is added, the lighting setup is complete, as shown in the drawing on the right above.
Any type of light can be used as a background light as long as it provides fairly even illumination across the background, does not hit the central subject matter, and is at the appropriate intensity.
If the background has detail or texture, you will want to put the background light on the same side as the key, as shown in the drawing above. This keeps the dominant light consistent in the scene.
Note in the photo on the left above that you can see the effect of both the back light and the background lights.
This brings us to the last major issue in formula lighting: the relative intensity of each of the lights. We'll cover that in the next module.
To see all of the lights we've discussed and their effect carefully study this photo.
________________________________________
Module 33
Updated: 03/10/2010



Lighting
Ratios

Unless each of the four basic lights we've discussed is at the proper intensity, the formula lighting approach — or any good lighting approach — will not work.
Since the key light is the dominant light on the subject, it must be stronger than the fill light. In color production the fill should be about one-half the intensity of the key.
This key-to-fill brightness difference is expressed in terms of a lighting ratio.
If the key light is twice as bright as the fill, the ratio will be 2:1, which is the standard for most TV applications. At the same time, as we've noted, some lighting directors, especially in TV news, prefer to make the key and fill the same intensity, resulting in a flat, high-key effect. This option will be discussed more fully later.
Using the 2:1 ratio, if the key light is 2000 lux, the fill will be 1000 lux; if the key light is 90 foot-candles (FC) the fill light would be 45 FC. Although many lights may be used in a scene, the lighting ratio refers to the ratio between just two lights: the key and the fill.
The key-to-fill ratio affects how the form, dimension, and surface texture of subject matter will be rendered. To achieve dramatic effects, and occasionally to meet the needs of special subject matter, ratios other than 2:1 can be used.
If a lux or foot-candle meter isn't available to establish the proper lighting ratios, a standard photographic light meter can be used. The f-stop difference between the intensity of lights can be translated into a lighting ratio.
To achieve a standard 2:1 ratio, for example, we assume that a light that (by itself) calls for an exposure of f:16 on a meter is twice as bright as one that registers f:11. Using this principle we can set up our key and fill lights according to the lighting ratios below.
________________________________________
Lighting Ratios
With differences (in f-stops) required
between key and fill light intensities
• 1:1 - no difference (flat lighting)
• 2:1 - One f-stop (for most color photography and videography)
• 3:1 - One and two-thirds f-stops (for general black and white photography)
• 4:1 - Two f-stops (for low-key dramatic effect)
• 8:1 - Three f-stops (for a very low-key dramatic effect). Because of video contrast-range limitations, ratios beyond this will probably render the dark areas as solid black, without discernible detail.
________________________________________
Recall that a simple way of establishing lighting ratios is by controlling the distances between the lights and the subject.
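As a rough illustration of both methods, the sketch below converts a key-to-fill ratio into the equivalent f-stop difference (one stop per doubling of light) and, assuming two identical instruments, into how much farther from the subject the fill would sit under the inverse-square law. The 2,000-lux key is a hypothetical reading, not a measurement.

    import math

    key_lux = 2000                                  # hypothetical key reading

    for ratio in (2, 3, 4, 8):
        fill_lux = key_lux / ratio                  # fill intensity for this ratio
        stops = math.log2(ratio)                    # one f-stop per doubling of light
        distance_multiplier = math.sqrt(ratio)      # inverse-square law: an identical
                                                    # fill this much farther away is
                                                    # 1/ratio as bright at the subject
        print(f"{ratio}:1  fill {fill_lux:.0f} lux, {stops:.1f} stops, "
              f"fill at {distance_multiplier:.2f}x the key's distance")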
Sometimes it's desirable to minimize or smooth out the surface detail. If highly diffused key and fill lights are used close to the camera there will be a flattening of the appearance of subject matter and a minimizing of surface detail and texture.
Reducing the key-to-fill lighting ratio to 1:1, with the key intensity equal to the fill intensity, adds to this flat lighting effect. We'll re-visit our jewelry box to illustrate this. The first photo below was shot with a low lighting ratio (flat lighting), the second goes to the other extreme with a high key-to-fill lighting ratio.



Although form and dimension are sacrificed in flat lighting, this type of lighting can be useful in minimizing wrinkles and skin problems, and in creating a soft, flattering effect for the human face. This could be very important in a cosmetic commercial, for example.
In contrast, by increasing the key-to-fill ratio to 5:1 and beyond, surface detail and texture will be emphasized—especially if a hard key light is used at an angle from 65 to 85 degrees off to one side, as shown on the right above.
The same hard-soft lighting differences are present outside. The two photos below were taken of the same section of concrete blocks on a wall. The photo on the left was taken on an overcast day, and the photo on the right was illuminated by the overhead sun on a clear day. Here we can see the difference in both the quality (hardness and softness) of the sunlight and the lighting ratio. On the overcast day the key-fill ratio ends up being close to 1:1, because the light is diffused.


Back Light Intensity *
To provide the subtle rim of light around subjects the back light has to be slightly brighter than the key. In the case of an on-camera person, back light intensity will depend on the hair color and clothes.
Subjects who have brown hair and clothes in the mid-gray range will require a back light one and one-half times the intensity of the key. Assuming a key light intensity of 1,500 lux, the back light would then be 2,250 lux.
If you don't have a meter that reads in lux or foot-candles, you can simply move the back light slightly closer to the subject than the key light (with the key and fill lights on), until you see the desired subtle rim of light around the subject.
A person with dark hair and clothes will take more back light than a blond wearing light clothing. Be careful to observe the effect on a monitor or in a well-adjusted camera viewfinder.
With subjects who have hair and clothing of similar reflectance, the intensity of the back light is not too difficult to determine. But difficulties arise when a person has dark hair and a light coat, or blond hair and dark clothing. In such cases the beam of the back light(s) can be partially masked off with barn doors so that the brightest part of the beam will hit the dark areas.
The color temperature of the back light is not nearly as critical as it is with key and fill lights. Within limits, dimmers can be used.

Background Light Intensity
Because the background is of secondary importance to the center of interest, it should receive a lower level of illumination. Generally, the intensity of the background light should be about 2/3 the intensity of the key light. This will ensure that the central subject matter stands out slightly from the background.
In case you've forgotten Math 101, you can get two-thirds of any number by multiplying it by two and dividing the result by three. Therefore, if the key is 2,000 lux, the light falling on the background should measure about 1,300 lux.
If you are using a photographic meter, you can set the background light 1/2 to 2/3 of an f-stop less than the key light.
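Pulling these intensity guidelines together (fill at about one-half the key, back light at roughly one and one-half times the key, background at about two-thirds of the key), here is a minimal sketch that works out all four levels from a single key reading. The 2,000-lux key is again a hypothetical figure.

    def light_plan(key_lux, key_fill_ratio=2):
        # Guideline values from these modules: fill = key / ratio,
        # back light about 1.5x the key, background about 2/3 of the key.
        return {
            "key": key_lux,
            "fill": key_lux / key_fill_ratio,
            "back": key_lux * 1.5,
            "background": key_lux * 2 / 3,
        }

    for name, lux in light_plan(2000).items():
        print(f"{name:10s} {lux:6.0f} lux")
    # key 2000, fill 1000, back 3000, background about 1333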
Since backgrounds are typically one-dimensional (flat) and of secondary importance to the main subject matter, the placement of the lights and their angles is not critical.
But, the light across the background should be even, especially if you are using visual effects such as chroma key. By walking across the background area with an incident light meter, you can find dark or bright spots.

Subject-to-Background Distance
Shadows on backgrounds from mic booms, moving talent, etc., can be distracting and annoying. Background lights will lighten, but normally not eliminate, shadows. However, by moving subjects 3 meters (about 10 feet) or more away from a background, you will find (if the key is at an elevation of 45 degrees) that shadows will end up on the floor, out of sight, instead of on the back wall behind the subject.
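The distance needed is simple trigonometry: with the key at a 45-degree elevation, a standing subject's shadow reaches along the floor a distance roughly equal to the subject's height, so a background farther back than that stays clean. The sketch below illustrates the idea under that assumption; the 1.8-meter subject height is hypothetical.

    import math

    def shadow_reach(subject_height_m, key_elevation_deg):
        # With a key at the given elevation, the tip of the subject's shadow
        # lands about height / tan(elevation) behind the subject on the floor.
        return subject_height_m / math.tan(math.radians(key_elevation_deg))

    print(f"{shadow_reach(1.8, 45):.1f} m")   # about 1.8 m -- a 3 m gap clears it
    print(f"{shadow_reach(1.8, 30):.1f} m")   # drop the key to 30 degrees and the
                                              # shadow reaches about 3.1 m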
Sometimes, however, it's necessary for talent to move in close to a background. An example would be someone explaining a chart on a wall. The use of a large softlight would render the shadows from the front lights almost invisible — if you don't mind the soft, diffused look it will create. Otherwise, you will just need to use a key angle that doesn't create distracting shadows.
Unduly dark backgrounds can be brightened up by using a higher level of illumination, and bright, intrusive backgrounds can be "pulled down" by lowering background illumination.

Multiple Purpose Lights
Occasionally, you can make lights serve dual purposes and still maintain the three-point lighting effect. Here, a one-on-one interview is lit with three lights instead of six. Note that each of the (very carefully placed) lights serves two purposes.
If distances are carefully controlled, the lights will be 50 percent brighter as back lights than as keys.
This can work well under carefully controlled situations where you know in advance the color of each person's hair (or, possibly, lack of it) and the color of their clothes.
In using this approach you won't have much latitude in accommodating special needs. For example, the chairs can't be moved without upsetting the lighting balance.
Now that we've covered the basics of lighting, in the next module we'll take up some special lighting situations.
________________________________________
*Back light can be one or two words, depending on the context.
________________________________________

Module 34
Updated: 04/02/2010



Special Lighting
Situations


Although the three-point lighting approach we've discussed can be relied upon to produce excellent results in most situations, we also need to look at some special lighting needs.
Let's start with a simplified design that creates a softer effect than the three-point formula approach.
In the drawing below note that a soft front light replaces both the key and fill. The umbrella reflector shown here, or a light bounced off a large white card, will provide similar results. In this case the broad area covered by the key acts as both a key and a fill.
Although the picture produced will not provide the same depth and dimension as formula lighting, the softer effect may be more flattering for subjects, especially if wrinkles and age lines are an issue.
If the background is close behind a subject, this may eliminate the need for a background light. Since you are using a diffused light source, the background will probably be softly lit and shadows on the background will be less noticeable. A backlight is still desirable to provide needed subject-background separation.

Using a Window Light as a Key
Sunlight from a window can also be used as a key. The fill light shown here comes from an incandescent light on a stand.
Without a blue filter over this type of light, it will look unacceptably yellow compared to the sunlight.
The photo at the beginning of this module shows the uncorrected difference between these two color temperatures.
A light meter (or a good video monitor) can aid in creating the desired 2:1, key-fill lighting ratio.

Bounced Light
For short ENG segments bounced light can be used. The drawings below show approaches for large and small rooms.
Although the soft lighting effect leaves a bit to be desired, this approach may be adequate for short segments.
Note that this design uses a single light bounced off the ceiling. Obviously, you must be working in a room with a relatively low white or light-gray ceiling. The white, acoustic tile commonly found in offices works well.
Bounced light creates a soft, even light throughout the entire room, an effect that is similar to what we are used to seeing with overhead fluorescent lights.
If the camera is back far enough, a light mounted on top of the camcorder can be aimed at the ceiling for a bounced light effect. The camera and attached light should be far enough back from the subject so that the light will come down at an acceptable angle. If the light is too close to the subject, dark eye shadows will result. If the walls of the room are a light, neutral color, they will reflect part of the bounced light and more fully fill in shadow areas.
The second drawing assumes a smaller room. To keep the light from coming down on the subject at too steep an angle, it's aimed at the back wall. Again, this approach creates an extremely soft effect, which may or may not be desirable.
Although much light will be lost in the double-bounce process, with today's highly sensitive professional cameras this should not be a problem.
To help compensate for the color that the ceiling and walls will add to the light, be sure to color balance the camera under the bounced rather than the direct light.
More traditional (and better) lighting for a typical office interview is covered in this article.

Lighting Multiple Subjects
Thus far we have covered the lighting of one subject only. Life isn't always that simple, of course.
First, we'll take a look at a typical three-person interview setup.
Note below that even though things appear to be much more complex, we've only just repeated the basic three-point lighting setup for each of the three people.

A simple news, weather and sports set is shown below. By panning to one side or the other, the two cameras can get one of the co-anchors, or the sports or weather person.
Also note that the positions of the key and fill lights provide three-point lighting coverage for each of these camera positions. Background lights are not shown — and they may not be needed because of the accumulation of ambient light on the background.
By studying the drawing you can see how each subject has a key, a fill, and a back light. Through the use of barn doors the light from the Fresnels (in red) can be confined to the intended subjects.


As previously noted, some lighting directors prefer a flat, high-key approach to lighting news anchors. This is done by either making the key-fill ratios even (1:1), or by placing key lights directly over the cameras. But, as we've noted, putting key lights directly over cameras can make teleprompters difficult to read.
________________________________________
Now let's take a look at a rather complex lighting setup.

Note that two large Fresnels are used to key all of the subjects on the left and five smaller Fresnels are used as back lights on these subjects. One of these back lights also keys the person on the right. Two scoops provide the necessary fill light for the entire set. Barn doors keep lights intended for one person from spilling over onto another person or area.
Sometimes you may need to develop non-formula lighting designs for special dramatic effects. The author once did a cabaret-style network series where the following low-key lighting design was used.

Area Lighting
So far, we've covered subjects conveniently confined to one place. But what if one or more subjects must be free to roam around a set while on camera? There are four ways this can be handled.
1. First, the entire area can be flooded with a base light, which is an overall, even light. Scoops, LED banks, or color-balanced fluorescents will work here, assuming the area isn't too large.
Important close-up camera talent positions are then keyed with lights at twice the intensity of the base light. Small pieces of tape placed on the floor can provide marks for the talent to "hit" as they move from one major camera position to another.
With this approach you will probably not want to barn off the lights any more than necessary, because illuminated areas should be kept large enough to give the talent a margin of error in missing their marks.
2. The second approach involves keying, filling, and backing the entire area (generally, a dramatic setting). Here the whole working range — assuming it's not too large — is treated as a single subject. This will require a powerful (high-wattage) key light positioned at a great enough distance to cover the entire area.
If the key is placed in the center of the set, 90 degrees to the back wall, the angle will be appropriate for cameras positioned at each side of the set. One or more Fresnels with diffusers placed at either side of the set can serve as fills. (Scoops or banks of color-balanced fluorescent lights will not throw light far enough to reach the back of a large area.)
If multiple keys are needed to achieve a high enough level of illumination over the set, they should be positioned as close together as possible to reduce the problem of multiple shadows and multiple catchlights in eyes.
Over a large area multiple back lights will have to be used. They should be aimed to create slightly overlapping pools of light over the whole talent area. The talent should be able to walk from one area to another without obvious variations in back light.
3. The third approach to lighting a large area is to divide the set into individual areas, and key, fill, and back each area. Often, large interior settings are divided into four or more parts for keying, filling, and backing.
Typically, the lights at the edge of each of these areas will just begin to merge. With this approach it's important to make sure that close-ups on the talent will not be in the transition points between lighted areas.
Keep in mind the sources of light that may be suggested by the setting — visible table lamps, windows, etc. Place the key lights so they will be consistent with these suggested sources of illumination. As we've previously noted, this is called following source.
4. The last approach to lighting a large area would be appropriate to simulate an interior at night. This technique would use a low key lighting ratio from 3:1 to 6:1, and the talent would move in and out of specifically defined set areas. Only important, close-up areas would be lit, leaving the rest of the scene relatively dark.
With this approach it's especially important to follow source; i.e., place keys so that they are consistent with the visible or suggested sources of light within the setting. If a person were sitting next to a reading lamp, the key would have to be angled so that the light would appear to be coming from the table light. In some cases you may want to use a low-level base light over the entire set to keep in-between areas from going too dark.

Using a Stand-In
Whatever lighting approach you use, the lighting can be checked on camera by having a stand-in (a person of similar height, skin color, and clothing to the talent involved). This person should slowly walk through the various designated talent positions on camera as the lighting is carefully observed on a good color monitor.
During the show's dress rehearsal with the actual talent any remaining problems can be spotted and then fixed during the break between the dress rehearsal and the actual production.

Existing (Natural) Light
In news and documentary work the most "honest" approach to lighting is to make use of the existing (natural) light present at the location. This shows things as they really are (within the limitations of the video process), rather than after they have been altered or embellished by "artificial" lighting.
The problem is that existing light is often unsuitable. The contrast ratio can be too extreme; there can be mixed sources of light (daylight, incandescent light and fluorescent light all at the same location); or, the light level can be too low for a quality video signal.
Note that the existing light photo here suffers from both underexposure and a high contrast ratio.
There's also another consideration: people are used to seeing interviews, etc., enhanced by good lighting. Without it, it appears to many viewers that "the picture's dark," or "the picture isn't good."
This is not unlike the situation photojournalism faced a few decades ago when existing light still photography was first used in publications such as Life magazine. Since people were used to seeing flash-on-the-camera photos, natural light photography seemed unnatural — even though it accurately showed the actual conditions being photographed. (Flash was necessary in the early days of photojournalism because of the relatively slow speed of film and lenses.)
Occasionally in dramatic productions you have to fake lighting to make it look real.
Here, the light from a computer screen is insufficient to illuminate the child's face. So to simulate this setting, a light with a blue filter is positioned on the other side of the computer monitor.
________________________________________
As videographers strive to add an artistic dimension to their work, they start to rely more and more on shadows to convey meaning. By studying these photos again you can see a few of the ways this can be done.
Module 35
Updated: 04/02/2010




Altering Appearances

There are situations when you will want to consider special lighting approaches to accommodate difficult subject matter or to alter the look of a subject.

Minimizing Surface Detail
First, let's look at how you can combine what we've covered to completely minimize surface detail. Although we've previously seen how quality (coherence) of light and flat lighting can do this, now we're going to combine three approaches to further enhance this effect. The three are:
• decrease the key and fill angles
• use soft light sources
• reduce the lighting ratio
This lighting plot shows these modifications.
1. Note that here the front lights have been moved as close to the cameras as possible. In the process, the detail-revealing shadows have been virtually eliminated.
2. Next, note that the soft light sources are equipped with spun-glass diffusers. The resulting ultra-soft illumination further minimizes detail-revealing shadows.
3. Finally, the lighting ratio between the two lights has been reduced from the normal 2:1 to 1:1, which means that the two front lights are of equal intensity.
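To make the ratio arithmetic concrete, here is a minimal sketch (illustrative only; the function name and the intensity units are made up for this example) of how a target key-to-fill ratio translates into light intensities:

def fill_level_for_ratio(key_intensity, ratio):
    # For a key:fill ratio of N:1, the fill must be 1/N of the key's intensity.
    return key_intensity / ratio

key = 1000                              # key light output in arbitrary units
print(fill_level_for_ratio(key, 2))     # normal 2:1 ratio    -> 500.0
print(fill_level_for_ratio(key, 1))     # flat 1:1 ratio      -> 1000.0 (equal to the key)
print(fill_level_for_ratio(key, 4))     # contrasty 4:1 ratio -> 250.0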
Keep in mind that you only need to change front lighting; the position or intensity of the back light and background light (if needed) will not change.
In most cases you will not want to go to the extreme of making all of these changes. (Note below this approach makes the chocolate chip cookie on the left look pretty dull and lifeless.)

For example, you may decide to stay with a 2:1 lighting ratio and just move the lights closer to the camera and use diffusers; or, you might want to check the effect of just using diffusers.
Of course, for the ultimate in soft lighting you can use a lighting tent as illustrated in an earlier module. However, this approach is impractical for lighting people or large areas.
________________________________________
Now let's go in the opposite direction and maximize surface detail, as shown on the right above. This can be done by reversing all three previous modifications.
• increase the key-to-fill angles.
• use a hard light source for a key
• increase the lighting ratio
1. Note in the drawing below that the key has been moved to about an 85-degree angle to maximize shadows and surface detail.
2. Next, you will want to use a hard source of light for a key. A focusing light such as the ellipsoidal spot we talked about earlier will create the desired effect.
3. Finally, we would want to increase the lighting ratio to at least 4:1. By eliminating the fill light altogether we would go considerably beyond 4:1 and maximize the lighting ratio.
Again, it's generally not desirable to alter the back light position or intensity, or the intensity or position of a background light.
The photo on the left below shows how effective side lighting is in bringing out detail in this ancient carving. Without the effect of this type of light, much of the detail would be lost.




Two hard key lights, one on either side of the coin, lit the close-up of a 50-cent piece above. The angles of these lights are so oblique (and precisely aimed) that only the raised portions of the coin are illuminated. This keeps the background areas dark.
We are used to seeing primary light sources coming from above subjects -- typically, sunlight, or the 45-degree angle of a key light.
When the primary (key) light is placed at a low angle, a dramatic or even mysterious effect can be achieved. (Note photo above.)

High Key and Low Key
Two terms that are often misunderstood in lighting are high key and low key.
These terms have nothing to do with overall light intensity; instead, they refer to the angle of front lighting (generally the key light) and the resulting presence or lack of shadow areas. The photos below might help in seeing this distinction.
A high key scene would be an evenly lit scene, one that has no intrusive shadow areas. (Note the surface of the strawberries.)
Sitcoms, variety shows, etc., are normally lit high key. The first cookie shown at the beginning of this module was lit in a very high key fashion.
On the other hand, a scene lit in low-key would have pronounced shadow areas. (Note photo on the right.)
It's important to remember that in all these examples the intensity of the key light could be exactly the same. We are not talking about "bright lights" in high key setups and "dim lights" in low-key setups.
In actual fact, under "dim light" we would simply have to open the camera's iris in order to achieve adequate exposure, and then our so-called "low key" effect would disappear. It's only the angle of the key and the lighting ratio that make the difference between high key and low key.
________________________________________
If you haven't already done so, check out the examples shown here to study how lighting affects the appearance of subject matter.
________________________________________
Module 36
Updated: 04/02/2010



Lighting:
Some Final Issues

Before we close the topic of lighting, there are still a few important issues that need to be covered.
Video cameras used to be inferior to film stocks in their ability to handle brightness ranges. This gave film a definite advantage.
However, today's CCD/CMOS cameras have not only caught up with the brightness range capability and general look of film, but the best video cameras can handle brightness ranges beyond those of typical film stocks.
Not long ago lighting directors had to approach lighting for film in a different way than lighting for video. Not anymore. George Spiro Dibie is an award-winning Hollywood lighting director (five Emmy awards and seven additional nominations) with many years of experience lighting for film and TV. Dibie says, because of today's CCD cameras, he can now "...light for my video cameras exactly the way I light for film cameras."

Following Source
Dibie also emphasizes a concept we introduced earlier: following source. "Windows, doors, lamps...these are the sources of light in a scene. [For]...one camera or multiple cameras, you deal with the feel of the source."
The technique of following source has now become a standard approach in many dramatic productions.
To do this a lighting director must first determine where the sources of illumination might be or appear to be within a scene.
If no sources are obvious or suggested, you then decide where logical sources of illumination might be.
In a poolroom scene, for example, the light source might be a light above the pool table — even though it might not be visible in the scene. It then becomes a matter of keying important camera close-up positions so that they are consistent with this suggested source of illumination.

Drawing a Lighting Plot
Part of the preproduction process involves carefully thinking through your lighting design.
In positioning lights in a studio considerable time is involved in climbing ladders and hanging and connecting lights. To reduce the problems, especially in complex settings, you need to plan out the whole effect with a paper and pencil or a computer drawing program before you start.
If you are lucky enough to be able to get someone else to actually hang the lights for you, you'll still need to have a way of indicating where you want each one placed. But even if you must hang the lights yourself, having it all planned out on paper will save you time and trouble.
In either case you will need to draw a lighting plot with all of the lights and their positions noted. Most production facilities will have a basic lighting grid form for their studio that you can use as a starting point. Here's an example.

Studio drawings should include the grid lines shown above. Note that any lighting position can be indicated by a letter and number designation. For example, position "J-7" ends up being in the middle of this studio.
Once the lights are hung, they can be plugged into electrical outlets — typically near the grid pipe cross-points — and the lights can be programmed into a lighting control system to be remotely switched on and off or dimmed.
The typical three-prong, locking twist connector used for connecting lights is shown on the right.


Setting up Lights
In the studio, lights are commonly attached to a lighting grid with C-clamps and safety chains. On-location lights are normally mounted on floor stands. A portable lighting kit is shown on the right.
Key and fill lights are generally easy to position; stands are just placed at 45-degrees on either side of the camera at an appropriate height.
On locations, back lights can't be hung from a lighting grid as they can in a studio, so other solutions must be considered. A back light may be clipped to the top of a bookcase, an exposed rafter, or any convenient, out-of-view anchoring point.
If this option isn't available, you might consider constructing a lighting goal post out of black plastic (PVC) pipe over the background area (outside of the camera's view) and clipping the light to the center. One or more back lights can be hung from the middle and the wires can be taped to the pipe with black electrical tape.

Lighting Boards
Typically, dozens of lights are required for each studio setting. In an elaborate dramatic production involving several sets (all of which may have to be ready for use at the same time), several hundred lights might be involved.
Being able to control all of these lights — switch them on and off, dim them to required settings on cue, etc. — can be a daunting process.
Although simple lighting boards such as the one pictured on the left can handle basic studio productions, major dramatic productions require a computer-based system.
Typically, a software program displays each of the ceiling grid connectors on the computer screen. (Note the photo on the right below.)
You will recall from an earlier illustration that any grid lighting connector can be indicated by a letter and number combination. After a lighting instrument is plugged into one of these connectors, power settings can be programmed into the computer.
Once this is done, the lights can be assigned to groups and all controlled together. For example, switching an interior scene from a daylight to a night effect may involve simultaneously dropping the level of the background and fill lights while dropping the intensity of several key lights.
Once all the lights are programmed, scene and time-of-day changes require only a few mouse clicks to activate preprogrammed settings.
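As a rough illustration of the grouping idea, here is a small sketch in Python; the grid positions, group names, and levels are all hypothetical and do not reflect any particular lighting console's software:

dimmers = {}  # current level (0-100%) for each grid connector, e.g. "J-7"

groups = {
    "keys":       ["J-7", "H-5", "K-9"],
    "fills":      ["G-4", "L-6"],
    "background": ["B-2", "B-3"],
}

scenes = {
    "day":   {"keys": 90, "fills": 60, "background": 70},
    "night": {"keys": 55, "fills": 25, "background": 20},
}

def apply_scene(name):
    # Set every light in every group to the level stored for this scene.
    for group, level in scenes[name].items():
        for position in groups[group]:
            dimmers[position] = level

apply_scene("night")    # switch the whole set from a daylight to a night look
print(dimmers["J-7"])   # -> 55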

On-Location Power Issues
In setting up on-location lighting it's often necessary to figure out how many lamps a fuse or circuit-breaker can handle.
Although the standard house current voltage in the United States and many other countries is between 110 and 120, in doing calculations it's common to assume a voltage of 100. This not only makes it easier to do calculations, it automatically provides a safety factor. By assuming a voltage of 100, the following formula can be used:
________________________________________
watts divided by 100 = amps
________________________________________
(The standard voltage in your country may vary, and the base of these calculations will have to change accordingly.) Assuming 100 volts, and using this formula, a 500-watt lamp would draw 5 amps. A 20-amp fuse or breaker could handle up to 2,000 watts, a 30-amp fuse up to 3,000 watts, etc.
When setting up multiple lights the total wattage is simply added together.
If a 1,000-watt key light, a 500-watt fill, a 500-watt back light and a 500-watt background light were all plugged into the same circuit, the combined amperage (which comes to 25 amps) would blow a standard 20-amp fuse or breaker. (Actually, it might take a few minutes to heat up the breaker enough to trip it — just long enough to get a good start on taping a segment!)
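Here's the same rule of thumb as a short calculation (the wattage figures match the example above; the 100-volt assumption is the simplification described earlier):

def amps(watts, assumed_volts=100):
    # Assume 100 volts, so amps = watts / 100; the low figure builds in a safety margin.
    return watts / assumed_volts

lights = [1000, 500, 500, 500]   # key, fill, back, and background lights in watts
total_watts = sum(lights)
total_amps = amps(total_watts)

print(total_watts, "watts ->", total_amps, "amps")   # 2500 watts -> 25.0 amps
print("Trips a 20-amp breaker?", total_amps > 20)    # True -- split the load across circuits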
To keep from overloading a fuse or circuit breaker, it's often necessary to run extension cords from separately fused circuits — possibly from an adjoining room. But if these extension cords are not made of heavy gauge wire they can lower voltage to lamps, resulting in drops in color temperature.
Since power (total wattage) in some locations is limited, you may have no choice but to bring in an electrician to run a temporary, high-amperage line from the main fuse box.
In remote areas a generator truck or a portable, gas-powered generator may have to be considered. Generators are available for film and TV production that have sound damping enclosures.

Film vs. Typical TV Lighting
Compared to dramatic film scenes, video (especially sitcoms, game shows, etc.) often looks a bit flat and dimensionless. Although some people conclude from this that the "look" of video is inferior to film, as we've noted, the reason actually centers primarily on differences in lighting. Since film is almost always shot in single camera style, lighting angles and intensities (not to mention audio, make-up, etc.) are optimized for this one camera angle and distance.
A typical television sitcom involves three or four cameras spanning almost 160 degrees. Since the director needs to be able to cut to any camera at any time, the lighting must be able to hold up throughout this entire range.
To avoid the possibility of having major shadow areas, the safest way of lighting this type of production is to light relatively flat (high-key) using multiple key lights to cover every camera angle. This is not an issue in shows such as comedy, where lighting is normally kept bright and high key, but it can hurt dramatic productions where mood and "atmosphere" should be significant factors.
If the time and budget allow, video can be shot single camera, film-style. When this is done, video — and especially digital/HDTV — can achieve the same dramatic quality we're used to seeing in film.

The Art of Lighting and Conclusion
In describing the basic techniques for lighting in these modules we've covered approaches that will provide good results for most studio and field work. At the same time, no attempt has been made to cover complex lighting needs; numerous books have been written on this subject.
The lighting required for sophisticated, multiple-camera dramatic productions requires the skill and artistic ability of an experienced lighting director. At this level of sophistication lighting moves into the realm of a true art form (and one that is even recognized with Emmys and Academy Awards).
In the next section we'll turn to the audio part of the TV medium.
Module 37
Updated: 04/03/2010



Television Sound:
The Basics

Until rather recently in television far more attention was paid to video than to audio. "Good sound" was when you could make out what was being said; "bad sound" was when you couldn't.
This has changed. With the advent of stereo, 5.1 surround-sound, and home theaters, audiences have much greater expectations.
Before we can discuss some of the basic audio production concepts, sound itself must be understood.
Sound has two basic characteristics that must be controlled: loudness and frequency.

Loudness
Although sound loudness is commonly measured in decibels (dBs), that term actually refers to two different things.
First is dBSPL (for sound pressure level), which is a measure of acoustic power. These are sounds we can directly hear with our ears.
These decibels go to and beyond 135, which is considered the threshold of pain and, by the way, the point at which permanent ear damage can occur. If your ears "ring" after being around a loud sound, this should be a warning sign that you have crossed the threshold of potential hearing damage. (The damage, which is irreversible, often goes unnoticed, which probably explains why the average 50-year-old in some countries has better hearing than many young people in the U.S.)
Musicians who must be around high-level sound use musician's plugs -- special earplugs that attenuate sound level without distorting the frequency range. In case you are thinking about starting your own rock band, HEAR, (Hearing Education and Awareness for Rockers) at hearnet.com has more information.
Various sound pressure decibel levels (in dBSPL's) are shown here.
Sound dBs
Jet Aircraft Taking Off 140-150
Rock Concert / Gunshots 135-140
Jackhammer at 15 meters / Subway 85-90
Average City Street / Restaurant 70-75
Quiet Conversation / Phone Dial Tone 60-80
Office Environment 45
Whisper at 3 meters (10 feet) 30
"Silent" TV Studio 20
________________________________________
The second use of the term decibel, dBm (for the milliwatt reference level) is a unit of electrical power.
In audio production we are primarily interested in dBm, which represents levels of electrical power going through various pieces of audio equipment.
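Although the module doesn't give the formula, dBm is by definition power measured relative to one milliwatt (dBm = 10 × log10 of the power in milliwatts). This short sketch shows why 0 dB is just a reference point and why readings below the reference show up as negative dB values:

import math

def to_dbm(power_milliwatts):
    # dBm expresses power relative to a 1-milliwatt reference level.
    return 10 * math.log10(power_milliwatts)

print(to_dbm(1))    #  0.0  -> exactly the 1-milliwatt reference (0 dBm)
print(to_dbm(2))    #  ~3.0 -> doubling the power adds about 3 dB
print(to_dbm(0.5))  # ~-3.0 -> below the reference reads as a negative dB value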
Two types of VU meters for measuring the loudness of sound are in wide use: the digital type and the analog type.
The 0 to 100 scale on the left side of this illustration indicates modulation percentage (percentage of a maximum signal), and the scale on the right is in dB's.
Contrary to what logic might dictate, 0dBm (generally just designated 0dB on a VU meter) is not "zero sound" but, in a sense, the opposite, the maximum desirable sound level. (Granted, that's a bit confusing, but, then again, we didn't make up this system!)
The 0dB point on the meter is just a reference point. Therefore, it's possible to have a sound level on the meter that registers in negative dBs, just as it's possible to have a temperature of -10 degrees Centigrade or Fahrenheit.



The animated versions above illustrate how digital meters respond to sounds.
The VU meter on the right is the traditional analog meter that has been around in one form or another since the dawn of radio.
Although easy to read, most versions do not accurately respond to short bursts of loud sound.
The dB level going through audio equipment must be carefully controlled. If the signal is allowed to pass through equipment at too low a level, noise can be introduced when the level is later increased to a normal amplitude (audio level).
If the level is too high (significantly above 0 dB or into the red areas on the VU meter), distortion will result -- especially with digital audio. To ensure audio quality, you must pay constant attention to maintaining proper audio levels.
The animated meter shown here indicates a sound level that is a bit too high. Ideally, the needle should not go deeply into the red area this often.

Frequency
Frequency relates to the basic pitch of a sound -- how high or low it is. A frequency of 20 Hz would sound like an extremely low-pitched note on a pipe organ -- almost a rumble.
At the other end of the scale, 20,000 Hz would be the highest pitched sound that most people can perceive, even higher than the highest note on a violin or piccolo.
Frequency is measured in Hertz (Hz) or cycles per second (CPS). A person with exceptionally good hearing will be able to hear sounds from 20-20,000 Hz.
Since both ends of the 20-20,000Hz range represent rather extreme limits, the more common range used for television production is from 50 to 15,000 Hz. Although it doesn't quite cover the full range that can be perceived by people with good hearing, this range does cover almost all naturally occurring sounds.

The Frequency-Loudness Relationship
Even though sounds of different frequencies may technically be equal in loudness (register the same on a VU meter), human hearing does not perceive them as being of equal strength.
The red line on the graph (roughly) shows the frequency response of the human ear to different frequencies.
Because of the reduced sensitivity of the ear to both high and low frequencies, these sounds must be louder to be perceived as being equal to other frequencies. (A much more detailed version of the relationship between audio frequency and perceived loudness is available here.)
You'll note that a good-quality microphone (the green line) is relatively "flat" in the all-important 50-15,000 Hz range.

Listening Conditions
Equipment and listening conditions also greatly affect how different frequencies will be perceived. To compensate for some of these problems, we can adjust bass and treble controls of playback equipment.
More sophisticated equipment will include a graphic equalizer, which goes a step further and allows specific bands of frequencies to be individually adjusted for loudness.
A graphic equalizer may be necessary to help match audio segments recorded under different conditions, or simply to customize audio playback to the acoustics of a specific listening area.
Note that the graphic equalizer shown here can control nine specific frequency areas (bands).
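To illustrate the idea (this is a bare-bones sketch, not how a real nine-band equalizer is built), the following Python fragment splits a signal's spectrum into bands and applies an individual gain to each:

import numpy as np

def equalize(samples, sample_rate, band_gains):
    # band_gains: list of ((low_hz, high_hz), gain) pairs applied to the spectrum.
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    for (low, high), gain in band_gains:
        spectrum[(freqs >= low) & (freqs < high)] *= gain
    return np.fft.irfft(spectrum, n=len(samples))

# Example: one second of a 100 Hz + 1 kHz test tone; cut the bass band, boost the mids.
rate = 48000
t = np.arange(rate) / rate
tone = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 1000 * t)
out = equalize(tone, rate, [((20, 200), 0.5), ((500, 2000), 1.5)])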
Any piece of audio equipment -- microphone, amplifier, recorder, or audio speaker -- can adversely affect the fidelity of sound. However, it's the microphone (the initial device that transduces sound waves into electrical energy) and the audio speaker (the device that changes electrical energy back into sound waves) that represent the weakest links in audio quality.
To some degree it's possible to use graphic equalizers and similar audio equipment to "clean up" the frequency response of a poor microphone. However, even the most sophisticated audio techniques can't work miracles. Thus, the better the original audio signal, the better the final product will be.

Room Acoustics
Sound, both as it's recorded and played back, is more affected by the acoustics of a room or studio than most people realize.
In an effort to create totally soundproof studios, early radio stations used to use thick carpets on the floors and heavy soundproofing on the walls.
Although possibly successful as soundproofing, the result was a lifeless, acoustically "dead" effect that we're not used to hearing in a normal listening situation, such as our living room. Therefore, a slight bit of reverberation is both desirable and realistic.
Two types of soundproofing material are shown on the left.
A room with a tile floor and hard, parallel walls will reflect sound so much that it interferes with the intelligibility of speech. Sometimes it's desirable in these situations to place free-standing sound absorbing items in the room -- things like sofas and rugs -- to break up sound reflections and reduce reverberation.
The ideal room for recording or listening to sound has just enough reverberation to sound realistic, similar to your living room possibly, but not enough to reduce the intelligibility of speech.
________________________________________

________________________________________
You can find information on film sound theory here.
________________________________________
Module 38
Updated: 04/03/2010


Part I


Microphones


Major Microphone Designs

There are six common microphone designs:
• hand held -- the type held by on-camera talent or used for on-location interviews
• personal mic (lavaliere / clip-on mic) - Whether hung from a cord around the neck (lavaliere) or clipped to clothing, these are all referred to as personal mics.
• shotgun - used for on-location production to pick up sounds a moderate distance from the camera
• boundary effect microphone -- also called PZ or PZM mics. These rely primarily on reflected sounds from a hard surface such as a tabletop.
• contact mics -- which pick up sound by being in direct physical contact with the sound source. These mics are generally mounted on musical instruments.
• studio microphones -- the largest category of microphone. These include a number of application designs that we'll discuss.
These six categories include different transducer types, or approaches to converting sound waves into electrical energy.
In this module we'll discuss the most popular types of mics and their characteristics, starting with —


Dynamic Microphones
The dynamic mic (also called a moving-coil microphone) is considered the most rugged professional microphone.
This type of mic is a good choice for electronic newsgathering (ENG) work, where a wide variety of difficult conditions are regularly encountered (such as this ENG report on a fire).
In a dynamic microphone sound waves hit a diaphragm attached to a coil of fine wire. The coil is suspended in the magnetic field of a permanent magnet.
When sound waves hit the diaphragm they move the coil of wire within the magnetic field. As a result, a small electrical current is generated that corresponds to the original sound waves. This signal must be amplified thousands of times.
When small size, optimum sensitivity, and the best quality are all prime considerations, another type of mic, the condenser mic, is often preferred.

Condenser/Capacitor Microphones
Condenser microphones (also called capacitor or electret condenser mics) are capable of top-notch audio quality.
As shown on the left, they can be made so small that they are almost invisible. (But, the smaller they are, the more expensive they tend to be!)
Condenser mics aren't as rugged as dynamic mics, and problems can result when they are used in adverse weather conditions.
Condenser mics work on the principle that governs an electric condenser or capacitor. An ultra-thin metal diaphragm is stretched tightly above a piece of flat metal or ceramic. In most condenser mics a power source maintains an electrical charge between the elements.
Sound waves hitting the diaphragm cause fluctuations in an electrical charge, which then must be greatly amplified by a preamplifier (pre-amp). The pre-amp can be located within the microphone housing or in an outboard electronic pack. Although most pre-amps output an analog signal, some of the newer models immediately convert the output to digital.
Because they require a pre-amp, most condenser mics, unlike the dynamic mics discussed earlier, require a source of power, either from an AC (standard alternating current) electrical power supply or from batteries.
An AC power supply for a condenser mic is sometimes built into an audio mixer or audio board. This is referred to as a phantom power supply. When this type of power supply is used, the mic cord ends up serving two functions: it delivers the signal from the mic to the mixer and it carries power from the mixer to the pre-amp of the condenser mic.
Some camcorder instructions recommend condenser mics because the pre-amp provides a high enough audio level to reduce undesirable system noise.
Of course, using batteries to power the pre-amp of the condenser mic is more convenient -- you don't have to use a special mixer or audio board connected to an electrical power source.
But battery-powered condenser mics introduce a problem of their own: at the end of their life cycle the batteries can go out without warning.
To get around any unexpected problems, especially on important productions, two miniature condenser mics are often used together. If one mic goes out, the other can immediately be switched on. This double microphone technique is called dual redundancy, a term that is somewhat redundant in itself.
________________________________________
Summary of Dynamic and Condenser Mic Pros and Cons
Dynamic Mic Advantages
• Rugged
• Lower Cost
• No Power Required
Dynamic Mic Disadvantages
• Lower Sensitivity and Power Output
• Larger and Heavier
• Slower Response Time
• Not the Best Choice for Maximum Audio Quality

Condenser Mic Advantages
• More Sensitive
• Better Audio Quality
• Can Be Extremely Small
Condenser Mic Disadvantages
• Higher Self-Noise
• More Fragile
• More Expensive
• Prone to Weather Problems and RF Interference
________________________________________

Ribbon Mics
Except possibly for an announce booth (shown here), ribbon mics are seldom used in TV production.
Although they can impart a deep, resonant "coloring" to sound, they are fragile and highly sensitive to moving air. This precludes their use outside the studio and on most booms -- which covers most TV production applications. Ribbon mics were primarily used in radio studios.


Boundary Effect Mics
PZ (also called PZM) stands for pressure zone microphone, which comes under the heading of a boundary effect microphone. This mic relies entirely on reflected sound.
In specific situations, such as when placed on a tabletop, a PZ mic will provide a pickup that's superior to that of other types of mics.
Contact Mics
As the name suggests, contact mics pick up sound by being in direct physical contact with the sound source. These mics are generally mounted on musical instruments, such as the surface of an acoustic bass, the sounding board of a piano, or near the bridge of a violin.
Contact mics have the advantage of being able to eliminate interfering external sounds and not being influenced by sound reflections from nearby objects. Their flat sides distinguish them in appearance from small personal mics.

Directional Characteristics
In an earlier module we talked about the angle of view of lenses -- the area that a lens "sees." Microphones have a similar attribute: their directional characteristics, or, you might say, the angle of view that they "hear."
In microphones there are three basic directional categories:
• omnidirectional
• bi-directional
• unidirectional

Omnidirectional Mics
Omnidirectional mics (also called nondirectional mics) are (more or less) equally sensitive to sounds coming from all directions.
One of their advantages is that they are less sensitive to breath popping with close mouth-to-mic use, such as when a reporter is doing an ENG report.
However, in general video production where the mic isn't hand-held it's almost always more desirable to use some form of directional mic. For one thing, this will reduce or eliminate unwanted sounds (behind-the-camera noise, ambient on-location noise, etc.) while maximizing sound coming from talent.

Bi-directional Mics
In a bi-directional sensitivity pattern (bipolar pattern) the mic is primarily responsive to sounds from two directions. Note drawing above.
Although commonly used in radio interviews for people sitting across from each other at a table, until the advent of stereo, bi-directional (also called figure eight) sensitivity patterns had limited use in television. We'll get into stereo and the need for this type of directional pattern in a later module.

Unidirectional Mics
The term unidirectional simply refers to a general classification of mics that are sensitive to sounds coming primarily from one direction.
There are four subdivisions in this category -- each being a bit more directional:
• cardioid
• supercardioid
• hypercardioid
• parabolic
Although these terms may sound as if they belong in a medical textbook, they simply refer to how narrow the mic's pickup pattern ("angle of view") is.

Cardioid
The cardioid (pronounced car-dee-oid) pattern is named after a sensitivity pattern that vaguely resembles a heart shape. (You will be able to see this later in a top view illustration.)
The drawing here is a highly simplified depiction of three directional patterns.
Mics using a cardioid pattern are sensitive to sounds over a wide range in front of the mic, but relatively insensitive to sounds coming from behind the mic.
Although this pattern might be useful for picking up a choir in a studio, the width of a cardioid pattern is too great for most TV applications. When placed two or more meters (7 or more feet) from a speaker, it tends to pick up unwanted, surrounding sound, including reverberation from walls.
When hand held, cardioid mics pick up less background noise than omnidirectional mics, but when used in this way they require thicker pop filters to reduce the pops from plosive sounds such as "Ps" and "Bs." They also tend to exaggerate bass when held close to the mouth. (We'll say more about these issues when we talk about hand-held mics in the next module.)

Supercardioid
The supercardioid is even more directional than the cardioid sensitivity pattern. Whereas the cardioid has about a 180-degree angle of acceptance, the supercardioid has about 140 degrees of coverage. When this type of mic is pointed toward a sound source, interfering (off-axis) sounds tend to be rejected.
This polar pattern is similar to that of our ears as we turn our head toward a sound we want to hear and try to ignore interfering sounds.
Hypercardioid and Lobar
Even more directional are the hypercardioid and lobar patterns with less than 140 degrees of coverage. Because off-axis sounds will be largely rejected, they have to be accurately pointed toward sound sources. Some highly directional shotgun mics (below) are included in the hypercardioid category.

Shotgun Mics
So-called shotgun mics, with their hypercardioid or narrower angles of acceptance, are one of the most widely used types of mics for on-location video work. Since they are quite directional, they provide good pickup when used at a distance of 2 to 4 meters (7-13 feet) from the talent. Like other types of directional microphones, they tend to reject surrounding sounds that would interfere with the pickup of the on-camera talent.
Highly directional mics should not be used close to talent because they exaggerate bass. In addition to on-location settings, they are useful in stage and PA applications where amplified speakers are being used, because they can deliver higher audio levels before feedback starts.
________________________________________
The drawing below shows another way basic microphone sensitivity patterns (polar patterns) can be visualized. These drawings represent top views of the microphones and the light blue arrows represent the direction the mics are pointed. The magenta areas represent the areas of maximum sensitivity.

________________________________________

Parabolic Mics
Parabolic mics represent the most highly directional type of mic application. This category refers more to how a microphone is used than to the directional pattern of the mic itself.
In fact, the mic used in the focus point (center) of the parabola can be any general cardioid or supercardioid mic.
The parabolic reflector can be from 30 cm to 1 meter (1 to 3 feet) in diameter.
Because of the parabolic shape of the reflector, all sound along a very narrow angle of acceptance will be directed into the microphone in the center.
Parabolic microphones can pick up sound at distances of more than 60 meters (200 or more feet). These mics are not practical for general field production work, but they are often used in sports.
For parabolic mics, or any type of directional mic used on location, the person controlling the mic should always be wearing a good set of padded earphones connected to the mic's output, especially if subjects are moving.
A slight error in aiming a highly directional mic can make a big difference in audio quality.

________________________________________
Module 39
Updated: 04/02/2010

Part II



Microphones



Handheld Microphones
Handheld mics are often dynamic mics because they are good at handling momentary sound overloads. Although they are commonly called "handheld," the term is a bit of a misnomer, because this type of mic can also be mounted on a microphone stand.
Because these mics are often used at close distances, some special considerations should be mentioned. First, it's best if the mic is tilted at about a 30-degree angle (as shown here) and not held perpendicular to the mouth.
Speaking or singing directly into a mic often creates unwanted sibilance (an exaggeration and distortion of high-frequency "S" sounds), pops from plosive sounds (words with initial "Ps," and "Bs"), and an undesirable proximity effect (an exaggeration of low frequencies).

Most handheld mics are designed for use at a distance of about 20-40cm (8 to 16 inches), but this distance may have to be reduced in high-noise situations.
Pop filters, which are designed to reduce the pops from plosive sounds, are built into many handheld mics.
When a mic is used at close range, it's also wise to slip a windscreen over the end of the mic to further reduce the effect of plosive speech sounds.
In addition to reducing the effect of plosives, windscreens can eliminate a major on-location sound problem: the effect of wind moving across the grille of typical microphones. Even a soft breeze can create turbulence that can drown out a voice.

The windscreens shown above are typically used over the end of hand-held dynamic mics when they are used outside.

The elaborate windscreen housing shown above on the right is used with directional mics in the field. Often, this type of mic is attached to a "fish pole" and pointed toward the talent, just out of camera range.

Positioning Handheld Mics
When a handheld mic is shared between two people, audio level differences can be avoided by holding the mic closer to the person with the weaker voice. Inexperienced interviewers have a tendency to hold the mic closer to themselves.
The resulting problem is compounded when the announcer has a strong, confident voice, and the person being interviewed is somewhat timidly replying to questions.

Personal Microphones
Personal mics are either hung from a cord around the neck (a lavaliere or lav mic) or clipped to clothing (a clip-on or lapel mic).
This general type of mic can be either a condenser or dynamic type.
Omnidirectional patterned personal mics don't pick up annoying plosive sounds as much as the cardioid pattern mics do, but being less directional they can pick up unwanted audio from nearby speakers.
If there are several speakers on a set who may start talking at any time, any omnidirectional personal mics not currently in use should be turned down to a low level until their wearers begin speaking.
As we saw in the last module, condenser-type personal mics can be made quite small and unobtrusive — an important consideration whenever there is a need to conceal a microphone.
When attaching a personal mic, it should not be placed near jewelry or decorative pins. When the talent moves, the mic can brush against the jewelry creating distracting noise. Beads, which have a tendency to move around quite a bit, have ruined many audio pickups.
Personal mics are designed to pick up sounds from about 35cm (14 inches) away.
If a personal clip-on mic is attached to a coat lapel or to one side of a dress, you will need to anticipate which direction the talent's head will turn when speaking. If a person turns away from the mic, the distance from mouth-to-mic is increased to 50cm (almost 2 feet), plus, the person's voice is being projected away from the mic.
By the way, most of these personal mics make use of an alligator clip. The sharp edges on the back side of this clip can damage clothing. However, if a plastic card or a business card is placed on the back side of the clip separating it from the clothing, damage can be avoided.

Hiding Personal Mics Under Clothing
Often, these mics are hidden under clothes. However, great care must be taken in securing the mic, because annoying contact noise can be generated when the talent moves and the clothing rubs against the mic.
Noise can also result from rubbing against the first 20 cm (eight inches) or so of the mic cord.
To keep this from happening, all three elements — mic, clothes, and the topmost part of the mic cable — need to be immobilized in some way. This can be done by sandwiching the mic between two sticky layers of cloth, camera tape, or gaffer's tape, and securing the tape to both the clothing and the mic.
If sheer or easily damaged clothing is involved, it may be necessary to attach the lavaliere to the talent's skin. In this case paper-based medical surgical tape can be used.
A strain relief should also be considered in case the talent steps on their mic cord, or it becomes caught in some object as they move. A strain relief is any provision that stops the mic from being pulled away when the cable encounters tension. Otherwise, the secured mic can be abruptly pulled out of place, which, if the mic happens to be taped tightly to the skin, might result in the utterance of some non-broadcast terms!
There are various approaches to devising a strain relief. You can have the talent loosely loop the mic cord to a leather belt or a belt loop; you can coil the cord into a couple of loops and then attach that to clothing below the mic; or if one of the talent's hands is free, you can just have them hold onto the mic cord as they walk.
Mic cords are generally not long enough to reach a camera or the studio audio connection box — and that's just as well. Generally, after you attach a lav or personal mic to talent, they need to be free to walk around until they are ready to go on camera.
This is possible if their mic cable is only plugged into the necessary extension cable shortly before they are to go on camera. With the help of a floor director, more than one mic can be plugged into the same extension cable at different times.
It is assumed the audio person will have checked and made a record of the audio levels for each person before their mic is plugged in and switched on. Even during a live show a mic can be checked to make sure it's working by switching it into an audition or cue channel and listening for background sound.

Forced Perception
Finally, when some hidden personal mics are used, the proximity of the mic to the person's mouth can result in unnatural sound — a kind of sterile sound that's not what you would expect in a typical room. If you like technical terms, this is called forced perception.
Sometimes it helps to attach the mic at a lower point on the talent to allow it to pick up a bit of reverberation from the room. If several people are using RF mics in the same room, a solution might be to use all mics as close as possible to the talent, but, in addition, use a boom mic to record a bit of live "room tone." This room tone can then be mixed into all of the audio pickups at an extremely low, almost imperceptible, level.

Headset Mics
The headset mic was developed to serve the needs of sports commentators. Normally, a mic with a built in pop-filter is used. (Note photo on the left below.) The padded double earphones carry two separate signals: the program audio and specific director cues. Having the mic built into the headset assures a constant mic-to-mouth distance, even when the announcer moves from place to place. Performers at concerts often use a much smaller and less conspicuous version of this (photo on the right, below).





Proximity Effects
Question: Why is it that even with your eyes closed you can tell if a person speaking to you is 20 centimeters or 5 meters (about 8 inches or 16 feet) away?
The first thought might be that the voice of a person 20cm away would be louder than if the person were 5 meters away. That's part of the answer; but if you think about it, there's more to it than that. You might want to say that the voice of a person that's close to you "just sounds different" than a person who is farther away.
This "just sounds different" element becomes highly significant when you try to start editing scenes together. Getting the audio in scenes to flow together without noticeable (and annoying) changes takes an understanding of how sound is altered with distance.
Sound traveling over a distance loses low frequencies (bass) and, to a lesser extent, the higher frequencies (treble). Conversely, microphones used at a close distance normally create what is called a proximity effect — exaggerated low-frequency response.
Some mics have "low cut" filters to reduce unnatural low frequencies when the mics are used at close distances.
When directional microphones are used at different distances the sound perspective or audio presence (the balance of audio frequencies and other acoustical characteristics) will change with each change in microphone distance.
In addition, different types of microphones and different room conditions have different audio characteristics that can complicate the audio editing process.
It's possible to correct these problems to some degree during the audio postproduction sweetening phase where various audio embellishments are added. During this phase such things as graphic equalizers are used to try to match the audio between successive scenes.
Since exact matches can at times be very difficult, it's far easier just to keep in mind (and avoid) the proximity effect problems that will be introduced whenever you use microphones at different distances. These differences will vary, depending on the microphone and the acoustics of the location.

Mic Connectors
To ensure reliability, mic and general audio connectors must always be kept clean, dry, and well aligned, without bent pins or loose pin connectors.
The two connectors on the left of this photo are female and male Cannon or XLR connectors. These three-pin connectors are used in professional audio applications.
To the right of the Cannon connectors are the mono and (with the floating center connector) stereo miniature connectors. Finally, on the right of these is the RCA-type connector, which is common to most home entertainment equipment.
Most consumer and prosumer camcorders have miniature stereo connectors. Since professional microphones have male XLR connectors, an adapter is needed. The simplest solution is an in-line adapter at either end of the microphone cable.
A more versatile approach is the connector box shown on the left that has XLR plugs and volume controls for multiple mics.
This adapter box can be attached permanently to the bottom of a camcorder.
When used on location, audio connectors must be kept dry. However, mic cables can be strung across wet grass or even through water without ill effects — assuming the rubber covering has not been damaged.
If you must work in rain or snow in the field, moisture can be sealed out of audio connectors by tightly wrapping them with plastic electrical tape.
It should be emphasized that this applies to mic cables only. If power cords are used in the field for the camera, lights, or recorder, these cables and connectors must always be kept dry to avoid a dangerous electrical shock hazard.

Positioning Mic Cables
Running mic cables parallel to power cords often creates hum and interference problems. The solution is often as simple as moving a mic cable a meter away from any power cord.
Fluorescent lights can also induce an annoying buzz in audio. Computers and certain types of medical equipment, especially if they are near audio cables or equipment, can also create undesirable noise.
By carefully listening to your audio pickup with a set of high-quality, padded earphones, you can generally catch these problems before it's too late.
Mic cables can often be a problem, so in the next module we'll discuss wireless microphones.
________________________________________
Module 40
Updated: 04/04/2010





Wireless Microphones

Wireless mics can solve many audio problems in production.
They are especially useful when talent must be free to roam, such as when doing an ENG report from the lighthouse shown here.
At the same time, wireless mics can introduce problems.
In a wireless mic, a dynamic or condenser microphone is connected to a miniature FM (frequency modulated) radio transmitter. Because the mic's audio signal is converted into a radio frequency (wireless) signal and transmitted throughout the production area, these mics are also referred to as RF mics.
There are two types of wireless mics: the self-contained (all-in-one) unit and the two-piece type.
In the self-contained, handheld unit, as shown on the left, the mic, transmitter, battery, and antenna are all part of the microphone housing.
When small, unobtrusive clip-on mics are desirable, a two-piece wireless unit is the best choice.
In this case the mic is connected to a separate transmitting unit that can be clipped to the belt, put in a pocket, or hidden underneath clothing.
Many of the problems with interference, fading, etc., which at first plagued wireless mics have now been reduced or eliminated. Today, RF mics are widely used in both studio and on-location productions.
Some camcorders have built-in receivers for wireless mics, thus eliminating the vexatious mic cable that normally connects the reporter or interviewer to the camera.

Transmitting Range
In a wireless microphone the signal from the dynamic or condenser mic is converted to a low-power FM signal and transmitted in a more or less circular pattern.
The transmitter uses either an internal antenna within the mic's case, as shown above, or an external antenna, generally in the form of a short wire attached to the bottom of a separate transmitting unit.
In the latter case the antenna wire needs to be kept relatively straight and not folded or coiled up in a pocket. Some audio engineers will tape the antenna to the skin of talent, but it has been found that the dampness in human skin can degrade the FM signal.
Under optimum conditions wireless mics can reliably transmit over more than a 300-meter (1,000-foot) radius. If obstructions are present, especially metal objects, this distance can be reduced to 75 meters (250 feet) or less.

Interference Problems
Solid objects between the RF mic and the mic's radio receiver often create a condition of multi-path reception caused by part of the signal from the transmitter being reflected off of an object. This is illustrated on the left.
This secondary signal (shown in red) then interferes with the primary (direct) signal.
The problem can be particularly annoying if the talent is moving around interfering objects and the audio begins to rapidly fade in and out. As we will see, this problem can often be avoided.
Because of FCC (U.S. Federal Communications Commission) limitations in the United States, the FM mic signal must be of relatively low power. As a result, other radio transmitters occasionally interfere with the signal. This is called RF interference.
Even though they may be on different frequencies, nearby radio services emit harmonic (secondary) signals that, if strong enough, can be picked up by the wireless mic receiver.
In order for a wireless FM mic signal to be reliable, its RF signal must be at least twice as strong as any interfering signal.
Most RF mics transmit on frequencies above the standard FM radio band in either the VHF (very high frequency) range, or UHF (ultra-high frequency) band. Since the UHF band is less crowded, audio engineers prefer it.
To alleviate the possible interference problem professional wireless mics allow you to select different frequencies. Today, dozens of different frequencies and digital subset frequencies are possible. In fact, some elaborate productions have used almost 100 different mics and mic frequencies in a single production.

Wireless Mic Receiving Antennas
A good signal from an RF mic is of little value unless it can be received without multi-path or other types of interference. One of the most effective ways to eliminate interference is with the proper placement of the receiving antenna(s).
There are two types of wireless mic receivers.
Non-diversity receivers use a single antenna mounted on the back of the receiver. This type is most prone to multi-path problems -- especially if the talent moves around.
Two antennas are used in diversity receivers. Since the two antennas can be placed some distance apart, it's assumed that any time one antenna is not picking up a clear signal the other one will. To keep the signals from interfering with each other, electronic circuitry within the receiver instantly selects the stronger and clearer of the two signals.
The receiver should be placed so that, as the talent moves around, no solid object, especially a metal object, can come between the receiver and the wireless mic.
The angle of the receiving antenna sometimes has to be adjusted to bring it in line with the angle of the transmitting antenna on the microphone. For example, if a long wire looped around the belt line is used on the mic transmitter, you may have to turn the receiving antenna so it's parallel.
Try to keep the RF mic and the receiver as close as possible. Be aware that such things as neon and fluorescent lights, the high-intensity display of a Steadicam® video monitor, electric or gasoline powered vehicles, and lighting dimmer boards can interfere with the signal.
Do not let a mic cord and a mic transmitter wire cross. The result can be an unpleasant interaction.
And, finally, be aware of the fact that RF mics use batteries with a limited life. Many RF mic "reception problems" can be traced to a weak battery. Audio engineers recommend installing a fresh (or fully recharged) battery every time you start a major production.
________________________________________
Module 41
Updated: 04/08/2010




Using Off-Camera
Microphones

Although it may be appropriate to use handheld, lav, or RF mics for interviews, there are instances in television production when it's desirable to use an unseen microphone.
Examples would be:
• because seeing a mic wouldn't be appropriate, as in the case of a dramatic production

• when mic cords would restrict the movement of talent, such as in a dance number

• when there are too many people in the scene to use multiple personal, handheld or RF mics, such as with a choir
Because of their nondirectional nature, omnidirectional or simple cardioid-patterned microphones used at a distance of 1½ meters (five or six feet) or more quickly start picking up extraneous sounds. Depending on the acoustics of the location, this can also cause the audio to sound hollow and off-mic.
Consequently, only microphones with a supercardioid or narrower pattern should be used as off-camera mics.
Just as the eye sees selectively and may not notice a coat rack "growing out of" someone's head in a scene, the ears hear selectively and may not notice an annoying reverberation in a room, which, when picked up by a mic, can render speech difficult to understand.

Room Acoustics
Whenever a room has smooth, unbroken walls or uncarpeted floors, reverberation (slight echoes) can be a problem.
Moving mics closer to subjects is the simplest solution, but that's not always possible. Other solutions include using highly directional mics, adding sound absorbing materials to walls, or placing objects within a scene that will absorb or break up sound reflections.
As we previously noted, one type of highly directional mic commonly used for on-location shoots is --

The Shotgun Mic
Because of their highly directional characteristics shotgun mics can be used out of camera range at distances of up to 10 meters (25 to 30 feet).
As with all directional mics, they have to be carefully aimed, preferably with the aid of high-quality earphones.
Shotgun mics are often mounted on --

Fishpoles
The quickest solution for picking up audio, especially in on-location shooting, is to attach a directional mic to a pole and have someone hold it just out of camera range.
As the name suggests, a fishpole consists of a pole with a mic attached to one end.
A sound person equipped with an audio headset can monitor the sound being picked up and move the microphone according to changes in camera shots and talent position. Supercardioid and hypercardioid mics mounted in a shock mount (a rubber cradle suspension device) are commonly used. Note the shock mount in the photo below.

Microphone Booms
In the studio the simple fishpole moves into the much more sophisticated category of boom mic.
Microphone booms range from a small giraffe (basically a fishpole mounted on a tripod) to a large perambulator boom that weighs several hundred pounds, takes two people to operate, and can extend the mic over the set from a distance of 10 meters (more than 30 feet).
The largest booms have a hydraulically controlled central platform where operators sit and watch the scene on an attached TV monitor while controlling such things as the
• left or right movement (swing) of the boom arm
• boom extension (reach of the arm)
• left to right panning of the attached microphone
• vertical tilt of the microphone

Hanging Microphones
Often, you can get by without a boom mic, especially if the talent is confined to a limited area.
For example a mic can be suspended over a performance area by tying it to a grid pipe or fixture just above the top of the widest camera shot. The disadvantage of this approach, of course, is that the mic can't be moved during the production.
Both boom mics and suspended microphones should be checked with the studio lights turned on to make sure they do not create shadows on backgrounds or sets.

Hidden Microphones
It's sometimes possible to hide microphones close to where the on-camera talent will be seated or standing during a scene. This will eliminate both the need for personal or handheld mics and the problems that the associated mic cords can introduce.
Microphones are sometimes taped to the back of a prop or even hidden in a table decoration, such as the vase of flowers shown here.
When placing mics, keep in mind the proximity effect discussed earlier. You may find during an editing session that the audio from different mics used at different distances will not "cut together" (edit together) without noticeable changes in quality.
Sometimes several mics must be used on a set at the same time. In this case, when a mic is not being used at a particular moment, it should be turned down or switched off. This not only reduces total ambient sound, but also eliminates something called --

Phase Cancellation
Phase cancellation, which results in low-level and hollow-sounding audio, occurs when two or more mics pick up sound from the same audio source.
Because the sound arrives at each mic at a slightly different time, the signals end up being out of phase and, to varying degrees, they can cancel each other out when mixed.
When multiple mics are used on a set there are four things you can do to reduce or eliminate the resulting phase cancellation:
• place mics as close as possible to sound sources
• use directional mics
• turn down mics any time they are not needed
• carefully check and vary the distances between the sound sources and the multiple mics to reduce or eliminate any cancellation effect (A common guideline is the 3-to-1 rule: the distance between any two mics should be at least three times the distance from each mic to its sound source.)
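If you're curious about the numbers behind phase cancellation, here is a small Python sketch (purely an illustration; the 1kHz tone and the 17-centimeter extra mic distance are assumed values, not figures from any particular setup) showing how a delayed copy of the same sound can nearly wipe out the original when two mic signals are summed:

```python
import numpy as np

fs = 48_000                       # sample rate in Hz
f = 1_000                         # a 1kHz test tone (assumed)
t = np.arange(fs) / fs            # one second of sample times

direct = np.sin(2 * np.pi * f * t)

# Assume the second mic is about 17cm farther from the source.
# That extra travel time is roughly half the period of a 1kHz tone.
delay_s = 0.17 / 343.0            # extra distance / speed of sound
delayed = np.sin(2 * np.pi * f * (t - delay_s))

mixed = direct + delayed
print("one mic alone peaks at:  ", np.max(np.abs(direct)))   # ~1.0
print("the two mics summed peak:", np.max(np.abs(mixed)))    # nearly zero -- cancellation
```

At other frequencies (and other mic spacings) the cancellation is only partial, which is exactly why multi-mic audio can sound thin and hollow rather than simply quieter.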
In the next section we'll explore another dimension of audio: stereo and surround-sound.
Module 42
Updated: 05/13/2010



Stereo, Quad and 5.1 Sound

Just as we see in 3-D, we also, in a sense, hear in 3-D.
Our ability to judge visual depth is based on interpreting the subtle differences between the images seen by our left and right eyes. Our ability to locate where sounds are originating is possible in part because we have learned to unconsciously interpret the minute and complex time differences between the sounds reaching our left and right ears.
If a sound comes from our left side, the sound waves will reach our left ear a fraction of a second before they reach our right ear. We've learned to interpret this subtle time difference, which, technically, is known as a phase difference.
Depending upon the location of a sound, we might also note a slight difference in loudness between sounds that occur on our left and sounds coming from our right — which also helps us place the sound in a three-dimensional perspective.
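A rough back-of-the-envelope calculation (the 17-centimeter ear spacing is an assumed, typical figure) shows just how small this time difference is:

```python
speed_of_sound = 343.0    # meters per second at room temperature
ear_spacing = 0.17        # meters -- a typical head width (assumed)

max_time_difference = ear_spacing / speed_of_sound
print(f"{max_time_difference * 1000:.2f} ms")   # about 0.5 milliseconds
```

Even though half a millisecond doesn't sound like much, our hearing system uses it (along with the loudness difference noted above) to place sounds quite accurately.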
In stereo production we are dealing with sound intended for our left and right ears, and the inherent differences represented. Therefore, recording and playing back stereo signals requires two audio channels.

Creating the Stereo Effect
In TV production there are several approaches to creating the stereo effect.
First, there is synthesized stereo, where stereo is simulated electronically. Here, a monaural (one channel, non-stereo) sound is electronically processed to create the effect of a two-channel, stereo signal.
A slight bit of reverb (reverberation, or echo) adds to the effect. Although this is not true stereo, when reproduced through stereo speakers, the sound will be perceived as having more dimension than monaural sound.
The elaborate audio board below can easily accomplish this.
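To get a feel for the general idea (and only that -- the processing inside a real board is more elaborate and typically includes the reverb mentioned above), here is a bare-bones sketch of one common pseudo-stereo trick: keeping the mono signal on one channel and a slightly delayed copy on the other. The function name and the 15-millisecond delay are assumptions for illustration.

```python
import numpy as np

def widen_mono(mono, fs=48_000, delay_ms=15.0):
    """Return (left, right) channels built from a mono signal using a short delay.

    Assumes 'mono' is a numpy array longer than the delay."""
    delay_samples = int(fs * delay_ms / 1000.0)
    left = mono
    right = np.concatenate([np.zeros(delay_samples), mono[:-delay_samples]])
    return left, right
```

The result isn't true stereo -- as explained next, that requires two microphones -- but it does give a mono track a wider feel.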

True stereo is only possible if the original sound is recorded with two microphones, or a microphone with two sound-sensing elements.
This process is fairly simple when the output of a stereo mic is recorded on two audio tracks and the two tracks are subsequently reproduced with two speakers. Things get much more complicated when you want to mix in narration, music, and visual effects.
Typically in productions a monophonic (non-stereo) recording of narration is mixed into a background of stereo music or on-location stereo sound. The narration (or primary dialogue in a dramatic production) is typically placed "center stage" and the stereo track adds a left-to-right stereo dimension.
But, what if you are doing more sophisticated audio work, such as a contemporary music session, where you want to record the various instruments separately and then carefully and creatively mix them down to stereo tracks?
In a TV production the placement of instruments, vocalists, etc. in a setting is commonly arranged on the basis of how things will look visually, and not for optimum sound balance. For this reason you typically need to mic each element separately and then create the best sound balance in audio post-production or mixing by controlling the perspective of each instrument through an audio board while, of course, keeping in mind the original visual perspective.
For this you need —

Multi-Track Recording
Originally, recorders were used that could record from 8 to more than 40 separate audio tracks on a single piece of one-inch or two-inch audiotape. The recorder shown on the right records 16 tracks on two-inch, reel-to-reel tape. (Note the 16 VU meters on the machine.)
Today, audiotape has been largely replaced by computer-type hard disks. This type of digital recording not only makes it possible to record and play back high quality digital sound, but to almost instantly find needed segments.
By recording the various sources of sound on separate audio tracks, they can later be placed in any left-to-right sound perspective. The unique and creative sound of many of today's recording artists originates in the "mix" created by recording engineers.
In contrast to contemporary or modern music, recordings of classical music and orchestras are generally done with only one (strategically placed) stereo or surround-sound mic. In this case, the sound mix and balance are the responsibility of the conductor rather than an audio engineer.
Two approaches to stereo micing are used: the X-Y and the M-S approaches. Each has its advantages.

The X-Y Stereo Mic Approach
The easiest approach to stereo recording is to use an all-in-one stereo mic, which is basically two mics mounted in a single housing, or, as shown on the left, two mics mounted outside of a housing.
This approach to stereo is referred to as the coincident pair or X-Y technique.
Single unit stereo mics are useful in on-location productions where things need to be kept simple and audio can be successfully miced from one location.
However, this approach can limit stereo separation (a clear and distinct separation between the left and right stereo channels), and the ability to control the left and right sound perspective.
Although not as convenient, two separate mics can also be used for X-Y recording. (See the first illustration below.) With this approach two cardioid mics are pointed toward the subject matter, creating about a 130-degree arc of sensitivity (in green below).

The M-S Micing Techniques
Although more technically complex, some engineers feel that the mid-side, or M-S technique (on the right in the illustration) provides greater stereo flexibility.
In this case, bi-directional and unidirectional (supercardioid) mics are typically used together.
The directional mic (shown in dark blue in the illustration on the right above) picks up the basic audio in the center of the scene.
The bi-directional mic's polar pattern (shown in green in the center of the illustration) picks up the left and right audio channels. The areas of minimum sensitivity for this mic are oriented toward the camera, thereby suppressing unwanted production and studio noise.
The outputs of both mics are fed through a complex audio matrix circuit that uses the phasing differences of the mics to produce the left and right channels.
By adjusting the level of the mid (center) mic in relation to the side (figure 8) mic level, the stereo image can be made narrower or wider without moving the mics.
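The sum-and-difference matrix itself is simple enough to show in a few lines of code. This is a minimal sketch of the M-S decode described above (the function name and the "width" parameter are assumptions; real matrix circuits and plug-ins add refinements):

```python
import numpy as np

def ms_decode(mid, side, width=1.0):
    """Convert mid (directional) and side (figure-8) signals to left/right stereo."""
    left = mid + width * side
    right = mid - width * side
    return left, right

# width < 1.0 narrows the stereo image; width > 1.0 widens it --
# all without physically moving the microphones.
```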
As in the case of X-Y mics, M-S mics are available that include both of these mic elements within a single housing. But when these single-unit stereo mics are used, care must be taken not to inadvertently mount them upside down, or the left and right stereo perspective will be reversed.

Maintaining The Stereo Perspective
Stereo audio in TV production faces a major problem because camera angles and distances shift with each new camera shot.
Because of this, it would be disorienting — or at least pretty confusing — if the stereo perspective shifted with each change in camera angle.
For example, in an on-location sequence shot at the beach it would be rather disconcerting if the ocean's audio position jumped from left to right with each reverse-angle shot. So we have to compromise.
In the case of an ocean, an audio engineer might place the ocean (or a sound effect of the ocean) in a left-to-right perspective that matches the initial wide-angle establishing shot and then hold that same stereo perspective in the audio tracks for subsequent close-ups — even reverse-angle shots.
Although the sound perspective might not remain true to what you see on the screen, there won't be abrupt changes in audio that would call attention to themselves and be distracting.
However, for lengthy shots that clearly represent changes in stereo perspective, a pan pot can be used to subtly shift the ocean so that a true left-to-right stereo perspective is simulated.
A pan pot consists of two or more faders (volume controls) ganged together. They can be used on an audio board during postproduction to slowly move a source of sound from one stereo channel to the other. This will avoid jarring shifts in sound perspective as shots are changed.
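Mathematically, a pan pot is just a pair of gain controls working in opposition. The sketch below uses a constant-power pan law, one common approach (the course doesn't specify which law any particular board uses, so treat this as an illustration):

```python
import numpy as np

def pan(mono, position):
    """position: -1.0 = full left, 0.0 = center, +1.0 = full right."""
    angle = (position + 1.0) * np.pi / 4.0     # map -1..+1 to 0..pi/2
    left = mono * np.cos(angle)
    right = mono * np.sin(angle)
    return left, right
```

Sweeping the position value slowly from -1 to +1 over many frames moves the sound smoothly from the left channel to the right -- exactly the kind of gradual shift described above.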
Changes in the stereo placement end up being a creative decision. There are no rules, but there are two guidelines.
First, try to simulate the authentic stereo sound perspective whenever possible. The second guideline, which is even more important, says it's never desirable to use a production technique — in either audio or video — that diverts viewer attention away from production content. It's better to hold back on authenticity rather than use an effect that will call attention to itself.

Keeping Dialogue "Center Stage"
For maximum sound clarity the dialogue for dramatic productions should be mixed to keep it in the center of the stereo perspective. In most cases this will conform to what you see on the screen. The momentary exception might be when someone or something enters from one side of the frame.
Even with center-stage dialogue, a stereo perspective can be added by mixing in stereo background music and sound effects during postproduction.
In sporting events background stereo sound of the crowd is typically mixed in with monophonic feeds of play-by-play narration. If there are two announcers, pan pots can be used to place them slightly to the left and right of center (but never at the extreme ends of the left-right stereo perspective).
For cuts to roving cameras focused on cheerleaders or sideline activity a stereo mic mounted on the camera can be faded into existing program audio when that camera is switched up.

Stereo Playbacks
Although many TV sets have stereo speakers built in, the distance between the speakers can limit the stereo separation and, therefore, the stereo effect.
Ideally, a stereo signal should be reproduced by two good-quality speakers placed about one meter (three feet) on either side of an average-sized TV set.

The distance between the speakers depends on the viewing distance and the size of the screen. The farther back the listener is the greater the distance can be between the speakers.
If a noticeable audio "hole" seems to be present between the left and right sound sources, the speakers are too far apart.

Surround-Sound / 5.1 Sound
The ATSC (Advanced Television Systems Committee) standard for digital TV adopted by the United States and Canada includes 5.1-channel surround sound using the Dolby Digital AC-3 format. Compared to the earlier stereo TV broadcast standard, 5.1 audio adds important dimensions to TV audio.
Stereo covers about a 120-degree frontal perspective. Although this provides significant realism, we can actually perceive sounds in a much wider perspective — even in back of us.
Surround-sound, quadraphonic sound and 5.1 Dolby sound systems attempt to reproduce sounds in both the front and back of the listener — close to a 360-degree sound perspective.
Even though the number of homes equipped with full 5.1 surround-sound decoders is still limited, many productions are now being done in surround-sound.
The Dolby 5.1 Surround Sound system consists of six discrete channels of audio: left, center and right channels in front of listeners, and left-surround and right-surround at the back sides. If you've been counting, that only totals five channels, not six.
The 6th channel (which is the ".1" part of the designation) is a bass channel of limited frequency response (3-120Hz). Although it's capable of producing a room-rattling bass, it only takes up one-tenth of a full-range audio channel. Hence, the system is referred to as 5.1. Bass is essentially nondirectional, so the speaker can be placed almost anywhere in the room.
Of course, placing all these speakers in appropriate places and distances within a room strains most interior design schemes, so to tackle that problem researchers analyzed the way we hear sounds and came up with a surround-sound system that uses only two (high quality) speakers.
To achieve the expanded effect, multi-channel audio recordings are digitized and fed into a computer during postproduction. Using this technique, even a vertical dimension can be suggested. While not as good as a five- or six-speaker setup, it's an improvement over standard stereo.

Quadraphonic Mics
Quad mics, which detect sounds in nearly a 360-degree perspective, have four mic elements within a single housing. From these mic elements separate channels for five or even six speakers can be derived.
Typically, an upper capsule contains two mic elements and picks up sound from the left-front and right-rear. Another capsule mounted below this one picks up sound from the right-front and left-rear. These are then recorded onto four audio tracks.
During postproduction the four audio tracks are fed through a computer and mixed with tracks of music and effects (M&E) to develop a full surround-sound effect.
Starting with a basic stereo signal, the latest digital audio and video editors can simulate full 5.1 surround-sound.

Speaker Polarity
When connecting speaker wires to amplifiers attention needs to be paid to polarity — the positive and negative leads (wires) to the speakers.
Generally, one of the wires will be different — possibly it will be a different color or have a different stripe. Amplifiers will often have red and black terminal connections to indicate these differences.
If you do not maintain this consistency (polarity) in hooking up both the amplifier and connections to the speakers, the audio will be out of phase. Among other things, you will experience sound cancellation effects and a loss of bass.
While we are talking about this, you need to know that it's never a good idea to operate an amplifier without speakers connected — especially with the volume turned up. Without the "load" of the speakers, some amplifiers can burn out.
In the next section we'll more fully explain digital audio.
________________________________________
Module 42-B




Digital Audio




There is very little about the details of analog audio technology that is useful in the digital world... This means having to learn the basics all over again....
Lon Neumann, Audio Engineer

The decade of the '80s saw the introduction of digital audio signal processing. This not only opened the door to a vast array of new audio techniques, but it represented a quantum leap in audio quality.
For example, the following technical problems have been a headache for audio recording engineers for decades: (Don't worry if you don't understand what these things mean — or, then again, you could memorize them and impress your friends by tossing these terms around in a conversation!)
• wow and flutter
• remnant high frequency response/self-erasure
• modulation noise
• bias rocks
• print-through
• azimuth shift
• head alignment problems
• stereo image shift
• poor signal-to-noise ratio
• generational loss
All of these problems and even a few more are eliminated with digital audio.
This is possible because of the precise timing pulses associated with digital audio and the fact that a digital signal is composed of "0s" and "1s." These represent simple positive and negative voltages that are not close to each other in value (so they don't easily get confused or muddled along the line).
As long as equipment can reproduce just these two states, there is an audio signal.
However, with an analog signal there are an unlimited number of associated values, providing ample opportunity for things to get out of whack.
Technically speaking, the background noise of a digital signal can be as bad as 20dB (which is a lot) and the digital signal will still survive. In the case of an analog signal, this would translate into intolerable noise.

Copying vs. Cloning
Each time you make a copy of an analog audio segment you introduce aberrations because you are only creating a "likeness" of the original. With digital technology you are using the original elements to create a "clone."
If we are using the original uncompressed digital data, we can fully expect to end up with an exact clone of the original, even after 50 generations (50 copies of copies).
With analog data, copies of copies quickly result in poor audio quality. Before the advent of digital technology, such things as nonlinear editing (which we'll talk about in Module 56) were not possible.
If you have the option, you'll want to convert analog data into digital as soon as possible and leave it that way until you are forced at some point to convert it back to analog.

Converting Analog to Digital
The same sampling and quantizing principles that we discussed in digital video apply to digital audio. In professional audio (whether for audio-only or video production) the analog signal is typically sampled 48,000 times per second.
That means that roughly every 21 microseconds a "snapshot" is taken of the analog voltage. This instantaneous snapshot is then converted to a number that's stored in computer-type binary ("0" and "1") form.
The number of data bits used to encode the analog data determines the resolution and dynamic range possible.
A 16-bit encoding system has 65,536 voltage steps that can be encoded. Obviously, the higher the data bits the better the quality — and the more technical resources required to handle the signal.
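The arithmetic behind these figures is worth seeing once. The numbers below come straight from the text; the code itself is just an illustration:

```python
sample_rate = 48_000                 # samples per second
sample_period = 1.0 / sample_rate    # about 0.0000208 s -- roughly 21 microseconds

bits = 16
steps = 2 ** bits                    # 65,536 discrete voltage levels in 16-bit audio

print(sample_period, steps)
```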
Such high rates demand a high degree of timing (synchronization) precision. Without it things fall apart with stunning speed.
Just as in video, a synchronizing signal is used to keep things in lock step. In digital audio this synchronizing (sync) information typically accompanies each sample — about once every 21 microseconds at a 48kHz sampling rate.

Quantizing Error
In audio production, signals must be converted back and forth from analog to digital and from digital to analog. Since we are dealing with "apples and oranges" types of data, something called a quantizing error can result.
In the analog-to-digital conversion process, a voltage midpoint is selected in the analog values to use as the digital equivalent. This midpoint is a close, but generally not a perfect, reflection of the original analog signal. Thus, the error, and the need to minimize the number of digital-to-analog (as well as analog-to-digital) conversions.
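A tiny example (the sample voltage is an arbitrary assumed value) shows where the error comes from: rounding a continuous voltage to the nearest available digital step.

```python
voltage = 0.537123456        # an arbitrary analog value between 0 and 1 (assumed)
steps = 2 ** 16              # 65,536 levels available in 16-bit encoding

quantized = round(voltage * (steps - 1)) / (steps - 1)
error = voltage - quantized
print(quantized, error)      # the error is tiny, but usually not exactly zero
```

Each conversion adds another small error of this kind, which is why the number of analog-to-digital and digital-to-analog conversions should be kept to a minimum.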

Optimum Digital and
Analog Audio Levels
The optimum audio levels for digital audio signals are different than those for analog signals.
Whereas a 0dB peak is the standard operating level (SOL) for analog systems, digital equipment (in North America) typically uses a reference level of -20dBFS, which sits well below the absolute digital ceiling of 0dBFS.
With both analog and digital signals it comes down to something called headroom.
Headroom is the safe area above the SOL (standard operating level) point. With a SOL of -20dBFS (typically the standard in North America), this leaves 20dB of headroom before clipping. European facilities tend to use a -18dBFS reference, leaving 18dB of headroom.
Okay, this is a bit technical, but just keep in mind that the maximum audio level for analog signals will generally be different than it will for digital signals.
Analog audio systems often use an analog meter, such as the one shown on the left.
With digital signals, however, a digital meter, such as the one shown on the right, or a PPM meter (to be discussed below), is used. In the case of the digital meter on the right, when the signal touches the red area, we've entered the headroom area.
If a digital signal were to go to the very top of the scale, clipping would occur. Unlike analog audio, where exceeding the maximum level results in obvious signal distortion, in digital audio you might not immediately notice that occasional audio peaks are being clipped off.
Actually, an occasional full-scale digital sample (to the top of the red range above) is considered inevitable; but, a regular string of "top of the scale" occurrences means that the digital audio levels are too high and you are losing audio information.
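Here's a short sketch of the dB arithmetic involved (assumed code, not from any particular meter): converting a dBFS figure to a linear amplitude, and counting full-scale samples in a block of digital audio to spot likely clipping.

```python
import numpy as np

def dbfs_to_linear(dbfs):
    return 10.0 ** (dbfs / 20.0)

print(dbfs_to_linear(-20.0))   # ~0.1 of full scale -- the digital reference level
print(dbfs_to_linear(0.0))     # 1.0 -- the absolute digital ceiling

def count_full_scale(samples, full_scale=1.0):
    """Count samples at or above full scale (assumes audio normalized so 1.0 = full scale)."""
    return int(np.sum(np.abs(samples) >= full_scale))
```

As noted above, one stray full-scale sample is no cause for alarm; a steady stream of them is.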
VU meters respond in different ways to audio peaks.
In the case of a standard VU meter, the needle tends to swing past peaks because of inertia. At the same time, the needle will not quickly respond to short bursts of audio. Thus, this type of meter tends to average out audio levels.
Because of the limited headroom with digital audio signals, a faster-responding peak program meter (PPM) or the previously discussed digital meter is preferred. On the outside, a PPM looks much like a standard VU meter.
Before you can really get serious about maintaining correct audio levels throughout a production facility, you must see that the audio meters throughout the facility are accurately calibrated to a standard audio reference level.
Typically, a 1,000Hz reference tone should register 0dB on analog equipment and -20dBFS on digital equipment.
That said, production facilities can adopt their own in-house standards, as long as those standards remain consistent throughout the facility and everyone knows what they are.

Digital Standards
In 1985, the Audio Engineering Society and the European Broadcasting Union developed the first standard for digital audio. This is referred to as the AES/EBU standard. This standard was amended in 1993.
Before this standard was adopted digital audio productions done in one facility could experience technical problems when moved to another production facility.

Digital Audio Time Code
Although we will cover time code when we talk about video editing, we need to mention at this point that digital audio systems make use of a similar system of identifying exact points in a recording.
This is essential in the editing process in order to identify and find audio elements, as well as to keep audio and video synchronized.
But, as we will see when we talk about video time code, timing errors can develop in the process of converting between the 24, 30, and 29.97 frame-per-second rates (the different video standards).
Unless the audio technicians are aware of these differences and take measures to compensate, after a few minutes video and audio can get noticeably out of sync. (We've probably all seen movies where the lip-sync was out and the words we were hearing didn't exactly match the lip movements of the actors.)
People working with digital audio should at least be aware of the potential problem and, before a video project is started, consult an engineer about the issues that could arise in the conversion process. It's much easier to head off these problems before a project starts than to try to fix them later.
In the next section we'll talk about audio control devices.

Module 43
Updated: 04/04/2010



Audio Control
Devices

Boards, Consoles, and Mixers
Various sources of audio must be carefully controlled and blended during a production.
You will recall we said if analog audio levels are allowed to run at too high a level, distortion will result, and if levels are too low, noise can be introduced when levels are later brought into the normal range.
Beyond this, audio sources must be carefully and even artistically blended to create the best effect.
The control of audio signals is normally done in a TV studio or production facility with an audio board or audio console. A sophisticated version, similar to what you would find in many TV stations, is shown on the right.
Audio boards and consoles are designed to do four basic things.
• amplify incoming signals

• allow for switching and volume level adjustments for a variety of audio sources

• allow for creatively mixing together and balancing multiple audio sources to achieve an optimum blend

• route the combined effect to a transmission or recording device
Sophisticated audio boards or consoles also allow you to manipulate specific characteristics of audio. These include the left-to-right "placement" of stereo sources, altering frequency characteristics of sounds, and adding reverberation.
For video field production smaller units called audio mixers provide the most basic controls over audio.
A simplified block diagram of an audio mixer is shown below. The input selector switches at the top of each fader can switch between such things as microphones, CDs, file servers, and satellite feeds.
The selector switch at the bottom of each fader typically switches the output of the fader between cue, audition and program.
Cue is primarily used for finding the appropriate starting point in recorded music. A low-quality speaker is intentionally used in many studios so cue audio is not confused with program audio.
Audition allows an audio source to pass through an auxiliary VU meter to high quality speakers so levels can be set and audio quality evaluated.
And, of course, program sends the audio through the master gain control to be recorded or broadcast.
Even though audio boards, consoles, and mixers can control numerous audio sources, these sources all break down into two main categories:
• mic-level inputs
• line-level inputs
Mic-level inputs handle the extremely low voltages associated with microphones, while line-level inputs are associated with the outputs of amplified sources of audio, such as CD players.
Once they are inside an audio board, all audio sources become line-level and are handled the same way.

Using Multiple Microphones in the Studio
Most studio productions require several mics. Since the mics themselves may have only a 5- to 10-meter (15- to 30-foot) cord, mic extension cables may be needed to plug the microphone into the nearest mic connector.
Studio mics use cables with three-prong XLR or Cannon connectors, as shown on the left.
Since things can get confusing with a half-dozen or more mics in use, the audio operator needs to make a note of which control on the audio board is associated with which mic. A black marker and easily removed masking tape can be used on the audio board channels to identify which mic is plugged into which channel. Mic numbers ("lav 1") or talent names ("John") can be used for identification.
In the studio, mic cables are normally plugged into three-prong XLR or Cannon connector receptacles mounted in the studio wall, as shown in this six-connector array.
Because mics represent one of the most problem-plagued aspects of production, they should be carefully checked before the production begins. Unless you do this, you can expect unpleasant surprises when you switch on someone's mic, and there is either no audio at all, or you faintly hear the person off in the distance through another mic. Either way, it's immediately clear that someone goofed.
There is another important reason that mics should be checked before a production: the strength of different people's voices varies greatly.
During the mic check procedure you can establish the levels (audio volume) of each person by having them talk naturally, or count to 10, while you use a VU meter to set or make a note of the appropriate audio level.
Of course, even after you establish an initial mic level for each person, you will need to constantly watch (and adjust) the levels of each mic once the production starts. During spirited discussions, for example, people have a tendency to get louder. Monitoring audio gain will be discussed below.
It is also good practice to have a spare mic on the set ready for quick use in case one of the regular mics suddenly goes out. Given the fragility of mics, cables, connectors, etc., this is not an unusual occurrence.

An elaborate digital audio console (board), such as the type you would find in a major production studio, is shown above. Note that many of the setting and monitoring status displays are in the form of small LCD screens at the top of the board.
Remember, throughout these modules we're introducing you to equipment that you could easily encounter on a job or internship, and not the kind of equipment that's typical for schools and training facilities.


Using Multiple
Mics in the Field
If only one mic is needed in the field, it can simply be plugged into one of the audio inputs of the camera. (The use of the internal camera mic is not recommended except for capturing background sound.)
When several microphones are needed and their levels must be individually controlled and mixed, a small portable audio mixer will be needed. The use of an audio mixer generally requires a separate audio person to watch the VU meter and maintain the proper level on each input.
Portable AC (standard Alternating Current) or battery-powered audio mixers, such as the one shown here, are available that will accept several mic- or line-level inputs.
The output of the portable mixer is then plugged into a high-level video recorder audio input (as opposed to a low-level mic input).
Most portable mixers have from three to six input channels. Since each pot (short for potentiometer; also called a fader or volume control) can be switched between at least two inputs, the total number of possible audio sources ends up being more than the number of faders. Of course, the number of sources that can be controlled at the same time is limited to the number of pots on the mixer.
There is a master gain control — generally on the right of the mixer — that controls the levels of all inputs simultaneously. Most mixers also include a fader for headphone volume.
Although handheld mics are often used for on-location news, for extended interviews it's better to equip both the interviewer and the person being interviewed with personal mics.
Whereas the mixer shown above will probably require a special audio person to operate, the cameraperson can operate the simple two-mic mixer shown on the left. The output from the unit is simply plugged into the camcorder. A slightly different approach to this was discussed in Module 39.

Audio Mixer Controls
Audio mixers and consoles use two types of controls: selector switches and faders. As the name suggests, selector switches simply allow you to select and direct audio sources into a specific audio channel.
Faders (volume controls) can be either linear or rotary in design. As we've noted, faders are also referred to as attenuators, gain controls, or pots (for potentiometers).
A rotary fader is shown here.
Linear faders (shown on the right) are also referred to as vertical faders and slide faders.

"Riding Gain"
It's important to maintain optimum levels throughout a production. This is commonly referred to as riding gain.
You will recall that, depending on the production facility, digital and analog audio signals typically require different optimum levels. However, to reduce confusion in the following discussion we'll use the analog standard of 0dB to represent a maximum level.
Normal audio sources should reach 0dB on the VU or loudness meter (next to the 100 in the illustrations) when the vertical fader or pot is one-third to two-thirds of the way up (open).
Having to turn a fader up fully in order to bring the sound up to 0dB indicates that the original source of audio is coming into the console at too low a level. In this case the probability of system background noise increases.
Conversely, if the source of audio is too high coming into the board, opening the fader very slightly will cause the audio to immediately hit 0dB. The amount of fader control over the source will then be limited, making smooth fades impossible. In either case an adjustment should be made in the output of the originating audio source.
To reflect the various states of attenuation (resistance), the numbers on some faders are the reverse of what you might think. The numbers get higher (reflecting more resistance) as the fader is turned down. Maximum resistance is designated with an infinity symbol, which looks like an "8" turned on its side.
When the fader is turned up all the way, the number on the pot or linear fader may indicate 0, for zero resistance. Even so, just as you would assume, when the pot is turned clockwise or the fader control is pushed up, volume is increased.

Level Control and Mixing
Audio mixing goes beyond just watching a VU meter. The total subjective effect as heard through the speakers or earphones should be used to evaluate the final effect.
For example, if an announcer's voice and the background music are both set at 0dB, the music will interfere with the announcer's words. Using your ear as a guide, you will probably want to let the music peak at around -15dB, and the voice peak at 0dB to provide the desired effect: dominant narration with supporting but non-interfering background music.
But, since both music and voices have different frequency characteristics (and you'll recall that, unlike VU meters, our ears are not equally sensitive to all frequencies), you will need to use your ear as a guide.
During long pauses in narration you will probably want to increase the level of the music somewhat, and then bring it down just before narration starts again.
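For those who like to see the math, the level relationship described above boils down to a simple gain calculation. The -15dB figure is the guideline from the text; the function names are assumptions for illustration:

```python
def db_to_gain(db):
    return 10.0 ** (db / 20.0)

def mix_voice_over_music(voice, music, music_db_under_voice=-15.0):
    """Mix narration at full level with the music pulled down by about 15dB.

    Assumes 'voice' and 'music' are numpy arrays of equal length."""
    return voice + music * db_to_gain(music_db_under_voice)
```

In practice, of course, the operator's ear -- not just the meter -- makes the final call, raising the music during pauses and easing it back down before the narration resumes.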
In selecting music to go behind (under) narration, instrumental music is always preferred. If the music has lyrics sung by a vocalist (definitely not recommended as background to narration) they would have to be much lower so as not to compete with the narrator's words.

Using Audio From PA Systems
In covering musical concerts or stage productions a direct line from a professionally mixed PA (public address) system will result in decidedly better audio than using a mic to pick up sound from a PA speaker.
An appropriate line-level output of a public address (PA) amplifier fed to a high-level input of a mixer can be used. However, don't connect a high-level or speaker-level PA signal to a mic input; it can overload and possibly damage the input circuitry.
________________________________________
Module 44A
Updated: 05/11/2010



Part I

Audio Recording,
Editing and Playback

A Quick Look Back:

Turntables and Reel-to-Reel Tape Machines
Records and reel-to-reel tape machines used to be the primary source of prerecorded material in TV production. Part of a reel-to-reel machine is shown on the right.
Today, they have almost all been replaced by CDs (compact discs), DAT (digital audiotape) machines, and computer-type hard drives.
"Vinyl," a term that refers mostly to LP (long playing) records, was the primary medium for commercially recorded music for several decades. (Note photo below.)
Most vinyl records were either 45 or 33 1/3 rpm (revolutions per minute) and had music recorded on both sides. Records had a number of disadvantages — primarily the tendency to get scratched and worn, which quickly led to surface noise.
Unlike vinyl records, some of the newer media can be electronically cued, synchronized, and instantly started — things that are important in precise audio work.
Reel-to-reel analog 1/4-inch tape machines, which were relied upon for several decades in audio production, have also almost all been replaced — first by cart machines (below) and then by DAT machines and computer hard drives.
The Return of Vinyl?
Although digital equipment has a multitude of advantages, especially in TV production, in recent years some audiophile purists have been returning to analog recordings -- especially vinyl LP records. (Note photo above.)
They say that analog equipment, including tube-based amplifiers, renders a fuller, richer tone to music. Unfortunately, the latest generation of this type of analog equipment tends to cost many times what it originally did.
Along with the move to maximum audio fidelity we've seen the opposite take place with the popularity of miniature digital audio devices equipped with earbuds. Although convenient and quite mobile, the audio files typically involve significant digital compression, which impacts quality. In particular, the dynamic range is reduced, raising the overall loudness of music.

Cart Machines
Cart machines (cartridge machines), which are still used in a few facilities, incorporate a continuous loop of 1/4-inch (6.4mm) audiotape within a plastic cartridge.

Unlike an audio cassette, which you have to rewind, the tape in a cart is a continuous loop. This means you don't have to rewind it; you simply wait until the beginning point cycles around again. At that point the tape stops, cued up to the beginning.

Most carts record and play back 30- and 60-second segments (primarily used for commercials and public service announcements) or segments of about three minutes (for musical selections).

Audio carts are now well on their way to museums of broadcasting, along with other exhibits of broadcast technology used in earlier years. Today, audio is primarily recorded and played back on hard drives, CDs, and solid-state recorders.

Compact Discs
Because of their superior audio quality, ease of control, and small size, CDs (compact discs) have been a preferred medium for prerecorded music and sound effects. (However, today, radio stations typically transfer CD selections to a computer disk for repeated use.)

Although the overall diameter of a typical audio CD is only about 4.7 inches (12 centimeters), a CD is able to hold more information than both sides of a 12-inch (30.5cm) LP phonograph record. Plus, the frequency response (the audio's pitch range from high to low) and dynamic range (the range from loud to soft that can be reproduced) are significantly better.

Although CDs containing permanently recorded audio are most common, CD-Rs (recordable compact discs) and CD-RWs (rewritable compact discs) are also used in production. These offer the advantages of standard CDs; in addition, CD-Rs can be recorded once and CD-RWs can be erased and re-recorded multiple times.

Radio stations that must quickly handle dozens of CDs use Cart/Tray CD players, such as the one shown on the right.
As we've noted, for repeated use, CD audio tracks are commonly transferred to computer disks where they can be better organized and quickly selected and played with mouse clicks or a few strokes on a keyboard. A computer screen displays the titles and artists, and the time remaining for a selection that's being played.

In mass producing CDs an image of the digital data is "stamped" into the surface of the CD in a process similar to the way LP records (with their analog signals) are produced.

When a CD is played, a laser beam is used to illuminate the microscopic digital pattern encoded on the surface. The reflected light, which is modified by the digital pattern, is read by a photoelectric cell.

The width of the track is 1/60th the size of the groove in an LP record, or 1/50th the size of a human hair. If "unwound" this track would come out to be about 3.5 miles (5.7 km) long. Of course, DVDs take this technology even further, but that's a story for another module.

In 2004, MP3 CDs appeared that have the capacity of as many as 10 standard CDs.
CD Defects and Problems

If the surface of the CD is sufficiently warped due to a manufacturing problem, or to improper handling or storage, the automatic focusing device in the CD player will not be able to adjust to the variation. The result can be mistracking and loss of audio information.
Automatic Error Correction

Manufacturing problems and dust and dirt on the CD surface can cause a loss of digital data. Professional CD players attempt to compensate for the signal loss in three ways:

• error correction
• error concealment (interpolation)
• muting
Error-correcting circuitry within the CD player can detect momentary losses of data (dropouts) and, based on the existing audio at that moment, supply missing data that's close enough to the original not to be readily noticed.

If the loss of data is more significant, error-correcting circuits can instantly generate or repeat data that more or less blends in with the existing audio. If this type of error concealment has to be invoked repeatedly within a short time span, you may hear a series of clicks or a ripping sound.

Finally, if things get really bad and a large block of data is missing or corrupted, the CD player will simply mute (silence) the audio until good data again appears — a solution that's clearly obvious to listeners.

In the second part of this Module we'll look at the latest audio recording and playback processes.
More

Module 44B
Updated: 04/04/2010

Part II


Audio
Recording,
Editing and
Playback
.
DAT
DATs (digital audiotapes) are capable of audio quality that exceeds what's possible with CDs.
Although the cassette is about two-thirds the size of a standard analog audiocassette, its two-hour capacity is 50 percent greater than that of a standard 80-minute CD.
Even though DAT has been largely replaced by hard (computer) disk recording, the DAT format is still used to a limited degree in film and television recording. One of its major advantages is that it incorporates time code that can synchronize audio with other devices.

Computer Hard Drives
Today, radio stations and professional production facilities rely primarily on computer hard drives for recording and playing back music, commercials, sound effects, and general audio tracks. Recording audio material on computer hard drives has several advantages.
First, the material can be indexed in an electronic "table of contents" display that makes it easy to find what you need. This index can also list all of the relevant data about the "cuts" (selections) -- durations, artists, etc. Second, by scrolling up or down the index you have (with the help of a mouse or keyboard) instant access to the selections.
Once recorded on a hard drive, there is no wear and tear on the recording medium as the audio tracks are repeatedly played. Another advantage is that the selections can't be accidentally misfiled after use. (If you've ever put a CD back in the wrong case, you know the problems this can represent.) And, finally, unlike most CDs, hard drive space can easily be erased and re-used.
Data Compression
Both digital audio and video are routinely compressed by extracting data from the original signal that will not be missed by most listeners or viewers.
This makes it possible to record the data in much less space, and, thus, faster and more economically.
In the case of audio compression, there is considerable controversy over what is gained and what is lost. "The Loudness Wars: Why Music Sounds Worse" on NPR discusses this controversy. An example of what is lost through compression can be found in "The Loudness Wars, A Real Example!"
As we will see in the chapters on video where this process is discussed in more detail, data can be compressed to various degrees using different compression schemes.
Although hard drives are extremely reliable, they do occasionally "crash," especially after thousands of hours of use or a major jolt ends up damaging the delicate drive and head mechanism.
Unless anti-virus measures are instituted, and assuming the computer is connected to the Internet or "the outside world," the computer operating system can also be infected with viruses, which can result in a complete loss of recorded material. With these things in mind, critical files and information should always be "backed up" on other recording media.

IC and PC Card Recorders
Some audio production is now being done with PC card and IC recorders. These and similar audio and video recorders use a variety of solid-state devices, referred to as flash memory.
These memory cards contain no moving parts and are impervious to shock and temperature changes.
The data in these memory modules can be transferred directly to a computer for editing.
These units typically give you the choice of two basic recording formats: MPEG-2, a compressed data format, and PCM (pulse code modulation), an uncompressed digital format. The latter is used by CD players, DAT recorders, and computer editing programs that use wave (.wav) files.

RAM Audio Recorders
As shown on the right, this new generation of recorders can be a fraction of the size of other types of recorders.
However, unlike recorders with removable media, the stored audio must generally be played back from the unit, itself.


The iPod Era
When iPod-type devices and computers that could "rip" (copy) musical selections from CDs and Internet sources arrived on the scene, consumer audio recording and playback changed in a major way.
Users can assemble hours of their favorite music (up to 2,000 songs) on a computer and transfer it to a pocket-sized, solid-state listening device such as an iPod (on the left) or to one of the new generation cell phones (on the right).


"Podcasts" of broadcasts from TV networks (photo on the left) can also be downloaded and listened to or viewed at the user's convenience.
With the iPod nano you can watch up to 5 hours of TV shows, music videos, movies, and podcasts.
Although Apple Computer initially popularized these devices, many manufacturers now produce their own versions.


Audio Editing Systems
Audio editing used to require physically cutting and splicing audiotape — an arduous process.
Today, there are numerous computer-based audio editing programs available. Many are shareware that can be downloaded from the Internet.
Shareware can be downloaded and tested, generally for about a month, before the program quits working and you need to pay for it.
Once you pay, you may be given an unlock code that will enable you to use the program for an unlimited time.
Often, minor updates to the program are free; major updates will probably involve an update charge.
In addition to basic editing, audio editing programs offer audio filtering, manipulation, and an endless range of special audio effects.

The audio line above shows how a single channel of sound appears in an audio editor. The vertical red line indicates the cursor (selector) position.
Much as a cursor is used to mark words in a word processing program to make changes as needed, the cursor in an audio time line provides a point of reference for making audio changes.


The display above shows how the time lines are integrated into a typical audio editor. Most programs use a computer mouse to drag-and-drop segments and visual effects onto a time-line (the longitudinal graphical representation of the audio along a time continuum).
Audio editing in television production is typically handled along with the video on a video editing system. This will be covered in more detail in Module 56.
The hard drives on computer-based audio editing systems can also store a wide range of sound effects that can be pulled down to a time line to accompany narration and music.
________________________________________
Module 45
Updated: 04/04/2010





Wrapping
Up Audio



Audio Level
Control Devices
Although manually maintaining audio levels is generally the best approach, there are some automatic devices that can help, and even do some things that you can't do manually.

AGC Circuits
We'll start with a simple audio control circuit, one that is built into most consumer audio equipment.
If the average audio level is low, an AGC (automatic gain control) circuit will raise it; if the average level is too high, the circuit will bring it down.
Even though AGC circuits can free you from having to worry about manually controlling audio levels, they can't intelligently respond to different audio needs.
When no other sound is present, as, for example, during a pause in dialogue, an AGC circuit will attempt to bring an audio level up to a standard setting. This can momentarily make unwanted background sounds louder. If subsequent audio processing circuits (in editing equipment, for example) have AGC circuits, the problem can get progressively worse as each piece of equipment further increases background noise.
AGC circuits can also introduce a reverse problem. Since they respond to loud noises by quickly pulling down audio levels, this means that words can be lost when an AGC circuit reacts to a loud sound, such as someone bumping the microphone.
In professional camcorders audio levels can be manually controlled, but in many consumer (nonprofessional) camcorders the AGC circuit can't be switched off.
Because of the effect of the AGC circuit in bringing up sound levels during a period of silence, the first few seconds of audio may be distorted until the AGC sets the proper level.
To get around this problem, many videographers (stuck with an AGC circuit they can't switch to manual control) have the on-camera talent say a few words just before the actual start of the segment. This can be simply counting, "5, 4, 3, 2, 1," to allow the AGC to adjust proper audio level. This countdown is then deleted during editing.

Compressors
Audio compressors also bring up low amplitude sounds and pull down the amplitude of loud sounds -- but they are much more sophisticated than AGC circuits.
Unlike AGC circuits, compressors can be adjusted so that many of the negative effects of automatic control go unnoticed. Program audio that has been compressed seems louder to the ear than non-compressed audio, a feature that hasn't escaped the attention of the producers of TV commercials.
Compressors typically have three controls:
• threshold, which establishes the audio level where compression begins
• compression ratio, which determines the amount of compression (which would be like expanding or narrowing the area on the right side of the illustration above)
• gain, which is simply the maximum output level
Some compressors have only two controls: input and output levels.
By raising the input level while keeping the output the same, a greater compression is achieved, at least until major distortion becomes evident. The compressor shown here has VU meters for input and output levels.
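If you want to see what those controls do mathematically, here is a minimal, static compressor sketch (an illustration of the three controls listed above, not a model of any particular unit -- real compressors also have attack and release timing that this ignores):

```python
import numpy as np

def compress(samples, threshold_db=-20.0, ratio=4.0, makeup_gain_db=0.0):
    """Reduce anything above the threshold by the given ratio (per-sample, static)."""
    eps = 1e-12
    level_db = 20.0 * np.log10(np.abs(samples) + eps)        # per-sample level in dB
    over_db = np.maximum(level_db - threshold_db, 0.0)       # amount above threshold
    gain_db = -over_db * (1.0 - 1.0 / ratio) + makeup_gain_db
    return samples * 10.0 ** (gain_db / 20.0)
```

A ratio of 1.0 leaves the audio untouched; a very high ratio flattens everything above the threshold, which is essentially what the limiter discussed below does.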
Compressors and AGC circuits can create problems with music. Although AM rock radio stations of the 1960s and 1970s may have preferred a maximum-loud sound, the artists often complained that their carefully balanced audio levels were destroyed. Everything in the recording, whether intended to be loud or soft, came out sounding about the same.

Limiters, Peak Limiters
A basic audio limiter isn't as sophisticated as a compressor or even an AGC circuit. As the name suggests, limiters simply keep the audio from exceeding a set maximum level.
By setting a limiter at 0dB, for example, you can be assured that a sudden loud noise, such as a door slamming, will not "pin" the VU meter and cause major audio distortion (and possibly jar listeners out of their seats!).

Audio Expanders
Although they have more limited use, audio expanders increase the dynamic (loudness) range of audio that has been overly processed. Audio that has gone through satellite relays, for example, often ends up being overly compressed.
Expanders can restore the audio to its normal range and in the process, reduce noticeable background noise.

Audio Filters
An audio filter can be used to cut or attenuate audio frequencies either above or below certain points or within the audio range.
For example, you may need to reduce or eliminate the low rumble of air conditioning or the hum of alternating current. In both cases a filter that eliminates frequencies below about 120Hz may solve the problem.
On the other end of the frequency range, you may want to try to eliminate upper range frequencies associated with the rustle of clothes or paper. For this you can try cutting off everything above about 8,000Hz.
By band-limiting the audio to roughly the 300 to 3,000Hz range used by telephone lines (cutting the frequencies below and above that range), you can simulate the sound of a telephone conversation -- or possibly a radio or TV in the background of a dramatic scene. You can use a graphic equalizer to do this, or on some audio boards, you can switch a specific filter into an audio channel.
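Software filters make these same moves easy to try. The sketch below uses Python's scipy library (assuming the audio is a numpy array sampled at 48kHz; the cutoff figures are the ones mentioned above, and the variable names are illustrative):

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 48_000                     # assumed sample rate
audio = np.zeros(fs)            # placeholder -- substitute your own samples

# Cut air-conditioner rumble and AC hum: high-pass below about 120Hz.
rumble_filter = butter(4, 120, btype='highpass', fs=fs, output='sos')

# Tame clothing or paper rustle: low-pass above about 8,000Hz.
rustle_filter = butter(4, 8_000, btype='lowpass', fs=fs, output='sos')

# "Telephone" effect: keep only roughly the 300 to 3,000Hz voice band.
phone_filter = butter(4, [300, 3_000], btype='bandpass', fs=fs, output='sos')

filtered_audio = sosfilt(rumble_filter, audio)
```

On an audio board or in an editing program the same thing is done with equalizer or filter controls rather than code, but the frequency thinking is identical.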

Production Communication Systems
PL Systems
Since a live, multi-camera TV production involves the closely coordinated efforts of numerous people, reliable behind-the-scenes communication links are critical.
Using a PL (private line or production line) headset such as the ones shown here, production personnel can talk to each other and receive instructions from a director.
Most PL or intercom systems are wired together on a kind of party line. In this way, each member can hear and talk to everyone else.
Normally, the headset microphones are always on so that both hands can be kept free to operate equipment.
But, for high-noise situations some PL headsets have a push-to-talk feature, which means that everyone's headset mic isn't on at the same time and contributing to the overall noise level.
Another feature that's useful in high noise situations is a large padded earphone, which will help screen out competing sound.

IFB Systems
In ENG (electronic newsgathering) and EFP (electronic field production) it may be necessary for a director to relay messages directly to on-air talent while they are on the air. This can be done if the talent uses a small earphone, or earpiece. This system is referred to as IFB (variously called interrupted feedback or interruptible feedback, or more accurately, interrupted foldback, because, technically, the signal comes from a foldback bus of the audio console).
When switched to program audio (or the basic audio being recorded or transmitted during the production) IFB systems allow on-air talent to hear questions or comments from studio anchors.
Now that you should be up to speed on audio, we can turn our attention to video in the next module.
________________________________________

________________________________________
Module 46
Updated: 04/20/2010



Video Recording
Media



Although the concept of "live" may have exciting connotations, recording a production has many advantages.
• the length of a program or segment can be shortened or lengthened during editing

• mistakes on the part of the talent or crew can be corrected, either by restarting the show, or to some degree during postproduction

• program segments can be reorganized and rearranged for optimum pacing and dramatic effect

• program content can be embellished through the use of a wide array of special effect and editing techniques

• production costs can be saved by scheduling production talent, crew, and production facilities for optimum efficiency, and

• once recorded, programs can be time-shifted or played back to meet the needs of time zones and the programming preferences of local stations
With the exception of a few prime-time dramatic productions that are still done on film, most of today's television programming is recorded on computer hard disks. Even when productions are shot on film, they are routinely converted to video recordings before broadcast.

The Videotape Recording Process
Although videotape has been phased out at most TV stations in favor of solid-state memory, it is still used for applications such as archival storage.
Videotape resembles audiotape in its makeup. It consists of a strip of plastic backing coated with a permanent layer of microscopic metal particles embedded in a resin base. These particles are capable of holding a magnetic charge.
The videotape recording process was first demonstrated in 1953, and the first machines went into service in 1956.
Video recording revolutionized TV production.
Two-inch wide videotape (pictured at the left) was the first practical video recording medium and one that was used for several decades. Because it used four video heads to scan a complete video picture on two-inch wide tape, this system was referred to as the 2-inch quad system.
At the other end of the size spectrum was the Hi8 camcorder (right) that used videotape that's only 8mm wide.
All videotape formats used video heads that travelled across the surface of the tape and left magnetic traces in the tape's coating.
To be able to record the very high frequencies associated with video, not only must the tape be moving, but also the heads, themselves, must spin over the surface of the tape. This ends up being a little like walking along a moving sidewalk; the two speeds are added together.

Disk-Based Recording
DVD
In 1997, the DVD was introduced. (The initials stand for both digital versatile disk and digital videodisk.)
Although DVDs resemble audio CDs, they are capable of holding much more information -- up to 17GB of data.
To achieve capacities up to this level some innovations were added to the standard audio CD approach.
First, it is possible to record at two disk surface levels on the same side of the disk. (Note Blu-ray in the chart below.) For even greater storage capacity, both sides of the disk can be used.
Red light lasers were originally used, but the recording-playback density of data advanced in the early 2000s with the introduction of lasers using a shorter wavelength blue light -- hence, the name, Blu-ray.
The chart below compares standard audio CDs with several versions of DVDs.

Recording Technique                  Audio CD    DVD
Single-sided, single-layer           0.74GB      4.7GB
Single-sided, double-layer           —           8.5GB
Double-sided, single-layer           —           9.4GB
Double-sided, double-layer           —           17GB
HD-DVD, single layer (obsolete)      —           15GB
HD-DVD, double layer (obsolete)      —           20GB
HD single layer, Blu-ray             —           25GB
HD double layer, Blu-ray             —           50GB
Recording technology has been demonstrated that raises the Blu-ray data capacity to 200GB for a double-sided platter. In 2008, several industry decisions meant that the HD-DVD format would be replaced by Blu-ray.
Data compression is used in almost all audio and video digital formats. Data compression is a little like freeze-dried instant coffee; elements are removed that can be later restored without appreciably affecting the final result.
In the same way that instant coffee is almost as good as the real thing, compressed video is almost as good as the original video signal.
Even though an engineer with a sharp eye (or ear) can tell the difference (just as coffee connoisseurs can tell the difference between instant and freshly brewed coffee), by "dehydrating" video signals, not only can far more data be recorded in the same space, but it can also be transmitted much faster.
Because the spiral tracks on the DVD disk surface are microscopic in size, DVD equipment requires a high level of mechanical precision.
Since DVDs were cheaper to manufacture than VHS tapes, this sped the move to DVD. DVDs also allow random access, while VHS tapes were totally linear in nature. This means that with a DVD you can almost instantly jump to any point in a recording. No lengthy fast-forward or rewind process is involved.
The high data capacity of DVDs means that a production can include a number of "extras." Depending on the length of the original film, these extra options may include ▲out-takes, narration from the director, and 4:3 and 16:9 screen formats.
The narration from the director can be of particular value to people in production because it often adds significant insight into music selection, production problems, acting issues, and why particular scenes were deleted.
The 50GB capacity of Blu-ray goes considerably beyond standard DVDs. More than nine hours of HD video will fit on a disk, or about 23 hours of standard definition video.
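For a rough feel of how capacity translates into running time, a back-of-the-envelope calculation comes close to these figures. The Python sketch below assumes average bitrates of 12 Mbps for HD and 5 Mbps for standard definition; those bitrates are illustrative assumptions, not figures from this module.

    # A minimal sketch: estimating recording time from disk capacity and an
    # assumed average bitrate. The 12 Mbps (HD) and 5 Mbps (SD) values are
    # illustrative assumptions only.
    def hours_on_disk(capacity_gb, bitrate_mbps):
        """Return approximate hours of video a disk can hold."""
        capacity_bits = capacity_gb * 1_000_000_000 * 8   # decimal gigabytes to bits
        seconds = capacity_bits / (bitrate_mbps * 1_000_000)
        return seconds / 3600

    print(round(hours_on_disk(50, 12), 1))   # dual-layer Blu-ray, HD  -> ~9.3 hours
    print(round(hours_on_disk(50, 5), 1))    # dual-layer Blu-ray, SD  -> ~22.2 hours
    print(round(hours_on_disk(4.7, 5), 1))   # single-layer DVD, SD    -> ~2.1 hours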
Just as DVDs have almost completely replaced VHS tapes (which previously replaced Betamax tapes), Blu-ray is expected to replace DVDs.

DVDs are typically backwards compatible with standard audio CDs, which means that you can play an audio CD on a DVD player.
Although initial DVD machines didn't allow for recording, more recently DVD-R (DVDs that could be recorded once) and DVD-RW (DVDs that could be used to record or rewrite multiple times) were introduced.
High-Definition DVDs

In 2006 we began to see "home theaters" centered around 5.1 sound from HDTV videodisks (and even 7.1 sound, with an option for two more speakers).
With images that rival or exceed those in theaters, many people -- at least those who can afford home theaters -- now find little reason to leave their homes to see a movie.
At the end of 2007, there were two major competing and incompatible standards for high-definition DVDs: the HD-DVD format backed by a Toshiba-led consortium and Blu-ray backed by a Sony-led consortium.
By early 2008, after several major motion picture studios backed away from HD-DVD, Toshiba conceded that Blu-ray had won the HD format competition. The public had also become aware of the picture quality advantage of Blu-ray, as shown in side-by-side comparisons of the various video formats.
Disk-Based Camcorders
In 1995, two companies introduced the first disk-based camcorders, primarily designed for ENG work.
After going through a few generations of improvement, a disk-based camcorder was introduced in 2002 with a three-hour capacity and the ability to simultaneously record on DVCAM videotape.
Once video and audio segments are recorded with the professional versions of disk-based camcorders, the segments can be played back almost instantly and in any order. We'll have more on this in the next module.

Solid State Memory
Many camcorders -- amateur, prosumer, and professional -- now record on solid-state memory cards, sometimes called flash memory. The memory module shown in front of the credit card on the left can hold up to 90 minutes of consumer-grade video.
This approach provides faster camera-to-computer transfer speeds. Plus, since there are no moving parts in the camcorder, maintenance costs are reduced to a fraction of what they were with videotape, or even videodisc.
Consumer-grade camcorders were the first to use solid-state recording or flash memory. In 2003, after quality and recording capacity had advanced sufficiently, this type of recording also moved to professional camcorders. As we noted earlier, there are currently many types of ▲ solid-state or flash memory.
Camera memory cards can be slipped into a computer and quickly accessed by an editing program. A common transfer approach for cameras with hard disks is a camera-to-computer cable -- often a FireWire connection.
File Servers and
File-Based Systems
While we are talking about digital recording approaches we might as well venture into the editing domain for a moment and talk about file servers (also called video servers and media servers).
Instead of videotape, file servers store audio and video information on high-capacity computer disks. Most broadcast and production facilities are now "tapeless," meaning that file servers are used almost exclusively. These are referred to as file-based systems.
A cutaway view of a high-capacity computer hard disk is shown here. File servers typically consist of numerous computer hard drives.
A file server can be thought of as a kind of high-capacity depository of audio and video segments that can be accessed from workstations (computer editing stations) throughout a production facility.
A production facility may have numerous workstations that all tie into a single, high-capacity server. The concept in newsrooms, where it is most used, is referred to as file-based architecture. In its structure it's similar to a LAN (local area network) used in many institutions to tie desktop computers into the company's main computer.
Once material has been stored on a server, access time is virtually instantaneous.
For long-term (archival) storage, video on the server can be transferred to videotape or a DVD.
________________________________________
Module 47
Updated: 04/20/2010



Consumer Video Formats
and Video Compression

In this module we will cover:
• Video Compression Approaches
• Compression Ratios and Effects
• Consumer Videotape, Hard Drives, and Solid-State Memory
• Personal Video Recorders
• The Democratization of the Medium
________________________________________
Film has retained its basic formats for more than 100 years. But, in only the last few decades, videotape has progressed through no less than 20 different (and incompatible) formats.
Although this may be one of the prices we have to pay for progress, it has also added confusion to video production.
In this discussion we'll skip over most of the video formats that have been introduced over the years -- most of them didn't stay around long anyway -- and very briefly look at those that have been most widely used.
In this module we'll focus on consumer videotape equipment. In the next module we'll cover professional equipment -- although, in recent years the dividing line between these categories has gotten a bit blurred. Serious hobbyists often select professional equipment, and consumer equipment often finds its way into professional applications.
Before we start our discussion of consumer formats we need to go into something that we've previously mentioned but not really explained: data or digital compression.

Digital Compression
Lossy and Lossless Compression
All consumer and almost all professional video formats use some level of video compression. Compression approaches are often divided into lossless and lossy, although there is no clear line between the two categories.
With lossless compression there is no difference -- or some people would say, no readily discernible difference -- between the original and the compressed data. Thus, no loss in quality.
The problem, however, is that lossless techniques involve huge amounts of data and are technically quite demanding. Thus, they require expensive equipment.
Most video and audio compression techniques eliminate data to some degree to make recording and transmission technically easier. It then becomes a matter of how much the data is compressed. When quality starts to be visibly sacrificed -- at least to the trained eye -- the term lossy compression is used.
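The lossless idea is easy to demonstrate with Python's standard zlib library (a minimal sketch; the sample data is made up): repetitive data shrinks dramatically, yet decompression returns exactly the original bits.

    # A minimal sketch of lossless compression using Python's standard zlib module.
    # The sample data is hypothetical; the point is that decompression restores
    # the original bytes exactly, so no quality is lost.
    import zlib

    original = b"video data " * 1000                 # highly repetitive sample data
    compressed = zlib.compress(original)

    print(len(original), len(compressed))            # 11000 bytes vs. a few dozen
    assert zlib.decompress(compressed) == original   # bit-for-bit identical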

Compression Ratios
If you start out with 100 bits of data and compress it to 50 bits, you have a 2:1 compression ratio. If you can reduce the original data to 25 bits, you now have a 4:1 compression ratio.
You can easily use a 2:1 compression with video without noticing any loss in quality. In fact, you can even compress video to 10:1 without noticing a significant difference -- and, in the process, of course, you can record the data in 1/10 the space.
When you move to 20:1 (depending on the subject matter), you will still have an excellent picture, even though a trained eye will notice a slight loss in quality. Using the right compression techniques, compression ratios today go as high as 100:1.
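The arithmetic can be summarized in a few lines of Python, using the numbers from the example above:

    # A minimal sketch of the arithmetic behind compression ratios.
    def compression_ratio(original_bits, compressed_bits):
        return original_bits / compressed_bits

    print(compression_ratio(100, 50))   # 2.0  -> a 2:1 ratio (half the space)
    print(compression_ratio(100, 25))   # 4.0  -> a 4:1 ratio
    print(compression_ratio(100, 10))   # 10.0 -> 10:1, i.e., 1/10 the original space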
Note the subtle pattern added by compression in the windows and in the face in the photo on the right below. Compare that with the photo on the left, which has minimal compression. The enlargements from these images (below these photos) make the differences much more obvious.

This difference is much more noticeable in the 300% enlargements in the two photos on the left below.



As you move to 50:1 and beyond, you can begin to see a noticeable (and objectionable) difference between the original picture and the compressed version.
In full motion compressed video, you often see discrete data blocks or rectangles in the video, especially during rapid action involving large areas of the picture. Note photo on the right above.

Compression Codecs
There are various compression approaches for audio and video. Any specific one, such as MPEG-4, is referred to as a ▲codec.
Top-of-the-line digital camcorders use so-called "no compromise" digital 4:2:2 compression. Explaining these numbers would get us into some deep technical waters, so just keep in mind that 4:4:4 is a pure, uncompromised (uncompressed) signal; 4:2:2 represents minimal and unnoticed compression; and 4:1:1, which is associated with consumer camcorders, involves significant signal compression.
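To make the idea a bit more concrete, here is a minimal Python sketch (my own illustration, not part of the original discussion) of what those numbers imply for the amount of color data per group of four pixels. Real camcorders apply additional compression on top of this sampling.

    # A minimal sketch of what the 4:x:x numbers imply for data volume.
    # For every 4 luminance (luma) samples, the two color-difference (chroma)
    # channels are each sampled 4, 2, or 1 times. This is a simplification;
    # real codecs add further compression on top of this subsampling.
    def samples_per_group(luma, cb, cr):
        return luma + cb + cr

    full = samples_per_group(4, 4, 4)     # 12 samples per 4-pixel group
    for name, scheme in [("4:4:4", (4, 4, 4)), ("4:2:2", (4, 2, 2)), ("4:1:1", (4, 1, 1))]:
        s = samples_per_group(*scheme)
        print(name, s, "samples per group,", f"{s / full:.0%} of the full signal")
    # 4:4:4 -> 100%, 4:2:2 -> 67%, 4:1:1 -> 50%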

MPEG-2 and MPEG-4 Compression
MPEG-2 and MPEG-4 are popular compression techniques that eliminate redundant video information. This includes data between successive frames that does not change, as well as data "that we probably won't miss."
However, rapidly changing subject matter such as a hockey game is particularly taxing for a compression scheme. In this case the discarded data may be necessary to reproduce all of the detail in the action. It is in this type of subject matter that you are most apt to see artifacts, visible video aberrations or problems caused by the compression scheme not keeping up with the speed of action.
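As a very rough illustration of the temporal-redundancy idea -- not an actual MPEG implementation -- consider this minimal Python sketch, where only the pixels that change between two hypothetical eight-pixel "frames" are stored:

    # A minimal sketch (not an actual MPEG codec) of temporal redundancy:
    # instead of storing every pixel of every frame, store only the pixels
    # that changed since the previous frame.
    def changed_pixels(previous, current):
        """Return {pixel_index: new_value} for pixels that differ between frames."""
        return {i: c for i, (p, c) in enumerate(zip(previous, current)) if p != c}

    frame1 = [10, 10, 10, 80, 80, 10, 10, 10]   # hypothetical 8-pixel "frames"
    frame2 = [10, 10, 10, 81, 82, 10, 10, 10]   # only two pixels moved

    print(changed_pixels(frame1, frame2))       # {3: 81, 4: 82} -- 2 values instead of 8
    # Fast, large-area motion (the hockey game above) changes most pixels at once,
    # leaving little redundancy to remove -- which is when artifacts appear.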
Cable and satellite systems routinely compress video. The degree varies, but in every case some technical quality is sacrificed in favor of delivering more channels and services.
This is the major reason that video, even HDTV, will not look quite as good after being transmitted as it did when it was first reviewed or edited.


An Uncompromised Future?
As we've noted, data compression is necessary in the production process because until recently production equipment could not easily handle the high-speed data streams associated with digital and HD video.
However, as data storage becomes cheaper and more compact and computer chips become faster, uncompressed or minimally compressed audio and video may become the norm -- at least in the initial stages of video production. This will significantly improve the quality of productions -- especially HDTV productions -- that go through numerous stages of editing.

Consumer Video Formats
Many people remember the popular 8mm and VHS formats. But before these, there was the once-popular Betamax format that was introduced by Sony Corporation in 1976.
Although it eventually lost out in popularity to VHS, it was the first consumer format to be widely accepted for home use.
Betamax was finally discontinued in 2002.

VHS
Prior to digital recording, the most successful of all the home videotape formats was VHS (Video Home System).
The VHS format lasted more than 20 years and spawned hundreds of thousands of video rental stores around the world. However, as you can see from the graph below, things quickly started to change in 2001 with the development of the DVD.
By 2008, most movie rental stores had relegated VHS tapes to a small section in the back of the store, and by 2009, VHS tapes were no longer being produced.
Just as video rental houses had almost completely switched over from VHS recordings to DVDs, in the early 2000s, as we've mentioned, Blu-ray entered the scene.
This format, which we'll discuss in more detail later, offers more than five times the storage capacity of traditional DVDs. Blu-ray is a high-definition video format that's backwards compatible with CDs and DVDs.
For historical reasons, if nothing else, we need to complete the VHS story. The exploded view of a VHS cassette on the right below shows the two internal reels and the tape path. This basic design is used for all cassettes.
________________________________________


________________________________________
VHS took a step forward in quality when S-VHS (super VHS) was introduced. Some news operations started using it as an acquisition format that could be brought back to the production facility and immediately dubbed (copied) to a higher quality format for editing.
This minimized any subsequent loss in quality due to editing. For a discussion of acquisition formats click here.
Although the technical quality of VHS improved significantly after its introduction, when it came to professional applications the quality still left a lot to be desired -- especially if significant editing and video effects were needed.
Like all of the videotape formats, VHS tapes had a record lockout provision. Once you broke off the small plastic tab shown here, machines would no longer record on the tape. This made it possible to keep important material from being accidentally erased.

A Review of Consumer Tape Formats
When Betamax (not to be confused with Betacam, to be discussed in the next module) didn't survive, 8mm video was introduced.
The format in part tried to cash in on the "8mm" designation that had long been a household name in home (film) movies. In fact, Eastman Kodak was one of the originators of 8mm video.
The reduced size of the 8mm cassette meant that camcorders could be made even smaller than VHS camcorders, a feature that attracted people who had grown weary of dragging around their bulky, full-sized VHS camcorders.
At about the time that S-VHS was introduced, Sony introduced Hi8, a higher quality version of 8mm. This was also used as an acquisition format and, under optimum conditions, could produce high-quality video.
In mid-1999, Sony introduced Digital-8 for the consumer market. This format not only represented a major improvement in quality, but the digital approach made new camcorder features possible.

Digital Formats
Digital video recording has a number of advantages over analog. Although we've mentioned some of these in previous modules, we'll summarize five major advantages.
• A digital videotape can be copied almost indefinitely without a loss of quality. This is an important consideration in postproduction sessions that require numerous generations of video effects.

• Digital material can be directly uploaded to digital editing systems without the need of analog-to-digital conversion.

• Error-correction circuitry associated with digital electronics reduces or eliminates problems such as dropouts (to be discussed in Module 49).

• Digital videotapes are better suited for archival (long-term) storage.

• The technical quality of digital recordings is significantly better than typical analog recordings.
At the same time, compared to analog recording, digital recording requires far greater amounts of data.

DV Camcorders
Many of today's consumer camcorders record digital signals in some type of solid-state memory.
Note this camcorder that is so small it can fit into a shirt pocket.
Many of these units have a FireWire connection, a high-speed data connection that allows the output of the camera to be fed directly into a computer or digital editor.
An alternative transfer method is to insert the solid-state memory into the computer and transfer the data to a hard disk for editing.
Although we introduced disk-based and solid-state memory in the last module, here we'll go into a bit more detail from the perspective of consumer-type camcorders.

Disk-Based and Memory Module
Consumer Camcorders
A tapeless camcorder was introduced for the consumer market by Hitachi in late 1997. The MPEG-Cam could record up to 20 minutes of video and audio on a detachable 260 MB hard disk.
In early 2005, JVC introduced two disk-based camcorders that represented another major step forward. These cameras could record up to one hour of video on removable four-gigabyte Microdrives. The cameras could also record on CompactFlash and SD memory cards.
By 2010, 120GB hard drives in camcorders could hold 15 hours of high-definition video, or 50 hours in standard definition.
Even though filling up one of these drives seems improbable, it can happen during a long vacation; at that point the video has to be downloaded to a computer to free up the camera's hard drive before more can be recorded.
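Working backward from those figures gives a rough idea of the average bitrates involved. A minimal Python sketch (the figures come from the text; the decimal-gigabyte assumption is mine):

    # A minimal sketch checking what average bitrates the figures above imply.
    def implied_mbps(capacity_gb, hours):
        bits = capacity_gb * 1_000_000_000 * 8
        return bits / (hours * 3600) / 1_000_000

    print(round(implied_mbps(120, 15), 1))   # ~17.8 Mbps for HD
    print(round(implied_mbps(120, 50), 1))   # ~5.3 Mbps for standard definition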
One of the advantages of solid-state memory is that memory modules can be switched to add recording time. Many consumer camcorders allow recording on both hard disks and different types of solid-state memory modules.
Many consumer camcorders also allow you to take still photos, which means that you don't have to drag along two cameras on your vacation or trip to Disneyland.
Both the Mac and the Windows operating systems come with a basic editing program that allows you to transfer video from your camcorder to the computer for editing and then burn the results onto a DVD.
In this module we have focused primarily on consumer and prosumer equipment. In the next module we'll discuss professional video recording. However, before we end this module there are a few loose ends in recording media we need to mention.

Miniature Solid-State Camcorders
Although the camcorder ▲ shown at the beginning of this module may look like a toy, especially with its scores of available funky case designs, it's actually capable of recording two hours of high-definition (HD) video with stereo audio.
The 3.3-ounce "Flip," deemed the world's smallest HD camcorder, plugs into a computer for downloading footage through its built-in USB connector. From there it can be edited into a production, sent by e-mail to a destination, or uploaded to a site such as YouTube or MySpace.
Although compared to professional or prosumer camcorders even the $200 HD model of the Flip may leave a bit to be desired, there aren't too many camcorders that can be carried around in a shirt or jacket pocket like a cell phone.
Unlike disc or videotape recording media, solid-state (flash) memory is not subject to problems stemming from jolts, dust, or moisture.

PVR (Personal Video Recorders)
In 1999, a technology was introduced for digitally recording TV programming in the home. PVRs, or personal video recorders, which come with many satellite receivers, use a high-capacity computer hard disk to record 100 or more hours of programming.
Although several companies now make these units, initially, TiVo ® was the name associated with this technology.
These units make it possible to do instant replays of material and (to the consternation of advertisers) speed through commercials at up to 300 times normal speed. As commercials (in one form or another) are now approaching 50% of prime-time programming content, many people feel that this feature alone makes the investment worthwhile.
Desktop and laptop computers -- especially those with high-capacity or multiple hard drives -- are also being used to record programs and movies off the air or from sources such as Hulu.com.
________________________________________

The Democratization of the Medium

It was not too long ago that a broadcast quality camcorder was at least $60,000. Today, digital camcorders that can be used in broadcast applications cost a small fraction of that. Camcorder equipment is now more reliable and simpler to operate. All this has led to what some have called the "democratization of the medium."
With the proliferation of public access opportunities on cable channels and Internet video sites such as YouTube, the ideas and concerns of many more people can be expressed and heard.
Many news stories today are being recorded with cell phone cameras (or, ideally, on somewhat higher-quality equipment) and uploaded to news outlets. CNN started this mode of "citizen journalism" in 2006 with its I-Reports.
However, only a small percentage of the contributions could be used on its main site, so in 2008 CNN launched an "unedited, unfiltered" iReport site that immediately got more than 100,000 submissions.

Seeing something is quite different from reading about it. Although an event may take place that can and should elicit a public outcry of opposition, until it is recorded on video for "all the world to see," little may be done. (A striking example is the story of Neda that we've mentioned before.)
We don't showcase videos on this site, primarily because of established options such as YouTube. Here are just a few of the other options available.

Smoothing Out the Film-Video Difference
Compared to film, digital video has its own unique characteristics. It can look sharper and colder than film, and exhibit compression artifacts that many people feel detract from the video medium.
At the same time, for those who feel these things are not desirable, there are a variety of filters available that can counteract these effects. These are discussed here.

________________________________________
Module 48
Updated: 05/24/2010



Professional
Video Formats
Video cameras like the one on the right are being used in the production of many TV series and even for theatrical motion pictures.
First, although there's a rather blurry line between professional and consumer formats, professional camcorders typically have many of the following features:
• Three imaging "chips" (Consumer formats typically have only one.)

• An audio level meter and the ability to control audio levels (i.e., you are not stuck with an AGC audio circuit).

• Low-impedance, balanced (i.e., professional quality) mic inputs

• A jack for headphones so you can monitor audio with high-quality earphones

• Detachable lenses so you can use special purpose lenses and aren't stuck with whatever zoom lens the manufacturer originally put on the camera

• A video output for an external video monitor (You and others can see the video on a large, high-quality monitor.)

• High-quality 4:2:2 digital signal processing
• In some cases a dockable camcorder design where the camera can be fitted with (attached to) different recording devices.
________________________________________

If you are interested in seeing the multitude of controls and features available in one of today's sophisticated prosumer camcorders, you can click here.

As you will see in this module, there have been a bewildering number of professional videotape formats; in fact, at least 15.
First, it may be helpful to look at this comparison chart on the major quality differences between some of the popular consumer and professional formats.
Keep in mind that the greater the bandwidth (frequency in MHz) of the luminance part of the signal, and the greater number of horizontal lines of resolution, the clearer the video picture will initially appear to be.
Format                        VHS     Beta SP   S-VHS   DVCAM   D-1
Luminance in MHz              3.0     4.5       5.0     5.5     5.75
Horizontal resolution
in TV lines                   240     360       400     440     460
You will note that as you move from earlier VHS recorders on the left to the best professional machines on the right, that both the amount of luminance information and the lines of resolution increase.
Recall that some engineers now prefer the term "luma" instead of "luminance" when referring to the black-and-white or achromatic portion of the video signal. The technical difference depends on the application and is beyond the scope of this discussion. This distinction notwithstanding, the term "luminance" is still widely used in video.
Now, let's take a look at some of the major professional recording formats.

One-Inch, Reel-to-Reel
In an earlier module we mentioned the two-inch tape that started the whole video recording process. After almost three decades of use, the two-inch quad format gave way to one-inch tape. Initially there were "Type A" and "Type B" versions of the one-inch format.
But, it was the Type C version that became the next major standard, especially in countries using the NTSC video standard.
With the one-inch Type C format, still-frame, slow- and accelerated-motion playbacks were possible for the first time. During the 1980s, Type C (shown here) was the dominant format in broadcasting and production facilities.

Reel-to-Reel Gives Way to Cassettes
The first widely used videocassette format was 3/4-inch U-Matic introduced in 1972. This format was initially intended as a home and institutional format, but because of its small size (at least for the time), it was soon adapted for broadcast field production in general and electronic newsgathering (ENG) in particular.
Among its technical limitations was the fact that its quality was limited to 260 lines of resolution (sharpness). It was never considered a quality production format -- even after the resolution was later increased to 330 lines. Even so, the 3/4-inch cassette format quickly replaced 16mm film in TV news. This, in itself, represented a bit of a revolution in TV news.
Like all of the cassette tape formats, 3/4-inch U-Matic cassettes had a record lockout function to keep important material from being accidentally erased. When the red button (shown in the photo on the right) was removed, machines would not record on the tape.

________________________________________

Error-Correction Circuitry
All of the videotape formats had to cope with the possibility of momentary interruptions in the flow of data as the tapes were recorded and played back. It's easy to see why such interruptions can occur.
A signal was recorded on the videotape in a data area the width of a human hair.
The read-write heads spun across these areas at a speed of about 9,000 RPM (revolutions per minute). In an analog recording a dust particle on the tape or an imperfection in the tape caused ▲dropouts. The momentary glitches are shown here.
A momentary head-to-tape separation of only four microns (which is 1/20th the size of a human hair) could cause a tape dropout. A speck of dirt or even a smoke particle from a cigarette is at least this size.
To try to compensate for these problems, professional digital machines incorporate error correction circuitry. Simply put, in digital machines these circuits keep track of the mathematical sums of the 0s and 1s in each block of data. If "things don't add up," these circuits substitute appropriate digital numbers (data).
If a large block of data is corrupted, the circuitry will substitute data from previous data blocks. Taken to the extreme, if you lose a complete video frame, you will see the last good video frame frozen on the screen while awaiting uncorrupted data.
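Here is a minimal Python sketch of the concept. Real machines use far more sophisticated error-correcting codes; the simple modulo-256 checksum and "substitute the last good block" logic below are only an illustration of the idea described above.

    # A minimal sketch of the error-concealment idea: keep a simple checksum
    # for each block of data and, if a block "doesn't add up" on playback,
    # substitute the last good block (the "freeze frame" behavior described above).
    def checksum(block):
        return sum(block) % 256

    def play_back(blocks_with_checksums):
        last_good = None
        recovered = []
        for block, stored_sum in blocks_with_checksums:
            if checksum(block) == stored_sum:
                last_good = block            # block is intact
            # otherwise fall back on the previous good block
            recovered.append(last_good)
        return recovered

    good = ([1, 2, 3], checksum([1, 2, 3]))
    corrupted = ([1, 9, 3], checksum([1, 2, 3]))   # data damaged after recording
    print(play_back([good, corrupted]))            # [[1, 2, 3], [1, 2, 3]]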
________________________________________

Professional Digital Formats
The "D" Formats
There is a long line of D (digital) formats, and we'll briefly run through them as a way of quickly tracing the history of digital videotape.
Sony developed D-1 in 1986. This was the first digital format, and it made possible multi-generation editing without the loss in quality inherent in the analog formats. D-1 is considered a "no compromise" format where the color information is recorded separately from the luminance. D-1 is still used in a few specialized applications where there's a need for extensive postproduction visual effects.
D-2, introduced by Ampex Corp., quickly followed D-1. Matsushita (Panasonic) introduced D-3 in 1991. Since it used a small 1/2-inch tape cassette, this format was used for the first digital camcorders.
There is no D-4; the Japanese word for "four" sounds like the word for "death," and by this time almost all of the equipment was being manufactured in Japan. (Of course, in the U.S. many buildings don't have a 13th floor and some airplanes don't have a 13th row.)
Since D-3 wasn't as successful as Panasonic would have liked, they introduced D-5 in 1993, in part to compete with Sony's popular digital Betacam.
Because D-5 had many technical advantages, this format made a definite impact in the high-end equipment arena.
D-5 was the first format to rival the "no compromise" D-1 quality.
D-7, or DVCPRO, was Panasonic's way of moving the advantages of the small DV and DVC formats up to a professional level.
DVCPRO (D-7) used the same sized tape as DV, and made use of the quality advantages of metal particle tape.
One of the advantages of DVCPRO was that the tape cartridges could be transferred to the computer's hard drive at four times normal speed.
At that point the "D" designations for videotape were abandoned and new digital recording media were introduced.

DVCAM, Digital Betacam
Sony's DVCAM was a professional adaptation of the consumer DV format and incorporated many of the same type of improvements used when DV was upgraded to DVCPRO.
DVCAM incorporated the "iLink" (IEEE-1394) or FireWire connection, which enabled recorders to plug directly into computer-based editing systems. DVCAM machines could play back the DV and DVCPRO formats.
Digital Betacam was introduced by Sony in 1993 as a digital replacement for their very popular analog Betacam line introduced 20 years earlier.
The format was based on a 1/2-inch tape format pioneered by companies such as Grundig and Philips. (A Betacam cassette is shown on the left.)
In the same way that users pushed Panasonic to improve DVCPRO by introducing DVCPRO 50, Digital Betacam users had concerns that prompted Sony to introduce the higher-quality Betacam SX in 1996.
Digital-S (D-9)
Digital-S was designed as a professional upgrade to S-VHS. When the standard was officially accepted by the SMPTE standards committee and became the D-9 format, it found its way into professional applications.
D-9 had a pre-read function that incorporated the simultaneous use of separate record and playback heads. This made it possible to see (check) the recorded signal a split-second after it's recorded.

Disk-Based Recording
We introduced the concept of camcorders that record on computer hard drives in the last module. (Note: you will see "disk" spelled "disc" in some applications by some manufacturers.)
However, at the professional level a number of additional features were incorporated into these machines. One model, introduced in mid-2003, allowed you to record two channels of video and audio, while simultaneously playing back two channels. This made it possible to do basic editing "in the camera," with an almost instant access to the scenes.
Going Tapeless
With this history, we have almost reached a time when videotape will end up in a Museum of Broadcasting display of historical developments.
Most TV stations have dropped videotape for all but archival storage. Consumer-level camcorders that use videotape are no longer manufactured.

One of the things that made production people question the future of videotape came in 2006, when the accomplished director David Fincher shot the full-length feature film Zodiac entirely on computer hard drives. All postproduction work was subsequently done using these digital recordings.
According to Fincher, "The biggest challenge involved grappling with a studio and industry culture that tends to see the removal of physical media as an impediment to their security and long-term archiving goals. ...It's about getting people to wrap their minds around change." (In the end all of the footage was transferred to videotape -- but only for long-term storage.)

DVD and Solid-State Recording
Two recording techniques were then introduced that virtually spelled the demise of videotape: blue laser DVD recording and solid-state cards. The latter are solid-state memory cards that slide into slots in camcorders and computers. (See below.)
In late 2002, Hitachi introduced a tapeless acquisition format that records both in solid-state memory and on a DVD. This combination made it possible to record and edit projects in the field.
Sony's DVD system uses a blue laser light to record up to 23.3GB of data on a single 5-inch (12.7cm) DVD camcorder disk. This translates into over an hour of broadcast-quality audio and video.
Like with any DVD, it's possible to almost instantly move to any point in a recording. The recordable DVDs can be used multiple times.
Panasonic introduced P2 professional grade solid-state recording in 2004. Their AJ-SPX800 camcorder has no moving parts and has slots for up to five memory cards. Each card can record up to 32 GB ( ▲ gigabytes) of material.

Once video is recorded, the card can be removed and placed in a computer for editing.
Subsequently, Sony introduced its own flash memory cards. The flash memory, "no moving parts" approach is highly resistant to environmental problems such as humidity and vibration. Plus, it uses far less power than either videotape or disk recording.
Solid-state (flash) memory cards are advertised as being able to record and play back up to 100,000 times. This means that they have a much longer useful life than videotape or even camera DVDs.
There are two more advantages to using solid-state memory. Some models allow for playbacks and digital uploading to editing systems at 20X normal speed. It's possible to make digital camcorders so small that you can close your hand around one model. (Note photo.)

High-Definition Formats
The first high-definition (HD) digital recorder was Sony's HDD-1000. It used 1-inch, open reel tape (which, incidentally, cost $1,500 for a one-hour reel). Perhaps, not unexpectedly, these machines weren't big sellers and they were soon replaced by HDCAM.
We previously mentioned the D-6 format, so we'll move on to D-5HD, which, as you might guess, was an HDTV version of Panasonic's D-5 line. (Note video recorder on the right.)
Likewise, the DVCPROHD was an upgraded version of DVCPRO. However, the tape speed was increased to four-times that of DVCPRO. This gives you some idea of the extra demands of HDTV signals.
In late 2003, JVC introduced the first consumer grade HDTV camera, the GR-HD1. It used mini-DV tapes and cost a fraction of what professional HDTV cameras cost.
This was followed by HDTV camcorders from Panasonic, Sony and Canon. A number of documentaries that have ended up on network TV have originated with these cameras.
At this point solid-state video recording was introduced, which meant that for amateur, prosumer and professional applications videotape was on the way out.
For some time, solid-state and hard disk recording media were a limiting factor in recording time. But need dictated invention and the recording time for both gradually increased.
In 2010, terabyte solid-state modules were introduced; terabyte hard drives were already on the scene. (A terabyte is 2 to the 40th power, or 1,099,511,627,776 bytes of information -- put another way, 1,024 gigabytes.)
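A quick sketch of that arithmetic in Python (the 18 Mbps HD bitrate in the last line is an illustrative assumption, not a figure from this module):

    # A minimal sketch of the terabyte arithmetic above (binary convention).
    terabyte_bytes = 2 ** 40
    print(terabyte_bytes)               # 1099511627776 bytes
    print(terabyte_bytes // 2 ** 30)    # 1024 gigabytes
    # At an assumed (illustrative) 18 Mbps HD bitrate, that is roughly:
    print(round(terabyte_bytes * 8 / 18_000_000 / 3600))   # ~136 hours of video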
This level of storage power is necessary for recording lengthy 3-D segments, which optimally involve two simultaneous high-definition video sources.
Digital SLR Cameras
[The digital SLR camera is] 'one of the most dramatic things to happen in the history of video.'
Vincent Laforet, Pulitzer Prize-winning former New York Times staff photographer, now producing his own videos with digital SLR cameras.


Question: are the cameras below still cameras or HDTV video cameras?
Answer: Both.
These cameras can produce both high quality still photos and high-definition (HDTV) video.

They were the first of a new generation of ▲SLR cameras with advantages that you don't get with typical camcorders.
Those who have used digital and 35mm SLRs know that this shape is easy to stabilize against your face -- plus it's much easier to carry than a full-size camcorder, not to mention being much less conspicuous for covering news. (The mic can be removed, making the camera appear identical to a standard SLR.)
These cameras have now gone "mainstream" in professional production. For example, the House finale on FOX in 2010, a series which is normally shot on film, was shot entirely with digital SLR (DSLR) cameras.
Successfully shooting professional video with one of these cameras (which many people are now doing) involves special considerations, which are covered here.
The Reemergence of 3-D
For decades attempts were made to introduce a system of three-dimensional (3-D) film and video that would be accepted by audiences. Over the years nearly 100 feature films have included 3-D versions.
Judging from the number of 3-D video cameras and 3-D display systems at the 2010 National Association of Broadcasters Convention (where new innovations are traditionally introduced), many manufacturers at that point felt that we were on the threshold of practical 3-D video.
Actually, over the years, television stations such as KTLA in Los Angeles produced and aired a number of shows in 3-D. However, viewing required red-blue paper-framed glasses, which did not meet with wide acceptance among viewers. Thus, these productions were seen as "novelty experiences" and not a serious production component.
In 2010 this began to change when several satellite networks started regular 3-D programming. Even so, there were still incompatible equipment approaches, and special glasses normally had to be worn to see the 3-D image.
There are a number of important differences between 2-D and 3-D production that must be kept in mind. This file has additional information.

Cell Phone Cameras
The new generation of cell phones with 5-megapixel cameras and high-quality, auto-focusing lenses is eliminating the need to carry both a cell phone and a consumer-quality digital still camera.
SLR video camera and cell phone innovations are discussed in more detail in this technical addendum.

Ultra High-Definition Formats
Although by 2006 HDTV had just gotten a foothold in homes, by that time manufacturers had developed cameras with much higher resolutions.
Popular examples of ultra high-definition video cameras are the ▲ "Red One" (or RED) shown at the beginning of this module and the Arri video camera shown here.
Arri has long been a leading manufacturer of motion picture (film) cameras. This video camera has many innovations including the use of film camera accessories and nomenclature, designed to make it easy for film people to switch to video.
Although HDTV is one application for these ultra high-definition cameras, they are replacing film in motion pictures and in episodic TV -- areas that for decades have been centered on film technology.
Instead of using the 2/3 inch chip that's common to most professional video cameras, these ultra high-definition cameras use a chip with an image area many times greater -- roughly the size of a 35mm motion picture image. In fact, adaptors are available to use the popular Nikon and Canon 35mm lenses. This graphic shows the relative pixel resolution of several ultra-high definition formats.
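To put those chip and format differences in rough numeric terms, here is a minimal Python sketch; the frame dimensions are common published values and are my assumption here, since the graphic itself isn't reproduced in this text.

    # A minimal sketch comparing total pixel counts of common frame sizes.
    # The "4K" dimensions are the common DCI values; exact sizes vary by camera.
    formats = {
        "Standard definition (NTSC DV)": (720, 480),
        "HDTV 1080": (1920, 1080),
        "4K (DCI)": (4096, 2160),
    }
    hd_pixels = 1920 * 1080
    for name, (w, h) in formats.items():
        print(f"{name}: {w * h:,} pixels ({w * h / hd_pixels:.1f}x HDTV 1080)")
    # 345,600 (0.2x), 2,073,600 (1.0x), and 8,847,360 (4.3x) pixels respectively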
In the next Module we'll take up video recorder operations.
________________________________________


Module 49
Updated: 04/20/2010


Video Recorder
Operations


Until tape is phased out completely, it seems prudent to cover some key factors in using videotape machines.

Video recorders have six basic functions: play, record, stop, rewind, fast-forward, and pause. To record on some machines, the record button has to be held down before pressing the play button. On others you simply press the record button. In either case you will probably see a red light come on -- a kind of universal indicator that the machine is in the record mode.
Although the stop mode disengages the tape from the heads, the pause button allows the tape to stay in contact with the spinning video heads, ready for an instant start in either the record or playback mode. This can create a problem.
If left in pause too long, the video heads will wear away the recording surface of the tape. This can damage the tape resulting in video noise and the dropouts we illustrated at the end of the last module.
This can also result in head clog where the microscopic gap in the video heads of an analog machine is clogged with any kind of foreign matter. To help avoid this problem, most of today's VCRs will automatically shut down after being left in pause for a few minutes.
When there is head clog, you will see a snowy picture (photo on right). If things get even worse, you will see a full-blown "snowstorm" take over and the picture will roll, break up, and disappear.
"Playing" a head cleaning tape for about five seconds may solve the problem. If it doesn't, you may have to get a technician to clean the heads with a special solution.
Some machines have self-cleaning heads, which routinely (and somewhat superficially) clean the heads during normal VCR operations. This will generally take care of minor head clog problems.
As we've noted, a few tape machines have confidence heads with the pre-read function. These machines are able to play back the recorded signal a fraction of a second after it has been recorded. Without confidence heads the operator can only monitor the video from the camera. This gives no indication of possible recording problems, which brings us to --

Spot-Checking a Tape
Because dropouts or head clog may not be discovered until the tape is played back, tapes should be spot checked (checked at various spots) after important recordings.
Spot checks are done by stopping the tape at the end of the recording, rewinding a meter or two (five or so feet) and checking the last seconds of the tape; then rewinding the tape to about the midpoint and checking again; and, finally, rewinding the tape to the beginning and checking the first five or ten seconds of the recording.
Although some people will only check the end of the recording, sometimes head clog problems that develop earlier in a recording will clear up toward the end -- a problem you wouldn't know about if you only checked the last few seconds.
During spot checks you should look for:
• absolute image stability (no horizontal jitter or vertical flutter or roll)
• the presence of dropouts and video noise
• general video sharpness and quality
• audio clarity
If you do find minor dropout problems and you can't redo the segment, an electronic dropout compensator may be able to unobtrusively fill in missing data as the tape is edited or copied.

VCR Adjustments
Although the following controls differ between machines -- especially between digital and analog tape machines -- the skew control found on some VCRs controls videotape tension.
This affects the length of the video tracks as they are "read" (played back) from the videotape. Improper skew adjustment is indicated by flagging, or a bending and wavering of vertical lines at the top of the video frame.
Most skew controls have a center "indent" position that indicates a normal setting. Tapes that have been played many times, stretched, or subjected to high temperatures may require new skew settings.
A more common control is the tracking control that affects the VCR's ability to precisely (and generally automatically) align the heads with the narrow video tracks recorded on the tape. As with skew, the tracking control is only used to correct problems during playback.
On most videotape formats tracking errors show up in the form of a horizontal band of video noise (shown here). In severe cases there will be a total breakup of the picture.
Some VCRs have tracking level meters that represent a readout of the strength of the video signal. If automatic tracking fails or is not present and the video level falls below the optimum level indicated on the meter, the tracking control should be adjusted for maximum signal strength.
You may find that a tape has a tracking level that's too low to provide a stable playback. Since VCRs differ, playing the tape on a different machine may help. At least it's worth a try.
Stories From Real Life
One desperate and despondent husband wrote us recently saying that he had accidentally recorded a football game over the video of his recent wedding. He asked whether there was anything he could do.
The short answer: "No; it's gone."
In addition to telling him about the record lockout function on videotape cartridges, we advised him to consider being especially kind to his wife for quite a while!


Care and Handling of Videotape

Packing the Tape
Videotape can shed microscopic particles during use. These particles can gradually fill in the gap in the record-playback heads resulting in head clog and increased head and tape wear.
A procedure called packing the tape is often recommended before a videotape is used for the first time. This involves fast-forwarding the tape to the end and then rewinding the tape again to the beginning. It accomplishes two things.
• variations in tape position and tension that can cause recording problems will be minimized
• any loose particles on the tape's surface may drop away before getting lodged in the record-playback heads
This procedure is also recommended at intervals when the tapes are in long-term storage, which brings us to....
Videotape Storage
How permanent is the data recorded on a videotape?
Tests have been run on metal particle tape (used in the D-1 through D-6 formats, plus DVCPRO and DVCAM) to try to determine how stable it is over time. The tape was found to resist electromagnetic loss and damage for at least 15 years if consistently stored at 25°C (77°F) and 50 percent relative humidity.
A more practical recommended range for tape storage is a consistent temperature of between 15 and 25°C (59 - 77°F) at a relative humidity between 40 and 60 percent. Fortunately, this ends up being a desirable living environment for both human beings and videotape.
However, if the temperature or humidity rise significantly above these levels, damage can occur. Some videotapes have been destroyed in less than one hour when stored at 75°C (167°F). A cassette sitting in the sun in a closed car during the summer can reach that temperature.
Magnetic Damage
Material on videotapes is often intentionally erased or degaussed with a strong magnetic field before use. By using a videotape degausser, you can be sure that the previously recorded material --- especially if it has been on the tape for an extended period of time -- is entirely erased before the tape is reused. (Over time, the magnetic image on the tape tends to become harder to erase.)
At the same time, the information on the tape can be accidentally damaged or destroyed by putting it near a magnetic field. More than one person (including the author) has thoughtlessly laid a video or audiotape down on top of an audio speaker or another piece of equipment, only to find that afterwards the content was unusable.

The Time-Base Corrector
Finally, we need to mention, and even pay tribute to, a "little black box" that revolutionized video recording and originally made electronic newsgathering possible.
Every second a television system must precisely scan more than 15,000 lines for standard NTSC television and more than 35,000 lines for HDTV.
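As a rough check on those figures, here is a minimal Python sketch; the 720p/60 case is used as one example of an HDTV format, and the total line counts include lines used for vertical blanking, not just the visible picture.

    # A minimal sketch of where those lines-per-second figures come from.
    ntsc_lines_per_frame = 525
    ntsc_frames_per_sec = 29.97
    print(round(ntsc_lines_per_frame * ntsc_frames_per_sec))   # ~15,734 lines/sec

    hd_720p_lines_per_frame = 750    # 720 visible lines plus blanking
    hd_720p_frames_per_sec = 60
    print(hd_720p_lines_per_frame * hd_720p_frames_per_sec)    # 45,000 lines/sec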
Fluctuations in the timing pulses (sync) that control the start and stop points of each of these lines will result in unstable (jumpy) video or serrated (jagged) edges on vertical lines. (See photo below. ) If things get really bad, there will be a complete loss of video.
This timing precision is relatively easy to maintain with purely electronic circuitry. However, once mechanical devices such as tape transport mechanisms are introduced, fluctuations invariably arise.
Left uncorrected, these variations create picture instability that gets worse each time the recording-playback process is repeated. You can quickly end up with no picture at all.
Until the development of the TBC (time-base corrector), only large, expensive cameras and tape machines could meet broadcast requirements for time-base stability. This meant that there was no such thing as portable broadcast-quality video equipment.
The invention of the TBC made it possible to take small cameras and recorders to the scene of news stories and either tape or broadcast the stories "live."
Before that, it took several engineers, a few days notice, and an 18-wheeler loaded with equipment to do a major broadcast -- which meant there weren't many remote broadcasts.



The photo on the left illustrates major video timing problems that can be cleared up by running the signal through a good TBC (photo on the right).
Although TBCs can be stand-alone (separate) units, today professional video recording equipment commonly has TBC circuitry built in.
________________________________________
Module 50
Updated: 04/05/2010

Video Editing, Part I



Continuity Editing


Editing is the creative force of filmic reality...and the foundation of film art.
- V.I. Pudovkin, 1915

You will note that the above statement was made in 1915. Since that time editing has become even more important.
Editing establishes the structure and content of the production along with the production's overall mood, intensity, and tempo.
In this series of modules on editing we'll start with the most logical and useful approach: continuity editing.
Continuity editing refers to arranging the sequence of shots to suggest a progression of events.
Given the same shots, an editor can suggest many different scenarios. Consider just these two shots.
• a man glancing up in surprise
• another man pulling a gun and firing toward the camera
In this order it appears that the first man was shot. However, if you reverse the order of these two scenes, the first man is watching a shooting.
Let's look at what we can do with just three shots.
1. people jumping from a car
2. the car on fire
3. an explosion
1-2-3 - In the 1-2-3 sequence shown the shots suggest that people are jumping from a car seconds before it catches fire and explodes.
3-2-1 - A 3-2-1 order suggests that there is an explosion and then the car bursts into flames; and, as a result, the people have to jump out.
2-3-1 - In a 2-3-1 sequence people jump from the car after a fire causes an explosion.
2-1-3 - If the sequence is changed to 2-1-3, it appears that as a result of a fire passengers jump out of the car just in time to escape a devastating explosion.
Three shots; four very different meanings!
When hundreds of scenes and takes of scenes are available to an editor, which is normally the case in dramatic productions, the editor has tremendous control over the basic continuity and message of the production.

Changing Expected Continuity
Continuity editing primarily means guiding an audience through a sequence of events and, in the process, showing them what they want to see when they want to see it. In the end, you've told a story or logically traced a series of events to their conclusion.
In dramatic television good editors sometimes break from the expected to achieve a dramatic effect. Unfulfilled expectations can be used to create audience tension. Let's take this simple shot sequence:
• A man is working at his desk late at night.
• There is a knock at the door.
• The man behind the desk routinely calls out, "Come in."
• After looking up, the calm expression on the man's face dramatically changes to alarm.
Why? We don't know. Where is the shot of who or what just came in? What happens if we don't cut to that expected shot? The audience is then just left hanging with curiosity and apprehension -- or, depending on how it's handled, with frustration and resentment.
Here's an example of the latter. In a story about the changes in the U.S. $100 bill, a treasury spokesperson spends considerable time giving specific details on the changes that were necessary to foil counterfeiting.
Let's assume that the whole time all we see is a two-shot of the men carefully examining one of the new $100 bills.
Obviously, we would want to see a close-up of the bill so we can see the changes they're talking about. If there is no such shot, we feel frustrated.
So, unless you want to leave your audience hanging for momentary dramatic effect, always keep in mind what you think the audience expects to see at any given moment. If you do, your edit decision list (sequence of edits) will largely write itself.
In news and documentary work, the more logically you can present events (without extraneous material), the less room there will be for misunderstanding or frustration. In these types of productions you want to be as clear and concrete as possible. (However, in dramatic production it's sometimes desirable to leave some things open to interpretation.)

Acceleration Editing
In film and video production time is routinely condensed and expanded.
For example, let's say you want to tell the story of a young woman going out on an important date.
The process of just watching her pick out clothes from her closet, taking a shower, drying her hair, doing her nails, putting on her clothes and make-up, checking the whole effect in a mirror, making any necessary adjustments, and then driving to some prearranged place could take 90 minutes. That's the total time devoted to most feature films -- and the interesting part hasn't even started yet!
Very early in the film business audiences were taught to assume things not shown. For example, the 90 minutes or so it took the woman to meet her date could be shown in 19 seconds.
• a shot of her concluding a conversation on the phone and moving quickly out of the frame (3 seconds)

• a quick shot of her pulling clothes out of her closet (2 seconds)

• a shot of her through a steamy shower door (2 seconds)

• a couple shots of her blow-drying her hair (4 seconds)

• a quick shot of her heading out the front door. (2 seconds)

• one or two shots of her driving (4 seconds)

• and finally, a shot of her pulling up in front of the prearranged meeting place (2 seconds)
Or how about this:
• A shot of her hanging up the phone, jumping up and moving out of frame.

• A shot of her arriving at the agreed upon place.

Expanding Time
Occasionally, an editor or director will want to drag out a happening beyond the actual time represented.
The noted director Alfred Hitchcock (North by Northwest, Psycho, etc.) used the example of a scene in which a group of people sitting around a dinner table is blown up by a time bomb.
In a real-time version of the scene, the people sit down at the table and the bomb goes off. End of people; end of scene.
But Hitchcock was famous for suspense, and no real suspense would be generated by this approach.
In a second version the people gather, talk, and casually sit down at the dinner table. A shot of the bomb ticking away under the table is shown revealing to the audience what is about to happen.
Unaware of the bomb, the people continue their banal conversations.
Closer shots of the bomb are then intercut with the guests laughing and enjoying dinner. The intercutting continues (and speeds up) until the bomb finally blows the dinner party to bits.
The latter version understandably creates far more of an emotional impact.

Causality
Often, a part of continuity editing is to suggest or explain cause. A good script (enhanced by good editing) suggests or explains why things happen.
For example, in a dramatic production it would seem strange to cut to a shot of someone answering the phone unless we had heard the phone ring. A ringing phone brings about a response; the phone is answered.
We may see a female corpse on the living room floor during the first five minutes of a dramatic film, but not know who killed her or why until 90 minutes later.
In this case effect precedes cause.
Although strict continuity editing would dictate that we present events in a logical sequence, it makes a more interesting story -- one more likely to hold an audience -- if we present the result first and reveal the cause gradually over time. Is this not the approach of almost every crime story?
Sometimes we assume cause.
If we are shown a shot of someone with all the signs of being drunk (effect), we can probably safely assume they have been drinking (cause).
If we see a shot of someone attempting a difficult feat on skis for the first time, followed by a shot of them arriving back home with one leg in a cast, we assume that things didn't quite work out.
Let's go back to the corpse on the living room floor. Knowing that the husband did it may not be enough (maybe for the police, but not for most viewers). In causality there is also the question of why. This brings up motivation.

Motivation
As for motivation, we can assume any one of the age-old motives, including money, jealousy, and revenge.
But, even knowing that the motive was revenge may not be enough for a well thought-out, satisfying production. Revenge must have a cause.
To provide that answer we may have to take the viewer back to incidents in the past. We could show a woman with a lover. We could then see suspicion, jealousy, resentment, and anger building in her husband. Finally, we could see that these negative emotions could no longer be restrained.
Now we understand. We've been shown effect, cause, and motivation.
Editors must perceive the dynamics of these cause-and-effect relationships to skillfully handle them. They must also have an understanding of human psychology so that they can portray feelings and events realistically.
How many serious dramatic productions have you seen where actions and reactions just don't seem to be realistic? Does this not take away from the credibility of the production?
Writers and directors also know they shouldn't reveal answers (motivations) too quickly.
In a good mystery we will probably try to hold our audience by leading them through critical developments in a step-by-step fashion.
________________________________________
Note: Today's video and audio editing systems are all based on computer platforms. Here is a refresher course in computer basics.
________________________________________
Module 51
Updated: 04/18/2010

Video Editing, Part II



Continuity
Techniques

While holding to the basic continuity of a story, an editor can enhance the look of a production by adding insert shots and cutaways. We introduced these previously, but now let's look at them from the standpoint of editing.

Insert Shots

An insert shot is a close-up of something that exists within the basic scene -- something typically visible within the establishing or wide shot. (Note the close-up shot above, taken from the scene on the left.)
Insert shots add needed information, information that wouldn't otherwise be immediately visible or clear.
In our earlier example of the new $100 bill, an ECU (extreme close-up) of the bill that was being discussed would be an insert shot.

Cutaways
Unlike insert shots that show significant aspects of the overall scene in close-up, cutaways cut away from the main scene or action to add related material.



Here, we cut away from a shot of a man glancing down a mine shaft (on the left) to a man already at a lower level (above).
During a parade, we might cut away from the parade to a shot of people watching from a nearby rooftop or a child in a stroller sleeping through the commotion.
In the editing process we have to rely on regular insert shots and cutaways to effectively present the elements of a story. We can only hope that whoever shot the original footage (which might be you) had enough production savvy to include them.

Relational Editing
Many years ago, the Russian filmmakers Pudovkin and Kuleshov conducted an experiment where they juxtaposed various scenes with a shot of a man sitting motionless and totally expressionless in a chair.
The scenes included a close-up of a bowl of soup, a shot of a coffin containing a female corpse, and a shot of a little girl playing. To an audience viewing the edited film, the man suddenly became involved in these scenes.
When the shot of the man was placed next to the shot of the coffin, the audience thought that the actor showed deep sorrow. When it was placed next to the close-up of the food, the audience perceived hunger in his face; and when it was associated with the shot of the little girl, the audience saw the actor as experiencing parental pride.
Thus, one of the most important tenets of editing was experimentally established: the human tendency to try to establish a relationship between a series of scenes.
In relational editing, scenes that by themselves seem not to be related take on a cause-effect significance when edited together in a sequence.
The scene on the left begs for a cut to a scene to explain who the woman is waving at.
If this scene were followed by a shot of a car pulling up to the curb, we would naturally assume that the woman would go over to the car and get in. If it's followed by a shot of a woman some distance away pushing a stroller along a sidewalk, we'd assume something quite different.
To follow this shot with the photo of the students in the library shown at the beginning of this module would probably not make much sense. Thus, in relational editing we expect to see scenes come together in a logical sequence to tell a story.
It's easy -- and generally even desirable -- to combine continuity and relational editing.
Remember the scenario in the last module of the woman who was apparently murdered by her husband? What if we preceded the shot of the corpse on the living room floor with shots that included the woman covertly cleaning out large sums of money from a home safe as her husband entered to catch her? Is a relationship between these events suggested? Do we then have a clue as to what might have happened?
When it comes to the next topic, thematic editing, these fundamental concepts change.

Thematic Editing
In thematic editing, also referred to as montage editing, images are edited together based only on a central theme. In contrast to most types of editing, thematic editing is not designed to tell a story by developing an idea in a logical sequence.
In a more general sense, thematic editing refers to (as they say in the textbooks) a rapid, impressionistic sequence of disconnected scenes designed to communicate feelings or experiences.
This type of editing is often used in music videos, commercials, and film trailers (promotional clips).
The intent is not to trace a story line, but to simply communicate action, excitement, danger, or even the "good times" we often see depicted in commercials.
________________________________________
From continuity, relational, and montage editing we now move to a technique for enriching editing and stories by adding extra "layers."

Parallel Cutting
Early films used to follow just one story line -- generally, with the hero in almost every scene.
Today, we would find this simplistic story structure rather boring.
Afternoon soap operas, sitcoms, and dramatic productions typically have two or more stories taking place at the same time.
The multiple story lines could be as simple as intercutting between the husband who murdered his wife in the previous scenario and the simultaneous work of the police as they try to convict him. This is referred to as parallel action.
When the segments are cut together to follow the multiple (different) story lines, it's referred to as parallel cutting.
By cutting back and forth between two or more mini-stories within the overall story, production pace can be varied and overall interest heightened. And, if the characters or situation in one story don't hold your attention, possibly the characters or situations in one of the other storylines will.
Today's dramas typically have eight or ten major characters, and although intertwined with the main drama, each has their own continuing story.
________________________________________
Module 52
Updated: 04/05/2010

Video Editing, Part III


Solving
Continuity Problems

As we've noted, audiences have learned to accept the technique of cutting out extraneous footage to keep a story moving. Strictly speaking, these are discontinuities in the action.
While some discontinuities in action are expected and understood, some are not. When edits end up being confusing or unsettling, they are called jump cuts.
If you are very observant you'll notice that many films and weekly television series provide good examples of minor continuity jumps in the action. Here are some examples:
• A two-shot of a couple talking on a dock will show their hair blowing gently in the breeze, but in an immediate close-up of one of them, the wind has inexplicably stopped blowing.

• A close-up of an actress may show her laughing, but in an immediate cut to a master shot we see that she is not even smiling.
• A man is walking with his arm around the waist of his girlfriend, but an immediate cut to another angle suddenly shows that his arm is around her shoulder.
These problems are primarily due to shooting in single-camera, film-style, where a significant period of time can elapse between scene setups and takes. We'll look at single-camera techniques a little later.
It would be nice if potential jumps in continuity could always be noticed during shooting. Scenes could be immediately re-shot while everything and everybody was still in place, and there would be no need to try to fix things during editing.
Sometimes, however, these problems only become evident when you later try to cut scenes together. Apart from costly and time-consuming re-shooting of the whole scene, there are some possible solutions.

Bridging Jumps in Action
Let's start with how a jump cut in a dramatic production might be handled.
Remember our young woman who was getting ready for a date? Let's say we see her hang up the phone in the kitchen and then head out the door to the bathroom for a shower. No problem yet.
Let's now assume that after exiting the kitchen (moving left-to-right), the hallway scene has her immediately reaching the bathroom door from the right. Now she's moving right-to-left.
The audience is left with a question: Why did she instantly seem to turn a full 180-degrees and start walking in the opposite direction to get to the bathroom? Although this would not trouble some directors and editors today, others would see it as an undesirable reversal in action -- one that jars a smooth transition between scenes.
The solution to most of these problems is to use the cutaways and insert shots we discussed earlier.
With this particular continuity problem we could add a quick close-up of someone's hands (either hers or hands that look like hers) opening a linen closet and taking out a towel. Not only is a bit of visual variety introduced, but when you cut to her entering the bathroom we won't be as apt to be troubled by the sudden reversal in action.

If that didn't work, you might consider inserting a scene of her in front of her closet deciding on her clothes. All of these tricks can be used to cover continuity problems.

Bridging Interview Edits
Interviews are almost never run in their entirety.
An audience used to short, pithy sound bites will quickly get bored by answers that wander from the topic, are less than eloquent, or that are... just boring. In interviews you may shoot ten times more footage than you end up using.
It's the job of the video editor to cut the footage down --
• without leaving out anything important
• without distorting what was said, and
• without abrupt changes in mood, pacing, energy, or rhetorical direction
Not an easy job.
To start with, cutting a section out of dialogue will normally result in an abrupt and noticeable jump in the video of the person speaking.
One solution, illustrated here, is to insert a three- or four-second cutaway shot over the jump in the video.
This assumes, of course, that you've already made a smooth audio edit between the segments.
These cutaways, which are typically done in editing with an insert edit, are often reaction shots ("noddies") of the interviewer.
If videotape is being used, these cutaway shots are typically from a separate videotape (a B-roll) as opposed to the recording of the interview answers (the A-roll). In linear editing having two separate video sources (an A-roll and a B-roll) can make editing easier.
With nonlinear editing everything can be recorded on a hard disk or solid-state memory card and the segments can be instantly accessed from a single source. Even so, the supplementary footage is commonly referred to as B-roll footage.
Editors depend greatly on this supplementary B-roll footage to bridge a wide range of editing problems. Therefore, you should always take the time to record a variety of B-roll shots on every interview -- insert shots, cutaways, whatever you can get that might be useful during editing.
Another (and somewhat less than elegant) way of handling the jump cut associated with editing together nonsequential segments of an interview is to use an effect such as a dissolve between the segments. This makes it obvious to an audience that segments have been cut out, and it smooths out the "jump."

Abrupt Changes in Image Size
An abrupt and major change in image size constitutes another type of jump cut.
Going from a wide (establishing) shot directly to a close shot can be too abrupt. An intermediate medium shot is generally needed to smooth out the transition and orient the audience to the new area you are concentrating on.


For example, if you cut from the shot on the left above directly to the one on the right (the area indicated by the red arrow in the wide shot), the audience would have trouble knowing where the new action is taking place within the overall scene.
However, if you cut to the medium shot as shown here before the close shot, the area you are moving to becomes apparent.
A well-established 1-2-3 shot formula covers this. It starts with

1. a momentary wide shot (also called a master or establishing shot), then
2. a cut to a medium shot, and then
3. cuts to one or more close-up shots
Periodically going back to the wide or establishing shot is often necessary to remind viewers where everyone and everything is. This is especially important during or after a talent move. When you cut back to the wide shot in this way, it's referred to as cutting to a reestablishing shot.
Although this long-shot-to-medium-shot-to-close-up formula is somewhat traditional, there will be times when an editor will see an advantage in another approach.
For example, by starting a scene with an extreme close-up of a crucial object, you can immediately focus attention on that object. In a drama that could be a smashed picture frame, a gun, or any crucial subject matter.
Once the importance or significance of the object is established, the camera can dolly or zoom back to reveal the surrounding scene.

Shooting Angles
Another type of jump cut results from cutting from one shot to a shot that is almost identical.
Not only is it hard to justify a new shot that is not significantly different, but a cut of this type simply looks like a mistake.
To cover this situation, videographers keep in mind the 30-degree rule.
According to this rule, a new shot of the same subject matter can be justified only if you change the camera angle by at least 30 degrees.
Of course, cutting to a significantly different shot -- for example, from a two-shot to a one shot -- would be okay (even at basically the same angle), because the two shots are significantly different to start with.
Related to shooting angles is the issue of on-screen direction.
The following photos illustrate one example. If the woman on the left were talking to the man on the phone, which angle seems the most logical: her facing right (first photo), or facing left (photo on the right)?




We assume that if two people are talking, they will be facing each other -- even though on the telephone this is not necessarily the case. Although this seems logical when we look at photos such as these, when you are shooting in the single-camera style and scenes are shot hours or even days apart, these things are easily overlooked.

Crossing the Line
And, finally, we come to one of the most vexing continuity problems -- crossing the line.
Any time a new camera position crosses the 180-degree line -- the action axis -- the on-screen action will be reversed.
This is hard to fix during editing, although some of the techniques we've outlined can help.
Football fans know that action on the field is reversed when the director cuts to a camera across the field. For this reason it's never done in following a play -- only later during a replay.
And then it's only justified (generally with an explanation) if that camera position reveals something that the other cameras didn't clearly catch.
When something is being covered live, this type of reversal of action is immediately obvious. The problem can be much less obvious when actors must be shot from different angles during single-camera, film-style production.
Let's say you want a close-up of the man at the left of this photo.



If the camera for this shot were placed over the woman's right shoulder (behind the blue line in the illustration above), this man would end up looking to our left as he talked to the couple instead of to our right as shown in the photo. You would have "crossed the line."
Note, however, that camera positions #1 or #2 in front of the blue line could be used without reversing the action.
If all close-ups are shot from in front of the blue line, the eye-lines of each person -- the direction and angle each person is looking -- will be consistent with what we saw in the establishing shot.
Occasionally, a director will intentionally violate the 180-degree rule for dramatic effect. For example, during a riot scene a director might choose to intentionally cross the line on many shots in order to communicate confusion and disorientation. That should tell you something right there about the effect of crossing the line.
Assuming that confusion is not the objective, an editor must always remember to maintain the audience's perspective as scenes are shot and edits are made.
________________________________________


Module 53
Updated: 04/05/2010

Video Editing, Part IV

Technical Continuity

Any noticeable, abrupt, or undesirable change in audio or video during a production is referred to as a technical continuity problem.
We tend to accept some technical continuity problems; others we don't.
News and documentaries are often shot under drastically different conditions, and so we tend to accept such things as changes in video color balance or audio ambiance between scenes.
But in dramatic productions we don't want technical inconsistencies diverting our attention from the storyline. In this type of production the medium (television) should be totally "transparent" so there's nothing to get in the way of the message (the story).

Audio Continuity Problems
Audio continuity problems can be caused by a wide range of factors including shot-to-shot variations in:
• background sound
• sound ambiance (reverberation within a room, mic distance, etc.)
• frequency response of mic or audio equipment
• audio levels
In single-camera production most of these inconsistencies may not be easy to detect on location; it's only when the various shots or takes start to be assembled during editing that you discover the problem.
As you cut from one scene to another you may discover that the talent suddenly seems to move closer or farther away from the mic, or that the level or type of background sound changes (passing traffic, an air conditioner, or whatever).
Some problems can be helped with the skilled use of graphic equalizers or reverberation units. Changes in background sound can sometimes be masked by recording a bed of additional sound, such as music or street noise.
As in most of life, it's easier to avoid problems than to fix them -- assuming there even is a way to fix them.

Things to Be Alert For
First, be aware that mics used at different distances reproduce sounds differently. This is due to changes in surrounding acoustics, as well as the fact that specific frequencies diminish over distance.
Although some expensive directional mics will minimize the effect of distance, most mics exhibit proximity or presence effects. A good pair of padded earphones placed on top of a set of well-trained ears can detect these differences.
With the increased reliability of wireless mics many production facilities are equipping actors with their own personal mics. The distance of the mic -- it's generally hidden in the person's clothes -- can't change, and because of the proximity of the mic, background sounds tend to be eliminated. Some of the things we talked about in using personal mics should be kept in mind here.
Finally, you need to watch for changes in background sounds. For example, the sound of a passing car or a motorcycle may abruptly appear or disappear when you cut to a shot that was recorded at a different time.
Even if an obvious background sound doesn't disappear, its level may change when you cut from one person to another. This may be due to differences in microphone distance coupled with the level adjustments needed to compensate for the different strength of voices.
The scene here would make a beautiful background for an interview, but the running water could create major sound problems, especially for a single camera interview or a dramatic production.
Audio technicians will typically want to keep the camera or audio recorder running for a minute or so after an interview so that the ambient sound on the location can be recorded. This is referred to as room tone or ambient sound.
You may need to use either of these to cover a needed moment of "silence" or just to give an even and consistent "bed" of sound behind a segment. Low-level audio from a sound effect CD can also be used in this way.

Continuity Issues in Background Music
Music can smooth the transition between segments and create overall production unity -- if it's used in the right way.
Background music should add to the overall mood and effect of the production without calling attention to itself. The music selected should match the mood, pace, and time period of the production. Vocals should be avoided when the production contains normal (competing) dialogue.
Ideally, the beginning of a musical selection should coincide with the start of a video segment and end as the segment ends. In the real world, this almost never happens, at least without a little production help.
To a limited degree you can electronically speed up and slow down instrumental segments with digital editing equipment, especially if the music is not well known.
Because a kind of continuity issue arises when music has to be faded out "midstream" to conclude at the end of a video segment, you can try backtiming the music.
If the music is longer than the video, you can start the music a predetermined amount of time before starting the video. You then fade in the music as the video starts. This will be less noticeable if the segment starts with narration and the music is subtly brought in behind it.
If you calculate things correctly, the music and the video will both end at the same time.
Let's assume, for example, that a music selection is two minutes and 40 seconds long and the video is only two minutes long.
By starting the audio 40 seconds before the video and fading it in with the start of the video, they should both end together.
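If you want to check the arithmetic, here is a minimal sketch in Python -- purely illustrative, not part of any editing program -- that computes the head start the music needs. The function names and the minutes:seconds format are just assumptions for the example.

    # A minimal sketch of the backtiming arithmetic described above.
    # The function names and the "m:ss" duration format are illustrative assumptions.

    def to_seconds(duration):
        minutes, seconds = duration.split(":")
        return int(minutes) * 60 + int(seconds)

    def backtime_offset(music_length, video_length):
        """How many seconds before the video the music must start."""
        offset = to_seconds(music_length) - to_seconds(video_length)
        if offset < 0:
            raise ValueError("The music is shorter than the video; backtiming won't work.")
        return offset

    print(backtime_offset("2:40", "2:00"))   # 40 -- start the music 40 seconds early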
As we'll see later, all of this is fairly easy when you are using a nonlinear, computer-based editing system. (Everything is visible on the computer screen's time-line.) With linear editing the process takes a bit more work and planning.

Video Continuity Problems
Video has its own continuity problems; for example, changes in:
• color balance
• tonal balance
• light levels; exposure
• camera optics; sharpness
• recording quality
Intercutting scenes from cameras with noticeably different color characteristics (color balance) in a dramatic production will immediately be apparent to viewers.
To alleviate this problem all cameras should be carefully color-balanced and compared before a production.
This is especially important if multiple cameras are being used and the shots will later be cut together. (You may remember that we previously discussed setting up both monitors and cameras.)
Once cameras are color balanced and matched, an electronic test pattern with all of the primary and secondary colors is often recorded at the beginning of the videotape. This has traditionally been used to color balance the video playback. Today, however, many systems can electronically adjust color using the recording's integrated color reference signal.



Notice in the photos above that several things subtly change, especially skin tones and color balance.
While we are comparing these shots, notice that cutting from the close-up to the two-shot would also represent a problem because of the change in the position of the woman's head.
Editing systems often make use of a vectorscope for adjusting colors on recordings before editing starts. As we've noted, a vectorscope and a waveform monitor are both a part of the software of professional nonlinear editing systems. These professional editors allow you to change the basic color balance of scenes.
However, in trying to match different video sources the subtle differences between some colors may not be able to be satisfactorily corrected. This is why the initial color balancing of cameras is so important. This page shows the various color balance and luminance range settings available in one sophisticated nonlinear video editing system.
________________________________________
Video Editing, Part V


Editing Guidelines:

Today's nonlinear computer editors are capable of just about any effect you can dream up. Because of this, it's tempting to try to impress your audience with all the production razzle-dazzle you can manage.
But, whenever any production technique calls attention to itself, especially in dramatic productions, you've diverted attention away from your central message. Video professionals -- or maybe we should say true artisans of the craft -- know that production techniques are best when they are transparent; i.e., when they go unnoticed by the average viewer.
However, in music videos, commercials, and program introductions, we are in an era where production (primarily editing) techniques are being used as "eye candy" to mesmerize audiences. The video editing system shown below, for example, is capable of creating about any type of effect.

Even though the traditional rules of editing seem to be regularly transgressed in commercials and music videos, the more substantive productions -- especially serious dramatic productions -- seem to generally adhere to some accepted editing guidelines.
As in the case of the guidelines for good composition, we are not referring to them as rules.
As you will see, many of these guidelines apply primarily to single-camera, dramatic, film-style production.
Guideline #1: Edits work best when they are motivated. In making any cut or transition from one shot to another there is a risk of subtly pulling attention away from the story or subject matter. However, when cuts or transitions are motivated by production content this is much less apt to happen.
• If someone glances to one side during a dramatic scene, we can use that as motivation to cut to whatever has caught the actor's attention.
• When one person stops talking and another starts, that provides the motivation to make a cut from one person to the other.
• If we hear a door open, or someone calls out from off-camera, we generally expect to see a shot of whoever it is.
• If someone picks up a strange object to examine it, it's natural to cut to an insert shot of the object.
Guideline # 2: Whenever possible cut on subject movement.
If cuts are prompted by action, that action will divert attention from the cut, making the transition more fluid. Small jump cuts are also less noticeable because viewers are caught up in the action.
If a man is getting out of a chair, you can cut at the midpoint in the action. In this case some of the action will be included in both shots. In cutting, keep the 30-degree rule in mind.

Maintaining Consistency in Action and Detail
Editing for single-camera production requires great attention to detail. Directors will generally give the editor more than one take of each scene. Not only should the relative position of feet or hands, etc., in both shots match, but also the general energy level of voices and movements.
You will also need to make sure nothing has changed in the scene -- hair, clothing, the placement of props, etc. -- and that the talent is doing the same thing in exactly the same way in each shot.
Note in the photos below that if we cut from the close-up of the woman talking to the four-shot on the right, that the angle of her face changes along with the lighting. (Because of the location of the window, we would assume the key light would be on our left, which it isn't in the first shot.)
These things represent clear continuity problems -- made all the more apparent in this case because our eyes would be focused on the woman in red.



Part of the art of acting is maintaining absolute consistency between takes.
This means that during each take talent must remember to synchronize moves and gestures with specific words in the dialogue. Otherwise, it will be difficult, if not impossible, to cut directly between these takes during editing.
It's the Continuity Director's job to see not only that the actor's clothes, jewelry, hair, make-up, etc., remain consistent between takes, but that props (movable objects on the set) also remain consistent.
It's easy for an object on the set to be picked up at the end of one scene or take and then be put down in a different place before the camera rolls on the next take. When the scenes are then edited together, the object will then seem to disappear, or instantly jump from one place to another.

Discounting the fact that you would not want to cut between two shots that are very similar, do you see any problem in cutting between the two shots above?
Okay, you may have caught the difference in lighting and color balance, but did you notice the disappearance of her earrings and the change in the position of the hair on her forehead?

Entering and Exiting the Frame
As an editor, you often must cut from one scene as someone exits the frame on the right and then cut to another scene as the person enters another shot from the left.
It's best to cut out of the first scene as the person's eyes pass the edge of the frame, and then cut to the second scene about six frames before the person's eyes enter the frame of the next scene.
The timing is significant.
It takes about a quarter of a second for viewers' eyes to travel from one side of the frame to the other. During that interval whatever is taking place on the screen isn't really registered, and viewers need a moment to refocus on the new action. Cutting to the second scene a few frames early allows for this; otherwise, the lost interval can create a subtle jump in the action.
Like a good magician who can draw your attention away from something they don't want you to see, an editor can use distractions in the scene to cover the slight mismatches in action that inevitably arise in single-camera production.
An editor knows that when someone in a scene is talking, attention is generally focused on the person's mouth or eyes, and a viewer will tend to miss inconsistencies in other parts of the scene.
Or, as we've seen, scenes can be added to divert attention. Remember the role insert shots and cutaways can play in covering jump cuts.
Guideline # 3: Keep in Mind the Strengths and Limitations of the Medium. Remember:
________________________________________
Television is a close-up medium.
________________________________________
An editor must remember that a significant amount of picture detail is lost in video images, especially in the 525- and 625-line television systems.
The only way to show needed details is through close-ups.
Except for establishing shots designed to momentarily orient the audience to subject placement, the director and the editor should emphasize medium shots and close-ups. The latter will be less important when everyone is viewing scenes in HDTV; but SDTV will be around for some time yet.
There are some things to keep in mind with close-ups.
Close-ups on individuals are appropriate for interviews and dramas, but not as appropriate for light comedy. In comedy the use of medium shots keeps the mood light. You normally don't want to pull the audience into the actors' thoughts and emotions.
In contrast, in interviews and dramatic productions it's generally desirable to use close-ups to zero-in on a subject's reactions and provide clues to the person's general character.
In dramatic productions a director often wants to communicate something of what's going on within the mind of an actor. In each of these instances the judicious and revealing use of close-ups can be important.
________________________________________
Module 55


Updated: 04/05/2010

Video Editing, Part VI



Editing Guidelines:


In this module we'll cover the final editing guidelines.
Guideline number 4: Cut away from the scene the moment the visual statement is made.
First, a personal observation. From the perspective of having taught video production for more than two decades I can say that more than 90% of the videos I see from students are too long. Most could be vastly improved by being edited down -- often by at least 50%.
When I tell students this they seem skeptical until I show them sample scenes from commercials, dramatic productions, news segments, and resume reels from noted professionals.
If you ask someone if he or she enjoyed a movie and they reply, "Well, it was kind of slow," that will probably be a movie you will avoid. "Slow moving" connotes boring.
The pace of a production rests largely with the editing, although the best editing won't save bad acting or a script that is boring to start with.
So how long should scenes be?
First, keep in mind that audience interest quickly wanes once the essential visual information is conveyed. Shots with new information stimulate viewer interest.
In this regard there are some additional points to consider.

New vs. Familiar Subject Matter
Shot length is in part dictated by the complexity and familiarity of the subject matter.
How long does it take for a viewer to see the key elements in a scene? Can they be grasped in a second (take a look at some contemporary commercials), or does the subject matter require time to study?
You wouldn't need a 15-second shot of the Statue of Liberty, because we've all seen it many times. A one- or two-second shot would be all you would need to remind viewers of the symbolism (unless, of course you were pointing out specific areas of damage, restoration, or whatever).
On the other hand, we wouldn't appreciate a one or two second shot of a little green Martian who just stepped out of a flying saucer on the White House lawn. Those of us who haven't seen these space creatures before would want time to see what one really looks like.
In an earlier module we mentioned montage editing. With this technique shots may be only a fraction of a second (10-15 video frames) long. Obviously, this is not enough time even to begin to see all of the elements in the scene.
The idea in this case is simply to communicate general impressions, not details. Commercials often use this technique to communicate such things as excitement or "good times."
Next, cutting rate depends on the nature of the production content.
For example, tranquil pastoral scenes imply longer shots than scenes of rush hour in downtown New York. You can increase production tempo by making quick cuts during rapid action.

Varying Tempo Through Editing
A constant fast pace will tire an audience; a constant slow pace will probably tempt them to look for something more engaging on another channel.
If the content of the production doesn't have natural swings in tempo, the video editor, with possible help from music, should edit segments together to create changes in pace.

This is one of the reasons that editors like parallel stories in a dramatic production -- pace and content can be varied by cutting back and forth between stories.


How you start a production is critical, especially in commercial television.
If you start out slow (and boring), your audience will probably immediately go elsewhere. Remember, it's during these opening seconds that viewers are most tempted to "channel hop" and see what else is on.
Because the very beginning is so important, TV programs often show the most dramatic highlights of the night's program right at the start. To hold an audience through commercials, newscasts regularly "tease" upcoming stories just before commercial breaks.
So, try to start out with segments that are strong -- segments that will "hook" your audience. But, once you have their attention, you have to hold onto it. If the action or content peaks too soon and the rest of the production goes downhill, you may also lose your audience.
It's often best to open with a strong audio or video statement and then fill in needed information as you go along. In the process, try to gradually build interest until it peaks at the end. A strong ending will leave the audience with positive feelings about the program or video segment.
To test their productions, directors sometimes use special preview sessions to try out their productions on general audiences. A director will then watch an audience's reaction throughout a production to see if and exactly where attention drifts.
Guideline number 5: Emphasize the B-Roll. Howard Hawks, an eminent American film maker, said: "A great movie is made with cutaways and inserts." We've previously noted that in video production these commonly go under the heading of "B-roll footage."
In a dramatic production the B-roll might consist of relevant details (insert shots and cutaway shots) that add interest and information.
One critical type of cutaway, especially in dramatic productions, is the reaction shot -- a close-up showing how others are responding to what's going on. Sometimes this is more telling than holding a shot of the person speaking.
For example, would you rather see a shot of the reporter or the person being interviewed when the reporter springs the question: "Is it true that you were caught embezzling a million dollars?"
The do's and don'ts of interviewing can be found here.
By using strong supplementary footage the amount of information conveyed in a given interval increases. More information in a shorter time results in an apparent increase in production tempo.
The A-roll in interviews typically consists of a rather static looking "talking head." In this case the B-roll should consist of scenes that support, accentuate, or in some way visually elaborate on what's being said.
For example, in doing an interview with an inventor who has just perfected a perpetual-motion machine we would expect to see his creation in as much detail as possible, and maybe even the workshop where it was built. Given the shortage of perpetual motion machines, this B-roll footage would be more important to see than the A-roll (talking head) interview footage.
Guideline number 6: The final editing guideline is: If in doubt, leave it out.
If you don't think that a particular scene adds needed information, leave it out. By including it, you will probably slow down story development, and maybe even blur the focus of the production and sidetrack the central message.
For example, a TV evangelist paid hundreds of thousands of dollars to buy network time. He tried to make his message as engrossing, dramatic, and inspiring as possible.
But, during the message the director saw fit to cutaway to shots of cute, fidgety kids, couples holding hands, and other "interesting" things going on in the audience.
So, instead of being caught up in the message, members of the TV audience were commenting on, or at least thinking about, "that darling little girl on her father's shoulders," or whatever. There may have been a time and place for this cutaway, but it was not in the middle of the evangelist's most dramatic and inspiring passages.
So, unless an insert shot, cutaway, or segment adds something significant to your central message, leave it out!

Five Rules for Editing News Pieces
A recent study done at Columbia University and published in the Journal of Broadcasting & Electronic Media analyzed the editing of news pieces. It found that if a set of post-production rules is followed, viewers are able to remember more information. In addition, the rules make the stories more compelling to watch.
Although the rules centered on news pieces, many of the principles apply to other types of production. The rules are condensed and paraphrased below.
1. Select stories and content that will elicit an emotional reaction in viewers.
2. If the piece has complex subject matter, buck the rapid-fire trend and make sure that neither the audio nor the video is paced too quickly.
3. Try to make the audio and video of equal complexity. However, if the video is naturally complex, keep the audio simple to allow the video to be processed.
4. Don't introduce important facts just before strong negative visual elements. By putting them afterwards the audience will have a better chance of remembering them.
5. Edit the piece using a strong beginning, middle, and end structure. Keep the elements as concrete as possible.
________________________________________
Module 56
Updated: 04/05/2010

Video Editing, Part VII



Dedicated and
Software-Based Editors


A dedicated editor is designed to do only one thing: video editing.
Dedicated editing equipment was the norm until desktop computer software started to become available in the late 1980s.
Software-based editors use desktop and laptop computers as a base. Video editing is just one of the tasks they can perform; it all depends on the software you load.
It was in the early 1990s that sophisticated video editing hardware and software became available for desktop computers. By 2000, the best laptop computers had become powerful enough to handle sophisticated editing programs.
Historically, the Video Toaster system for the Amiga computer was the first widely used system. The basic screen for that system is shown here. The Toaster was both a video switcher and an editing system.
Thereafter, several software companies introduced computer-based editing systems for the Apple and Windows operating systems. Today's desktop and laptop computers can rival the capabilities of dedicated editing systems.

Simple, Free Editing Systems
Mac and Windows machines come with simple video editors. Although they aren't capable of sophisticated effects, for assembling audio and video clips with basic transition effects (such as those typically found on YouTube) they are quite adequate.
Although some people prefer the Mac editor, the Windows Live Movie Maker shown here is simplicity in itself and it's potentially available to more computer users.
You only have to drag the video clips (either stills or movie clips) from anywhere on your computer to the area on the right. You can trim segments as needed, add filter effects, and create special effect transitions between video segments. If you wish, the program can automatically space the timing of the video segments to correspond with selected music or audio. The result can be output in the .WMV file format.
These simple editors can be used to create a quick "blueprint" of a production to get an idea of how shots will flow together. Later, if you need more elaborate special effect filters or color correction, the original footage can be put through a more sophisticated video editor.
The illustration below shows a basic representation of how scenes, transitions, and audio sources can be represented on the timeline of a more sophisticated editing system. A mouse is used to drag the elements into different positions.

Note that one sequence of video segments is represented in the dark blue area at the top of the illustration and another in dark blue below that. In between, transition effects, filters, and visual effects are represented.
At the bottom the audio elements (music, narration, and special effect tracks) are shown in light blue. Relative levels and transitions are controlled by a mouse and drop-down menu selections.
Compositing
The question arises, what happens if you select two video sources for display at the same time -- for example, in the above illustration Scene 1 and Scene 2 without the transition effect?
Answer: As you might expect, you simply end up with one scene on top of the other -- which may result in a mess, or (if you know what you are doing) a crafted effect in compositing or layering.
In its most basic form you get a superimposition ("super") or a key effect, which we illustrate in Module 60 through the use of a video switcher. However, with an editing system such as the one shown below it's possible to combine multiple video sources and create much more sophisticated effects.
For example, you can place two or more video clips on your editor timeline, one directly above the other, and by adjusting the individual layers -- turning down the opacity, cropping, or keying out parts of each one as needed -- you'll see the combined effect. Using this technique you can add titles over video, substitute elements in a scene (such as adding a new background), or create a variety of visual effects.
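As a rough illustration of what "adjusting the opacity" of a layer means mathematically, here is a minimal Python/NumPy sketch -- an assumption made for illustration, not code from any actual editing system -- that mixes two frames at a chosen opacity.

    # A minimal sketch of mixing two video layers at a given opacity.
    # Frame sizes and scene names are hypothetical.
    import numpy as np

    def blend(top_frame, bottom_frame, opacity=0.5):
        """Mix the top layer over the bottom layer; 0.0 = invisible, 1.0 = fully opaque."""
        top = top_frame.astype(np.float32)
        bottom = bottom_frame.astype(np.float32)
        mixed = opacity * top + (1.0 - opacity) * bottom
        return mixed.clip(0, 255).astype(np.uint8)

    scene_1 = np.zeros((1080, 1920, 3), dtype=np.uint8)       # a black 1080p frame
    scene_2 = np.full((1080, 1920, 3), 255, dtype=np.uint8)   # a white 1080p frame
    super_frame = blend(scene_1, scene_2, opacity=0.3)        # a 30% "super"

A key effect works much the same way, except that the mix varies from pixel to pixel according to a mask rather than using one overall opacity value.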
The Avid editing screen shown below more accurately depicts how timelines actually look on an editing system. This particular system allows you to mix standard-definition and high-definition video in the same project -- an important consideration during this period of analog-to-digital transition.

Although you can edit audio on a machine like the one above (note the two tracks of audio on the time-line at the bottom of the screen) for more demanding audio editing you will want to consider a sophisticated audio editing system like this one.
With sophisticated editing systems there are a variety of video filters and plug-ins (software additions that add various effect options to the original editing program).
Examples are various types of blur, color corrections, cropping, sharpening, fog effects, geometric distortions, and even image stabilization.
The latter attempts to lock onto a central element in a scene and keep it from moving, thus canceling moderate camera shake. More on that later.
Although it's not possible to create detail that isn't in video to start with, with some plug-ins it's possible to rather convincingly convert standard definition video (SDTV) to HDTV.
As we've noted, there are both dedicated and software based laptop editing systems.
An example of a rugged dedicated system is this Panasonic field editing unit, primarily used in news work. Note that the controls are designed exclusively for video and audio editing.
However, with computer-based systems you have the advantage of a wide variety of "off the shelf" laptop computers, plus the software can be readily switched and upgraded.
In addition to editing, computer-based systems can accommodate other computer programs, such as those used to write news scripts.
Computer-based editing used to be confined to specially modified ("souped up") desktop computers. However, in recent years high-end laptop computers have become capable of doing most anything desktop systems can.
These computers typically use a FireWire (IEEE 1394, also marketed as i.Link) or USB 2.0 cable connection to transfer the video from the camcorder to the computer's hard drive.
Because video information takes up a lot of digital space, these computers need a high-capacity hard-drive. (One minute of uncompressed video requires about one gigabyte (GB) of disk space.)
One of the best ways to learn how a nonlinear editor works is simply to play with one for several hours. One popular nonlinear editing program, Adobe Premiere, is available on the Internet for download. This demo version—if it's still available when you read this—does everything but save files. If you are interested, click on Premiere.
The professional editing programs tend to be quite expensive, so if you want to postpone that kind of an investment, you can check out Avidemux. This free editing program runs on all the major computer operating systems. It supports many file types, including AVI, DVD compatible MPEG files, MP4 and ASF, using a variety of codecs. There's an associated forum on the site.
________________________________________
Even though most computer-based editing systems today are non-linear, at this point we need to point out the difference between linear and non-linear systems.

Linear and Non-Linear Editing Systems
Working on a non-linear editing system is like working with a sophisticated word processor. Using a computer screen and a mouse, you can randomly cut and paste segments and move them around until you are satisfied with the result.
Working on a linear editing system is a bit like using a typewriter to type a term paper; you need to assemble everything in the proper sequence as you go along. After it's all on paper (or in this case recorded), adding, deleting or rearranging things can be a major problem.

With nonlinear editing the video and audio segments are not permanently recorded as you go along as they are in linear editing. The edit decisions exist in computer memory as a series of internal digital markers that tell the computer where to look for segments on the hard disk.
This means that at any point you can instantly check your work and make adjustments. It also means that you can easily (and seemingly endlessly!) experiment with audio and video possibilities.
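If it helps to picture those internal digital markers, here is a minimal sketch of what an edit decision list amounts to. The file names and time-codes are made up for illustration.

    # A minimal sketch of a nonlinear edit decision list: nothing is re-recorded;
    # the program is just an ordered list of pointers into the source files.
    # File names and time-codes are hypothetical.

    edit_decision_list = [
        {"source": "interview_a.mov", "in": "00:01:16:12", "out": "00:01:24:00"},
        {"source": "broll_exterior.mov", "in": "00:00:03:05", "out": "00:00:07:05"},
        {"source": "interview_a.mov", "in": "00:02:10:00", "out": "00:02:31:18"},
    ]

    # Re-ordering the program is just re-ordering the list; the source media never changes.
    edit_decision_list[1], edit_decision_list[2] = edit_decision_list[2], edit_decision_list[1]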
Sony's complete high-definition NLE (non-linear editing, or random access editing) system is shown below. This editing system complements Sony's line of XDCAM cameras.

Although a sophisticated nonlinear (random access) editing system such as the one above may take a while to learn, once you figure one out, you can transfer the basic skills to other editing programs.
After you finalize your edit decisions, most editing systems allow you to save your EDL (edit decision list) -- preferably on some removable media that you can take with you in case you need it again. This will save you from having to start from scratch if you later want to come back to the original footage to make revisions.
The final edited video and audio output can be handled in two ways.
It can be "printed" (transferred) in final, linear form to videotape or a DVD or it can remain on a computer drive to be recalled and modified as needed. The latter approach, which is often used for segments in newscasts, requires high-capacity storage devices such as...

File Servers
Video and audio segments—especially HDTV—take up a great amount of hard disk storage space.
Instead of trying to replicate the needed storage in each desktop computer, many facilities use a centralized mass storage device called a file server, sometimes called a media server or video server (shown here.)
These were introduced in an earlier module. Even editing programs can be run from a server.
A centralized video server not only gives all of the computer editing stations the advantage of having access to large amounts of storage, but it means that segments can be reviewed, edited, or played back from any of the editing workstations (desktop or laptop computers equipped with a network connection) within the facility.
As high-speed Internet connections become commonplace, you will be able to link to a media server from any location -- even your home -- and edit and re-edit pieces. In fact, many professionals are doing that now.
________________________________________
Module 57
Updated: 04/14/2010

Video Editing, Part VIII
Making Use
of Time-Code


With the advent of digital, file-based video recording and playbacks there is no longer a need for a separate time-code track as there was with analog videotape recording. Although the hardware and software basis for time code information may have changed in the digital realm, the "human interface" need for time-code references has not -- especially for editing.
SMPTE/EBU time-code
SMPTE/EBU time-code (or just "time-code") is an eight-digit code that allows you to specify precise video and audio editing points.
Once time-code becomes a part of a video file, a designated time-code point (set of numbers) cannot vary from one editing session to another or from one machine to another.
Editing instructions like, "cut the scene when Whitney smiles at the camera," leave room for interpretation -- especially if Whitney tends to smile a lot.
But even though a video recording may be four hours long, "00:01:16:12" refers to one very precise point within that total time.

Breaking the Code
Although a string of eight numbers like 02:54:48:17 might seem imposing, their meaning is simple: 2 hours, 54 minutes, 48 seconds and 17 frames.
Since time-code numbers move from right to left when they are entered into an edit controller, you must enter hours, minutes, seconds and frames, in that order.
If there is anything tricky about time-code, it's the fact that you don't add and subtract from a base of ten the way you do with most math problems.
The first two numbers are based on 24 hours. This is so-called military time.
Instead of the time starting again at 1:00 in the afternoon, the time at that point becomes 13-hundred (13:00) hours and goes all the way to 23 hours, 59 minutes, 59 seconds, at which point things start over again.
In time code the minute and second numbers range from 00 to 59, just the way they do on any clock, and the frames go from 00 to 29. (Recall there are 30 frames per second in NTSC video. The PAL and SECAM systems use 25 as a base.)
A frame count of 30 would be impossible in a time-code display, because 30 frames in NTSC equal one full second. (The next frame after 29 adds a complete second, and the frame counter starts counting over again from 00.) Likewise, "60 minutes" would be impossible in time-code (but not necessarily impossible on CBS).
Question: What comes after 04 hours, 59 minutes, 59 seconds and 29 frames (04:59:59:29)? If you said 05:00:00:00 you would be right.
Now let's look at some more complex time-code problems.
If one video segment is 8 seconds, 20 frames long, and a second segment is 6 seconds, 19 frames long, what is the total time?
8 seconds, 20 frames, plus
6 seconds, 19 frames
= 15:09
Note in this example that as we add the total number of frames we end up with 39. But, since there can be only 30 frames in a second, we add one second to the seconds' column and we end up with 9 frames. (39 minus 30 = 09 frames). Adding 9 seconds (8 plus the 1 we carried over) and 6 gives us 15 seconds, for a total of 15:09.
Let's look at this question. If the time-code point for entering a video segment is 01:22:38:25, and the out-point is 01:24:45:10, what is the total time of the segment?
segment out-point - 01:24:45:10
segment in-point - 01:22:38:25
= total segment time - 00:02:06:15

Getting the answer is a matter of subtracting the smaller time code (second line above) from the larger time code (top line).
Note that since we can't subtract 25 frames from 10 frames we have to change the 10 to 40 by borrowing a second from the 45.
For people who regularly do time-code calculations, computer programs and small handheld calculators are available. An Internet search will bring up a number of Windows and Mac time-code calculators available for downloading.
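If you prefer a do-it-yourself approach, the arithmetic is simple enough to sketch in a few lines of Python. This is only a rough sketch, assuming non-drop-frame NTSC time-code (30 frames per second); it converts each code to a total frame count, does ordinary arithmetic, and converts back, reproducing the two examples above.

# A minimal time-code calculator sketch (non-drop-frame NTSC, 30 frames per second).
FPS = 30

def to_frames(tc):
    """'HH:MM:SS:FF' -> total frames from 00:00:00:00."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * FPS + ff

def to_timecode(frames):
    """Total frame count -> 'HH:MM:SS:FF'."""
    return (f"{frames // (3600 * FPS):02d}:{frames // (60 * FPS) % 60:02d}:"
            f"{frames // FPS % 60:02d}:{frames % FPS:02d}")

# The addition example: 8 seconds, 20 frames plus 6 seconds, 19 frames
print(to_timecode(to_frames("00:00:08:20") + to_frames("00:00:06:19")))   # 00:00:15:09

# The segment-length example: out-point minus in-point
print(to_timecode(to_frames("01:24:45:10") - to_frames("01:22:38:25")))   # 00:02:06:15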
Drop-Frame Time-Code
Basic time-code assumes a frame rate of 30 frames per second or 25 frames per second, depending on the country.
The latest digital equipment can convert one frame rate and video standard to another. However, 30 video frames per second, which is widely used for video in the U.S. and many other countries, will be the basis for the following discussion.

Although 30 is a nice even number, it actually only applies to black and white television. For technical reasons, when color television was introduced, a frame rate of 29.97 frames per second was adopted. This frame rate is also used in the U.S. version of DTV/HDTV.
Although the difference between 30 and 29.97 may seem insignificant, in some applications it can result in significant timing problems. If you assume a rate of 30 frames per second instead of 29.97, you end up with a 3.6-second error every 60 minutes.
Since broadcasting is a to-the-second business, a way had to be devised to correct this error. Just lopping off 3.6 seconds at the end of every hour was not a practical way of doing this -- especially from the viewpoint of a sponsor whose commercial gets the end cut off as a result.

The Solution
So how do you fix this error?
A little math tells you that 3.6 seconds equal an extra 108 video frames each hour (3.6 times 30 frames per second). So, to maintain accuracy, 108 frames must be dropped each hour and done in a way that will minimize confusion. Unfortunately, we're not dealing with nice even numbers here.
First, it was decided that the 108-frame correction had to be equally distributed throughout the hour. (Better to lose a bit here and there instead of everything all at once.)
If you dropped 2 frames per minute, you would end up dropping 120 frames per hour instead of 108. That's nice and neat, but it's 12 frames too many. Still, since you can't drop half frames, this is as close as you can get by making a consistent correction every minute.
So what to do with the 12 extra frames? The solution is to skip the correction every 10th minute -- that is, not to drop the 2 frames at those points.
In one hour that equals 12 frames, since there are six ten-minute intervals in an hour.
So, using this approach you end up dropping 108 frames every hour -- exactly what you need to get rid of.
Since the frame dropping occurs right at the changeover point from one minute to the next, you'll see the time-code counter on an editor suddenly jump over the dropped frames every time the correction is made.
For example, when you reach 01:07:59:29, the next frame would be 01:08:00:02. In drop-frame time-code frames 00 and 01 don't exist.
Maybe this is not the most elegant solution in the world, but it works, and now it should be obvious why it's called drop-frame time-code.
For non-critical applications, such as news segments, industrial television productions, etc., drop-frame isn't needed. However, if you are involved with producing 15-minute or longer programs for broadcast, you should use an editor with drop-frame capability.
On most edit controllers you will find a switch that lets you select either a drop-frame or non-drop frame mode. Software programs typically have a drop-down box where you can select the approach you want.
When you use the drop-frame mode, a signal is added to the SMPTE/EBU time-coded video that automatically lets the equipment know that drop-frame is being used.
Drop-frame is usually represented with a semicolon (;) or period (.) between the seconds and frames, whereas non-drop retains the colon (:). The period is usually used on devices that can't display a semicolon. Example: drop-frame = "HH:MM:SS.FF" or "HH:MM:SS;FF"; non-drop-frame = "HH:MM:SS:FF".
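If you want to see the dropped numbers for yourself, here is a minimal sketch of the standard drop-frame conversion, assuming you are counting actual frames of 29.97 fps material; note the semicolon separator just described. The frame numbers in the example are simply the running count that corresponds to the jump mentioned above.

# A minimal drop-frame sketch: convert an actual frame count (29.97 fps material)
# into a drop-frame time-code label. Frame numbers 00 and 01 are skipped at every
# minute change except minutes ending in 0 (00, 10, 20, 30, 40, 50).
def to_dropframe(frame_count):
    FRAMES_PER_10MIN = 17982          # 10 * 60 * 30 nominal frames minus 9 * 2 dropped
    FRAMES_PER_MIN = 1798             # 60 * 30 nominal frames minus 2 dropped
    tens, rest = divmod(frame_count, FRAMES_PER_10MIN)
    if rest > 2:
        frame_count += 18 * tens + 2 * ((rest - 2) // FRAMES_PER_MIN)
    else:
        frame_count += 18 * tens
    ff = frame_count % 30
    ss = (frame_count // 30) % 60
    mm = (frame_count // 1800) % 60
    hh = frame_count // 108000
    return f"{hh:02d}:{mm:02d}:{ss:02d};{ff:02d}"   # the semicolon marks drop-frame

print(to_dropframe(122277))   # 01:07:59;29
print(to_dropframe(122278))   # 01:08:00;02 -- frames 00 and 01 were skipped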
There is more to editing with time code, and you will find additional information here.
Time-Code Display
Some editing systems have small time-code displays on the top of the edit controller, as shown here.
More sophisticated editing systems superimpose the time-code numbers over the video itself, as we see below.
In the latter case the time-code numbers may be either temporarily superimposed over the video (keyed-in code), or they may become a permanent part of the picture (burned-in time-code).
In the case of keyed-in code, an electronic device reads the digital time-code information from the tape and generates the numbers to be temporarily keyed (superimposed) over the video.
The disadvantage of this approach is that you can only see the code if you are using special equipment, such as an appropriate editing system.
Once the time-code numbers have been burned in (permanently superimposed into the video), the video and time-code can be viewed on any video playback system.
Although this approach requires making a special copy of the original footage, it can be an advantage if you want to use standard playback equipment to review tapes at home or on location and make time code notes on the segments you want. Reviewing segments in this way and making what's called an initial paper-and-pencil edit can save a great deal of time later on.
________________________________________
Module 58
Updated: 04/05/2010


Video Editing,
Part IX



On-Line
and Off-Line
Editing

The basic goal of off-line editing is to create a list of edit decisions.
Before digital and tapeless camcorders, this involved using a copy of the original videotape footage. This was important in protecting the original videotape from damage during the often arduous process of making edit decisions.
Off-line editing involves reviewing footage and compiling a list of time-code numbers that specify the "in" and "out" points of each needed scene.
In this phase a rough cut (an initial rough version without visual effects, color corrections, etc.) is assembled. This version can be shown to a director, producer, or sponsor for approval. Typically, at this point a number of changes will be made.
In on-line editing (at least in the traditional sense of the phrase) you are using original footage to create the final edited version of a program, complete with audio and video effects, color correction, etc.
Since this process can be rather expensive if full-time engineers and costly, high-quality on-line equipment are involved, an off-line phase will reduce editing expenses and allow time for greater experimentation.
An important part of the creative process is trying out many possibilities with video, music, and effects. Hours can be spent on just a few minutes, or even a few seconds, of a production.
When time is limited, such as in preparing a news segment for broadcast, you generally can't afford the luxury of an off-line phase.
A laptop computer equipped with one of the many available editing programs can control an on-line edit for a news package.


Digital Editing With a Video Server
Once video editing becomes totally digital with equipment that can handle video with minimal compression, there will be no need for the traditional on-line and off-line editing phases -- it can all be done on-line.
Digital recordings can be made in the studio or on location and uploaded (transferred) directly to an editing computer or video server for editing. Once this transfer is made, there will be no danger of tape damage in editors, no matter how many times the footage is previewed. (Digital information stored on a computer disk does not gradually degrade with repeated access the way it does when it's recorded on videotape.)
When a video server is used, the original footage can be viewed and edited by anyone with a computer link to the server.
This is generally someone within the production facility; but, thanks to high-speed Internet connections, it could even be someone in another city -- or even in another country. In the case of animation and visual effects, which are labor intensive, projects are often electronically transferred to countries where labor is less expensive.
The two main approaches used in newsrooms in editing server-based footage are covered here.
The latest non-linear editors have many features that both speed up and improve video and audio editing. We'll just give two examples.
Some editors can "read" or understand the spoken dialogue in video footage and match it up with a written script or with words you type in. If you happen to have hours of video footage and are looking for the point where someone said, "Eureka, I found it," the editing system can search through the footage and cue up the part of the video where that phrase is spoken.
Another useful feature that is briefly touched on elsewhere is image stabilization. Let's assume you have some shaky footage -- possibly involving a moving vehicle.
The first thing you do is freeze the beginning of the footage on the screen. Then you find a clearly defined object near the center of the scene and draw a box around it, as shown on the left. (Note motorcycle headlight.) This becomes an anchor point reference.
Then you crop the whole image slightly to give the process "working room."
Once you roll the footage the editor holds the selected area still, eliminating the shake and movement in the original scene.
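To give you a rough idea of how the anchor-point approach works, here is a minimal sketch using OpenCV template matching. The file name, box coordinates, and other values are placeholders, and real stabilizers are far more sophisticated, but the logic of holding the selected area still is the same.

# A minimal anchor-point stabilization sketch using OpenCV template matching.
# "clip.mp4" and the box coordinates are hypothetical placeholders.
import cv2
import numpy as np

cap = cv2.VideoCapture("clip.mp4")
ok, first = cap.read()
x, y, w, h = 300, 200, 60, 60                        # box drawn around the anchor object
template = cv2.cvtColor(first[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    result = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (nx, ny) = cv2.minMaxLoc(result)        # where the anchor object moved to
    dx, dy = x - nx, y - ny                          # shift needed to hold it still
    shift = np.float32([[1, 0, dx], [0, 1, dy]])
    stabilized = cv2.warpAffine(frame, shift, (frame.shape[1], frame.shape[0]))
    # The edges exposed by this shift are why the image is cropped slightly first.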

Creating a Paper-and-Pencil Edit
Regardless of what approach you take in editing, previewing footage and making a paper-and-pencil edit can save considerable time.
For one thing, you may not really know what you have -- what to look for and what to reject -- until you have a chance to review all of your footage.
By jotting down your tentative in and out time codes, you will also be able to add up the time of the segments and get an idea of how long your production will be.
At that point, and assuming you have to make the project a certain length, you will know if you need to add or subtract segments. Having to go back and shorten or lengthen a carefully crafted project is not most people's idea of fun!
A form for a paper-and-pencil EDL (edit decision list), such as the abbreviated one shown below, will give you an idea of how this data is listed.
________________________________________
Videotape Log
Sequence Reel # Start Code End Code Scene Description
.
.
.
.
.
________________________________________
There are also computer programs designed for logging time-codes and creating EDLs. By using a mouse, the indicated scenes can be moved around on the screen and assembled in any desired sequence. The programs can keep track of time-codes and provide total times at any point.
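A bare-bones version of such a logging program needs little more than a list of entries and the time-code arithmetic shown earlier. The scene data below is made up for illustration, and the sketch assumes the to_frames() and to_timecode() helpers from the earlier calculator sketch.

# A bare-bones EDL log: each tuple mirrors a row of the paper form above.
# Sample data is invented; assumes the to_frames()/to_timecode() helpers shown earlier.
edl = [
    # (sequence, reel, start code, end code, scene description)
    (1, "R01", "01:00:05:10", "01:00:12:00", "Wide shot, city park"),
    (2, "R01", "01:02:44:15", "01:02:51:22", "Close-up, interview subject"),
    (3, "R02", "02:10:00:00", "02:10:08:05", "Cutaway, children playing"),
]

total_frames = sum(to_frames(end) - to_frames(start) for _, _, start, end, _ in edl)
print("Total running time:", to_timecode(total_frames))   # 00:00:22:02 for the rows above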
There are EDL programs and time-code calculators available as software for PDAs (Personal Digital Assistants), such as the Palm Pilot shown on the left. If they are not available for the iPhone and iPad yet, it's probably just a matter of time.
If you are preparing a news script on a computer, the time codes and scene descriptions can be typed in while you view the scenes on a monitor. When you finish logging all of the scenes, you can split the computer screen horizontally, putting the time code and scene listings on the top, and your word-processing program for writing your script on the bottom.
By using a small camcorder and a laptop computer producers have been able to create an entire EDL while flying from the East to the West Coasts of the United States.
Once the EDL is created, it can be uploaded from a computer disk directly into a file server or editor for final editing.
Six Quick Tips for File Server Editing
1. Although you may want to shoot everything on location that you think you could possibly use, when it comes to uploading or capturing this footage on a file server or computer hard disk, you will want to use a bit of restraint. (You will eventually have to sort through all of this!)

After reviewing the footage and making a rough paper-and-pencil edit, upload only the footage that you are reasonably certain you will use. Not only does excess footage take up valuable hard drive space, but wading through it during editing adds considerable time to the editing process.
2. After the footage is uploaded, trim the front and back ends of the segments to get rid of anything you're not going to use. This will also speed up editing and reduce storage space; plus, it will make the clips easier to identify on the editing screen.
3. Once this is done (#2 above), look for connections between segments; specifically, how one segment will end and another will start. Look for ways to make scenes flow together without jarring jump cuts in the action, composition, or technical continuity.
4. Find appropriate cutaways. In addition to enhancing the main theme or story they should add visual variety and contribute to the visual pace.
5. Use transition effects sparingly. Although some editing programs feature 101 ways to move from one video source to another, professionals know that fancy transitions can be distracting and can get in the way of the message — not to mention looking pretentious.
6. Use music creatively and appropriately. "Silence" is generally distracting, causing viewers to wonder what's wrong with the sound. Finding music that supports the video without calling attention to itself (unless, of course, it's a music video) can be a major task in itself.
If you have a bit of talent in the music area, you might consider a do-it-yourself approach to electronically composing your music. The Sonic Desk Smart Sound program, among others, will not only give you full control of your music, but it will eliminate copyright problems.
Sometimes simple music effects will be all that you will need.
A savvy editor can take the same script, footage and on-camera performances and subtly or even dramatically change the meaning of a video piece. So, in a sense, editing has the potential of being the most creative phase of the production process.
The writer learned a major lesson about this (and about humility) early in his television career.
________________________________________
This brings us to the end of the modules on editing. With the move to tapeless production well underway, this is an area that will see major changes in the next few years.
At this point in this cybercourse you should be able to write a production proposal, do a decent job on a script, plan out a production, shoot on-location footage, and assemble what you shoot into a logical and coherent "package."
In the next module we'll move into the TV studio where the production process takes on a number of new dimensions.
________________________________________
Module 59
Updated: 04/19/2010


Studio
Production

As a first step in seeing how studio productions are done we need to take a closer look at the role and responsibilities of the key person in this process -- the director.

The Role of the Director
In addition to the specific duties and responsibilities outlined in Module 1, the director's job is to get the crew and talent to function as a team and in the process bring out the best work in each person.
The director's main job is to get his or her crew to effectively work together with a common goal in mind.

Any director worth the title can stay on top of things when the crew, talent, and equipment perform exactly as expected.
But much of the value and respect that people place on directors depends on their ability to stay in control when things don't go as planned and new procedures suddenly have to be improvised.
A crew member or on-camera person may get sick, a key person may refuse to continue unless some special accommodation is made, a studio camera may go out, or a mic may suddenly fail. Vacillating, giving mixed signals or not being able to make a decision at a crucial time can result in production paralysis.
In large-scale productions everyone is typically working under pressure. Directors must be able to control their own tension and anxiety while being sensitive to the differing abilities and temperaments of talent and crew — not an easy task when they have responsibility for everything.
A heavy-handed approach with the wrong person can temporarily destroy that person's effectiveness and turn a bad situation into a disaster. Conversely, a mealy-mouthed approach that commands no respect and conveys no leadership can be just as bad.
Put another way, a director's job is not to dictate but to clearly and effectively guide.

Requisitioning Equipment and Facilities
The director's first job will normally be to fill out a Facilities Request Form. Production facilities typically have forms tailored to their own needs and equipment.
On this form you will list such things as the production and rehearsal dates and times, studio space needed, personnel required, and the number of cameras, video recorders and mics needed.
Not anticipating a need may mean there will be last-minute delays in getting what you want.
Worse still, you may even have to do without something you need, especially if someone else has requisitioned the equipment for the same time period.
In addition to being used by the studio's Facilities Manager to plan for the necessary talent, crew, facilities, and equipment, the Facilities Request Form is used to anticipate production costs.

Studio Sets
Although virtual sets are now being used in many studios, the traditional hardwall and softwall sets are still the most widely used setting for studio productions. These are discussed here.

The Directing Process
For every audio or video event that takes place during a production several behind-the-scenes production steps are typically required.
Because production involves the activities of numerous crew members -- the number can range from 6 to more than 60 -- the director's instructions must be clearly and succinctly phrased.
Even the word sequence is important.
If the director says, "Will you pan to the left and up a little when you 'lose [your tally] light' on camera one," all camera operators must wait until the end of the sentence before they know who the director is talking to; and then they must remember what the instructions were.
However, if the director says, "Camera one, when you lose light, pan left and up a little," the first two words indicate who, the next four words tell when, and the last six words indicate what.
After the first two words, crew members know that only camera one's operator is being addressed. This will get the attention of the camera one operator, and the rest of the crew members can concentrate on their individual tasks.
The "when" in the sentence tells the camera one operator not to immediately pan and tilt, but to prepare for a quick move once the camera tally ("on-air") light is off. This may involve loosening the pan and tilt controls on the camera's pan head and being ready to make the adjustment -- possibly within the brief interval when the director switches to a reaction shot.
Even a two- or three-second delay can make the difference between a tight show and one where the production changes lag behind the action.
Although the specifics of the jargon vary between production facilities, directors tend to use some of the same basic terminology. To illustrate this, let's trace a director's PL line conversation for the opening of a simple interview show.
This production uses two cameras, one of which moves from position A to position B. In position A the camera gets the establishing (wide) shot. In position B it gets close-ups and over-the-shoulder shots.
Since the guests on this show are different each week and will require different opening and closing announcements, only the show's theme music is prerecorded. The opening and closing announcements are read off-camera, live.
Before we get to the actual show, let's look at several things that the audience will not see, but that are still important to the production.
Color Bars, Slate,
Countdown Clock, and Trailer
In professional productions four elements are typically recorded that are not seen by the audience:
1. First on the tape are color bars, lasting a minimum of 30 seconds, accompanied by a reference-level audio tone (generally 0 dB) on all audio tracks.
These are used to set proper color balance, and audio and video levels for the video playback. (As we've previously noted, with some playback equipment this is adjusted automatically.)
The white level (note the white block) and the primary (red, green and blue) and secondary (magenta, cyan and yellow) color bars should register correctly on a TV screen and on a vectorscope. You will recall that we explained how to set up your video monitor here.

2. After the color bars comes the slate (shown on the right), which is either picked up on a camera or electronically generated. At this point the announcer reads the following program information:
• the title of the program
• the episode title and number
• the date (and possibly recording machine number)
• possibly the audio format (mono, Dolby 5.1, etc.)
• possibly the presence of closed captioning or extra data
This information will vary, depending on the facility and production.
The slate shown above shows the time code numbers that are being encoded on the video. Network requirements typically specify a start code of 01:00:00:00 for the first program on a tape.
After these, there is typically an electronic countdown clock that starts at 10 seconds and goes to 2 seconds.
3. At this point there should be exactly two seconds of black and silence before the program begins. This precise timing makes it possible to roll a videotape (if you are using videotape) on a particular number and then "punch it up" at the exact moment it's needed.
Hard disk recorders and some videotape machines have an instant-start capability. This means you can stop and freeze the first second of video on the screen and expect the segment to start instantly when needed.
4. At the end of the production, network specifications require several minutes of black and silence with continuing time code after the last scene (generally the closing credits) of a production.
Now let's trace the director's dialogue for the first minute or so of a very basic interview show. Here we're assuming an on-camera slate and that videotape is being used to record the show.
________________________________________
Director's Comments Explanation
Standby on the set. This means "attention" and "quiet" on the set. The command is given 15-30 seconds before rolling tape. (Assuming for this example that videotape is being used.)
Standby to roll tape. Get ready to start the videotape that will record the show.
Roll tape. The tape is rolled, and when it stabilizes the tape operator calls "speed."
Ready to take bars and tone.

Take bars and tone. The electronic test pattern (ETP) and audio tone are recorded at the reference level (generally, 0 dB). This segment will be used to set up playback equipment for proper video and audio.

This may last from 15 to 60 seconds and depends on the technical requirements of the production facility.
Standby camera ONE on slate; stand by to announce slate.
Take ONE.
Read slate.
Standby black. Assuming the slate is not electronically generated, camera one's first shot is the slate identifying the show.

During this time the announcer reads the basic program identifying information we previously listed.
Go to black

Ready TWO with your close-up of Lee; ready mic; ready cue.
The technical director (TD) cuts to black.

The show opens "cold" (without an introduction of any kind) with a close-up of Dr. Lee. This "tease" statement is intended to grab attention and introduce the show's guest and topic.
Take TWO, mic, cue! Cut to camera two with a close-up of Dr. Lee, turn her mic on, and cue her to start.
Stand by ONE on the guest.
Take ONE!

Dr. Lee introduces the subject and makes a quick reference to the guest. When Dr. Lee mentions the guest, the director makes a two- to three-second cut to the close-up camera on the guest (who is just listening) and then back to Dr. Lee on camera two.
Standby black and standby to roll commercial on tape 4.
Roll tape 4. Go black. Take it.


The commercial is rolled and taken as soon as it comes up. The audio person brings up the sound on the commercial without being cued. (Everyone's script should list basic information, such as machine playback numbers, etc. Some things, such as cutting mics when they are not needed, are done as needed without a director's command.)
Camera 1 truck left for your wide shot.
Fifteen seconds. Standby in studio. During the commercial camera #1 will reposition for the opening wide shot. (See drawing above.) This shot will be used for keying the opening program titles.
Standby opening announce and theme.
Ready ONE on your wide shot; ready TWO on a close-up of Lee.
Standby to key in title.
Take ONE; hit music; key title. When the commercial ends, a wide shot is taken on camera one, the theme music is established, and the title of the show is keyed over the screen.
Fade [music] and read. The music is faded under and the opening announce for the show is read by an announcer. This will probably include the show's title, followed by the topic, and the name of the show's host.
Ready TWO with a close-up on Lee. Standby mics and cue.
Take TWO, mic, cue. This is a close-up of the show's interviewer, Dr. Lee, who now fully introduces the day's guest and asks the first question.
Camera 1, ready on your close-up on the guest.

During this time Camera 1 trucks back to the opening position for the close-up of the guest. Dr. Lee covers the interval for the camera move by fully introducing the show and guest.
Take ONE. The guest answers first question.
Show continues alternating between close-ups of host and guest. Occasionally cameras will zoom out to get over-the-shoulder shots. Closing of show is similar in pattern to the opening.
Excluding the commercial all of the above takes less than a minute of production time.
During the 30 seconds or so that the interviewer uses to wrap up the show camera one can truck right to the mid-position and zoom back. This shot can be used (possibly with dimmed studio lights) as a background for the closing credits and announce.
Even though this example is a bit old-fashioned in its format, it illustrates all the things the director is concerned with "behind the scenes" (and it represents a good starting assignment for laboratory exercises).

"Standby"
Note the constant use of the terms "ready" and "standby" in the director's dialogue.
During a production, crew members are normally thinking about or doing several things at once, including listening to two sources of audio: the PL line and the program audio. "Standbys" warn them of upcoming actions.
They also protect the director.
If a "standby" is given in reasonable time, the director has every right to expect the crew member involved to be prepared for the requested action — or to quickly tell the director about a problem.
But if the director simply blurts out, "Take one!" when the cameraperson is not ready, the audience may see a picture being focused, complete with a quick zoom in and out. Since no "standby" warning was given, the director can hardly blame the cameraperson.

Studio Hand Signals
Although the studio director can relay signals to the crew via a headset (PL line), getting instructions to on-camera talent while the mics are on is generally done silently through the floor director.
To do this the floor director uses agreed-upon hand signals. In order for the talent to be able to easily and quickly see these signals, they should be given right next to the talent's camera lens. The talent should never have to conspicuously look around for cues while on camera.
Photos of the various studio hand signals can be seen here.

Shooting Angles
In an interview the eyes and facial expressions communicate a great deal — often even more than the words the person is saying.
Profile shots (equivalent to shooting the close-ups from camera position A in this case) often hide these important clues. A close-up of the guest from camera position B, as well as a close-up of Dr. Lee from the camera 2 position, provide much stronger shots.
These angles also offer more possibilities for shots. You have a strong close-up of the person talking, plus, if you zoom back slightly, an over-the-shoulder shot that can even be used to momentarily cover comments by the person whose back is toward the camera.

The Need to Anticipate
An essential talent for a director is the ability to react quickly to changes in action.
But "react" implies delay.
In fact, the total reaction time is equal to the accumulated time involved in recognizing the need for a specific action, communicating that action to crew members, having them respond -- or telling the technical director what you want done and having them respond. That can represent a delay of several seconds.
Although that may not seem long, when audiences are used to seeing production responses in sync with on-camera action, it will clearly reveal that the director is lagging behind the action.
The solution is for the director to try to anticipate what's going to happen.
During an interview a director should be able to sense when the interviewer's question is about to end or when an answer is winding up.
By saying "stand by" early and calling for a camera cut a moment before it's needed, a director will be able to cut from one camera to the other almost on the concluding period or question mark of the person's final sentence.
Also, by watching the off-air monitor in the control room, as opposed to the on-air shot of the person talking, the director will often be able to see when the off-camera person is about to interrupt or visually react to what is being said. Using these clues, a good director can almost appear to have precognitive powers!
This is easier to see when the cameras and video sources are grouped together on a single, large, multi-view flat-screen monitor. (We'll talk more about this in the next module.)

On-Camera Talent Issues
Under this heading we'll cover makeup, hair, jewelry, and wardrobe.

Makeup
Back in the days of low-resolution black-and-white TV, facial features had to be somewhat exaggerated, just as they do now on the stage. However, in this day of color and high-resolution video, this type of exaggeration would look a bit clownish.
Today, makeup is primarily used to cover or diminish facial defects, fill in deep facial chin clefts and "five o'clock shadows" on men, and to take the shine off faces.
In the case of women, judiciously applied "street makeup" is generally adequate for limited on-camera exposure.
However, when professional talent need to appear at their best under different lighting conditions and for long periods of time, things can get a bit more complicated. For this reason, we cover makeup in much more detail here.

Hair
For limited on-camera appearances, no special changes need to be made from normal hair styling. However, stray hairs have a way of calling attention to themselves when close-ups are illuminated by backlights, so they need to be kept in place.
When applied to hair, oils and creams can impart an undesirable patent leather-like shine, which will be exaggerated by backlighting. The absence of hair — i.e., bald heads — may need help from a powder base carefully matched to skin tones.
Backlights and blond hair, especially platinum blond hair, will cause video levels to exceed an acceptable brightness range, so backlight intensity will need to be dimmed or the beams barned off.
When it comes to the effect of backlights and lighting in general, camera shots and lighting should be carefully checked on a good video monitor before a production.

Jewelry
Jewelry can present two problems.
First, if it's highly reflective, the results can range from a simple distraction to annoying streaks in the video. The simplest solution is to substitute non-reflective jewelry or remove it altogether.
The second problem with jewelry such as necklaces and beads is noise -- especially if it comes in contact with a personal mic.

Wardrobe
In general, clothes (wardrobe) that are stylish and that flatter the individual are acceptable -- as long as five caveats are kept in mind.
• Colors that exceed 80-percent reflectance, such as white and bright yellow, need to be avoided. White shirts are often a problem, especially if not partially covered by a jacket or a sports coat.
• Black clothes, especially against a dark background, can not only result in a tonal merger but can also make adjacent Caucasian skin tones appear unnaturally light, even chalky.
• Closely spaced stripes in clothing can interact with camera scanning and result in a distracting, moving moiré pattern.

• Very bold patterns can take on a distracting, facetious appearance.
• Sequined, metallic, and other shiny clothing (note photo), which might otherwise look good, can become quite distracting on television, especially under hard lighting.
We'll move from the studio into the TV control room in the next module.
________________________________________
An Internet link that has considerable broadcast-related information is broadcast.net
________________________________________
Module 60
Updated: 04/24/2010






Video Switchers
and Visual Effects



In this module we will cover:
• Hardware and software based video switchers
• Video switcher functions and controls
• Some basic visual effects (VFX)
• Supers (superimpositions)
• Luminance and chroma keying
• Multi-view monitors
________________________________________
We'll start with a very basic switcher configuration.

Each button represents a video source—even "black," which includes the technical parts of the video signal necessary to produce stable black.
The bottom row of buttons (outlined in blue) represents the program bus or direct-take bus.
Any button pressed on this row sends that video source directly to line out, the final feed being broadcast or recorded.
The easiest way to instantly cut from one video source to another is simply to select it ("punch it up") on the program bus. The program bus generally handles more than 90% of video switching.
But, what if you want to dissolve (fade) from one camera to another, or fade to black?
For this you need to move to the top two rows of buttons referred to as effects, or the mix/effect bus. From here, with the help of the fader bars, you can create rudimentary visual effects.
When the fader bars are in the top position as shown here, any video source punched up on the top row of buttons is sent to the effects button on the program bus. (To see this clearly, you may want to refer back to the larger illustration above.) The buttons that have been selected are shown in red.
In this case, camera 3 was selected on the effects bus, so that's the camera that will be sent down to the program bus. Since the effects bus has been selected on the program bus, its signal will then be sent out and displayed on the line-out video monitor.
Put another way, if the fader bars point toward the top row of buttons on the effects bus, and camera 3 has been selected on that bus, we will see camera 3 when the effects bus is selected on the program bus.
If we were to move the fader bars down to the lower position, the video source selected on the lower row of buttons (in this case camera #2) would be sent to the program bus.
During the process of moving the fader bars from the top to the bottom, we see a dissolve (an overlapping transition) from camera #3 to camera #2.
If we stop the fader bars midway between the move from top to bottom, we would see both sources of video at the same time — we would be superimposing one camera over the other.
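To tie the buses and fader bars together, here is a conceptual sketch in Python. It is not any real switcher's control protocol; it simply models which source or sources reach line out for a given set of button and fader positions.

# A conceptual model (not any real switcher's API) of the program bus,
# mix/effects bus, and fader bars described above.
class SimpleSwitcher:
    def __init__(self):
        self.program = "black"        # button punched up on the program bus
        self.effects_a = "cam3"       # source selected on the top effects row
        self.effects_b = "cam2"       # source selected on the bottom effects row
        self.fader = 1.0              # 1.0 = bars at the top (row A), 0.0 = at the bottom (row B)

    def line_out(self):
        """Return (source, level) pairs describing what is fed to line out."""
        if self.program != "effects":
            return [(self.program, 1.0)]            # a straight cut on the program bus
        # With "effects" punched up on the program bus, the fader position blends
        # the two effects rows; stopping midway gives a superimposition.
        return [(self.effects_a, self.fader), (self.effects_b, 1.0 - self.fader)]

sw = SimpleSwitcher()
sw.program = "effects"
sw.fader = 0.5
print(sw.line_out())   # [('cam3', 0.5), ('cam2', 0.5)] -- cameras 3 and 2 superimposed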
Although this used to be the way we displayed titles, credits, etc., on the screen, today we use an electronic keying process.
As illustrated below, a key represents a much cleaner and sharper effect.

Note in the drawing above that in a key one image is electronically "cut out" of the other, while in a super the two images are visible at the same time. Compared to a key, the latter can look a bit jumbled.
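At the pixel level the difference can be sketched in a few lines, assuming NumPy image arrays: a super averages the two pictures everywhere, while a luminance key drops the bright parts of the key source (white titles, for example) cleanly into the background.

# A super blends both pictures everywhere; a luminance key cuts the key source in
# only where it is bright enough. (NumPy arrays with values 0-255 are assumed.)
import numpy as np

def superimpose(background, key_source):
    return ((background.astype(np.uint16) + key_source.astype(np.uint16)) // 2).astype(np.uint8)

def luminance_key(background, key_source, clip_level=200):
    brightness = key_source.mean(axis=2)            # rough luminance of each pixel
    out = background.copy()
    out[brightness > clip_level] = key_source[brightness > clip_level]
    return out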
These effects are commonly done "on the fly" with a video switcher. However, as you will recall from Module 56, for edited programs these effects (and far more sophisticated ones) are possible using layering or compositing with a sophisticated non-linear editor.
Now, let's add a couple of new things to our basic switcher.


First, note in the drawing above that the fader bars have been split—each one being at the "0" (no video, or black) position. If we were to move fader bar "A" to the top position, we would put camera 3 on the air; if we were to move fader bar "B" to the bottom position, we would put camera 2 on the air. But, of course, you already know that.
What you don't want to do is split the bars so that each sends out maximum video from its source. (Video engineers may get very upset with you!)
Next, note the extra row of buttons (outlined in green) marked "preview," just below the program bus.
With the preview bus we can set up and check an effect on a special preview monitor prior to switching it up on the program bus. Without being able to preview and adjust video sources before putting them on the air, we might end up with some unpleasant surprises.
To see (preview) an effect, we first punch up effects on the preview bus. When we get the effect we want on the effects bus, we can cut directly to it by punching up effects on the program bus.
Some switchers, like the one shown in the photo at the beginning of this module, have multiple effects banks. A simple version is shown below.



Using what you know about switchers at this point, can you figure out how black arrived on the screen in the drawing above?
If you moved the fader bars on Effects #2 to the up position, you would make a transition from black to whatever was on Effects #1. In this case it would be Camera 2 superimposed over Camera 3.
Finally, let's add a few bells and whistles.
The top row of buttons in this drawing represents various types of wipes.
Yellow on the buttons represents one video source, black another source.
Additional patterns—some switchers have hundreds—can be selected by entering numbers on the keypad.
If wipe is selected on the switcher, the button pushed (indicated in red in this drawing) shows the moving pattern (controlled by the fader bars) that would be involved in the transition from one video source to the other.
A border along the edges of the wipe pattern — a transition border — can be used and its hue, brightness, sharpness, width, and color saturation selected.
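The wipe itself can also be sketched conceptually, again assuming NumPy image arrays: the fader-bar position determines how far the second source has advanced across the frame.

# A minimal horizontal wipe: position 0.0 shows all of source A, 1.0 all of source B.
import numpy as np

def horizontal_wipe(source_a, source_b, position):
    height, width, _ = source_a.shape
    edge = int(width * position)        # how far the wipe has traveled (fader-bar position)
    out = source_a.copy()
    out[:, :edge] = source_b[:, :edge]  # source B replaces source A up to the wipe edge
    return out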
The key clip knob controls the video level of the source you are going to key into background video. This is adjusted visually on the preview monitor.
Downstream keyers, which are often used to key in such things as opening titles and closing credits, are external to (downstream from) the basic switcher.
The advantage of a downstream keyer is that it doesn't require the use of a switcher's effects bank for keying.
This means that the bank stays free to be used for other things.
The switcher shown at the left incorporates versions of all of the features we've discussed, plus a computer display that adds even more options.
Although switcher configurations differ, they all center around the same basic concepts.
In recent years switchers have been getting more compact, bringing many video control functions into a "small footprint."
The switcher on the right, an eight-input switcher that was introduced in 2008, has many of the features of the larger switchers, including limited visual effects.
Although it may not be as impressive looking as some of the larger switchers, it can adequately handle the needs of many small studios and production facilities.
Later, we'll talk about software-based switchers and special effect units that can be a function of a desktop computer.

Multi-View Monitors
Until recently, each video source in a TV control room was displayed on a separate monitor. This meant that control rooms typically had dozens of TV monitors, taking up considerable space, consuming a lot of power and taxing air conditioning.
With the introduction of large, flat-screen monitors in the 1990s, this started to change. As shown on the left, today's video switchers can output multiple video sources for a single display.
A typical multi-view display suitable for the technical director or director is shown in this photo.
Although there may be a large flat-screen display in the front of the control room, crew members, such as the TD and audio person, may have smaller displays directly in front of them.
Depending on the needs of specific productions, template configurations (display arrangements) can be programmed into the switcher with macros (generated computer code) and called up as necessary. Video source boxes can also be rearranged by dragging them into different positions with a mouse.
The corresponding video sources can be selected by using a standard switcher, with a mouse, or, in the case of touch-screen displays, simply by touching the desired source.
Many of today's switchers use macros to program complex special effect sequences and even CG information. Some switchers allow for the storage of clips (audio and video segments) that can be inserted on demand into programming.

Chroma Key
Earlier, we mentioned a type of keying called luminance key, where the keying effect is activated by the brightness or luminance of the video that you are keying in. But, as we saw when we discussed virtual reality sets, it's also possible to base keying on color (chroma).
In chroma key a particular color is selected for removal and another video source is substituted in its place.
This type of keying is commonly done during weathercasts, where a graphic is inserted behind the weather person. (Note photo on the right.)
In this picture the man on camera is looking at a monitor off camera on our left, using it as a guide to know where to point on the green chroma key background. The result is shown on the HDTV monitor at the right of the photo.
Although any color can theoretically be used in chroma key, royal blue and a saturated green are the most commonly used. Most of the visual effects we see in video production are done with chroma key.
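Conceptually, the effect is a per-pixel substitution. The sketch below assumes NumPy arrays in RGB order and a green backdrop; real chroma keyers add edge softening, spill suppression, and other refinements.

# A minimal chroma-key sketch: wherever green clearly dominates, substitute the
# second video source (the weather graphic). RGB channel order is assumed.
import numpy as np

def chroma_key(foreground, graphic, threshold=60):
    fg = foreground.astype(np.int16)
    r, g, b = fg[..., 0], fg[..., 1], fg[..., 2]
    green_backdrop = (g - np.maximum(r, b)) > threshold   # pixels that are "mostly green"
    out = foreground.copy()
    out[green_backdrop] = graphic[green_backdrop]
    return out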

Software-Based Switchers and Effects
Most software-based switchers use the hardware-based switcher that we've discussed as a graphical model.

The NewTek TriCaster Studio™ above is a long way from the first-generation software-based systems of 20 years ago. The system illustrated requires only a shoebox-sized interface, and the output can be displayed on a laptop or desktop computer and controlled by a keyboard and mouse.
Software-based systems can be easily and regularly upgraded when new software is written — an advantage you don't have to the same degree with hardware-based equipment.
With most software-based systems it's also possible to go far beyond basic switching and create a wide variety of visual effects.
________________________________________
Module 61
Updated: 04/06/2010



Multiple-Camera
Remotes

The most challenging, exciting, and demanding productions to do are multiple-camera remotes -- especially if they are done "live."
Live productions such as the Super Bowl and the Academy Awards may require 30 or more cameras, a few tons of equipment, and months of preparation.
But even covering a high-school football game or a homecoming parade with just a couple cameras takes skills beyond those needed for a basic studio production.
In the studio you have a tested, controlled, and even predictable environment. Once you leave the studio, things can get much more complex.
One of the most important steps in doing a successful remote production is the first one: doing a thorough on-location survey. Ten basic points to check on when considering a major remote production are covered in this article.


Deciding on Camera Positions
To show the camera positions decided upon by the director, a location sketch is generally attached to the facilities request form discussed in Module 59.
When deciding on camera locations, several things should be kept in mind. In addition to the obvious things, such as not shooting against the sun and not placing a camera in a position that would result in a reversal of action (crossing the line), there are some special considerations for stationary cameras.
If people suddenly jump up in front of your camera during a parade or the most exciting play in a game and block your shot, there may be little you can do. Members of the press or ENG camerapersons may also find that the camera positions you've selected are ideal -- which wouldn't be surprising if they are good ones. Even if they don't stand directly in front of your camera, they may block your shot from one or more angles.
And, there is another problem you may need to anticipate. If fans or spectators start jumping up and down in their excitement and shaking the camera platform, the resulting video may be unusable.
You might want to consider using a camera jib (pictured below on the left) to add dynamic camera moves to your production (and shoot over the heads of anyone who could be in the way of your camera shots). As we've previously seen, most camera jibs are considerably smaller than the one pictured below.



"Crane shots" used to involve large, heavy cameras and a camera operator to pan, tilt and focus the camera. However, today's light-weight video cameras can be totally operated with remote controls. (Note photo on the right above.) These developments, together with the dynamism they can add to production shots, has made camera jibs a staple of many productions, both in and out of the studio.
Often, cables are strung across the top or along the sides of a sports arena or concert hall for remotely controlled cameras that can travel back and forth along the cable. This is another approach to providing unique, high-angle shots.

On-Location Audio Concerns
Because of the noise common to remote locations, off-camera directional mics or personal wireless mics are almost always used on remotes. In the latter case, be sure to check for multipath reception and dead spots by having an assistant test each RF mic while talking and slowly walking through the area where it will be used.
Since mic problems are common, there should be back-up mics for each area that can be put into service at a moment's notice.
When mounting crowd mics (mics that will pick up audience or crowd reaction) make sure they cover a wide area, rather than favoring a few people closest to a mic.
Plan for the shortest distances possible for mic cables and avoid running them parallel to power cords where electrical noise might be induced into an audio line. In wet weather tightly seal up cable connectors with black plastic electrical tape.

Determining Lighting Needs
Once the lighting director visits the location, a list of needed lighting instruments and accessories can be drawn up. One or more lighting kits, such as the ones shown below, may be needed to light specific areas.


Finally, in case it rains or something else interferes with your original plan, have a "plan B" (and maybe even a "plan C") worked out for both your lighting approach and your basic production plans.
You don't want to have everyone assembled -- and perhaps expensive rented equipment on location -- with no backup plan in case an unforeseen event throws a wrench into your original plans.

Production Communication
In a regular studio production crew members typically get to know the basic routines associated with programs and often don't even have to be prompted by the director. But, during a field production routines change and crew members will depend heavily on the director for second-by-second cues.
To maintain microwave or satellite signals at optimum technical quality, engineers at both ends of the remote link must be in contact so that video and audio level adjustments can be made.
Except in the case of some microwave and satellite feeds that accommodate PL audio channels, cell phones or land lines may have to be used by engineering and production personnel to keep in touch with the station or production facility.
Production personnel at both ends of the link must be able to coordinate commercials and station breaks. These normally originate from file servers at the station.
Interrupted foldback (IFB) lines are used to communicate with on-camera talent. Unlike live ENG reporters who wear only a small, single IFB ear piece (ear bud), announcers for sports events generally prefer padded, noise-canceling earphones that cover both ears.
In their normal mode, one or both earphones carry program audio. When a brief message needs to be relayed to an announcer (preferably when he or she is not talking) the audio on just one of the earphones can be interrupted for the message.
A director may need to notify an announcer to go to a commercial or tell a color commentator that a replay of a specific play is ready for playback.
Although PL headsets are generally plugged into cameras, in some cases extra PL line drops (added outlets) have to be installed in field locations to accommodate production personnel who are not working close to a camera. And, of course, maximum mobility is possible if these crew members use wireless PL systems.
Because of the importance of PL communication, it's highly desirable -- some would say absolutely essential in live broadcasts -- to have a fully functional standby (backup) PL line that everyone can instantly switch to if problems develop in the primary system.

The Equipment Inventory
The remote survey form and facilities request form should be used as a guide in deciding on the production equipment that will be transported to the remote location. An equipment list should be carefully drawn up and then double-checked as the equipment is packed.
Don't forget extra lamps for lights, extra mics and mic cords, extra headsets, etc. It's a rare remote in which some piece of equipment doesn't fail.
________________________________________
Module 62
Updated: 04/06/2010



Single-Camera
Production


For more than a century, Hollywood has been making a single film camera look like the work of several cameras working simultaneously.
Like many things in Hollywood, it's actually a bit of movie magic.
At first, film directors didn't have a choice; there was no way to synchronize multiple film cameras on a single scene. Although this started out as a seeming limitation, it actually turned out to be a creative advantage, an advantage that most multiple-camera video producers don't share.
The difference is based primarily on how film is shot. In film-style production each scene and camera angle is set up and rehearsed until the director is satisfied. Actors, lighting directors, makeup artists, audio people, etc., only need to concentrate on one scene at a time.
Although it's a time-consuming and tedious process, it provides the opportunity for maximum technical and artistic quality.

In contrast, in typical video dramas actors often have to memorize lines for a complete production, and they may even have to go through the whole production without stopping.
Lighting, audio, make-up, etc., have to work for long shots and close-ups, and for a variety of different camera angles.
You may recall from the lighting modules that the best lighting is limited to one camera angle.
But when people have to be lit and shot from three or four angles at the same time, as they normally are in multiple-camera video production, there will invariably have to be some compromises.
In film production many "takes" may be necessary before directors feel they have the best possible take. (Recall that a take is a short, discrete segment of action.) Some film scenes are shot dozens of times before a director is satisfied.
In film, scenes are shot from different angles and at different distances and film editors can choose from a multitude of takes. It becomes the editor's job to cut the best takes and scenes together, giving the appearance of one continuous action sequence photographed from different camera angles. With film, editing decisions are typically spread out over weeks, if not months -- ample time to reflect, experiment, and reconsider before final decisions are made.
Much in contrast, editing decisions in live or live-on-tape video productions are done in real time on a second-by-second and minute-by-minute basis. There is seldom the opportunity to look back, rethink, and revise.
This link discusses film vs. videotape and this link takes you to a quick look at the development of motion picture production.

Film-Style Dramatic Production
Although film-style (single-camera) video production has been used in news and documentary work for decades, until recently its use in video dramatic productions has been limited.
Many made-for-TV movies are still shot on film, but in most cases that film is immediately transferred to video after processing and all subsequent postproduction work is done with the video version.
With video projection in theaters for feature-length films emerging, the move to video for all types of dramatic production can't be too far away. If nothing else, pure economics will drive the transition. Within the last few years numerous feature "films" have been shot with high-definition video equipment.

Advantages of Single Camera
Film-Style Production
One of the advantages of single-camera (film or video) dramatic production is that scenes don't have to be shot in sequence. In fact, seldom does a script's chronological sequence represent the most efficient shooting order. The final sequence of scenes is arranged during editing.
In order of importance, the following should be considered when planning the shooting sequence of a single-camera production:
• all shots involving specific talent/actors (starting with the highest paid) should be shot as close together in time as possible, regardless of script sequence

• all shots at a particular location should be shot at the same time

• all shots requiring specific production personnel should be shot at the same time

• all shots requiring specialized production equipment, such as special cameras, lenses, microphones, and generators should be shot at the same time
As an example let's consider just one dramatic scenario -- a couple meets, falls in love, gets married, and after 20 years, starts fighting fiercely.
In an effort to start over, they decide to return to the hotel room where they spent their first romantic night. Unfortunately, one of the partners finds out that the other had an affair in the room. They start arguing again, and in a final rage, one partner kills the other. (Granted, not a very pretty scenario, but it'll have to do for this example.)
For scheduling efficiency it's desirable to shoot the scenes of their first shy lovemaking in the same hotel room (and possibly on the same day) as the scenes of their vicious arguing and fighting.
You can already see the challenge for the actors involved. Plus, while you have the lights, sound equipment, etc., set up, you can also get the shots of the affair that took place in the room -- probably to be added in the form of a flashback.
We'll assume that changes in the hotel room will be minimal, except for aging of the walls, furnishings, etc. The bigger challenge will be to age the actors appropriately. Not to worry; make-up people are pretty good at this kind of thing.
In the final version of the film these scenes will be separated by other story elements. But, as you can see, it would be much more efficient if all of the hotel scenes were shot at the same time. (We'll return to our unhappy couple in a moment.)

The Master Shot
When dramatic video is shot in the single-camera style, many film conventions apply. (We introduced some of these earlier in our discussion of general video production, but here we're concentrating on the steps in single-camera production.)
First, we have the cover shot (normally called the master shot in film), which is a wide shot showing the full scene or acting area.
This shot is useful to show viewers the overall geography of the scene and for bridging jumps in continuity during editing. More specifically, the master shot or cover shot is used to:
• show major changes in the scene's basic elements

• cover major talent moves, including the entrance or exit of actors

• periodically remind viewers of a scene's geography (referred to as reestablishing shots)

• and whenever needed during editing to momentarily cover the action when a good medium shot or close-up is not available
In dramatic video and film production many directors start out by shooting a scene, beginning to end, from the master shot perspective.
Once this shot is filmed, the director repositions the camera for the medium shots and close-ups of the various actors. For these the actors once again repeat all their dialogue.
To accommodate the new camera distances and angles these setups often require changes in lights, microphone positions, and sometimes even make-up. Obviously, all this has to involve changes that will (that should) go unnoticed when all of the takes are cut together.
Some directors shoot the scenes in the opposite sequence: close-ups, medium shots, and then master shot.
However you do it, the series of setups associated with a scene is commonly referred to as coverage. (Remember, some terms may have different meanings in film and video, so don't be surprised if you see some of these terms used in different ways.)
Continuing with the scenario we've been discussing, let's consider the restaurant scene where the man in the ill-fated marriage originally proposed to the woman.
In single-camera film-style shooting the three camera positions indicated are actually one camera that is moved to each position.
Although directing approaches can vary, let's look at one possibility.
First, we run through the entire dialogue for the scene from camera position #1. We can use this wide shot as a master or establishing shot, and thereafter whenever we need to reestablish the scene, cover bad shots on camera positions #2 or #3, or just to introduce visual variety.
Next, we run through the entire scene again from camera position #2 as the man repeats his lines.
From this position we can get over-the-shoulder shots or close-ups. Finally, we do the same thing all over again from camera position #3.
The actors must be careful to make the same moves in the same way on the same words in their dialogue. Otherwise, the words and actions in different takes will not match and that will make it very difficult to cut between the various takes.
When we finish, we'll have at least three complete versions of the scene to choose from during editing.
The obvious editing approach would be to use a close-up of each person as they speak. But, as we've noted, often a reaction shot is more telling. For example, it might be better to have a close-up of the woman's reaction as the man "pops the question."
We would probably also want to get close-ups of the ring, the wine glasses clinking together in a toast, etc.

Working With Actors and Talent
Film directors in the era of silent films could shout instructions to actors while scenes were being shot. The director's role, especially in television, is quite different today.

Part of the art of directing is bringing out the best on-camera performance in actors.
A good director finds an optimum point between forcing the actors to follow his or her own rigid interpretation and giving them absolute freedom to do as they wish.
The optimum point between the two extremes will depend largely on the experience of the actors and the approach of the director.
During read-throughs or table readings (the informal group sessions where the actors initially read through their lines) directors should carefully observe the character interpretation that actors are developing.
If the actors know the story and have developed a feel for their parts -- which they should if they are good actors -- the director should at least initially allow them the latitude to go with their interpretations.
If the director decides this is clearly at odds with what he has in mind, then he should skillfully and diplomatically suggest another interpretation.
Although the director is in charge and is responsible for getting the performance he or she envisions, directors who have limited experience with actors will want to "tread lightly" until they understand the acting process and the personality of specific actors.
Directors who have taken acting classes or who have acting experience have a definite advantage.

Inventing "Business"
During rehearsals the director, along with the actors, will decide on the basic actions and business. (Business refers to the secondary action associated with scenes. This would include fixing a drink, paging through a magazine, etc.)
Scripts generally do not describe actor business, but it can influence camera shots, setups, and editing.

Single-Camera vs. Multiple-Camera Production
The single-camera, film-style approach offers many important creative advantages in dramatic production. As we will see in the next section, this approach is also valuable in shooting news and documentary pieces.
But this approach is also time-consuming, and in TV production time is money. When time or budget limitations demand a faster and more efficient approach, the video producer must rely on multiple-camera production, the topic of the next section.
Assigned readings:
The Quintessential Element in video production and
12 Guidelines for Effective Videos.
In the next section we'll take up news and documentary production. Even if you don't have an interest in news or documentary work, the principles that will be outlined are important to other types of production.
________________________________________
Module 63
Updated: 05/04/2010
Whenever the people are well informed, they can be trusted with their own government.
--Thomas Jefferson

Part I



News and
Documentary Production


We now start three modules on news and documentary production. You will note that much of the information applies to more than just news and documentary work.
We also feature it because this area tends to be highly competitive and, consequently, uses the most innovative production techniques.
________________________________________
If there's a war, a disaster, or a major civil disturbance somewhere in the world, television news will be there.
To be successful in today's highly competitive, 'one person does everything' news and information production world, you must know more than just how to run a camcorder.

Those who produce TV news and documentaries collectively hold the keys to much power and influence. For this reason we'll spend some time investigating this television genre.
Although the printed word can be powerful, as we've so often seen in the last 50 years, seeing images, especially on TV, makes happenings much more real.
At the same time, keep in mind that George Lucas, one of the most successful producers of all time, said, "It's very foolish to learn the how without the why."
In news and documentary work the "why"-- the context of what we are seeing -- is especially important.
________________________________________

During the events surrounding the September 11, 2001, terrorist attacks on the East Coast of the United States, where more than 2,000 people died, TV news brought the nation and much of the world together in feelings of outrage and sorrow. The same happened in July 2005 after the London terrorist attacks.
It was during these times that another application of video moved into prominence in news -- the Internet.
The major news organizations now assemble a wide range of live video, photo collections, animated displays, and interactive maps for Internet users.
________________________________________
Where the Public Gets
Its News and Information
Note on the chart on the right that about half of the general population now gets much of its news and information from the Internet.
In mid-2008, Zogby International put the breakdown at 48% for the Internet and 29% for television. (This includes general information, not just news.)
For young people alone the Internet percentage is considerably higher.
However, things appear to be changing -- and rather rapidly.
In looking at just a single year (2007 to 2008), we can see from the chart below how the popularity of the various news media is shifting.
You will note that the mainstream news media including network and local TV news (in red below) slipped in popularity while the Internet and cable news (in green) have more than made up the difference.

Sources: Arbitron, Audit Bureau of Circulations, comScore Media Metrix, Nielsen Media Research and stateofthemedia.org.
________________________________________
The number of people who get their news from the Internet and cable news channels jumped an average of 30% in just this one year.
Although the number of broadcast TV viewers for news declined, with the advent of Internet outlets such as YouTube™ the public's exposure to video in general has more than made up the difference.
The Internet has also sparked another dimension in news and information: blogs.

The Internet
Internet Blogs
Blogging has created a million eyes watching over the shoulders of journalists.
-Matthew Felling, media director of the
Center for Media and Public Affairs, Washington.

Blogs -- short for web logs -- are viewed by about 30% of Internet users and all major news organizations. The writers of blogs use their web sites to post news they uncover, photos and videos, personal reactions to events, rumors, and even their own personal diaries.
Blogs can be highly opinionated and include unsubstantiated information. Even so, the more valued ones are often the source of leads that the mainstream media develop into major stories. The following link will take you to a list of major blogs, including a comprehensive list of mainstream news sources.
As part of their news coverage the mainstream media now regularly feature blogger reports and even interviews with the more respected bloggers. TV news often features sites such as YouTube™, MySpace™, and Facebook™. More recently Twitter™ has caught hold and "tweets" have become an instant news and information source. (These things are covered here.)
Many blogs and Internet sites contain unsubstantiated rumors and just pure fiction. There are at least two Internet sites that attempt to debunk rumors and widely repeated half-truths: Snopes.com and FactCheck.org.


Network and cable news channels encourage viewers to send in photos and video stories. Instructions for doing this are included on their sites.*

Internet News and Information
Top Internet News Sites
(All ages, in thousands of users)
CNN 23.5
MSNBC 20.1
Yahoo News 19.9
Gannett Newspapers (including USA Today) 17.9
AOL News 16.7
Knight Ridder Digital 11.0
Internet Broadcasting Systems 10.8
New York Times.com 10.1
Tribune Newspapers 8.6
ABC News Digital 8.3
Hearst Newspapers Digital 6.3
Associated Press 6.1
Fox News 5.4
Washington Post.com 5.4
________________________________________
Keep in mind that Internet use is related to education and age -- the younger and better educated tend to use the Internet more. This explains part of the discrepancy between the rankings listed above and the over-the-air TV ratings, where FOX News typically leads.
It may also help explain why the PBS Internet site has the most visitors (mid-2008 statistics).
Top Five Broadcast Internet Sites
(All ages, in Percent of Total)
Rank Web site Domain Market Share
1 PBS Online www.pbs.org 24 %
2 ABC www.abc.com 19%
3 NBC www.nbc.com 18%
4 CBS www.cbs.com 18%
5 FOX (includes several standalone program sites) FOX 17%
Young people represent the mainstream media consumers of the future, so it's also important to look at media use by this segment of society. A web site that has summary information on all of the news media is The State of the News Media.
With all this as a background, let's look at some of the tools for the production of news and information programming -- whether it's being produced for standard broadcasting, cable, or the Internet.

The Difference Between
ENG and EFP
Electronic newsgathering (ENG) is a part of electronic field production (EFP).
Although in all-digital operations we're starting to see the initials DNG used for digital newsgathering, we'll stick to "ENG" for this discussion.
Electronic Field Production (EFP) includes many other types of field productions, including commercials, music videos, on-location dramatic productions, and various types of sports coverage. EFP work generally provides the opportunity to ensure maximum audio and video quality.
In ENG work the primary goal is to get the story. In 90% of news work there will be time to ensure audio and video quality, which is what the news director and producer will expect.
But conditions are not always ideal in news work, and if compromises must be made they are made in audio and video quality, not in story content.
The most-watched and celebrated television news story in history was shot with one low-resolution black-and-white video camera -- not the quality of video that you would think would make it to every major TV network in the world.
The video was of mankind's first steps on the moon.
Although the quality of the footage was poor, no TV news editor said to NASA, "You've got some interesting footage there, NASA, but we'll have to pass; the quality just doesn't meet our technical standards."
In a democratic society news and documentaries also serve an important "watchdog" function. Not only do they tend to keep politicians and other officials honest, but they have also brought to light countless illegal activities. Once such things become public knowledge, corrective action often follows.

The Influence of Broadcast News
We can more fully appreciate the power and influence of TV news when we consider the lengths to which some people and nations go to control it.
As we have seen countless times, the news media are the first target for those who want to control the people of a country. South Africa and the Philippines are two examples that we've previously cited.
Although censorship is often justified as a way of protecting values or ideals, history has repeatedly shown that censorship leads to a suppression of ideas and often to political, military, or religious control.
Today, there are many countries that censor, or at least try to censor, broadcast news, books, magazines, and the Internet. Although the stated justification is often to protect moral values, the list of censored materials sometimes includes the web pages of The New York Times, the Washington Post and The Los Angeles Times. You can draw your own conclusions about the real intent.
It might be assumed that things are different in the United States, since we have The First Amendment to the Constitution guaranteeing free speech. However, the United States has a long history of censorship attempts. Over the years many books have been banned in the United States.
Even though broadcast news has problems with credibility, as the bearer of "bad tidings" TV news often gets complaints from people who at least unconsciously confuse the medium with the message. Thus, the messenger (TV news) is blamed for information that some viewers find distressing or that runs contrary to the beliefs they hold.
There is no doubt that most of TV news in the United States, especially in the big cities and at the network level, is ratings driven.
Thus, stories that will grab and hold an audience are favored over those that in the long run may be much more consequential. Stories that are "visual" are favored over those that are static and more difficult to explain or understand.
A baby beauty contest or a dog show may win out over coverage of a city council meeting or an international trade conference. Dramatic footage of a spectacular fire will typically get more air time than a story of an international trade settlement that will affect millions of people.
Given the preferences of viewers who are constantly "voting" on program popularity with their TV remote controls, a news director (whose job largely depends on maximizing ratings and station profits) may have little choice but to appeal to popular tastes.
As media conglomeration spreads, with more and more media outlets being owned by a handful of huge corporations, news is emanating from fewer and fewer sources.
Even now it's alleged that corporate self-interest shapes decisions on what will and will not be covered.
At the same time, news is highly competitive, and outlets that bypass or downplay certain stories because they may negatively impact advertising profits or corporate prestige may find that their credibility drops with viewers. This route is unwise if for no other reason than that it will eventually impact news ratings and, subsequently, profits.
But, there is also this: A large percentage of the audience still gets most of their news from their favorite TV news station. If TV news bypasses certain stories because they may be unpopular or not easily understood, the viewers may never know. (How can you miss something if you don't know about it in the first place?)

Documentaries That Changed
Thinking and Sparked Action
A documentary is a factual production, one that generally incorporates interviews with the people involved with the subject and actual footage of what has taken place.
The dictionary would add, "from documents....expressing things as perceived without distortion of personal feelings, insertion of fictional matter, or interpretation."
The hard-hitting, hour-long documentaries, such as CBS's "Harvest of Shame," which won many awards and sparked social reform in the United States, have all but disappeared in mainstream commercial television.
They have lost favor because they produce low ratings and are expensive and time-consuming to produce. Plus, they often step on the toes of influential individuals and corporations, and that can upset network sponsors and even spark lawsuits.
In their place on the commercial networks are typically the softer, safer, human interest and crime story mini-documentaries featured in some of the popular news magazines.
PBS, which does some excellent documentaries, is an exception, as are some of the special interest cable and satellite channels. These sources represent an important means of getting a message across to a segment of the population that, according to ratings analyses, tends to be better educated and often part of the so-called "decision-making group."
Before we dismiss the audience for documentaries as limited, we need to remember that a surprising number of documentaries have had mainstream appeal -- even to the point of making an impact at movie box offices.
Even before its release on DVD, the controversial Fahrenheit 9/11 generated revenue comparable to popular mainstream films. An Inconvenient Truth, the 2006 film on another controversial issue, won the Oscar for best documentary in 2007. The film cost $1 million to produce and within a short time had generated $50 million in revenue.
For the first time you can purchase a video camera at your local electronics store with the hope of producing a professional documentary -- or, as we've also seen in some cases, even an independent dramatic film that can end up in theaters.
It's not easy, of course, but people who know what they are doing are regularly doing it. Even a short segment on YouTube recently prompted a network documentary.

Handling Controversial Subject Matter
When handling controversial subject matter, broadcast television is different from many of the feature films noted above because it must attempt to show balance.
Although broadcasters no longer have a legal "equal time" mandate from the FCC, the airwaves still belong to the public. With the exception of religious views, which can legally go unchallenged, the FCC expects networks and stations to present opposing views -- especially if they represent major factions.
At the same time, views on "bias" have changed in recent years. For example, a recent court case against FOX alleging bias in its news was lost when, among other things, the court noted that people can now turn to the Internet and other sources of news. At the same time, the effect of incestuous amplification must be considered.
Even so, since "biased" is a word that you don't want to hear about your work (especially if you plan to broaden your employment opportunities), you don't want to promote your own view on an issue while failing to seek out opposing views.
Let me speak personally for a moment. As a person who spent many years in news (newspapers, radio, and TV) I had to confront this issue very early in my career. I can recall becoming upset and emotionally involved in stories involving the unfair or illegal treatment of people. Around the newsroom I was known to start sentences with, "We've got to do something about...."
When a seasoned journalist saw what was happening (and that it was affecting my objectivity) he passed on some advice that helped me over the years.
He said,
Worry about your job and not somebody else's. Your job is simply to uncover the facts — as many as you can on both sides of the issue. The less emotionally involved you are the better you'll be able to do your job.
Let the politicians, preachers, public officials, or whoever, do something about what you find out. That's their job.

Part of your responsibility as a newsperson is to bring out the various sides of an issue. This means you allow each side to state their views as strongly and convincingly as they can. Not only is it the professional thing to do, but it will also add interest and controversy to your news stories.
If you keep an open mind right from the beginning, you may uncover facts that put issues in a whole new light. Again, speaking from many years of experience in news, I often found that my initial views on issues dramatically changed after I uncovered facts that were not commonly known.
In speaking to potential spokespersons for TV news pieces you need to explain the nature of the story. You also want to carefully document your attempts at finding opposing views. This will protect you both legally and professionally.
In news pieces you have to rely on the telephone to set up interviews. If key people refuse comment or refuse to be interviewed, some producers send these people registered letters, so that after the piece is aired they can't suddenly claim they were denied the opportunity to present their side.
At the same time, keep in mind that when an issue is being litigated, an attorney may have advised the parties not to comment, a fact that should also be mentioned. In case you missed it in Module 55, the basic do's and don'ts of interviewing can be found here.
Some of the greatest problems in our society have occurred when the news media bowed to public, political or economic pressures and simply didn't do their job. This is covered in When the Watchdog Goes to Sleep.


There is evidence that the Internet has significantly changed the reading habits of young people. This has resulted in a disadvantage in taking reading tests and college entrance exams. At the same time, the abilities of "the Internet generation" give them some significant professional advantages. These issues are discussed in The Internet's Impact on Reading Habits and Abilities.
Related Feature Films
The films below relate to news and documentary work and can be rented from a source such as Netflix. They can either be viewed privately or, if time permits, used in a classroom. Note: They are R-rated for language.
• Nothing But the Truth, a dramatic and engrossing film based on a true story telling how protecting confidential sources can sometimes have profound effects. It stars Kate Beckinsale, Matt Dillon, and David Schwimmer, among others. Alan Alda's appeal before the U.S. Supreme Court is reason enough to rent this film. More information can be found in this blog piece.
• Welcome to Sarajevo. If you are interested in being a foreign correspondent, you should consider Welcome to Sarajevo, starring Stephen Dillane and Woody Harrelson. The film, which is based on a true story, makes use of actual news footage to very dramatically (and very graphically) show what war correspondents face.
• Live From Baghdad -- Action drama starring Michael Keaton showing how CNN got exclusive television coverage of the opening U.S. attack on Baghdad during the 1991 Gulf War. The film explores some of the ethical issues inherent in 24-hour journalism. Although fictional, it's dramatic and realistic, and based on actual events.
________________________________________
There are many agencies that monitor news freedom and attempts to censor news. One of these, which specializes in student issues, is the Student Press Law Center in Arlington, Virginia.
________________________________________
* Still photos and video can be transmitted directly from cell phones, or with the help of special software, videos can be edited before being uploaded. Software such as this facilitates uploading from a variety of different sources.

In case you missed it earlier, a moving account of how one person with a video camera affected people around the world is detailed in the tragic story of Neda and the Power of Video.

Module 63
Updated: 05/04/2010
Part II

News and
Documentary Production

We are at the beginning of a golden age of journalism -- but it is not journalism as we have known it. Media futurists predict that by 2021, citizens will produce 50 percent of the news....

- Online Journalism Review Report on Participatory Journalism





ENG Personnel
The number and type of positions involved in producing a daily newscast will vary from two or three people in a very small station to more than 100 in Toronto, New York, Los Angeles, or Tokyo.
Although responsibilities and titles can vary among stations, generally the news producer is the person who is directly in charge of the newscast.
In this digital, file server era, the role of the news producer has changed. Typically, he or she puts together the list of segments for each newscast based on the stories available.
The director then checks the segments to make sure they are ready for air and calls for them as the news is broadcast. The person who responds to the director and operates the switcher during the broadcast is the TD, or technical director.
Larger stations have segment producers in charge of specific stories or newscast segments. Some stations will have an executive producer who oversees the producer(s).
As the title suggests, the ENG coordinator starts with the story assignments made by the assignment editor and works with reporters, ENG crews, editors, technicians, and the producer to see that the stories make it to "air."
ENG coordinators must not only thoroughly know their studio and location equipment, but also understand news, which brings us to...

Uncovering Truth
Ultimately, the job of the journalist — especially the investigative journalist — is to uncover the truth about situations and explain that truth to an audience in a clear and succinct manner.
Even when there seems to be a major injustice involved, it's not the responsibility of the reporter to be an advocate of a particular viewpoint, only to bring all of the related facts to the public's attention.
In the case of complex stories and situations, this does not exclude the necessary interpretation of the facts.
In mid-2002 two major stories were reported in the U.S. press: the molestation of hundreds of children by clergy and the largest corporate bankruptcy in U.S. history. In both cases the incriminating facts had been successfully hidden from the public as the situations continued to get progressively worse.
Had the truth been uncovered and publicized earlier, something could have been done to head off the pain and suffering that a great many people had to subsequently endure.
This includes the many additional children who were molested and the scores of people who lost all of their retirement funds while some corporate executives pocketed millions of dollars.
In both cases it was the journalist's job to uncover the facts that people were rather successfully hiding and bring these facts to the public's attention; in other words, to fulfill their role as "the watchdogs of a democratic society." Generally, public exposure is all that is needed to initiate corrective action.
Advice From Mom
"Whistle blowers" who report wrongdoing often have a difficult time.
If they report it, they may face the wrath of influential people; if they don't they may find it difficult to "live with themselves." (And in some cases not reporting known illegal activities is a criminal offense.)
Here's a recent example of a highly controversial case of whistle blowing that went world-wide.
Before it was made public, Army reservist Joseph Darby had a photo CD graphically documenting what he considered to be the torture and abuse of Iraqis by U.S. personnel at Abu Ghraib prison. He reportedly agonized for months over what to do.
Finally, without disclosing the exact nature of what was bothering him, he called his mother from Iraq, and she gave him advice that few experts in law or ethics could match. She said:
"I would remain true to myself, because the truth sets you free. And truth triumphs over evil."

Video Journalists (VJs)
Today, we commonly see "one-man bands" in the coverage of television news; i.e., one person doing everything: camera operator, reporter, sound person, and editor.
In case you are wondering what the term "one-man band" refers to, it originally referred to a man who played multiple musical instruments at the same time. In the case of the person on the left, however, we have a one-woman band.
A slightly more modern interpretation is when an on-camera reporter shoots the basic story, then sets up a camera on a tripod, focuses on a mark on the ground, tilts the camera up to his or her height and locks it, puts on a mic and checks the audio, rolls the recorder, and then standing on the mark delivers the opening and closing to the piece.
Once back at the studio, the same person edits the piece and does the voice-over narration.
This has led to the term, video journalist (VJ), a single field reporter who writes, reports, shoots and edits stories.
It's not easy, but it saves hiring extra people. Thus, it's more important than ever to understand the entire production and news process.

Covering News vs. Making News
Scientists say that when you observe an event you in some way change it. Leaving the esoteric concepts of theoretical physics aside, we know that the presence of news reporters and cameras not only changes events, but it can even create news. An example of how this can take place happened one quiet morning in this writer's professional career.

Reporter's Checklist
Broadcast news is a highly competitive business and in the rush to get a story on the air it's sometimes tempting to guess at facts or use information from a questionable source.
However, errors in stories not only damage a station's credibility but they can derail a reporter's professional future. Here are five points to keep in mind when writing news stories.
1. Question those who claim to be a witness to an event and confirm that they really were in a position to see what happened. See the blog, "When Everybody Gets It Wrong."
2. Use a second source to double-check information that seems surprising or may be in doubt -- especially if it could put any person or agency in a bad light.
3. Double-check all names, titles, and places, and, when necessary, write out the pronunciation of names phonetically.
4. When writing the story, do the math on numbers. If a telephone number or address is involved, make very sure they are accurate.
5. Make sure that sound bites selected during editing accurately reflect what the person being interviewed meant.

News Producer's Checklist
Once reporters turn in their stories and a news producer or director takes over, many decisions must still be made before the stories are ready for broadcast.
Among other things, the stories must be reviewed for balance, lead-ins (story introductions) must be written, and appropriate graphics must be prepared to support the stories.
You may recall that in Module 55 we discussed some important considerations in editing news pieces.

News Bias
Conservatives think that TV news has a liberal bias and liberals feel that news has a conservative bias. News being a human endeavor, total objectivity is impossible, of course. When you analyze bias complaints you are apt to conclude that bias is defined as "any view that differs from mine."
Although the media is often seen as having a liberal bias, it has been shown that most of the large broadcast operations are owned or managed by individuals who, almost without exception, hold views that are politically and socially to the right of center.
Bias can stem just as much from what TV news reports as what it doesn't report.
When it comes to politics, some individuals go to great effort to keep certain things from being known. For example, it has been documented that many embarrassing government documents that have nothing to do with national security are marked "classified" simply to keep the information from the public.
To help address this issue, the Freedom of Information Act (FOIA) was passed, which gives citizens and reporters access to some government documents.
However, not only is the process of obtaining documents fraught with red tape and delays, but key information is often blacked out, and in 2008 two-thirds of the requests were refused.
The question is, are the words of Patrick Henry, the prominent figure in the American Revolution (remembered for his "Give me Liberty, or give me Death!" speech), still valid:
The liberties of a people never were nor ever will be secure when the transactions of their rulers may be concealed from them.
-Patrick Henry

Various independent agencies monitor the media for bias. A weekly program that examines all of the news outlets from the standpoint of possible bias and problematic reporting is Reliable Sources, broadcast Sunday mornings on CNN.

At Times, A Dangerous Profession
Throughout the world, and even in the United States, reporters have been imprisoned or killed to keep their stories from being aired.
The Committee to Protect Journalists said that in 2008 more than 100 journalists were jailed. Of these, 45 were freelancers working for small news outlets with limited ability to bring pressure to bear on their captors.
For example, in mid-2008 two young American women were stopped by North Korean border guards and sentenced to 12 years in a labor prison for trying to do a story on refugees for a small cable channel. Given the conditions in North Korean prisons, some likened this to a death sentence. Responding to world-wide pressure, the North Korean government released them in 2009.
According to the Committee to Protect Journalists, between 1992 and 2001, 399 journalists were killed "because of their work." By 2007, more than 100 journalists had been killed in the Iraq and Afghanistan wars. In the last decade more than 1,000 journalists have been killed around the world.
To prepare journalists for work in war conditions, a realistic training facility has been set up in Strasburg, VA. The course includes first aid, finding directions with a compass, recognizing and dealing with land mines and roadside bombs, plus information on ballistics.

Suffice it to say, investigating and breaking important stories often carries a degree of professional and personal risk. At the same time, this is the way awards are won and professional careers are advanced — and, far more importantly, wrongs are rectified and needed social change is instituted.
Living Dangerously is a blog piece based on a classroom experience.
Those who feel that covering wars from the battlefield is a man's job need to consider the story of Lara Logan, a young woman who is considered one of today's most successful foreign correspondents.
Very much related is a disturbing and captivating book that you won't soon forget -- Breathing the Fire by CBS television and radio correspondent Kimberly Dozier.
________________________________________
 Journalism.org - A comprehensive site of tools and information for journalists.
 State of the Media - A comprehensive analysis of media today.
 RTNDA, The Radio Television News Directors Association, the leading organization for broadcast news.
 The United Nations News Center - critical updates on developments around the world.
 The RTNDA Code of Ethics.
 RTNDA Guidelines for covering news events
________________________________________

Module 64
Updated: 05/16/2010

The world is a dangerous place, not because of those who do harm, but because of those who look at it and do nothing.
- Albert Einstein

Part III




News and
Documentary Production


Twelve Factors in
Newsworthiness
Those involved in broadcast news must understand 12 factors that constitute news value or newsworthiness.
¤ timeliness
¤ proximity
¤ exceptional quality
¤ possible future impact
¤ prominence
¤ conflict
¤ the number of people involved or affected
¤ consequence
¤ human interest
¤ pathos
¤ shock value
¤ titillation component
1. Timeliness: News is what's new. An afternoon raid on a rock cocaine house may warrant a live ENG report during the 6 p.m. news. However, tomorrow, unless there are major new developments, the same story will probably not be important enough to mention.
2. Proximity: If 15 people are killed in your hometown, your local TV station will undoubtedly consider it news. But if 15 people are killed in Manzanillo, Montserrat, Moyobamba, or some other distant place you've never heard of, it will probably pass without notice. But there are exceptions.
3. Exceptional quality: One exception centers on how the people died. If the people in Manzanillo were killed because of a bus or car accident, this would not be nearly as newsworthy as if they died from an earthquake or stings from "killer bees," feared insects that have now invaded the United States.
Exceptional quality refers to how uncommon an event is. A man getting a job as a music conductor is not news—unless that man is blind.
4. Possible future impact: The killer bee example illustrates another news element: possible future impact. The fact that the killer bees are now in the United States and may eventually be a threat to people watching the news makes the story much more newsworthy.
A mundane burglary of an office in the Watergate Hotel in Washington, DC, was hardly news until two reporters named Woodward and Bernstein saw the implications and the possible future impact. Eventually, the story behind this seemingly common burglary brought down a U.S. President.
5. Prominence: The 15 deaths in Manzanillo might also go by unnoticed by the local media unless someone prominent was on the bus—possibly a movie star or a well-known politician. If a U.S. Supreme Court Justice gets married, it's news; if John Smith, your next-door neighbor, gets married, it probably isn't.
6. Conflict: Conflict in its many forms has long held the interest of observers. The conflict may be physical or emotional. It can be open, overt conflict, such as a civil uprising against police authority, or it may be ideological conflict between political candidates.
The conflict could be as simple as a person standing on his principles and spending a year fighting city hall over a parking citation. In addition to "people against people" conflict, there can be conflict with wild animals, nature, the environment, or even the frontier of space.
7. The number of people involved or affected: The more people involved in a news event, be it a demonstration or a tragic accident, the more newsworthy the story is. Likewise, the more people affected by the event, whether it's a new health threat or a new tax ruling, the more newsworthy the story.
8. Consequence: The fact that a car hit a utility pole isn't news, unless, as a consequence, power is lost throughout a city for several hours. The fact that a computer virus found its way into a computer system might not be news until it bankrupts a business, shuts down a telephone system, or endangers lives by destroying crucial medical data at a hospital.
9. Human interest: Human-interest stories are generally soft news. Examples would be a baby beauty contest, a person whose pet happens to be a nine-foot boa constrictor, or a man who makes a cart so that his two-legged dog can move around again.
On a slow news day even a story of fire fighters getting a cat out of a tree might make a suitable story. (Or, as shown here, a kid meeting a kid.) Human-interest angles can be found in most hard news stories. A flood will undoubtedly have many human-interest angles: a lost child reunited with its parents after two days, a boy who lost his dog, or families returning to their mud-filled homes.
10. Pathos: The fact that people like to hear about the misfortunes of others can't be denied. Seeing or hearing about such things commonly elicits feelings of pity, sorrow, sympathy, and compassion. Some call these stories "tear jerkers."
Examples are the child who is now all alone after his parents were killed in a car accident, the elderly woman who just lost her life savings to a con artist, or the blind man whose seeing-eye dog was poisoned.
This category isn't just limited to people. How about horses that were found neglected and starving, or the dog that sits at the curb expectantly waiting for its master to return from work each day, even though the man was killed in an accident weeks ago.
11. Shock value: An explosion in a factory has less shock value if it was caused by a gas leak than if it was caused by a terrorist. The story of a six-year-old boy who shot his mother with a revolver found in a bedside drawer has more shock (and therefore news) value than if the same woman had died of a heart attack.
Both shock value and the titillation factor (below) are well known to the tabloid press. The lure of these two factors is also related to some stories getting inordinate attention, such as the sordid details of a politician's or evangelist's affair—which brings us to the final point.
12. Titillation component: This factor primarily involves sex and is commonly featured—some would say exploited—during rating periods.
This category includes everything from the new fashions in women's swim wear to an in-depth series on legal prostitution in the state of Nevada.

News Sources
Broadcast news comes from:
• the local reporter's primary sources
• news services such as the Associated Press
• media outlets, such as newspapers, radio and TV stations
• press releases provided by corporations, agencies, and special interest groups
The world's largest newsgathering association, the Associated Press (AP), operates bureaus in 120 U.S. cities and in more than 130 foreign countries. The AP is a nonprofit corporation that is owned by its 1,400 member papers. The AP supplies text, photos, audio feeds, and videos to thousands of media outlets.
Newspapers, which have been hit hard by the economic downturn, have not only been cutting staff, but in place of the expensive membership in AP, some are turning to a new and less expensive source of print news: CNN. CNN, which has been expanding its news operations in both its U.S. and foreign bureaus, now includes a wire service to newspapers.
Although not as large as AP, United Press International (UPI), which was started more than 100 years ago, uses a variety of media platforms including streaming video, blog technology, and high-resolution photos. This is all uploaded directly to their website. The UPI site is updated 24 hours a day.
Reuters, another major news-gathering organization, has a team of several thousand journalists in 200 cities in 94 countries, supplying text in 19 languages.
This organization started in 1850, and even used homing pigeons as part of its original news links. Today, almost every major news outlet in the world subscribes to Reuters. Like the other major news organizations, Reuters has lost numerous correspondents in the Iraq war -- a war that has claimed more journalists' lives than World War II.
Is this loss of life worth it? That's a topic for debate, but recall what Thomas Jefferson said: "When the people are well informed they can be trusted with their own government." We've seen throughout history that relying on censored material, "managed news," or carefully crafted news releases does not result in the public being well informed.

Internet Research
With billions of pages of information available, reporters now rely heavily on reputable Internet sources in researching stories. They also consult newspaper archives, or stories that were previously published in newspapers.
And then, as we've noted, there are the Internet blogs. The writers of reputable blogs have become a significant social and political force in our society. Many of these writers are featured on TV news and interview programs.
Computerized Newsrooms
Today, broadcast stations have computerized newsrooms and the steady stream of news from these services is electronically written onto a computer hard disk. Using a computer terminal a news editor can quickly scroll through an index of stories that have been electronically stored.
Some news editing programs, such as the one illustrated below, allow users to bring up wire stories from the newsroom computer (shown on the left) and rewrite them, or copy segments directly into the news script being written (shown on the right).


Computer programs in the newsroom are used to --
• store a steady stream of news copy and video from wire services
• provide key word search capabilities for wire copy, Internet sources, and archived stories
• facilitate the writing of stories (note illustration above)
• call up stillstore pages of graphics
• create and call up CG (character generator) pages of text
• program the sequence of stories, video, and graphics (i.e., the complete newscast) on file servers
• provide teleprompter outputs
• instantly rearrange news stories and recalculate times to accommodate last-minute changes -- even while the newscast is on the air (see the sketch after this list)
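To make that last point concrete, here is a minimal sketch -- in Python, with invented story names and running times -- of the timing bookkeeping such software handles automatically when a rundown is rearranged. It is an illustration only, not the interface of any real newsroom system.

```python
# A minimal sketch (illustrative only) of rundown timing: rearrange stories
# and instantly recalculate the running time of the newscast segment.
from dataclasses import dataclass

@dataclass
class Story:
    slug: str
    seconds: int          # running time of the package or reader

rundown = [
    Story("Warehouse fire", 95),
    Story("City council budget", 40),
    Story("Weather tease", 15),
]

def total_time(stories):
    """Total length of the segment, formatted as minutes:seconds."""
    total = sum(s.seconds for s in stories)
    return f"{total // 60}:{total % 60:02d}"

print("Before:", total_time(rundown))          # Before: 2:30

# A last-minute change: drop the tease and move the fire story to the end.
rundown = [rundown[1], rundown[0]]
print("After: ", total_time(rundown))          # After:  2:15
```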
Some newsroom computer systems can be programmed to switch video and audio sources to correspond to programmed cues in the teleprompter text.
Television stations affiliated with a network and O-and-O stations (those owned and operated by a network) receive daily afternoon and evening satellite news feeds provided by network reporters and affiliated TV stations. Since most of these stories are not used on the network's nightly news, they make good regional, national, and international segments for local newscasts.
Independent stations (those not affiliated with a network) have television news services they can subscribe to -- the largest being the Cable News Network (CNN).
Whatever the source, the news feeds are recorded for review by the local TV news producer or editor. Stories selected for broadcast are normally saved to a video server or assembled on videotapes and "rolled into" the local news as needed.
Regional, national, or even international stories can often be developed from a local perspective.
As examples, a major event that takes place in a foreign country can elicit reactions from local people of the same nationality; a crime wave in an adjoining county may cause local people to react; or a shakeup in a New York corporation may impact employees or related businesses in the station's area.
Balance between local, regional, national, and international stories must be considered, along with the important element of visual variety -- which in this case involves a balance between ENG segments and stories that are simply read on-camera with supporting graphics.
Although the anchor point for most newscasts is a TV studio, TV audiences like the visual variety and authenticity associated with news segments done outside the studio. Newscasts are now routinely being anchored from foreign countries that dominate the night's news coverage.
The File-Based Paradigm
With the advent of server or file-based newsrooms and production facilities, the approach to creating productions is changing.
Five or so years ago documentaries tended to start with "words" (text or a written outline) that told the story, and then video would be found or shot to support the words. Now, with video servers and file-based newsrooms and production facilities there is often a large depository of video on hand (or readily available) organized by title, date, content and segment time. It is often easier to pull up, organize and assemble the most effective video from a general outline, and then write the "words" (narration) to explain (when necessary) and act as the "glue" to hold it all together.
With the file-based paradigm (approach, system) picture often precedes text. The "picture" can come from the facility's storehouse of video segments, from reporters, from specialized news and Internet sites, or from the public via sources such as YouTube and Facebook.
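As a rough illustration of this paradigm, here is a minimal sketch -- in Python, with invented clip names, dates, and keywords -- of the kind of archive lookup a file-based newsroom makes possible before a word of narration is written. It is a conceptual example, not any real media asset-management system.

```python
# A minimal sketch (illustrative only) of a file-based clip archive:
# clips indexed by title, date, keywords, and length, searchable by keyword.
from datetime import date

clips = [
    {"title": "Flood aerials", "date": date(2010, 4, 2), "keywords": {"flood", "aerial"}, "seconds": 42},
    {"title": "Mayor presser", "date": date(2010, 4, 30), "keywords": {"flood", "mayor"}, "seconds": 65},
    {"title": "River cleanup", "date": date(2009, 9, 14), "keywords": {"river", "volunteers"}, "seconds": 38},
]

def find_clips(keyword, newest_first=True):
    """Return clips tagged with the keyword, most recent first by default."""
    hits = [c for c in clips if keyword in c["keywords"]]
    return sorted(hits, key=lambda c: c["date"], reverse=newest_first)

for clip in find_clips("flood"):
    print(f'{clip["date"]}  {clip["title"]}  ({clip["seconds"]} sec)')
```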

Setting Up A Typical
On-Location News Interview
For better or worse, interviews are the basic staple of news and documentaries.
There are two basic ways of handling an interview: one designed for an extended interview and one for a short interview segment, the kind that is typical for TV news.
1. For an extended interview you could start out by lighting and micing the set for the "A" and "B" camera positions at the same time, and setting up cameras in both positions, as shown in the illustration.
The position B camera can then get close-up shots of the reporter and over-the-shoulder shots across the back of the interview subject. Even when the person being interviewed is speaking, this will provide reporter reaction shots and shots that can be used as inserts to cover edits in the dialogue of the person being interviewed.
Camera position A is focused on the person being interviewed and provides the same type of shots from this angle.
During editing you always have the choice of two camera angles, which means you have much more creative control. Even so, this approach requires much more set-up time for shooting the interview and editing time to put it together.
2. For a short interview it's easier and takes less equipment to first light and mic camera position "A." Then after you get all of your A-roll footage, move the camera to position "B," mic the reporter, and move your lights to the appropriate position for this (reverse) angle.
In the latter case the camera is first set up in position "A" and focused on the interview subject. The reporter asks all of his or her questions and the responses are recorded on what we've called an "A-roll." Note that both close-ups and over-the-shoulder shots are possible from this angle.
Then the camera is moved to position "B." With the camera focused on the reporter, all of the questions are then asked over again.
This time, however, the interview subject does not answer the questions. In fact, if you can do without the over-the-shoulder shots, the interview subject doesn't even have to be there at all. The reporter simply looks at a "spot on the wall" behind where the person was sitting and asks the questions again.
This results in a B-roll of only the questions, which can be used as needed when the final version is assembled in editing. Remember that a five- to eight-second pause should separate each question, especially if you are using videotape. Reporter reaction shots or "noddies," which we discussed in the editing section, are also recorded on the B-roll.
During editing, the goal will be to condense things as much as possible and still remain true to the subject's answers. When you cut out an unnecessary segment of an answer, you can cover the resulting jump cut with a "noddie," an insert shot, or a cutaway.
Sometimes a reporter's question will be obvious in an answer and you can save time by not using the question. Remember, the faster you can move things along without sacrificing clarity, the better.
One of the most difficult aspects of editing an interview, especially when considerable editing and rearranging has to be done, is to achieve smooth linking from one audio segment to the next. This includes preserving the brief pauses that normally occur in conversation.
Although editing approaches differ, for interviews most editors first concentrate on audio. Once they have a tightly edited "radio program," they go back and cover the video jump cuts with insert shots, reaction shots, and cutaways.
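Here is a minimal sketch -- in Python, with invented clip names and timecodes -- of the edit-audio-first approach described above: trim the answer, then cover the resulting jump cut with a B-roll insert. It is purely illustrative and not tied to any particular editing system.

```python
# A minimal sketch (illustrative only) of an edit decision list for an
# interview: tighten the audio first, then cover the jump cut with a
# B-roll "noddie" insert. All timecodes and clip names are hypothetical.

a_roll_answer = [
    ("A-roll", "00:01:10", "00:01:25"),   # first part of the answer (keep)
    # ("A-roll", "00:01:25", "00:01:40"), # rambling middle removed -> jump cut
    ("A-roll", "00:01:40", "00:01:55"),   # end of the answer (keep)
]

# Video-only insert laid over the cut point so the picture never "jumps";
# the interview audio underneath continues uninterrupted.
b_roll_insert = ("B-roll noddie", "00:01:24", "00:01:42")

for clip, start, end in a_roll_answer:
    print(f"{clip}: {start} -> {end}")
print(f"cover jump cut with {b_roll_insert[0]}: {b_roll_insert[1]} -> {b_roll_insert[2]}")
```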
Lighting, audio, and camera placement for the typical office interview setup is explained in a bit more detail here.

Like Any Good Scout, Be Prepared
Most major news stories come up unexpectedly, and it's the reporter-videographer who's prepared to get to the scene of the news first that has the best chance of getting the story on the air first. "Scoops" of this sort can rapidly advance a career.
First, this means having a checklist of essential equipment drawn up so that you won't forget anything in the rush to get out the door. (There are many sad stories about crews driving 50 to 100 miles, only to discover they forgot to bring along an essential piece of equipment.)
Have batteries charged and all cameras and equipment ready to transport at a moment's notice.
Things happen very fast in a breaking story, so when you arrive on the scene, you should be able to start recording within a few seconds.
While you may not get video of the sudden appearance of an ancient sea monster (note simulated photo here), it should mean that you won't "drop the ball" on an important story.

More Hazards in News and Documentary Work
Although we touched on the hazards that reporters can face in the last module, we need to note here that seasoned reporters realize that subjects in front of your camera can go into a kind of "shock fog" during crises and cannot always be counted on to respond rationally.
Add to this the fact that a crew will be working under its own deadline-related pressures and it becomes obvious that special precautions must be observed.

Documentaries that
Changed Thinking
A moving documentary was recently aired showing the atrocities being committed on the people of Afghanistan by the Taliban, the radical religious group reportedly behind the 9/11 terrorist acts on the East Coast of the U.S.
Despite repeated denials by the Taliban that such things were going on in Afghanistan, Saira Shah used a hidden video camera to document widespread instances of torture, rape, amputations, and murder.
In a country where women were forced to beg for themselves and their children because they were prevented from working and even from going to school, this woman clearly risked her life to get the footage. As a result, she influenced world thinking about the Taliban. (Readers have added more examples to the Forum.)
Finally, if you ever need some ideas for news stories or documentaries that can make a positive difference, consider this.

It Takes Commitment and Courage
When we see news and documentary stories from hostile and dangerous locations, we seldom stop to think that in capturing the story a videographer took personal risks as great as or greater than those of the reporter you see on camera. (The reporters are often not even on the scene; they add their narration later in relative safety.)
Many of the stories, such as the one Saira Shah did, have had a profound impact on viewers.
The images of bodies floating in rivers in the Philippines, broadcast in a PBS documentary, started a chain of events that eventually toppled the country's corrupt dictator.
In 2004, Andy Levine penetrated high-security areas and used a camera hidden in his eyeglasses to document forced prostitution for a moving and disturbing documentary entitled The Day My God Died.
In each of these cases, and in many more like them, courageous videographers were willing to risk it all for what they saw as a greater good.
In doing a TV documentary the writer had a personal experience in this area. This is reported in the blog piece, "Murder and a Police Cover-Up."
Outside Reading Reminder: Since these modules deal with TV news, computers, and digital cameras, we have assembled a list of sources of news for each of these topics.

These articles are updated each day to provide the very latest information in each of these areas. Click on: Latest on PCs, Macs, Digital Cameras, Plus, Up-to-the-Minute News and Information.

There is also a readily available link to these articles on the TV production index page.

________________________________________
Module 65

Updated: 04/07/2010




Microwave, Satellite,
Fiber Optic, and
Internet Transmission


It does little good to have a great news story if you can't get it back to a local station, cable news outlet, or network to broadcast. To do that we need to know about the things in this module. First we'll look at the various ways TV signals are sent from one point to another.

Coaxial Cable
Although it's being slowly replaced by fiber optics, especially for transporting TV signals over significant distances, coaxial cable is still the medium of choice for simple video connections and for many CATV (Community Antenna Television) systems.
On the right of this photo you can see a standard coax connector, and to the left a cutaway view of the single copper wire inside.
Note that a white insulator surrounds the central copper wire and that this is surrounded by metal foil. Over this there is electrical shielding consisting of layers of braided wire and, finally, a rubberized outer coating.
Today, triax, or three-conductor video cable, is often used to meet extra video needs, instead of the coax (two-conductor) video cable shown above. A cutaway version of triax is shown on the left.
Two other types of coax-based connectors are shown below. On the left is a professional BNC video connector and on the right we see the popular RCA connectors, used in both audio and video.


Although coaxial cable has been used for decades to conduct TV signals, it has a number of shortcomings. Topping that list is the need to constantly re-amplify signals over distances, which can introduce various problems -- problems that fiber optic cables don't have.
Fiber Optics
The medium that has many advantages over coaxial cable is fiber optics (also called optical fiber or OF).
The medium of transmission is light. Light waves have an extremely high frequency and travel at 186,000 miles (300,000 km) per second in a vacuum (somewhat more slowly inside glass fiber).
A single OF cable can theoretically carry trillions of bits of information every second.
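To put that capacity in perspective, here is a rough back-of-the-envelope calculation in Python. The bit rates used are illustrative assumptions (a nominal 1 terabit-per-second fiber and typical compressed TV channel rates), not measurements of any particular system.

# Rough illustration of optical fiber capacity (assumed figures, not measurements).
fiber_capacity_bps = 1_000_000_000_000   # assume 1 terabit per second on a single fiber
sd_tv_stream_bps   = 4_000_000           # a typical compressed SD TV channel (~4 Mb/s, assumed)
hd_tv_stream_bps   = 20_000_000          # a typical compressed HD TV channel (~20 Mb/s, assumed)

print("SD channels per fiber:", fiber_capacity_bps // sd_tv_stream_bps)   # 250,000
print("HD channels per fiber:", fiber_capacity_bps // hd_tv_stream_bps)   # 50,000

Even with these conservative numbers, a single strand could, in theory, carry every channel a cable system offers many times over.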
The thickness of an optical fiber is only slightly larger than a human hair. The photo on the left shows a light conducting OF strand going through the eye of a needle.
The tiny, flexible glass or plastic fiber is coated, both for protection and to enhance its characteristics as a reflective lightwave guide.
Fiber optic cables normally carry numerous OF strands within a single enclosure.
Compared to a coaxial cable, optical fiber has ten advantages:
• Much greater capacity. The information carrying capacity of OF is thousands of times greater than a normal copper wire. Note on the right the comparison between an optical fiber link and a telephone cable with its hundreds of wires. Both have the same information carrying capacity.

• Low and very uniform attenuation (signal loss) over a wide frequency range. This greatly simplifies amplification of the signal.

• Virtual immunity to all types of interference

• No problems with leakage or causing interference with other signals

• Insensitivity to temperature variations

• Extremely small size

• Will not short out in bad weather or even in water

• Low cost
• High reliability. The fibers do not corrode or break down in moisture or salt air the way copper wires do.

• Light weight. Since they are not based on metal conductors, OF cables are lighter and much easier to transport and install.
As cable and telephone companies continue to move toward optical fiber, eventually home-to-TV-studio video transmissions may become as simple as hooking up your video equipment and punching in the right telephone number.

Microwave Links
In much the same way that a flashlight projects a beam of light from one point to another, microwaves can be transmitted along a straight, unobstructed line from a transmitter to a receiver. In the process the microwave beam can carry audio and video information.
Microwaves were originally only used in broadcasting for coast-to-coast network television and for studio-to-transmitter links. However, as remote broadcasts became more popular, TV stations saw an advantage in having field production trucks equipped with microwave dishes so that news stories, athletic events, parades, civic meetings, etc., could be covered live.

Small, "short hop," solid-state microwave transmitters and receivers, such as the one shown on the left above, can be mounted on lightweight tripods to relay TV signals from the field to a nearby TV production van.
The van then sends the signal to one of the city's relay points -- generally on top of a tower or tall building -- and the signal is then sent to the studio or production center. Note photo on the right above.
Microwave signals must have a straight, line-of-sight path. Solid obstructions, or even heavy rain, sleet, or snow, can degrade or completely obliterate the signal.
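Because microwave links are line-of-sight, the height of the transmitting antenna largely determines how far a signal can reach before the curvature of the earth gets in the way. Here is a minimal sketch using the standard radio-horizon rule of thumb; it ignores obstructions and weather, and the antenna heights are purely illustrative.

import math

def radio_horizon_km(antenna_height_m: float) -> float:
    """Approximate distance to the radio horizon in kilometers.

    Uses the common rule-of-thumb d = 4.12 * sqrt(h), which assumes
    standard atmospheric refraction and flat, unobstructed terrain.
    """
    return 4.12 * math.sqrt(antenna_height_m)

# A camera-top transmitter vs. a van mast vs. a downtown relay tower (heights are illustrative).
for height_m in (2, 15, 300):
    print(f"{height_m:>4} m antenna -> roughly {radio_horizon_km(height_m):.0f} km to the horizon")

This is why the relay points mentioned below sit on towers and tall buildings: a few hundred meters of height buys tens of kilometers of reach.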

TV Production Vans
The photo on the right shows an extendable mast with a microwave transmitter on the top used by production vans in covering news stories.
The microwave signal can be aimed at a dish at a local station, or the signal can be beamed to a relay receiver to be re-transmitted one or more times until it reaches its final destination.
The inside of a remote production van is pictured below. The van is a mini-production facility with camera control units, audio and video recording equipment, a satellite receiver, and videotape editing equipment.



Vans, Boats, Airplanes and Motorcycles
Although normal microwave signals go in a straight line, it's possible to use an omnidirectional (nondirectional) microwave transmitter to send audio and video signals over a sizeable receiving area.
With this approach signals can then be sent to a TV studio from helicopters, moving cars, boats, and, as shown below, motorcycles.
A complete "very mobile mobile unit" is shown on the left.
A cameraperson sits on the back of this specially equipped (and necessarily quiet) Honda motorcycle with an image-stabilized video camera.
While the motorcycle is moving, scenes can be transmitted live to the studio or recorded.
When possible, the motorcycle can be parked and the camera set up on a tripod.
Not only can this unit get to news scenes that remote vans can't, but it can get to them faster and, in terms of fuel costs, much more cheaply.

The Canobeam System
Although not in wide use, there is a point-to-point transmission system that offers a number of advantages over microwave. Since the Canobeam system relies on a high-powered light or laser beam to send its signal, no FCC license or special permits are necessary.
Canobeam is multi-channel and bi-directional, and can send audio and video signals more than one mile or about 1.6 kilometers with a quality that exceeds most microwave equipment. It has proven especially useful in countries where delays are common in getting permission to use microwave or fiber optic lines in covering news.
The same principle is used in Terabeam, a system being used to exchange e-mail and business data between businesses and buildings in large cities.
Although heavy fog, rain and snow can disrupt these beams, the systems have a number of reliability features built in that can take over and compensate for most problems.

Satellite Services
Satellites hovering about 36,000 kilometers (22,300 miles) above the earth relay most television programming to world viewers.
Each satellite or "bird" is composed of a number of transponders, or independent receive-transmit units.
Geosynchronous satellites orbit at the same rate the earth rotates, so they remain stationary in relation to the earth's surface. This obviously simplifies the job of keeping them within the range of both the uplink and downlink dishes on the earth.
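The roughly 36,000-kilometer figure isn't arbitrary; it falls out of basic orbital mechanics. A satellite whose orbital period matches the earth's rotation must sit at one specific altitude. Here is a quick sketch of that calculation using standard physical constants (rounding explains the small difference from the published figure).

import math

# Standard constants (approximate).
GM_EARTH = 3.986e14          # earth's gravitational parameter, m^3/s^2
EARTH_RADIUS_KM = 6378.0     # equatorial radius, km
SIDEREAL_DAY_S = 86164.1     # one rotation of the earth, in seconds

# Kepler's third law: orbital radius r = (GM * T^2 / (4 * pi^2)) ** (1/3)
orbital_radius_m = (GM_EARTH * SIDEREAL_DAY_S**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = orbital_radius_m / 1000 - EARTH_RADIUS_KM

print(f"Geostationary altitude: about {altitude_km:,.0f} km")   # roughly 35,800 km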
The reflector dish of a ground station uplink is shaped like a parabola, which is similar to the reflector of a powerful searchlight, the kind that can send a sharp beam of light into the night sky.
Signals reflected from the center element (note photo on the left) will hit the dish and then be sent upward on their 36,000-kilometer (22,300-mile) path to the satellite.
The signal from an uplink ground station is aimed along a precise path to the appropriate satellite.
As illustrated on the left below, once the signal is received, it's amplified, the frequency changed, and then it is sent back to the earth.
The footprint (coverage area) of the returning signal covers many thousands of square kilometers or miles of the earth's surface.



Within the footprint area, receiving dishes work in reverse of the uplink ground stations. The signal from the satellite is collected in a dish and directed toward the receiving element, as shown on the right above. This signal is then amplified thousands of times and fed to a TV receiver.

Satellite Distribution of Programming
Networks and TV production facilities routinely distribute their programming via satellite. This is how TV productions originating in the Los Angeles-Hollywood area are sent to the East Coast for network distribution.
Once the programs arrive on the East Coast, they are recorded, scheduled into the network lineup, and combined with commercials; then they are beamed back up to satellites for distribution across North America.
When the network-to-affiliate link is not being used to relay regular programming, it's used to send news stories, program promotion segments, and other broadcast-related segments to affiliated stations.
Stations not affiliated with a network can receive news and information from satellite news services.
Cable (CATV) companies also receive most of their programming from satellites. This includes both TV and audio services. Many TV and audio services (satellite "stations") are not broadcast over the airwaves, but are only available directly from satellites.
There are two classifications of satellites used in broadcasting:
• C-band satellites that use frequencies between 3.7 and 4.2 GHz, and from 5.9 to 6.4 GHz

• Ku-band satellites that use frequencies between 11 and 12 GHz.

C-Band Satellites
C-band was the first satellite frequency range to be widely used in broadcasting. Compared to Ku-band, C-band requires relatively large receiver and transmitting dishes.
Although dish size is not a major issue with permanently mounted installations, C-band dishes impose limitations for SNG trucks. (Satellite newsgathering trucks or SNG trucks are vans that have been especially outfitted to uplink ENG stories to a satellite.)
Compared to Ku-band, C-band is more reliable under adverse conditions -- primarily in heavy rain and sleet. At the same time, C-band frequencies are more congested and more vulnerable to interference.

Ku-Band Satellites
Because of the higher frequencies (shorter wavelengths) involved, Ku-band dishes can be about one-third the size of C-band dishes. Because Ku-band also has fewer technical restrictions, users can quickly set up satellite links and start transmitting. This is obviously an important advantage in electronic newsgathering.
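The dish-size difference follows directly from wavelength: a dish must be many wavelengths across to form a tight beam, so for a comparable beam the dish diameter scales with wavelength. A quick sketch, using representative frequencies from the band descriptions above:

SPEED_OF_LIGHT = 3.0e8  # meters per second

def wavelength_cm(freq_ghz: float) -> float:
    """Free-space wavelength in centimeters for a given frequency in GHz."""
    return SPEED_OF_LIGHT / (freq_ghz * 1e9) * 100

c_band_wl = wavelength_cm(4.0)     # about 7.5 cm
ku_band_wl = wavelength_cm(12.0)   # about 2.5 cm

print(f"C-band (4 GHz) wavelength:   {c_band_wl:.1f} cm")
print(f"Ku-band (12 GHz) wavelength: {ku_band_wl:.1f} cm")
print(f"Ratio: {c_band_wl / ku_band_wl:.1f}x")   # about 3x -- why Ku dishes can be roughly 1/3 the size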
Although many satellite services are scrambled (subscription based), there are several hundred free TV services ("stations") available on C and Ku bands. These include:
• religious programming (more than a dozen channels; mostly conservative and evangelical in nature)
• government programming (such as NASA's channel)
• home shopping services (such as the Home Shopping Network)
• various types of educational-informational services (such as the Wisdom channel)
C-band satellites typically carry 24 TV channels and have names such as Galaxy 9, Satcom C3 and Morelos 2. For example, the Florida Sunshine Network is on Satcom C1, Channel 24.
Because of the limited life of satellites (not to mention their occasional malfunctions), C-band and Ku-band satellite assignments occasionally change without notice. Several newsstand publications are available which represent a type of "TV Guide" for home satellite viewing.
Although most satellite TV programming is in English, Spanish or French, satellite programming is also available in dozens of other languages.
A single C-band or Ku-band satellite channel is capable of carrying both a TV signal and one or more separate audio channels. Taking advantage of this fact are more than 100 free audio services, most in stereo and many without commercials. Some are standard broadcast stations that distribute their signal by satellite. Examples are CBM-AM in Quebec and WQXR-FM in New York.
In recent years, many C-band and Ku-band satellite services have moved from analog to digital signals. This has made it necessary for many home viewers to upgrade their satellite receivers.

Satellite-to-Home Services
For people living in rural areas out of the range of local TV and CATV services, a satellite receiver may be the only way they can get TV programming.
Originally, these were all C-band and Ku-band services. However, most people now subscribe to digital satellite-to-home services, such as the DISH Network and DirecTV, which use their own satellites and frequencies.
These services have a capacity of more than 50 simultaneous digital TV channels -- many of them in HDTV.
Once subscription fees are paid, the unique identifying number in your satellite receiver is uplinked along with the satellite TV signal. When your home satellite receiver receives this identifying number it unlocks the signal so it can be displayed on your TV set.
In late 2001, satellite radio was launched in the United States. More information on this can be found here.

Satellite Phone
From the large commercial satellite services we now turn to point-to-point satellite applications used in electronic newsgathering.
News agencies have been using satellite phone links to send audio and video from remote locations -- generally from third-world countries where standard satellite services are not readily available.
Although satellite phone links were originally just intended for voice transmission, it was found that a highly compressed video signal could also be sent on a standard audio channel or on a higher quality broadband channel. Because of the highly competitive nature of TV news, this technology has seen rapid improvement.
Even though the quality of satellite phone links leaves much to be desired, a satellite phone system is small enough to be put in the overhead bin of an airplane and, once in the field, it can be set up quickly. This is not the case with --

Flyaway Units
In the late '80s portable, freestanding satellite uplinks referred to as flyaway units were introduced for electronic newsgathering (ENG) work. (Note photo on the right.)
These units can be disassembled and transported in packing cases to the scene of a news story.
Flyaway units are used in remote regions, including offshore areas and third-world countries. Unlike satellite phone links, the flyaway units provide full quality video and audio signals.

Internet Transmission of News Stories
Once the story is on tape, on disk, or in a solid-state memory card, it can be sent through the camera's FireWire or USB-2 connection to its destination via a high-speed Internet connection.
Cybercafes or wireless Wi-Fi "hot spots," now in tens of thousands of locations around the world, can serve as transmission points.
In TV news, stories can be saved on USB drives (also called jump drives or thumb drives -- see photo) and then uploaded to an Internet FTP (file transfer protocol) site for downloading by the station for editing and use on the air.
A complete and relatively high-quality news segment can be stored on a two-gigabyte (GB) device, and these small drives can now hold at least 16 GB of information.
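As a concrete illustration of the FTP route described above, a field videographer (or an automated script) might push a finished segment to the station's server roughly like this. The host name, login, folder, and file name are placeholders, and any real station's workflow will differ; this is only a minimal sketch using Python's standard ftplib module.

from ftplib import FTP_TLS   # secure FTP; plain FTP also works where the station allows it
import os

LOCAL_FILE = "fire_story_pkg.mp4"     # hypothetical edited news segment
REMOTE_DIR = "/incoming/news"         # hypothetical drop folder on the station's server

def upload_segment(host: str, user: str, password: str) -> None:
    """Upload one news segment to the station's FTP drop folder."""
    size_mb = os.path.getsize(LOCAL_FILE) / (1024 * 1024)
    print(f"Uploading {LOCAL_FILE} ({size_mb:.0f} MB)...")

    with FTP_TLS(host) as ftp:
        ftp.login(user=user, passwd=password)
        ftp.prot_p()                  # encrypt the data channel
        ftp.cwd(REMOTE_DIR)
        with open(LOCAL_FILE, "rb") as f:
            ftp.storbinary(f"STOR {LOCAL_FILE}", f)
    print("Done -- the station can now pull the file for editing and air.")

# upload_segment("ftp.example-station.com", "field_crew", "secret")   # placeholder credentials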
Videos from cell phone cameras are now regularly sent to both social and news sites. Some of them (after being checked out and verified) have ended up on TV newscasts. Today, anyone with a cell phone camera is a potential news reporter. It's often a matter of being at the right place at the right time and knowing what you are doing.
In the next module we'll take up a totally different aspect of the cybercourse: legal issues.
________________________________________
Module 66

Updated: 05/04/2010
Part I



Legal and
Ethical Issues


Legal concerns are a major issue in video production, especially broadcast television.
What follows is drawn from U.S. law. Since CyberCollege and the InternetCampus reach students in more than 50 countries, readers elsewhere will find that the laws in their own countries differ. Even so, what's covered here represents some common legal concerns.
In any discussion of U.S. law it has to be kept in mind that not only do laws fill hundreds of books (which means that things aren't that simple), but they also vary from state to state. They can also change with new laws and court decisions.
In this module we'll touch on three areas:
• invasion of privacy
• access restrictions and rights
• libel and slander
In Part II of this topic we'll cover three more issues:
• staging
• copyright
• talent and location releases
Privacy: Public and Private Individuals
Even though the U.S. Constitution is the basis for our laws, the Constitution does not cover many issues that have come up since it was written and ratified in the late 1700s.
For example, the Constitution does not talk about a right of privacy or invasion of privacy. But, throughout the years courts have held that citizens need protection from the unwarranted or unjustified publication of images and information of a private nature.
When it comes to invasion of privacy, the laws that evolved make a distinction between private and public individuals. They also make a distinction between public places, such as streets, parks, and sidewalks, and privately owned areas, such as a store or a person's residence.
Once individuals enter the "public spotlight," either intentionally or through accidental circumstances, they are afforded much less legal protection. We only need look at the supermarket tabloids to see this.
At first, it might seem that everyone should be afforded full protection from the public disclosure of private information. But, the problem arises when that "private" information relates to illegal or immoral conduct. For example,
• If a man is convicted of child molesting, can he claim it's private information? If so, does he have a right to keep the press or Internet sites from disclosing that information (and the people in the neighborhood where he lives from knowing about it)?

• If a politician is found guilty of stealing money from the public treasury, do we have a right to know that (especially before the next election)?

• If an evangelist who regularly preaches against illicit sex has sexual affairs, can he claim that this information is private and should not be publicly disclosed?
In these cases, many people feel that, not only does the public have a right to know these things, but that the press has a responsibility to bring such things to the public's attention.
Recognizing the crucial role that a free press has in maintaining a democratic system, U.S. courts have generally been quick to protect the news media's rights to gather and disseminate information -- as long as the information is true.
But, what if that information is true but voyeuristic in nature and intended primarily to generate ratings or the increased sales of a publication?
It has come as a surprise to many journalists that disclosing true and verifiable facts about someone can be an invasion of privacy.
To be so, the information must:
• be published, broadcast, or in some way disseminated to an audience
• consist of information of a private nature that is deemed offensive to a reasonable person

• consist of information that's not deemed newsworthy or of legitimate concern to the public

Disclosing that a private individual has AIDS or is gay or lesbian may fall into this category, if such facts are not deemed relevant to any present newsworthy story.
Individuals have successfully sued news organizations when they disclosed information about mental retardation, plastic surgery, and in vitro fertilization -- information that a jury subsequently decided (in the specific circumstances involved) should have been kept private.
Juries are swayed by three factors:
1. How sympathetic they are to the particular plaintiff (complainant). Children and older people top that list; public figures rank near the bottom.
2. How extensive the intrusion is. Video ranks first, audio further down the list, and a narrative, especially if it doesn't specifically refer to the person claiming injury, ranks at the bottom.
3. How the reporter got the information. Juries tend to frown on reporters who were clearly trying to "dig up dirt," especially if they used questionable means to obtain the information.
All this assumes that the individual didn't freely disclose the information. If they did, or if the information could easily be obtained from public records, courts have generally not seen the disclosure of this information as being an invasion of privacy.

Intrusion
One type of invasion of privacy is intrusion, also referred to as intrusion on seclusion, or intrusion on solitude.
Overhearing and publicizing private conversations or broadcasting images taken from private property typically constitutes an invasion of privacy.
A general guideline is that if you are on public property when you obtain the information or images and you do not use a long telephoto lens or a highly directional microphone, an invasion of privacy case would be hard to prove.
This is because an average citizen could have easily witnessed the same thing.
Things change, however, when you trespass onto private property where you haven't been invited, or when some type of sophisticated surveillance equipment is used.
Reporters often audio record in-person interviews and telephone conversations.
Although asking permission to record an interview may intimidate some people, it makes it possible for you to double-check quotes, and it's cheap insurance in case the person later claims that he or she was misquoted.
Recording telephone conversations can, of course, be done surreptitiously.
Many state laws stipulate that only one party -- which would generally be you -- needs to know that the conversation is being taped. Some state laws, however, require that both parties know that the conversation is being taped. In either case it would be illegal for a third party -- excluding law enforcement authorities with a court order -- to tap into phone conversations.

Access
We now turn to one of the grayest areas of the law and one that is most often encountered by news people -- access to locations.
If the location is on public property, there's generally no problem, unless you're seen as interfering with police or public safety officials. Thus, photographing a public demonstration, disaster, or even a crime scene under these conditions should be okay.
This does not mean that someone won't object or try to stop you, even though they don't have the legal authority to do so. Sometimes people, including the police, don't want an event or action publicized.
There have even been arrests in some of these situations. Even though the charges generally can't be sustained, the arrest effectively stops the reporter from covering the event. Thus, in these rare cases, the intended result is achieved.
During the Vietnam war many protesters were arrested in Washington and put in jail (including one of my TV production students at the time) because the Nixon administration didn't want the scope of the antiwar feelings to be obvious to the American public.
The arrested protesters were released the next day when it was determined that they were engaging in a perfectly legal action.
Once you move to private property, you need permission, either from the owner of the property or his "agent" (the person renting the property), or from police.
Court decisions often hold that members of the press (with or without press passes) have no legal privileges beyond those granted to the general public.
At the same time, public officials often grant recognized members of the press special privileges. Government officials even issue "press credentials" for working members of the press.
News people may decide to go onto private property in the pursuit of information or pictures -- until they are specifically asked to leave. Interestingly, courts have held that videographers will generally be allowed to broadcast any footage taken before they were asked to leave.
Sometimes a reporter may feel that a story is worth the risk of being arrested, and under certain circumstances courts have held that the ends (getting a story) justified the means (trespassing on private property).
However, in recent years there has been a more restrictive attitude toward press privileges. We'll get back to this a bit later.
It had always been assumed that if a law enforcement official who has taken charge of a crime scene grants you permission to go onto private property, a trespassing case won't hold up. A recent court case provides an exception. In some rather unusual circumstances, a judge upheld a trespassing case against a network ENG crew -- even after an FBI agent specifically invited the crew to accompany agents into an apartment during a drug bust.

Guidelines for Intrusion
To conclude, let's look at some summary points (questions) that courts have deemed relevant in intrusion cases.
• Was what you heard or photographed also accessible to the average person standing on public property?

• Were you given permission by a responsible person to enter private property?

• Did you break the law when you could have gotten the information in a legal way?

• Was the information you got in the disputed circumstances newsworthy and of legitimate concern to the public?

• Was "prying" involved? (Prying goes beyond basic curiosity and moves into the area of offensive and inappropriate snooping into a private individual's personal life.)

• Is what you disclosed something that is generally agreed to be of a private nature?
• Would your alleged intrusion be deemed objectionable to a reasonable person?

Commercial Appropriation
Commercial appropriation (also known as misappropriation) involves an unauthorized use of an individual's or organization's prominence in order to benefit someone else.
The courts have recognized that well-known people acquire an identity that is of value and these people deserve to be protected from someone "cashing in" on that value without their consent.
Let's say you are doing a commercial for a restaurant or a health spa and you just happen to catch some well-known person in attendance. Obviously, the commercial would be much more influential if this person appeared to be endorsing this establishment. However, if you run the commercial without their permission, they could sue you. Celebrities often have sued and won damages in such cases.
If you were televising a public event and wanted to show general shots of the audience in attendance, there would be no problem, even if one of the members of the audience were well known. Individuals in this case are considered "background."
But, if one of the people in the audience was a well-known person and you appeared to go out of your way to bring this fact to the attention of the audience, you could be guilty of trying to "cash in on" the person's prominence.
If the public figure is officially participating in an event being covered, he or she can be considered a part of the event. In such cases the camera shots may dwell on the person as much as the director wishes.

Shield Laws
Most states have some form of a shield law designed to keep courts or judges from forcing news people to reveal confidential sources of information. At the same time, federal powers can supersede these laws. A thought-provoking and dramatic production that deals with this topic is Nothing But the Truth.
A well-known investigative reporter in Washington who has written many stories about corruption and wrongdoing in high places noted that without the assurance that news people can protect the confidentiality of their sources, few people would be willing to risk their own welfare in revealing inside information about crime and corruption.
Plus, revealing a person's name who "blows the whistle" on wrongdoing might easily endanger that person's welfare, or, in some cases, even their life.
When whistleblowers know that a reporter can be legally forced to break a pledge of confidentiality, they will probably think twice about tipping off reporters to wrongdoing.
For this reason many reporters -- some from the nation's top publications -- have chosen to go to prison rather than break a promise of confidentiality. Even though prison can disrupt a reporter's life in major ways, breaking a promise of confidentiality can also end the reporter's career in investigative journalism.
Because this issue strikes at the heart of a newsperson's obligation to expose corruption, we need to look at it more closely.
One of the latest court cases on this issue came in late 2004, when a judge found a reporter for an NBC affiliate guilty of contempt of court for not revealing the source of an FBI videotape that documented bribery and corruption in city government.
First, the judge levied a fine of $1,000 a day for every day the reporter refused to reveal his source. When that didn't work the judge brought the criminal contempt charge and threw the reporter in jail.
At the same time, the videotape resulted in people being convicted of their crimes and legal authorities feel that the reporter did not violate any law.
Reporters from The New York Times, the Los Angeles Times, Time magazine, the Associated Press, and cable news networks have all faced similar legal action in their attempts to bring to light wrongdoing.
In what has become the most celebrated case in history, an FBI source dubbed "Deep Throat" helped bring down a presidency for engaging in clearly illegal actions. The source had the assurance of the Washington Post reporter that his involvement would be kept confidential -- and it was for decades, until the person himself disclosed his involvement. (The award-winning movie, All the President's Men, dramatically documents this chapter in U.S. history.)
Today, many things further complicate this issue.
First, it's much more difficult to define "newsperson."
Although examples such as the ones cited above involve the mainstream press, some people now feel the term should extend to the Internet. So should bloggers be covered by shield laws? And if bloggers, how about people gathering information for "tell-all" books? In short, where do you draw the line?
Next, there is the very important legal concept of being able to "face your accuser." If your accuser is held to be "confidential" by a newsperson, how do you face them and dispute their charges?
All of these issues are now being debated in courts of law.

Defamation
Defamation is defined as the communication to a third party of false and injurious ideas that tend to lower the community's estimation of the person, expose the person to contempt or ridicule, or injure them in their personal, professional, or financial dealings.
Libel is defamation by written or printed word and is generally considered more serious than slander, which is defamation by spoken words or gestures.
Some recent court decisions, however, have removed much of the distinction between the two.
Local stations and networks have been sued for millions -- even billions -- of dollars over alleged slander or defamation.
In 1990, the median jury award against a news organization was $550,000. Only six years later this figure had risen to $2.3 million.
Because even the cost of defending a libel or slander suit is often several hundred thousand dollars, many stations and production agencies have insurance against libel and slander.
It is widely held that the injured person in defamation cases must be apparent to the audience, although not necessarily specifically named. The false statement must have been presented (or interpreted by an average reader/viewer) as fact, and not clearly intended as satire or fair comment.
Although negligence on the part of the journalist must generally be shown in these cases, so-called "honest mistakes" can also precipitate a legal action -- if it can be demonstrated that a false statement injured someone's good name, professional standing, or resulted in financial loss.
Negligence can range all the way from not taking the time to check facts or proofread copy before it was aired to the much more serious careless disregard for the truth.
Actual malice, which ranges from a careless disregard for the truth to an intent to cause injury, is the most serious form of defamation and results in more serious legal consequences.
The injured party in a defamation suit doesn't have to be a person; it can be a company or institution.
If you state that "all X-brand cars are lemons" or that a particular company produces food that will make you sick or kill you, you can expect to get a very official call from their lawyer.
Instead of saying that all X-brand cars are lemons, you would be much safer to say something that you could prove, such as "68% of all X-brand cars are in for repair within 30 days of their purchase."
In the case of the food, you could cite hospital records that indicate that 124 people in Peoria, Illinois were admitted to area hospitals after eating Miss Mollie Maple's Muffins.
But in both cases you want to make very certain that you can prove the accuracy of the statements.

False Light
False light is related to defamation. For example, if you are doing a story on drug addicts or prostitutes and film someone -- even in a public place -- who is neither, they can sue.

By placing an identifiable person within the context of a topic that is illegal or can damage their reputation, you create a false link in the minds of viewers. Rather than risking a court case to prove whether the person is or is not innocent, producers often pixelate faces to keep them from being recognized.
Confronting Lawsuits
It should be obvious that reporters and producers must carefully check any questionable material before broadcast or distribution. At one TV station the executive producer, the news director, and the station's attorneys view questionable segments.
If you are served a subpoena for alleged defamation, don't try to start explaining your way out of it. You can get yourself into deeper trouble. Also, never agree to hand over a tape or transcript of the segment in question unless ordered to by the court.
If things get this far, immediately turn the matter over to an attorney who specializes in this area.
And never make a comment regarding the truth of the challenged statement to anyone except your lawyer. If a reporter says, "I'm really sorry, I was in a hurry and I guess I just didn't check my facts," this could constitute an admission of guilt -- even a "reckless disregard for the truth," which could constitute malice. Cases have been lost after such admissions.

Corrective Statements
If a definite error in fact is discovered after it has been broadcast, you may be able to reduce damages by immediately airing a full corrective statement with an apology.
Although this may not eliminate a lawsuit for a serious offense, it may serve to reduce the damages awarded.
________________________________________
This file has some major summary points.
________________________________________
Module 67


Part II

Legal and
Ethical Issues


In this final section on legal and ethical issues we'll cover:
• staging
• copyright
• talent and location releases

Staging
Staging applies to ENG and documentary work and involves altering a scene, or broadcasting a reenactment of a news event, without telling your audience.
The motivation for staging can range all the way from an attempt to enhance the look of a scene to a blatant attempt to alter the truth.
If staged footage is broadcast and is found to represent an effort to misrepresent the truth, it can result in fines by the Federal Communications Commission (FCC), a lawsuit by an offended party, and a loss of credibility for a news organization -- not to mention severely damaging your professional status.
No matter what their personal feelings may be, professional news people need to present situations and viewpoints as honestly as possible.
"Truth" is easy to defend; slanting or "doctoring" a story isn't.
Although the latter route may be tempting, and may even be applauded by some viewers, in the long run it opens the door to all kinds of legal, ethical, and professional problems.
At the same time, ratings and profits often influence TV production content -- even to the point of "crossing the line" in these areas. There are some important things to consider in this required reading.


Staging also involves the reenactment of events. Sometimes this is deemed acceptable, sometimes not.
For example, if you are covering "the handing over of the gavel" to a newly elected officer during a meeting, you will frequently find that the people involved expect -- maybe even prefer -- to do the whole thing over again afterwards for the media.
This allows camera people to light the scene as they want, make sure no people are blocking camera angles, and arrange people so they can all be clearly seen. It is doubtful that the public expects authenticity in this type of situation.
But, there are other times when the public assumes they are seeing "the moment." If you reenact a critical moment in sports history when someone breaks the world high-jump record and you don't bother to inform your audience that what they are seeing is actually a warm-up or a reenactment, it's an entirely different matter.
Question: Is it unethical to simply enhance a scene by removing distractions on a desk, moving a coat rack out from behind someone's head, or setting up your own special lighting?
Although "purists" might argue that you are "tampering with the truth" if you change anything in a scene, most videographers routinely do these things when they see a need.
The dividing line is whether you are enhancing a scene for the sake of clarity and technical quality, or distorting the truth.

Using Comparable Footage
A related issue is the use of comparable footage, video that appears to be the event being reported, but is from an earlier time or even from a different place.
For example, you might be tempted to cut in some unused scenes from yesterday's forest fire to illustrate today's story on the same fire. Some would say, "A fire's a fire, what's the difference?"
Well, there is a difference, and the FCC has taken a dim view of this kind of substitution -- unless the fact is made quite clear to the audience. Simply keying the phrase "file footage" or an earlier date over the footage will suffice.

Copyrighted Materials
Music, photo illustrations, drawings and published text are copyrighted and cannot be broadcast or reproduced for distribution without clearance or permission from the copyright holder.
Under 1998 U.S. copyright revisions, copyright now extends for the life of an artist or individual copyright holder, plus 70 years. Copyrights owned by corporations are valid for 95 years. However, these time periods are subject to change.
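Just as a simple illustration of those terms (remembering that they are subject to change and that real cases hinge on many other factors), the year a work would enter the public domain can be estimated like this:

def us_copyright_expires(author_death_year=None, publication_year=None):
    """Rough estimate of when a work enters the U.S. public domain,
    under the simplified terms described above (many special cases exist)."""
    if author_death_year is not None:
        return author_death_year + 70       # individual works: life of the author plus 70 years
    return publication_year + 95            # corporate works: 95 years from publication

print(us_copyright_expires(author_death_year=1990))   # -> 2060
print(us_copyright_expires(publication_year=1950))    # -> 2045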
Using something that's copyrighted without permission can result in a $25,000 fine and one year in prison. And that's only for the first offense; things get worse after that.
For example, these TV production materials are protected by international copyright law and can only be used directly from the CyberCollege or the InternetCampus web sites.
Although they legally can't be copied or used for commercial purposes, one non-democratic country reproduced the materials for its own purposes and in the process deleted references to freedom of the press. (One disgruntled person in that country brought this to our attention.)
Sometimes, especially in the case of noncommercial television, permission to use copyrighted materials will be granted without charge or for on-screen credit. More typically, a fee must be paid to the copyright holder.
But, to protect yourself and your company or institution, make sure you get the permission in writing.
In the case of videos, you can feel reasonably safe using copyrighted material that will be viewed only by family members or a small group of people where no admission is charged.
However, if you are producing the video for broadcast or distribution or you intend to enter the piece in a video contest that is giving away prizes, you'll need permission to use copyrighted material, plus a signed talent release from on-camera principals. More on that later.
Text, photos, film, or video produced by the U.S. federal government do not fall under copyright restrictions unless they were done by an outside agency that used copyrighted material. It's best to check.

Fair Use
The fair use provision of U.S. copyright law allows copyrighted material to be used in limited ways for criticism, teaching, scholarship, news, or research without the permission of the copyright holder.
For example, if you were doing an educational retrospective of Michael Jackson's life and you said that Thriller remains the best-selling album of all time, you could bring in a short segment of one of the cuts to remind people of the music. However, you couldn't use a complete cut as theme music for your production.
It also makes a difference whether your production is intended for open distribution or limited to a closed, non-paying audience, such as a classroom.
Unfortunately, fair use is not well defined. We'll only get a clearer picture of what constitutes fair use after a number of court cases have addressed the issue.

Works in the Public Domain
A work is in the public domain when its copyright has expired.
Although many old music selections are in the public domain, you need to watch out for recent arrangements of older works that have come under new copyright restrictions.

Securing Rights to Music
This can be complicated.
Obtaining clearance to use a copyrighted music selection normally involves getting performance rights, mechanical rights, and synchronization rights.
• performance rights are required to use music in a "public venue," which includes radio and TV broadcasts.

This type of clearance is available from one of the music licensing agencies involved: ASCAP (American Society of Composers, Authors and Publishers), BMI (Broadcast Music Incorporated), and SESAC (the Society of European Stage Authors and Composers).

• mechanical rights allow you to record and play back the selection in a production as outlined in a licensing agreement.

• synchronization rights are required to use the music as part of a sound track.
Are you confused? Then you definitely have been paying attention, because the process is confusing.
What most producers do is turn the whole thing over to an agency like the Harry Fox Agency in New York, which was founded by the National Music Publishers Association.
This agency serves as an information source and clearing house for music-licensing matters.
When you get in touch with them, make sure you have all the information at hand on the specifics of music you want to use.
It's important to note that the standard performing rights license that a broadcast station typically pays for does not normally cover the use of music in commercials, public service announcements, and productions.
You need to check your license carefully to see what it does and does not cover.
If you are producing video for a non-profit or charitable organization, you may be able to get permission to use a music selection for free or for a token one-dollar fee. If this is how you intend to use the music, be sure to mention it when you contact one of these agencies.

Music and Sound Effect Libraries
Since music clearance is complex and cumbersome, many people opt to use audio libraries that include a wide variety of CD musical selections and sound effects.
Once the library is purchased, you can use the material over an extended length of time for most production purposes. This generally means the music comes with a master use clause that includes mechanical, synchronization, and performance rights.
Material in these libraries has been written or selected with the needs of the video and film producer in mind. With titles like "Manhattan Rush Hour," and "Serenity," you immediately know the nature of the musical selections.
One of the largest collections of sound effects, featuring some 2,500 effects on 40 CDs, is the BBC Sound Effects Library.
Under "Cars," for example, you will find sound effects such as windshield wipers, horns, various engines, a car stalling, doors slamming, windows opening, seat belts snapping, and a car skidding. Under "Babies" you will find crying, hiccups, gurgling, laughing, bathing, babbling, coughing, first words, singing, and tantrums.
Many postproduction houses put sound effects on their editing server. When an effect is needed, they just go into a master listing, find what they want and recall it instantly.
Thanks to digital electronics, each of these effects can be modified in endless ways to more perfectly meet needs -- speeded up, slowed down, filtered, and even reversed.
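As a small illustration of that kind of digital manipulation, the sketch below reverses a sound effect and speeds it up simply by rewriting the file with a higher playback rate. It assumes an uncompressed, mono, 16-bit WAV file named "door_slam.wav" (a placeholder) and uses only Python's standard wave and array modules.

import array
import wave

SRC = "door_slam.wav"          # placeholder: a mono, 16-bit uncompressed WAV sound effect
OUT = "door_slam_reversed.wav"

src = wave.open(SRC, "rb")
assert src.getsampwidth() == 2 and src.getnchannels() == 1, "sketch assumes 16-bit mono audio"
rate = src.getframerate()
samples = array.array("h", src.readframes(src.getnframes()))
src.close()

samples.reverse()              # play the effect backwards

out = wave.open(OUT, "wb")
out.setnchannels(1)
out.setsampwidth(2)
out.setframerate(int(rate * 1.5))   # a higher playback rate makes the effect faster and higher-pitched
out.writeframes(samples.tobytes())
out.close()

Commercial audio workstations do far more sophisticated processing, of course, but the underlying idea is the same: once the sound is digital, it's just numbers to be rearranged.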

Using Original Music
To get around many of the problems in the use of recorded music, many producers prefer original music. The advantages include:
• it solves clearance problems

• the music can be tailored to moods, pace and time requirements

• it eliminates the "emotional baggage" (mental associations) that often accompanies well known musical selections
If the music is relatively simple (possibly a guitar, flute or organ), or it's electronically synthesized (which at least to some degree most music is today), original music can be produced rather inexpensively. In the hands of an expert, a music synthesizer can create the sound of anything from a single instrument to an orchestra.
There are many groups that will compose original music for productions (for a price), including this one, which works with video and film students, and is totally Internet based. This Internet site, aimed at digital video producers, offers copyright-free music.
If you are at all musically inclined there are numerous computer programs that will create simple music with only minimal effort (and musical talent). Just one example is Creative's inexpensive Prodikeys PC-Midi. The combination computer keyboard and musical keyboard is shown on the right.
This file summarizes some key points about intellectual property.

Talent Releases
We noted in Module 66 that using someone's "likeness" (generally, a photo or a video of them) without his or her permission can get you into legal trouble -- unless the video is shot in a public place. (Bear in mind that places like shopping malls are not considered public if they are owned by private corporations -- which most are.)
By having clearly recognizable persons in your video sign a talent release or a model release you can be granted the permission you need. This step protects you in case the people involved later decide they don't want the footage broadcast, or want extra compensation. Here is a sample talent release.

Location Release
It may come as a surprise that you may need a release to use some locations in a video.
For example, you could not use a well-known amusement park as a setting for your commercial video without the permission of the property owner. Thus, we also have location releases. Even the famous and very public Hollywood sign in California can't be legally used to promote an unrelated commercial cause.
________________________________________
Bear in mind that the "once over lightly" treatment of the legal issues in these two modules is only designed to alert you to possible danger areas. Law libraries have thousands of books on these areas and there is still often uncertainty about what's legal and what isn't.
About the only thing we know for sure is that lawsuits are very expensive for all parties involved, and the best defense is no offense.
________________________________________
Module 68
Updated: 06/23/2009




Non-Broadcast
Television

Although broadcast television is the most visible part of the television business, in terms of personnel, equipment and facilities, non-broadcast production is actually the largest segment of this field.
Included in the category of non-broadcast television is institutional video, which includes corporate, educational, religious, medical, and governmental applications, and avocational television, which is associated with serious personal/professional applications.
Although the field of institutional video may not be as visible or glamorous as over-the-air broadcasting, average salaries are often higher, job security is better, working hours and conditions are more predictable, and there are often more perks (work associated benefits).

Institutional Video
Institutional video has proven itself in many areas. These include --
• a management-employee link It can be an effective tool for supervisors or management in reaching employees with information on policies, progress, or problems. This is particularly important if the institution has branches in diverse areas.

• instructional video In today's highly competitive and rapidly changing world, the ability to keep employees up on the latest techniques and developments is a major concern. Instructional videos are one answer.

• public relations Many institutions regularly create videos to explain policies or announce products, research developments, or major institutional changes.

• marketing While the mass media may be a cost-effective way of reaching a general audience, it's not the best way of informing a limited number of people about specialized products and services. Point-of-sale videos, often seen in the home improvement, make-up, clothing, and hardware departments of retail stores, are one example of this type of marketing.
Institutional television has been particularly effective in seven areas:
1. where graphic feedback is necessary Seeing something first hand is generally more effective than talking about it. This is particularly true when it comes to feedback on artistic work or athletic performance.
2. where close-ups are required to convey information The TV camera can make details and information obvious. This is especially true in medical television.
It's also possible to get cameras into hazardous and hard-to-reach places.
3. where subject matter can best be seen and understood by altering its speed Often, things cannot be clearly seen or understood without the use of slow motion or time-lapse (speeded up) photography.
4. where visual effects such as animation can best convey information Animated drawings, flowcharts, and even animated characters can often make concepts clear.
5. when it's necessary to interrelate a variety of diverse elements Television can pull together and interrelate events and objects so the total effect can be understood. As we noted in the section on editing, the selection and sequence of visual elements generates meaning and emotional response.
6. where it's difficult to transport specific personnel to needed locations Through television, experts are readily accessible to viewers in diverse locations.
7. when the same basic information must be repeated to numerous audiences over time It's more cost effective to use personnel to explain information once to TV cameras and then play the videotape to numerous groups thereafter.
As an example, let's say a company spends $15,000 producing a simple, 60-minute production designed to indoctrinate new employees to the company, its policies, and the various health and retirement plan options.
If 3,000 people view the video over a period of 3 years, the cost would be $5.00 per person. This can represent a major savings in cost and manpower, compared to having personnel repeatedly present the information to individuals or small groups over this time period.
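The arithmetic behind that cost-per-viewer claim is simple, but it's a useful exercise when pitching an institutional video. In the quick sketch below, the production cost and audience size are the same illustrative figures used above; the live-presentation figures are additional assumptions added only for comparison.

production_cost = 15_000       # one-time cost of the orientation video (illustrative figure from above)
viewers = 3_000                # employees expected to view it over three years
trainer_session_cost = 200     # assumed cost of one live presentation by a trainer
group_size = 20                # assumed average audience per live session

video_cost_per_viewer = production_cost / viewers
live_cost_per_viewer = trainer_session_cost / group_size

print(f"Video: ${video_cost_per_viewer:.2f} per viewer")                                  # $5.00
print(f"Live:  ${live_cost_per_viewer:.2f} per viewer (under the stated assumptions)")    # $10.00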

Presentation Formats
There are four basic presentation formats.
• the lecture format In its worst form the lecturer stands at a podium and uses an overhead projector or chalkboard.

Without the array of attention-holding audio and video embellishments normally associated with television, the success of this format rests entirely upon the skill of the on-camera talent to hold audience attention. The only advantages of the lecture format are that it's easy, fast, and inexpensive.

• the interview format Here a moderator interviews one or more experts on specific topics.

Although it's the mainstay of documentary programming, savvy producers strive to reduce the talking head component by adding as much B-roll footage as possible. Keep in mind that while executives may be effective in their jobs, they can come across as stilted and even inarticulate on camera.

• the documentary approach Here, ENG techniques are used to cover a topic from the perspective of the corporation. Unlike broadcast television, institutional documentaries are typically shown to many audiences over time.

• the dramatic format Although drama can be the most engaging way of presenting information, it's the most demanding and it presents the greatest risk of failure.

Probably the easiest form to pull off is a humorous skit where weak acting or production will be easier to overlook. Even so, for dramatic pieces it's worth the effort to try to find professional actors. Some will work for little or no pay just for the chance to gain professional credits and experience.

Holding Audience Attention
One of the findings that consistently emerges from studies on effective television programming is the need for variation in sound, visual information, and presentation style.
In commercial television, the commercials themselves provide change and give viewers regular "intermissions" from program content. Since non-broadcast productions don't have commercials to break things up, change and variation must be introduced in other ways.
Most viewers can't absorb more than eight to ten minutes of straight information at a time. Unless there's a change in pace, content, or presentation style, attention tends to drift.

Avocational Video
With professional-quality camcorders and editing equipment within the reach of many people, we are seeing a host of vocational and avocational applications. Here are a few examples:
• An insurance agent videotapes the contents of insured homes for evidence in case of loss.

• A psychiatrist uses a camcorder to treat anorexia. To help dispel the physical illusions they hold about themselves, he tries to get the patients to see themselves as others see them.

• An animal rights group videotapes graphic evidence of the inhumane treatment of animals. The tape ends up in a network documentary.
• A camp counselor videotapes the daily experiences of a group of scouts and sells the videotapes to parents.
• After doing a creative job of videotaping his sister's wedding, a man starts his own business producing videotapes of weddings.

• A law student earns tuition money by taking video depositions for law firms.
• A young woman videotapes graduation ceremonies and sells copies to parents.

• A college student videotapes segments from athletic events at area schools and sells them to local TV stations.
• A husband-and-wife team travels the world with a camcorder and then sells their videos to video libraries to be used as stock footage.
Here are examples on a more personal level.
• A homeowner videotapes the contents and personal belongings in his home to have as a record, in case of fire, theft, or natural disaster.
• A dying man records a complete will on videotape, talking personally to each person named.

• A family member records the embarrassing and dangerous antics of another family member who is regularly under the influence of alcohol. Mortified at seeing it, the person seeks treatment.

• By videotaping herself as if talking to a trusted friend, a young woman is able to more fully articulate fears and yearnings. When the tape is played back after a period of time, she is able to more objectively view her feelings and fears.

• An organization puts together a videotape explaining the advantages of building a cultural arts center and presents it to the city council.

• An animal lover videotapes inhumane conditions at a local animal shelter and shows the tape on a local cable channel. The public is outraged, and action is taken to correct the situation.

• Parents interview students about gang-related fears and play the tape in front of the school board. The school board decides to take some action.
Many of the points in this module suggest careers in this field, and this is the topic of the next module.
________________________________________
Module 69
Updated: 05/04/2010

Part I

Careers

What does it take to launch a successful career in a competitive field like broadcast television?
If I can speak personally for a moment, I have been involved in television for several decades -- as an announcer and so-called TV personality, as a producer-director of thousands of hours of TV programming (most of it live), and as a university professor.
In the latter capacity I watched some of my students work up through the ranks to become producers of TV series and feature-length films. Others found the going too rough, abandoned their dream, and found employment elsewhere.
What made the difference? Probably eight things.
1. Motivation In any competitive field you must really want to make it. This type of motivation does not waver from week to week or month to month; it is consistent and single-minded. In short, you must stay focused on your goal.
2. Personality Although admittedly a vague term, it encompasses several things. First, since television is a collaborative effort, it requires an ability to work with others to accomplish professional goals.
Included in this category is attitude. In this context we're definitely not talking about someone who "has an attitude." Quite the opposite. We're talking about the general demeanor of individuals, how they accept assignments, whether they are pleasant to work with, and how they take suggestions or criticism.
There is often considerable pressure in TV production, and thin-skinned individuals who can't detach themselves from their work and take constructive criticism are in for a bumpy ride.
3. Knowledge and skills Producers and directors look for individuals who know how to solve problems on their own, how to use the technology to its best advantage, and who can be relied upon to "make it work."
Excuses for not getting the job done right and on time are generally viewed as an admission of failure. Keep in mind that TV is a competitive business and employers know they can rather easily replace people who don't meet their expectations.
4. Creativity Although we've been trying to define this for centuries, it involves so-called thinking "outside the box" -- looking at things in new ways and getting your audience to see and experience things from a fresh, engaging perspective.
The more thoroughly you understand the television medium the better chance you will have of using it in interesting, creative ways.
5. Willingness to sacrifice for your goals In highly competitive fields the supply of job applicants exceeds the number of job openings. For starting positions this means that employers may offer low starting salaries.
Those who stick it out and "pay their dues" can end up working in a field that is exciting and satisfying. For many people, doing something they enjoy throughout their lives is more important than making more money in a job that they dread to face each morning.
For those whose honed skills are in demand, the financial rewards can eventually be very great.
But, if your main goal is to have a predictable, 9-to-5 job with optimum stability, the field of broadcast television will probably not be a good choice. There is much uncertainty in the field, and the hours you may have to put in can take a toll on a social life and marriage.
In doing documentary work you may be away from home for days or weeks at a time. In news, you may be called out on a story at any hour of the day or night. Some areas of news, such as being a foreign correspondent, can even be dangerous.
6. An aptitude for working with words and pictures Successful television writers, directors, and artists have an aptitude for images and an ability to visualize their ideas.
Although television is largely visual, it's still word-based. We have to be able to clearly communicate ideas to sponsors, cast, and crew in the form of proposals, scripts, and instructions. An ability to write and communicate well is directly related to success.
7. Reliability and an ability to meet deadlines If you can't be relied upon to get the job done within the assigned time, your chances of getting future assignments will rapidly diminish -- and eventually disappear.
8. Lifelong learning If you assume that when you get out of school you will know all you need to know for lifelong success, here's a news flash: that's not the way it works.
Although formal education is useful and it may enable you to "get in the door," most students say that it's only when they come face-to-face with on-the-job experiences that they really start learning about their profession.
And, it doesn't even end there.
The electronic media change very rapidly. It's the people who keep up with developments as reported by newspapers and "the trades" (professional magazines and journals; see below) that are in the best position to take advantage of the latest developments.
Knowing how to make the best use of the latest computer technology can give you an important competitive advantage.
Successful news people, for example, tend to be "news addicts" -- constantly reading about current events. If reading newspapers and newsmagazines and "being in the know" doesn't interest you, you should examine your interest in broadcast news.

On-Camera vs. Behind-the-Camera
It seems as if the majority of students who become interested in television as a career want to be seen on camera. But the majority of jobs are behind the camera.
This means that on-camera jobs are extremely competitive and far more difficult to get than production (behind-the-camera) jobs.
Most on-camera jobs are in news. It's not unusual for a news director or personnel manager in a major market (geographic area) to get dozens of resumes a day for an advertised on-camera news position. Even when there is no opening, applications may come in on a daily basis. Most of these people have a college degree.
Even small market stations that pay low salaries receive many applications from people who want to gain experience in the hope that they can later move up to a larger market.
Depending on the station and the union restrictions, it's sometimes possible to start out behind the camera and then move on to an on-camera position. Small stations occasionally provide this opportunity. More than one person who ended up on camera, including a female news anchor at a major network station in Los Angeles, started out this way.
Whatever your goal, it's best to have a "Plan B." In other words, adequately prepare yourself for a job in a second area. You may have to rely on this to pay the bills while you are waiting for the kind of job you want.
This "Plan B" may be a non-broadcast job. This secondary field should be considered when you decide on your college minor.

A College Education
Without a college education you may "get in the door" with a basic job assignment; however, your chances for promotion, especially to a supervisory capacity, will be limited.
Although some successful people brag that they made it without a college degree, keep in mind it was much easier a decade or two ago when they probably got their start.
With a host of new college graduates to choose from each year, employers can now easily specify a college degree as a basic requirement. You may find some helpful information on college scholarships, awards, etc., at the Broadcast Education Association Web Page.
What should you major in while in college?
It certainly helps to major in a field that will directly apply to your aspirations: Telecommunications, Broadcasting, TV Production, Broadcast News, etc.
Note in the table at the right that unemployment is directly related to education, with high school dropouts constituting almost half of the unemployed and those with college degrees representing only 4% of the unemployed.

Education In
Dollars and Cents
And if that isn't enough, keep in mind that there is a strong relationship between education and lifetime income. Statistics indicate that this relationship is growing stronger with each passing year. The figures below are for all U.S. occupations in 2004.
It's significant to note that the yearly income for those with a limited education has actually dropped in the past few years, while the income for those with a college degree has increased.
Later, we'll look at specific salary figures for various positions in broadcast journalism.
Educational Level Average Yearly Income
No High School Diploma $20,400
High School Diploma $28,800
Some College $32,400
Associate Degree $35,600
Bachelor's Degree $47,300
Master's Degree $57,300
Advanced Graduate Degree $76,000
Note that in 2004, individuals with an advanced degree earned three to four times as much each year as those who failed to finish high school. By 2009, there was a three-million-dollar lifetime income gap between people with a high-school education and those with a college or graduate degree.
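As a quick check of that multiple, using only the 2004 averages in the table above:

\[ \frac{\$76{,}000}{\$20{,}400} \approx 3.7 \]

which is consistent with the "three to four times as much" figure.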
Top U.S. Colleges for Broadcasting
The 2004 book, The Business of Broadcasting, listed the top eight colleges for broadcasting as:
1. The University of Southern California (USC)
2. Emerson College
3. New York University
4. Ball State University
5. Temple University
6. Boston University
7. Michigan State University
8. University of North Texas
More and more people with an interest in broadcasting are going on for graduate degrees. A few years ago "US News and World Report" listed the top universities for graduate work in broadcasting. In rank order they are:
1. Syracuse University in New York (Newhouse School of Communication)
2. University of Florida, Gainesville
3. University of Missouri, Columbia
4. University of Texas, Austin
5. Northwestern University, Evanston, Illinois (Medill)
6. Indiana State University
7. Columbia University, New York
8. Ohio University (Scripps)
9. University of Wisconsin, Madison
10. University of Southern California
11. University of Georgia
12. Southern Illinois University, Carbondale
13. Temple University, Pennsylvania (tied with)
13. University of Alabama, Tuscaloosa
Even at the undergraduate level these universities represent some of the most respected schools for pursuing a bachelor's degree in telecommunications (radio-TV, broadcasting).
Telecommunications employers also hire people with advanced degrees in related areas. Two possibilities are an MBA (Master of Business Administration) or a law degree with a specialization in communication law.
Here is some additional information on selecting a college.
Although we are in a period of high unemployment in the United States today (04/08/2010), for college graduates interested in the field of television, there is both bad news and good news. This is discussed in Finding a Job Today.

Careers in Broadcast Journalism
A survey of new hires in TV news found that the vast majority (94%) majored in either broadcast news or journalism/mass communication.
Although the percentage would be lower in other areas of TV, majoring in the field at least shows a prospective employer that you have been preparing to go into this field and that it wasn't just a last-minute decision.
For a college minor you might consider Political Science or Sociology if you are interested in TV News. If you eventually want to end up as a producer-director or manager, consider a minor in Business or Management. A minor in Psychology or Social Psychology would be helpful in any of these areas.
The RTNDF (Radio-Television News Directors Foundation), the major broadcast journalism organization, recently did a survey of salaries for positions in various-sized U.S. markets. Note the great discrepancy in salaries between on-air people in the large and small markets.
Position              Large Market    Medium Market    Small Market
News Director         $150,000        $75,000          $43,500
Asst. News Director   $90,000         $52,000          $46,000
Managing Editor       $80,000         $50,000          $52,000
Executive Producer    $80,000         $45,000          $22,000
Assignment Editor     $47,500         $30,000          $23,000
News Producer         $48,750         $27,000          $20,000
News Anchor           $173,000        $55,000          $25,000
Weathercaster         $110,000        $47,500          $23,500
Sports Anchor         $128,000        $43,000          $25,000
News Reporter         $78,000         $28,000          $18,000
News Writer           $37,500         $25,000          not avail.
News Assistant        $32,000         $20,000          $14,250
Sports Reporter       $70,000         $25,000          $18,000
Photographer          $50,500         $24,500          $17,000
Video Editor          $41,750         $20,000          $14,900
Graphics Specialist   $42,000         not avail.       not avail.
Internet Specialist   $38,000         $30,000          $25,000
In Part II of this topic we'll discuss some of the most important aspects of getting a job: internships, résumés, finding openings, handling job interviews, and the "five knockout factors" that can sink your chances of landing and holding on to a job.
No matter what you do for a living, if you love it and really enjoy the people you're working with, it makes everything worthwhile.
- Katie Couric, CBS

________________________________________
Module 69-B

Updated: 04/26/2010
Part II



Careers



Internships
Apart from actual paid, on-the-job experience, internship experience ends up being the most important "plus" on your résumé.
Among other things an internship suggests that you have been serious about the field and that the school-to-job transition should be easier.
Because union rules often discourage or prohibit stations from hiring interns that are not in school, you need to pursue this option while you are still a student.
Internships can also provide important professional contacts. By keeping in touch with people you meet and work with during an internship, you will often know of job openings far in advance of seeing them posted on the Internet or in professional publications.
One of the best ways to keep in touch with these people is by maintaining a permanent e-mail address. This link has the details on that.
Keep in mind that landing your first job will probably be the most difficult because most people hired in TV come from other stations and have that valuable qualification called experience.

Résumés and Cover Letters
Note: "resume" can be spelled resumé, résumé, or resume. Résumé is the most common.
For each job opening there will generally be a number of interested candidates. Only one will get the job.
When you first apply for a job you will probably be represented solely by a cover letter and résumé. Without dwelling on the need for impeccable writing, organization, etc., let's just say that your résumé and cover letter have to be strong enough to outshine the competition and get you invited in for an interview.
This link has information on writing résumés and cover letters, as well as important related information.
The computer scanning of résumés is becoming more commonplace. This can work to your advantage if you understand the process. The article, Tips on Preparing Résumés That Will Be Computer Scanned, explains this.
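As a rough illustration of why understanding the process helps (this is a toy sketch, not the actual screening software described in that article), scanning programs typically look for keywords lifted from the job posting, so a résumé written without the posting's own terms can be set aside before a person ever reads it. The résumé text, keywords, and matching rule below are invented for the example.

# Toy sketch of keyword-based résumé screening; keywords and text are hypothetical.
def keyword_hits(resume_text, keywords):
    """Return the posting keywords that appear somewhere in the résumé text."""
    text = resume_text.lower()
    return [kw for kw in keywords if kw.lower() in text]

resume_text = ("Produced and edited a weekly campus newscast; "
               "met daily deadlines using nonlinear editing and ENG field gear.")
posting_keywords = ["newscast", "nonlinear", "deadline", "ENG", "live remote"]

matched = keyword_hits(resume_text, posting_keywords)
print("Matched {} of {} keywords: {}".format(len(matched), len(posting_keywords), matched))

Running this prints "Matched 4 of 5 keywords," which is the practical point: echoing the posting's terminology (accurately) raises the odds your résumé survives the automated pass.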
Webcam interviews are also becoming more commonplace. How To Prepare For a Webcam Interview covers this.
Preparation for landing that first job must start long before the interview. You need a head start on such things as internship experience and compiling an impressive résumé.
Let's look at some résumé considerations.
Because on-the-job training and mistakes are costly to an employer, experience is ranked at the top of desirable qualifications.
Not everyone will be fortunate enough to spend a summer or two working at a TV station. But for those who have been able to wangle even part-time jobs in the field, employment prospects will be better.
Unless you can fully "feather out" your résumé with professional experience, don't neglect unrelated employment, especially if you're just graduating.
Showing an employer that you can hold down a job -- any job -- indicates that you've learned to deal with responsibilities and deadlines. Plus, it will provide an employer with some "real-world" references.
When listing your experience on a written résumé don't overlook extracurricular activities. Have you produced or directed a TV show or a series at your school? Have you won any awards? Such things may separate you from other applicants.

The Résumé Reel
While in production classes be sure to save good examples of your work for your résumé reel. (Even though we are in an era of DVDs, the term "reel" is still used.)
In most areas of TV prospective employers assume you will have a reel of your best work. In assembling your reel, don't save the best for last. Those reviewing a stack of VHS tapes or DVDs often don't take the time to view more than an opening cut.
Ideally, you'll want to lead off strong and finish strong, and keep the whole résumé reel no longer than 5-10 minutes. (After you produce or direct several network productions and a few national commercials you can make it longer -- and expect it all to be watched!)
Employers know that anyone can make exciting segments out of exciting events. The real test is if you can make more mundane subject matter interesting.
Use a computer to make a professional tape or DVD label, and be sure you include your name and contact information. Rather than just stark black lettering on a white label, more creative applicants have been known to capture an impressive frame out of their video to use as a background for the label. Being creative and computer literate (without being ostentatious or pretentious) are important qualifications in this field. You will, of course, include a cover letter with more information.
It's a good idea to tailor your résumé reel to the job you are applying for.
Is the job in sports, weather, field reporting, studio anchoring, or interviewing? Make sure your résumé reel emphasizes what you are interested in while not closing the door to other possibilities. Study the station's programming if at all possible and include only what seems to fit into the job description and their approach to things. In order to do this you will need to have a lot of raw material to choose from.
If you are not going to have ready access to video editing equipment, you might consider equipping your computer with an editing program and a DVD burner (recorder). Once you do, you should be able to quickly assemble tailor-made résumé reels as the need arises -- a simple sketch of one way to do this follows below.
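To make that concrete, here is a minimal sketch (not part of the original course) of how pre-trimmed clips could be joined into a single reel using the free ffmpeg tool driven by a short Python script. The clip filenames are hypothetical, and the sketch assumes ffmpeg is installed and that all clips share the same format; any consumer editing program can do the same job with titles and transitions added.

# Sketch: join pre-trimmed clips into one résumé reel with ffmpeg (assumed installed).
import os
import subprocess
import tempfile

clips = ["sports_open.mp4", "field_report.mp4", "studio_interview.mp4"]  # hypothetical clip names

# Write the list file that ffmpeg's concat demuxer expects: one "file '...'" line per clip.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    for clip in clips:
        f.write("file '{}'\n".format(os.path.abspath(clip)))
    list_path = f.name

# Copy the streams without re-encoding; this works when all clips use identical settings.
subprocess.run(
    ["ffmpeg", "-f", "concat", "-safe", "0", "-i", list_path, "-c", "copy", "resume_reel.mp4"],
    check=True,
)
os.remove(list_path)

Swapping in a different list of clips and re-running the script produces a reel tailored to a particular opening, which is the point made above; reels that need graphics, music, or mixed formats are better handled in a regular editing program.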
Unlike many fields that may sift through applicants for weeks, jobs in broadcasting are generally filled quickly. This also means that the standard "business school advice" on pursuing employment may not be valid in broadcasting. For example, there may be a lengthy telephone interview prior to a possible in-person interview. Thus, you should be prepared for that (with possibly some notes handy). This link and especially the "Notes on Interviews" should help with that.

[A successful future] ...begins with believing in yourself even when odds seem impossible. You also have to commit to being a student for life.
- Laszlo Kovacs, recipient of the American Society of Cinematographers' Lifetime Achievement Award. His work includes more than 70 narrative and feature films.

Video Awards
Of course, TV production awards can make a résumé "sparkle."
Consider entering some of your best work in some of the many video contests. A search of the Internet should net you many possibilities, including this one, which has almost 200 categories. There are also the annual DV Awards, Telly Awards, Ava Awards, Aurora Awards, and the New York Film & Video Awards, to name a few. For college students the Broadcast Education Association has its own video awards.
C-SPAN offers more than $50,000 in prizes in its yearly StudentCam video contest, which is open to middle and high school students. The StudentCam web page explains the yearly contest and includes information, suggestions, and an entry form.
As a judge in some of these video contests, I can attest to the fact that some contests have few applications in some categories and your chances for netting yourself an award -- even a national award -- can be very good. Just keep your model releases handy in case they ask for them, and be wary of competition that requires a substantial entry fee.

Women In Broadcasting
It may be difficult for today's young TV viewers to imagine a time when every face in TV news (with the possible exception of "the weather girl") was male.
For decades it was assumed that women could not impart the same authority to TV news that men could -- especially in anchor positions. Thus, ratings conscious program managers kept women out of key on-air news positions.
A number of research studies challenged this view, including one co-authored by this writer. After identical newscasts were delivered by several professional male and female newscasters, written tests were given to audiences to determine such things as recall and credibility.
The results, which were published in the "Journal of Broadcasting," showed essentially no difference in audience response to the male and female newscasters.
Although research dispelling the myths surrounding the credibility of female newscasters may have helped, it was the government-mandated equal opportunity laws of the latter half of the 20th century that were mostly responsible for opening the door to both women and racial minorities in broadcasting.
It Took 50 Years
It took 50 years after evening network news started in the United States before a woman was entrusted with the sole evening anchor position for a major commercial network.
That woman was Katie Couric, one of the most popular network personalities in morning television, who took over the evening news position at CBS in late 2006. However, she couldn't pull the CBS nightly newscast out of its third place ratings position.
But the door was open, and in late 2009 it was announced that Diane Sawyer would be replacing Charles Gibson as the anchor on ABC's World News.
Some 25 years before that, Lynn Sherr was put in a quasi news anchor position for a limited time at PBS.* But even at the progressive PBS network, the news anchor position soon reverted to a male anchor for the evening news.
The major organization for women in broadcasting is American Women In Radio & Television, which plans to change its name to the Alliance for Women in Media. Founded in 1951, this organization "promotes progress and advancement for all women in media through education, advocacy and outreach."
Not only have things changed for women in broadcasting, but even in film, a field long dominated by men, women have apparently "arrived."
At the 2010 Academy Awards, for the first time in the ceremony's more than 80-year history of giving the top directing award to men, a woman -- Kathryn Bigelow -- won for The Hurt Locker, a film that also won Best Picture.
________________________________________

Looking For Work In
All the Right Places
CyberCollege and the InternetCampus have links to scores of job listing services. Large media corporations publish monthly bulletins of jobs. There are also numerous media employment agencies -- but make sure you fully check them out before you invest any money. Your school's placement service may have additional information.
Several broadcast-related trade publications, including Broadcasting and Cable magazine, regularly carry ads for jobs.
When all other leads dry up, you can use the "shotgun approach" of sending out unsolicited résumés to selected TV stations and production facilities.
By checking TV station web pages on the Internet, or by looking up stations in the latest edition of Television & Cable Factbook, you can find the names of personnel managers and department heads.
If at all possible, direct your letter to a specific person by name and title. By the way, the three-volume factbook is very expensive, so see if your library has it. On the web it's Television & Cable Factbook.
Even though you may not hear from many of the people you write to -- they are very busy -- they may keep your résumé on file and you may get a call when a job opens up.
If you can get an invitation to the convention of the National Association of Broadcasters -- by far the largest professional broadcast organization -- you can visit the job placement services, meet station owners, and learn about the latest broadcast technology. The NAB convention is generally held in April each year in Las Vegas, Nevada.
The Broadcast Education Association can provide passes to the annual NAB convention, plus it offers many important services to students including a job placement service.
Those who belong to broadcast organizations have a definite advantage in job hunting. In addition to those mentioned, professional broadcast organizations include:
• Academy of Television Arts & Sciences
• American Sportscasters Association

• Black Broadcasters Alliance
• Broadcast Executive Directors Association (BEDA)

• International Association of Broadcast Meteorology
• International Institute of Communications

• Louisiana Association of Broadcasters
• Minorities in Broadcasting Training Program

• National Academy of Television Journalists
• National Association of Farm Broadcasters

• National Association of Television Program Executives (NATPE)
• National Religious Broadcasters

• Radio-Television News Directors Association (RTNDA)
• Youth Radio

An Internet search will provide the names of many broadcast organizations at the state level.

Handling the Personal Interview
If you get called for an interview, make sure to do your "homework" before you go in.
Know everything possible about the facility. If you can, talk to some present and maybe even some past employees.
Surveys of employers have turned up some shortcomings of recent U.S. graduates that, if detected during an interview, can knock a candidate out of the running.
Although you might consider some of the following a "bad rap," you still need to know that many employers are on the lookout for these weaknesses. Because of the problems inherent in firing employees, when faced with some questions about a prospective hire, many employers and personnel managers simply adhere to the saying, "If in doubt, don't." In a competitive field like television there are just too many qualified applicants to take a chance.
Suffice it to say, keep these "big five" knockout factors in mind and don't give a prospective employer any reason to doubt your suitability.

The Five Knockout Factors
1. Inability to follow instructions - Employers have said that new hires have difficulty following instructions, either preferring (with limited knowledge about why things are done in certain ways) to "do it their way," or simply not being able to carefully listen to and carry out instructions.
2. Promptness and reliability issues - It's alleged that many new hires, especially those who have not successfully held a job before, don't appreciate the need to get things done right (the first time) and on time.
3. A need for constant supervision - It's alleged that many new hires sit around waiting for someone to tell them what needs to be done, instead of being "self-starters" (being able to figure out what needs to be done and doing it).
4. Attitude problems - This knockout factor parallels #2 of the eight factors for success listed in Part I of this module.
We're talking about the general positive or negative demeanor of individuals, whether they are pleasant to work with, how they accept assignments, and how they take suggestions and criticism.
5. Slovenly work habits; slovenly personal habits. This relates to everything from being neat, well-groomed, and organized, to following through on important details in work assignments.

The Ability to Effectively Communicate
Although this consideration isn't as applicable to broadcast communication majors (and so it's not listed above), in published studies personnel managers in general list "the inability to effectively communicate" as their number one knockout factor in candidates they interview.
They cite a candidate's inability to clearly and effectively express thoughts, problems with English grammar, and a lack of personal confidence as factors that significantly lower the candidate's employment prospects.
Personnel managers know that these weaknesses not only make working with the employee difficult, but, since the employee to a degree represents the employer, these problems can, by extension, reflect negatively on the company.

Substance Abuse
A factor that was not mentioned above, but one that represents a decisive knockout factor, is substance abuse (primarily drugs and alcohol). Both are clearly linked to accidents, absenteeism, and problems in the workplace. No employer wants to risk the problems either represents.
Personnel managers are also aware that smoking has been linked to general health problems, absenteeism, and reduced efficiency. So, when this is an issue and a personnel manager faces a choice between two equally qualified candidates ... well, you can figure that out. (And, yes, it is lawful in some states to refuse employment on the basis of smoking.)
Not surprisingly, promotions and advancement are also related to all of these factors.
Unfortunately, some people only find out about these realities after having been repeatedly fired from jobs or regularly passed over for promotions. Once you have a string of "negatives" of this sort on your record, getting subsequent jobs and promotions becomes increasingly difficult.
When you are able to get into the field, you'll know that you are in the company of others like yourself -- men and women who have demonstrated that they have what it takes to make it in a competitive, rewarding, and often very exciting profession.

If you receive a job offer, it may be a mistake to simply jump at the opportunity without taking some important things into consideration. If you find yourself at that point you need to read Handling a Job Offer.
This file on Finding a Job Today has some of the latest information on employment prospects.
________________________________________
* Lynn Sherr's 25-year struggle to be successful -- some would say survive -- in a male-dominated profession is documented in her 2006 book, Outside the Box.
________________________________________
In the next module we'll conclude this course.
________________________________________
The final Matching Quiz will be after module 70.
________________________________________
Module 70
Updated: 04/08/2010




A Final Word



With Module 70, we come to the end of this TV Production cybertextbook.
These modules have covered the essential "tools of the trade." As critical as these tools are, keep in mind that television productions that win awards and provide audiences with ideas, information, and lasting impressions involve much more than a knowledge of the basic tools. This "beyond the basics" discussion sheds light on that.
This cybertextbook is a work in progress. Nothing related to a dynamic and rapidly changing field such as TV production can afford to stand still.
If you check back in a few weeks or months, you will note that new ideas and techniques have been added to the modules. Therefore, reviewing the material from time to time will not only keep the material fresh in your mind, but it will also keep you up to date on new developments.
If you find a problem with the content, a broken link, or if spelling or grammatical errors have slipped through, I would appreciate your bringing it to my attention.
Finally, you may recall that this free cyberbook has one string attached.
If you use this material in developing your talent to produce effective television programming, you need to "pay" for the material by at least once producing something to improve conditions in our world.
If you need some ideas consider this.
If you don't go into the field professionally, here is your "price." A textbook of this type would cost at least $60 (actually, much more, with all the color illustrations). Consider your time worth $20 an hour and devote at least three hours to doing something positive and totally selfless for some person or agency.
That's it.
If you do either of these, you've paid for the material and your conscience will be clear.
Here's wishing you great success in your chosen field.
Ron Whittaker, Ph.D.,
Professor of Broadcasting
________________________________________
Copyright Notice
We regularly get requests related to the use of this material. First, no permission is necessary to use the modules and their associated readings directly from CyberCollege.com or the InternetCampus.com in any type of public or nonprofit school or classroom.
The English, Spanish, and Portuguese modules and illustrations are protected by U.S. and international copyright law. To print them out for distribution, to link to individual images, to download them onto a server, or to reproduce these materials in any other form is a violation of copyright.
Please see this link for additional copyright information. This forum letter also has important information.
________________________________________