Author: Frank Catalano
The second creative process is one in which animated characters are voiced to already existing animated or live-action footage. In this case, the footage has been drawn and edited, and the actor must fit the character's voice within this pre-existing framework. In the case of Robotech, the characters had been drawn, edited and put into production long before any of the American voice actors who worked on the show became part of the process. I say this in total admiration of all the voice actors who worked on the project. We all, quite literally, had to hit the ground running and had to make character choices based upon the clues we could find written in the scripts. But remember, voice actors in Robotech often saw the script for the first time moments before they were to voice the character. In some instances, actors would get into the studio and have a few minutes to glance at their lines a few pages ahead, read the scene and try (as best they could) to make some meaningful choices about what they were about to do. You could only hope that the director had an idea of what was going on in the script because they had been recording it all day, or better yet, that they were also the writer. That was the best combination, if you could get it. We'll talk more about Robotech shortly.

In anime projects, you might have other factors to consider. Another actor, in Japanese or English, may have already voiced the character. If that is the case, there could be preconceived parameters that belonged to the character before you could even voice the first line. In anime, when you enter a project as a voice actor, you are arriving late in the creative process. The overall qualities of the character are often already defined for you by the script, the edited footage and perhaps a preconceived notion of what had been done before by other actors playing the same role. But this doesn't mean that the voice actor shouldn't approach the role creatively. The voice actor can use these parameters to interpret the role from their own point of view. Such already-set parameters could include physical mannerisms, speech patterns and relationships with other characters. Many of these things are already part of the permanent created footage.

Also, let's not forget about lip sync. This is the process by which the actor's performance of the written lines must match as perfectly as possible the mouth movements of the character seen on the screen. The actor is guided through this process by several means. The first rests in the hands of the sound engineer, who cues up one or more lines of dialogue to be voiced. When the footage rolls, the character's line or reaction is preceded by three beeps, and the actor should begin to voice the character on the fourth, silent or imaginary, beep. Voice actors have learned to love those wonderful beeps. They serve as a helpful guide to where to start speaking and help the process move along more quickly. Another tool the voice actor can use is the time code that appears either at the top or bottom of the screen. It is often helpful when a bit of dialogue or a reaction is in progress. The director might tell you to place a certain line or reaction at a specific time code.
Writer/directors do this more often because they wrote the script and have a visual understanding of where the line or reaction specifically fits into the scene. This is not to say that sometimes, even with all the beeps and time codes, you don't just do it on the fly by looking up at the screen and saying the lines as the character does them. Also, if you don't hit the line exactly in sync but the director likes your reading, the sound engineer can often digitally move the line a few frames forward or backward to make it fall into place. You just have to love those guys. They can save a performance by doing their digital magic.

Actors can also rely on a well-written script to help them create their character. A well-written script means that the dialogue actually fits (is in sync with) the character's mouth movements and that it contains the appropriate visual directions to help the actor along. It also means that the script has dialogue that fits the action, the situation and the physical gestures of the characters up on the screen. That is not always the case. I've been lucky to work with some of the best dubbing writers in the business. When the script is well written, it allows you the freedom to try different things within the framework of the character.
Often in anime or live-action dubbing scripts, voice actors will see descriptive terms in parentheses before the spoken line which indicate whether the character's mouth is visible in frame when the lines are spoken.
They might look like this:
A damaged Alpha Fighter slowly making its way back to Freedom Base.
Angle on Pilot #1 - Interior cockpit of Alpha Fighter
PILOT #1: (MNS) Freedom Base, this is Alpha XFS one… squadron leader. Over… is anybody out there?
This first example is (MNS), which indicates that the character's mouth is not seen. This tells the actor that his/her character is on screen but that their mouth is not clearly visible. This may be due to any number of reasons, which might include the character wearing a helmet obscuring his/her mouth, or that their head may be turned away or blocked by the cockpit control panel. Sometimes, within the frame, the character's body may be visible, moving an arm or shoulders, but the mouth is covered. The actor, or the writer for that matter, has no choice. The footage is already shot and edited, so there must be a line written there, and the actor has to go with what they have and make it work. Another term is (OFF), which indicates that the character speaking is not in frame. These lines could be a character's voice coming from a speaker, or from a character who is in the scene but simply not in the frame. Most (MNS) lines do not require sync (unless they are tied to a specific action or reaction to something on screen), so they can be voiced just using the beeps.
INT. FREEDOM BASE CONTROL ROOM - Angle on XFS on radar screen.
CAP COM: (OFF) XFS one, this is Freedom Base, we read you loud and clear!
PILOT #1: (MNS) Looks like I took a few hits… my centurium pod is badly damaged and I'm losing altitude. I need to put this baby down fast.
CAP COM: (OFF) Stand by for landing coordinates.
PILOT #1: (MNS) Roger.
CAP COM's voice is heard in the scene, but the shot the audience sees is of the radar screen. CAP COM is not seen at all in this example. It's just a voice within the frame of the scene that is heard only. Lastly, there are those lines which, although not usually indicated, are deemed to be (ON). This means that the character's face and facial expressions are in frame and visible to the audience. The dialogue spoken must be in sync with their lip movements, facial expressions and body gestures. Normally, an (ON) line doesn't have the word (ON) in front of it. If the line is not marked with an (MNS) or an (OFF), the actor assumes that it is (ON).
INT. ALPHA COCKPIT - Angle on Pilot #1 pulling off his mask.
PILOT #1: Sure is good to be home… (MNS) receiving coordinates now. Setting course for sector five.
INT. CONTROL ROOM FREEDOM BASE - Angle on radar screen.
CAP COM: (OFF) Confirm sector five…
SELLACK: (OFF) Is he going to make it?
CAP COM: Yes sir, all coordinates locked in, (MNS) guidance now active.
Pull back on control room, revealing the face of SELLACK, the Supreme Commander of Freedom Base One. CAP COM activates the control room monitor, tracking the damaged Alpha Fighter as it slowly descends and safely touches down.
SELLACK: Welcome home, son.
In this example, the existing animation required the actor to perform the lines within three different dynamics: (OFF), (MNS) and (ON). The actor must create an emotional and intellectual reality for the character within the preexisting framework of the animation and editing of the film. But there is more to it than that. When a voice actor creates a character within an already existing film, it is not as simple as only considering whether or not the character's face and mouth are visible to the audience. There's something else to consider.
When a character's face and mouth are visible to the audience, the actor must conform his vocal interpretation to the lip shapes and movements of the character. Some animated productions have a simple open-and-closed mouth pattern when their characters speak, or in other instances no mouth at all. I did an animated series in which I played an orphan bumblebee. The character, a little bee, had no mouth but instead moved his antennae when he had something to say. This was an easy task from my point of view because there was no actual lip sync for the character. I only knew that when his little antennae moved, he was speaking. There wasn't much else to grab onto other than the antennae movement. However, in more complicated animation and live-action dubbing, the actor must fit the written words within the movement, tempo and shape of the mouth of the character they are dubbing.

But there is a step before the moment the actor puts the line in the character's mouth. Before an actor even shows up to voice a character, a scriptwriter must write lines that are in synchronization with the character's speech patterns and movements within the film. Often in animation, the writer can write lines that fit nicely into a flapping mouth that just opens and closes. Live-action films, which are dubbed into a language other than the one the film was originally shot in, can be a particular challenge. It becomes the writer's task to create a story and characters that fit within the strict confines of what is already on the screen. I can't tell you how many times there were lines I wanted to put in an animated or live-action script that were absolutely perfect for the situation at hand but simply did not sync with the character's mouth movements. I have always been of the school of thought that you should favor content over sync. But while you might be able to write that perfect line, if it is out of sync it takes the audience and your character out of the reality of the scene.
This is why it takes a unique breed of writer to be able to be true to the content of the material, be imaginative and still stay within the limitations of what they have to work with in terms of sync.
ADR (Automatic Dialogue Replacement), also called dubbing, is a form of scriptwriting that is not for the faint of heart. It takes a healthy combination of creative writing talent and technical knowledge to create a script that contains a believable story and real characters within the constraints of the existing animation or live-action footage. It takes an extraordinary writer to be able to get it right. I have had the opportunity to work with some of the best writer/directors in the business. While there are many talented writer/directors I have worked with who can do this and do it very well, the two I have worked with the most are Gregory Snegoff and Steve Kramer. I have worked with these guys not only on Robotech but also on many other projects, both live-action dubbing and animation. When you go into the studio and they have written the script and are directing, you know it's going to be where it needs to be. They have the ability to write compelling stories and smart dialogue, and to make it all look like the characters are saying it. Both of these guys also have the ability to write on the spot. While to most this may seem like no big deal, it is actually a very fine art. Sometimes in the studio, a line might be too long or too short and just not fit. They look down at the script, and in a heartbeat they will say, "Just add this or that line," and BAM, it all fits like a glove. Also, sometimes in the studio you may get a script that is just not written well or is completely out of sync. I may have written a few of those. When that happens, it has to be rewritten as the film is recorded. That's a slow slog through the mud at best.
There is a Mike Reynolds story that is often told in Robotech circles. I'm not sure if the script was a Robotech script or another show that was being recorded. I am also not sure if he said it to me, or if it's been told to me so many times by everyone that I visualize that it happened to me. It really doesn't matter either way. Mike was working very late one night, recording and directing a particularly awful script. He was just having a hell of a time trying to get what was written to fit what was up on the screen. He calmly looked down at the pages with pencil in hand and sighed, "Who wrote this piece of shit!" The sound engineer, deadpan, piped in, "You did, Mike." This was of course not the case, but Reynolds just sighed again, looked down at the pages and said, "Okay, let's see what we can learn from it." We have all had a laugh over the years recalling that night. But the truth is that Mike is a pro and didn't get rattled by a bad script. He just fixed it where it needed to be fixed, and it got done. If you're going into a dubbing studio, these are the guys you want to be with.

The worst thing that can happen in a dubbing session is to have a poorly written script. When I say poorly, I don't necessarily mean a bad story. Poorly, in this case, means not in the appropriate sync, or no sync at all. When this happens, the script has to be rewritten line by line by the director and voice actor in the studio as part of the dubbing session. These sessions are long and require a lot of patience because the director and the voice actor are now writing the script on the spot. Sometimes writers write acceptable sync, but the line comes out too long or too short. This is more than likely due to differences in speech patterns between the writer and the actor. As script problems go, this is not the worst. Good writer/directors like Gregory Snegoff, Steve Kramer, Mike Reynolds and the late Bob Barren always found a creative way to elongate a line by slowing down your delivery, or to add a short word if it was too short.
This works as long as the added word doesn't throw off the sync when the line is delivered. Sometimes, even when it's written correctly, the actor just can't get it.