They say a little bit of knowledge is dangerous. I recently attended a seminar on Dolby 5.1 surround audio for television (hereafter referred to as 5.1), and I got a little bit of knowledge. I'm not sure that makes me dangerous, but it does make me afraid. What I think I learned about 5.1 is that it is very complicated.
Let me rewind the time machine 20-plus years.
In the mid-1980s, when the television industry introduced MTS (multichannel television sound) that allowed us to broadcast in stereo, stations started getting a few programs and a few commercials with stereo sound. Most of the time the dubs or feeds of that material were fine. If the station wasn't capable of broadcasting in stereo, or if our workflow or signal path had one or more monaural choke points, we summed the left and right channels together, losing the stereo effect, but maintaining acceptable audio.
But every once in a while, one of the stereo channels was recorded out of phase. The result was that any audio material that was identical on both tracks cancelled itself out.
For example, think of a forest ambience with narration over it. Because the background sounds were different on the left channel and the right, they survived the mono sum, and you could still hear the birds chirping fine. But with one channel out of phase, any narration mixed equally onto the left and right channels cancelled completely and was inaudible.
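The arithmetic behind that cancellation is simple to sketch. Below is a toy illustration in Python, using made-up sample values rather than real audio: narration is identical on both channels, ambience differs per channel, and summing to mono with one channel's polarity inverted wipes out the narration while leaving only the difference between the two ambience tracks.

```python
# Toy illustration of out-of-phase mono summing.
# All values are made-up sample amplitudes, not real audio data.

narration = [0.5, -0.3, 0.8, 0.1]        # identical on both channels
ambience_l = [0.1, 0.2, -0.1, 0.05]      # background differs left vs. right
ambience_r = [-0.05, 0.15, 0.2, -0.1]

left = [n + a for n, a in zip(narration, ambience_l)]

# Correctly phased right channel:
right_ok = [n + a for n, a in zip(narration, ambience_r)]
# Right channel recorded with inverted (out-of-phase) polarity:
right_flipped = [-s for s in right_ok]

# Mono sum = average of left and right.
mono_ok = [(l + r) / 2 for l, r in zip(left, right_ok)]
mono_bad = [(l + r) / 2 for l, r in zip(left, right_flipped)]

# mono_ok keeps the narration plus averaged ambience;
# mono_bad reduces to (ambience_l - ambience_r) / 2 -- the narration is gone.
```

Anything identical on both channels subtracts itself out, which is exactly why the announcer vanished while the birds kept chirping.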
Luckily, there was an easy way to check for this without even listening. Don't ask me how, but the engineers set up an oscilloscope in master control that would alert the operator visually that the channels were out of phase, and said operator could correct the problem with the flick of a switch.
Dedicated test equipment replaced the oscilloscopes, and such devices were installed in dubbing rooms, at uplinks and downlinks, and at any other place in the station where material might pass, so the problem could be caught and corrected.
But you know, once in a while I still see (or I should say hear) movies on cable, satellite, and probably over-the-air, where you can't hear center dialog. That tells me that after 20 years, we're still battling the simple stuff.
Bringing the time machine back to the present, what I just took away from the 5.1 seminar is that 5.1 audio is much more complicated than stereo, and there are a lot more things with 5.1 sound that can get screwed up.
Here is a simple description of 5.1, for the nontechnical person like me.
Let's work backwards from the viewers at home. Some viewers are going to be listening to the audio out of the mono or stereo speakers on a television, where squashed dynamic range (not a lot of difference between loud and soft) is a good thing. Some are going to be listening to the audio out of the five speakers and a subwoofer in a home theater setup, where the full dynamic range (lots of difference between loud and soft) would be appreciated. The current ATSC transmission system is designed to allow audio technicians to optimize the audio for each of those two listening environments.
Instead of sending separate audio signals for each listening scenario, one audio stream is delivered to all the devices. At the same time, a separate set of "metadata" describing how the audio should be played back in each environment accompanies the audio signal. This allows the audio mixer to select one set of instructions for how the sound is to be heard in a 5.1 home theater environment, and a separate series of settings for how it is to be heard in a stereo audio environment.
Back in the viewer's home, the stereo TV knows to look to one set of metadata, and the 5.1 home theater knows to look at the other.
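That one-stream-plus-metadata idea can be sketched in a few lines. This is only a toy model of the concept described above: the field names and values below are simplified stand-ins I've invented for illustration, not the actual Dolby AC-3 metadata parameters or ATSC bitstream format.

```python
# Toy sketch: one audio payload travels with per-environment metadata,
# and each receiver reads only the set of instructions meant for it.
# Field names ("downmix", "dynamic_range", etc.) are hypothetical.

program = {
    "audio": "<single 5.1 audio stream>",   # one payload for every receiver
    "metadata": {
        "stereo_tv": {
            "downmix": "fold 5.1 down to left/right",
            "dynamic_range": "compressed",   # squashed loud-to-soft range
        },
        "home_theater_5_1": {
            "downmix": "none",
            "dynamic_range": "full",         # full loud-to-soft range
        },
    },
}

def playback_settings(program, device):
    """Each receiver looks up only its own metadata set."""
    return program["metadata"][device]

# The stereo TV and the home theater decode the same audio stream
# but apply different playback instructions:
stereo = playback_settings(program, "stereo_tv")
theater = playback_settings(program, "home_theater_5_1")
```

The fragility the seminar warned about falls out of the same sketch: if the metadata is lost, thrown away, or no longer matches audio that was re-leveled downstream, every receiver is left applying the wrong instructions to a perfectly good stream.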
In the hands of an able technician, with a lot of knowledge and an eye (or ear) for detail, this is going to be a good thing. But in the hands of someone who's learning while he's earning, and especially a nontechnical person (like me), there is potential to create a lot of problems.
Perhaps the audio mixing is being done in a small edit suite in an editor's basement. Is the monitoring set up in a home theater-like environment? Does that edit suite have test monitoring to check the audio in each of its different forms? Does the editor know how to test the audio? What happens if the audio gets "leveled" before broadcast but the metadata doesn't reflect this change? What happens if the metadata gets lost or thrown away in your broadcast plant? What happens if the content didn't have metadata to begin with?
That becomes your problem when the piece of content, say it's a commercial, arrives at the station. Is your commercial check-in workflow designed to evaluate that audio? Are you going to have test equipment to appraise both 5.1 and stereo audio? Are you going to have an actual listening environment to hear it both ways? Are you going to be able to take the time to listen to it twice or more?
To repeat, I'm not way-technical, but this smells a lot like the problems we had with stereo audio way back when, only it could be much more difficult to solve. But then, that's why you have way-technical people at the station.
Craig Johnston is a Seattle-based Internet and multimedia producer with an extensive background in broadcast. He can be reached at firstname.lastname@example.org.