Progress on Loudness And Sync Issues

The two major themes in broadcast these days, programming aside, seem to be loudness and lip sync. We have discussed both issues extensively in this column and elsewhere in TV Technology, and have debated them in SMPTE and ATSC committee meetings. Now that they have finally become real targets on the industry's radar, there seems to be genuine progress that will benefit the broadcast community and, ultimately, the consumer.


As reported last month, Dolby Laboratories has announced a remote control package for its LM100 Broadcast Loudness Meter. I guess I would now consider myself a power user of this unit, thanks to gobs of help from Dolby's loudness guru Jeffrey Riedmiller. Not that it is difficult to use; in fact, quite the contrary. There are some setup details, such as which channels from a 5.1-channel program you wish to include in the loudness measurement, the length of the measurement window, and which source you wish to measure. Where these are not self-explanatory, the user's manual gives a thorough treatment of the reasons for every selection. That is not the power-user part; power-user certification should be given to those who have made measurements, captured them with a terminal program, imported the data into an Excel spreadsheet and formatted the results into a reasonable-looking graph. Not a simple task the first time, and for me it involved meeting up with Jeffrey at Barney's Gourmet Hamburgers, our favorite hamburger joint in Berkeley, Calif., to get a lesson after I had captured the audio of the 2003 Grammy Awards broadcast. Not that I don't highly recommend the Turkish coffee milk shakes (they will get you through any lengthy broadcast), but there had to be an easier way to do this, and Jeffrey promised he was hard at work on the solution!
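For anyone facing the same capture-to-spreadsheet chore, a small script can do most of the reformatting. This is a minimal sketch in Python; the log line format shown is a made-up assumption (terminal captures vary), so the regular expression would need adjusting to match what a real LM100 actually emits over its serial port.

```python
import csv
import io
import re

# Hypothetical capture: timestamps followed by loudness readings.
# The real LM100 serial output format may differ; adjust LINE_RE to match.
SAMPLE_CAPTURE = """\
00:00:10 -24.3
00:00:20 -23.8
00:00:30 -26.1
"""

LINE_RE = re.compile(r"(\d{2}:\d{2}:\d{2})\s+(-?\d+(?:\.\d+)?)")

def parse_capture(text):
    """Extract (timestamp, loudness) pairs from a raw terminal capture,
    skipping any lines that do not match the expected pattern."""
    rows = []
    for line in text.splitlines():
        m = LINE_RE.match(line.strip())
        if m:
            rows.append((m.group(1), float(m.group(2))))
    return rows

def to_csv(rows):
    """Render the parsed rows as CSV text, ready to graph in a spreadsheet."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["time", "loudness_db"])
    writer.writerows(rows)
    return buf.getvalue()

print(to_csv(parse_capture(SAMPLE_CAPTURE)))
```

The CSV can then be opened directly in Excel, which handles the charting.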

It was nice to see the results, and even nicer to see what a slick piece of software this really is. Although still in beta test, it will be included free with the LM100 (and provided to current users as well). The software not only simplifies the task of making a long-term measurement of a single service, similar to what I did with the Grammy Awards broadcast; with the NTSC version of the LM100, it also allows simplified long-term measurements across all of the channels in a cable plant. The software allows the LM100 to interface with any standard digital cable set-top box and control it to automatically monitor all channels, analog and digital, and produce meaningful data representing the loudness of the entire cable plant.

Fig. 1 shows a representative screen for the Dolby LM100 Remote control program.

Coupled with its new software, the LM100 is set to help systematically cure the long-term loudness problems we all encounter and complain about. I have already heard reports of actual field use in which the system forced certain broadcasters and cable operators to stop passing the buck and get loudness right. Amazing.

But wait, there's more. In addition to all of its loudness measurement capabilities, it should be remembered that the LM100 also contains Dolby E and Dolby Digital (AC-3) decoders, plus the optional NTSC tuner with BTSC stereo decoding (an accurate digital implementation at that). Status information from all of these sources can be collected and optionally used to drive alarms for overmodulation, loss of modulation, CRC errors and bitstream-specific violations. While these features have always been present in the LM100, the remote software summarizes them in an easy-to-read manner, giving any operator as much information as needed.


A technology called Digital Audio Time Code, or DATC, was shown at the Sigma Electronics booth at NAB this year. Developed by Nigel Spratling, George Smith and their team, the system does just what it says: it creates a timecode packet that can be placed anywhere inside a digital audio signal, from the channel status bits to the actual audio payload. If a video time reference such as Vertical Interval Timecode (VITC) is simultaneously placed inside a corresponding video signal, the two signals can be passed through a plant, stored on tape or server, distributed over satellite, and at any point have their timing checked and corrected. No, it is not that simple, and yes, there are some major roadblocks hidden in that scenario, but it is a start. (Since NAB, Nigel and George have joined my company, Linear Acoustic.)

One of the primary issues is exactly where to put the DATC in a typical AES signal. The channel status bits are a logical first choice but, as we have seen, are notoriously unreliable. Because it is generally the audio payload (i.e., the 20 to 24 bits of actual audio information) that is of primary concern to most users, many manufacturers simply ignore the data in the bits that precede the audio payload and regenerate them in the AES transmitter before outputting the signal. If those bits can be guaranteed to pass the data, DATC can be inserted into them. If not, DATC can be assigned to any single bit within the audio payload, even when audio is present. Yes, this could cause problems if the payload is carrying compressed signals such as Dolby E or Dolby Digital (AC-3), but both of those formats have provisions for carrying LTC or timestamp information, so the path can remain continuous even in the compressed domain.
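To illustrate the single-bit approach (this is not the actual DATC packet format, which Sigma has not published; it is only a sketch of the general technique), here is a minimal Python example that hides a timestamp in the least significant bit of successive 24-bit PCM samples and recovers it again:

```python
def int_to_bits(value, width):
    """Serialize an integer MSB-first into `width` bits."""
    return [(value >> (width - 1 - i)) & 1 for i in range(width)]

def bits_to_int(bits):
    """Reassemble MSB-first bits into an integer."""
    out = 0
    for b in bits:
        out = (out << 1) | b
    return out

def embed_lsb(samples, bits):
    """Overwrite the least significant bit of each 24-bit PCM sample
    with one payload bit; the disturbance sits at the quantization
    floor of 24-bit audio, far below audibility."""
    return [(s & ~1) | b for s, b in zip(samples, bits)]

def extract_lsb(samples):
    """Read the payload bits back out of the samples."""
    return [s & 1 for s in samples]

# Carry a 32-bit sample-count timestamp across 32 consecutive samples.
timestamp = 480_000            # e.g., 10 seconds at 48 kHz
audio = [0x123456] * 32        # placeholder 24-bit samples
marked = embed_lsb(audio, int_to_bits(timestamp, 32))
print(bits_to_int(extract_lsb(marked)))
```

A real implementation would also need framing and error protection so the receiver can find packet boundaries after editing or switching.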

Granted, the video signal is somewhat of an issue. Calling VITC unreliable in this age of compressed video signals is being polite; it simply will not survive the trip. However, the world is neither completely compressed, nor is it beyond the means of the industry to make provisions for some sort of timestamp that can be carried within the video signal and withstand compression or other manipulation. The entire basis for proper synchronization within the ATSC system, and MPEG in general for that matter, is the simple idea of time-stamping the audio and video signals during encoding, then re-aligning them during decoding, because as we are all finding out the hard way, things do move around.
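As a concrete illustration of that idea, MPEG presentation timestamps (PTS) run on a 90 kHz clock, so a decoder can turn a pair of timestamps into a correction delay with simple arithmetic. A minimal sketch (ignoring the 33-bit counter wraparound a real decoder must handle):

```python
PTS_CLOCK_HZ = 90_000  # MPEG systems clock rate for presentation timestamps

def av_offset_ms(video_pts, audio_pts):
    """Signed audio-relative-to-video offset in milliseconds.
    Positive means the audio is stamped later than the video, so the
    video must be delayed by this amount to restore sync."""
    return (audio_pts - video_pts) * 1000.0 / PTS_CLOCK_HZ

# Audio stamped 2,700 ticks after the matching video frame: 30 ms apart.
print(av_offset_ms(900_000, 902_700))  # 30.0
```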

Fig. 2. Application of DATC to audio and video signals to correct timing.
Fig. 2 shows a high-level diagram of a system using DATC to correct for path differences between the audio and video signals. These path differences can be due to any number of reasons including additional processing of one signal versus the other. Regardless of what causes the problem or which signal is delayed versus the other, the mismatch can be corrected.

It can be seen that the timing signals carry an additional field called BBB, a block count based on AES frame timing that allows sub-millisecond re-timing of signals. The applications for such tight synchronization are many, stretching far beyond correcting lip sync, and locking the count to a frame rate (AES) that is not tied to video frame rates is another good idea.
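To put numbers on that resolution: at the common 48 kHz broadcast rate, one AES3 frame (one sample per channel) lasts about 20.8 microseconds, so a frame-based count can resolve timing errors far below a millisecond. A quick sketch of the arithmetic:

```python
SAMPLE_RATE_HZ = 48_000  # typical broadcast AES3 sample rate

def frames_to_us(frame_count):
    """Convert an AES3 frame count to microseconds. One frame carries
    one sample per channel, so frame timing resolves to the sample period."""
    return frame_count * 1_000_000.0 / SAMPLE_RATE_HZ

# A single frame is ~20.8 microseconds; 48 frames make one millisecond.
print(round(frames_to_us(1), 1))  # 20.8
print(frames_to_us(48))           # 1000.0
```

Compare that with a video field, which is roughly 16.7 ms in NTSC; frame-level audio counts are nearly three orders of magnitude finer.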

Yes, I know, there are plenty of ways to break this system for now, but it is a good start. There are also many creative ways to make it work, and because the process can be automatic, even if a break occurs somewhere in the path, the timing will be no worse off than without such a system. Everywhere upstream of the break where DATC and video timecode can be recovered, synchronization can be perfectly restored. This is definitely a step in the right direction.

Next time, we will take a peek into what to expect audio-wise from the terrestrial television networks this coming fall. We will also begin a very interesting look at the audio technologies being put into place in cable, satellite, and video-on-demand services. Watching these areas will give us clues as to how the rest of the broadcast world will be headed in the future. For more information on the LM100 remote, visit, and for information on DATC, drop me a line.