Embedded audio interfaces - TvTechnology



Most consumers don't think of television as separate pictures and sound. Until the advent of home recording, few had any experience with either audio or video cabling. When hooking up a new TV set, all they had to do was use simple RF connections. If only things were that simple today, either at home or in the professional domain.


Embedded audio products ranging from a basic, modular analog audio multiplexer/embedder to a sophisticated, stand-alone digital processing synchronizer can be seen in this photo of the demo master control room at Leitch’s European office in Bracknell, England.

Today, consumers are fortunate enough to have an interface for home camcorders (at least when using DV devices) that is shared with professionals. But there are a number of methodologies involved in interconnecting other devices, like VHS recorders and DTV receivers.

Fortunately, in the professional arena we have a small universe of general-use interconnection standards. We have good old analog composite video on coax and with it analog audio on twisted pair. In the digital domain we usually see SMPTE 259M-1997 (most commonly 270Mb/s), and AES3 (coax and twisted-pair bearers). Though SMPTE 259M was extended many years ago to allow the addition of audio and data carriage along with the picture, for many years the equipment was simply too expensive and the savings too elusive for embedded audio to catch on in general use.

The standard for embedded audio is contained in SMPTE 272M-1994. The standard supports sampling rates from 32kHz to 48kHz (and the NTSC variants at 0.1 percent lower sampling rates). Multiple pairs can be embedded. For component video, the standard allows data rates up to 270Mb/s. Thus, a single component-video connection can carry up to eight channel pairs, or 16 audio signals. The standard can embed the entire AES stream, allowing for the use of data over AES. An example is compressed audio like Dolby AC3 and Dolby E. Using Dolby E, one could construct a single service with up to 64 channels of audio. This has practical, real-world applications in international work, where multiple languages might be required on one program.
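The channel arithmetic above can be sketched in a few lines. The figures (eight pairs at 270Mb/s, two PCM channels per AES3 pair, up to eight compressed channels per pair with Dolby E) come from the standard as described; the variable names are, of course, just illustrative:

```python
# Illustrative arithmetic only; capacity figures as described in SMPTE 272M.
AES_PAIRS_MAX = 8        # channel pairs embeddable in a 270Mb/s component stream
CHANNELS_PER_PAIR = 2    # one AES3 stream carries a stereo pair

pcm_channels = AES_PAIRS_MAX * CHANNELS_PER_PAIR
print(pcm_channels)      # 16 discrete PCM audio signals on one coax

# Dolby E packs up to 8 compressed channels into a single AES pair
DOLBY_E_CHANNELS_PER_PAIR = 8
dolby_e_channels = AES_PAIRS_MAX * DOLBY_E_CHANNELS_PER_PAIR
print(dolby_e_channels)  # 64 channels in a single service
```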

But anything elegant comes with some degree of difficulty. Multiplexing the audio, presenting it to the SMPTE 259M encoder, formatting it as horizontal ancillary (H-ANC) data, and outputting the composited signal takes more time than simply passing the stream through. If one were to strip the audio out, process it and replace it, the delay of the de-embed/re-embed cycle would leave the audio no longer synchronous with the original picture.

Any additional processing of the audio compounds the problem. If the demultiplexed audio is Dolby E, for instance, you must add the delay of decompression, and presumably recompression, before returning the audio to the combined video/audio signal; each such decode/encode cycle adds at least a frame. The picture, now early relative to the audio, must be delayed by a matching amount. A delay of less than a line might not be much of an issue, but it can easily accumulate to a field or more, making synchronization a major issue.
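The lip-sync bookkeeping above can be sketched as a small calculation. The frame and line durations follow from NTSC (525 lines at 29.97 frames/s); the de-embed/re-embed and Dolby E delay figures are illustrative assumptions, not measured values for any particular product:

```python
import math

# NTSC timing
FRAME_MS = 1000 / 29.97      # one frame ~ 33.37 ms
LINE_MS = FRAME_MS / 525     # one scan line ~ 0.064 ms

# Assumed audio-path delays (illustrative, not measured)
deembed_reembed_ms = 2 * LINE_MS   # mux/demux cost: a couple of lines
dolby_e_cycle_ms = FRAME_MS        # at least one frame per decode/encode cycle

def video_delay_needed(audio_path_ms):
    """Video must be delayed to match the slower audio path, rounded up
    to whole frames as a typical frame synchronizer would impose."""
    return math.ceil(audio_path_ms / FRAME_MS) * FRAME_MS

total_audio_ms = deembed_reembed_ms + dolby_e_cycle_ms
print(round(video_delay_needed(total_audio_ms), 2))  # two frames, ~66.73 ms
```

Note how a de-embed/re-embed cost of only a couple of lines, once combined with a one-frame Dolby E cycle, pushes the required video delay to two whole frames.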

Generally, muxing and demuxing audio requires an AES audio infrastructure. If the facility carries analog signals as well, supporting embedded audio may offer little economic advantage, since all three types of audio (analog, AES and embedded) might be required. Fortunately, some manufacturers recognize the seriousness of the problem and design embedding and de-embedding equipment with analog-audio (and, of course, digital-video) interfaces.

So why use embedded audio at all if the costs and technical complications are so significant? Some types of facilities lend themselves to simultaneous audio and video routing decisions. One example is a distribution facility where most of the processes handle audio and video at the same time. An example might be an MPEG encoder used to backhaul programming over long distances. In a case like this, having a single interface ensures that all program elements are delivered at the destination without having to switch on multiple levels. In such a facility, the digital-video router can handle most, if not all, distribution without an audio level at all. This reduces the cost of a modest facility by at least five figures, and perhaps as much as six figures. At the same time, it reduces the complexity of the facility.

In such a case, a modest number of mux and demux channels might be interconnected in a small AES matrix with the minimum number of processing devices. A pathfinding system would allow simple operator-interface design, with virtual matrices appearing to be discrete levels of routing. The operator may not even know that there is only one primary level of routing, even when making decisions about rerouting channels (pair swaps, and left/right swaps within a pair). The key to making this work effectively is to map out the potential delays required to keep video and audio synchronous, and then set up the routing tables to put appropriate delay in when each potential path is selected.
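One way to picture the routing tables described above is a map from each potential path to the delay that must be inserted when that path is selected. Everything here, the device names, the delay values and the `route` helper, is hypothetical, a minimal sketch of the idea rather than any real router-control API:

```python
# Hypothetical delay table for a pathfinding router controller.
# Delays are expressed in whole frames for simplicity.
FRAME = 1

# Audio processing delay incurred on each potential path
PATH_AUDIO_DELAY = {
    ("VTR-1", "MC-OUT"): 0,             # straight embedded pass-through
    ("VTR-1", "AUDIO-PROC"): 1 * FRAME, # de-embed, process, re-embed
    ("SAT-RX", "MC-OUT"): 2 * FRAME,    # Dolby E decode + re-encode cycle
}

def route(src, dst):
    """Return the video delay (in frames) to insert so that picture
    and sound stay synchronous on the selected path."""
    audio_delay = PATH_AUDIO_DELAY.get((src, dst), 0)
    return audio_delay  # video is delayed by the same amount as the audio

print(route("SAT-RX", "MC-OUT"))  # 2
```

The operator-facing virtual matrices would sit on top of a table like this, so the delay compensation happens automatically whenever a path is taken.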

A facility that is primarily a production plant, especially one where the audio is highly likely to be processed separately from the picture, seems an unlikely candidate for embedded audio. A satellite record area within a station may be quite the opposite, since most digital VTRs are designed to accept digital audio at the input. In an HDTV facility, embedding all three AES pairs needed for 5.1-channel surround sound ensures that differential delay between tracks does not accumulate as signals are routed between functional parts of the facility.

The case for embedded audio is complex. A decision to implement must be carefully researched and engineered. There are significant operational advantages in some cases, and disadvantages in others, that must be vetted and understood. Once you have decided which road to take, be thorough in evaluating all manufacturers' hardware. The number of options has grown significantly in the last five years, and a full spectrum of choices is now available.

John Luff is senior vice president of business development at AZCAR. To reach him, visit www.azcar.com.

Send questions and comments to: john_luff@primediabusiness.com
