Technology in Transition: MPEG encoders and multiplexers

By John Luff

Compressed video has enabled businesses that were once only dreams. What began as intensive research more than 20 years ago has produced compression products that are vital to much of what we do today. It is valuable to keep the high-level view in clear focus.

Compressed video is all about removing data from a television program without degrading the perceived quality. It is generally accepted now that 8 to 10 Mbits/s is adequate to provide a good-quality picture. At 10 Mbits/s, approximately 4.5 percent of the original picture data rate is transmitted. Part of this is achieved losslessly, by removing redundant information and run-length encoding the result. Part is achieved by deciding which parts can be thrown away without anyone noticing. This process of quantizing, in which the encoder judges the relative value of portions of the data and discards what viewers will not notice, is the lossy part of encoding and a prime key to its success. It is important to remember that MPEG is not an encoder specification, but rather a specification of the content of the bit stream that a decoder must know how to reconstruct. Any encoder may be used, so long as it produces a compliant bit stream, leaving manufacturers considerable leeway to differentiate their products.
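To make the two halves of that process concrete, the sketch below quantizes a short, invented block of transform coefficients and then run-length encodes the zeros that quantization leaves behind. The values and step size are purely illustrative, not drawn from any particular encoder.

```python
# Illustrative sketch: quantize a block of transform coefficients, then
# run-length encode the zeros that quantization leaves behind.
# Coefficient values and the quantizer step are invented for this example.

def quantize(coeffs, step):
    """Lossy step: divide by the quantizer step and round, discarding fine detail."""
    return [round(c / step) for c in coeffs]

def run_length_encode(levels):
    """Lossless step: emit (zero_run, level) pairs for each nonzero value."""
    pairs, run = [], 0
    for level in levels:
        if level == 0:
            run += 1
        else:
            pairs.append((run, level))
            run = 0
    pairs.append(("EOB", None))  # end-of-block marker
    return pairs

block = [310, 42, -18, 6, 3, 0, 2, 0, 0, 1, 0, 0, 0, 0, 0, 0]
levels = quantize(block, step=8)       # small values round away to zero
print(levels)                          # [39, 5, -2, 1, 0, 0, 0, ...]
print(run_length_encode(levels))       # a far shorter list of (run, level) pairs
```

The coarser the quantizer step, the more coefficients collapse to zero and the shorter the run-length list becomes, which is exactly the trade-off between bit rate and perceived quality described above.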

Encoding can be done very simply, as in software-only products that run on consumer desktop PCs, or with considerable sophistication. Some VTRs now use MPEG, though with a variant that reduces the coding complexity and raises the bit rate of the stream for the sake of producing an encoding engine at moderate cost. Low-bit-rate, real-time encoders are considerably more complex and expensive, for they must compute their results with great care to ensure the available bits are applied where they are most useful.

Two-pass encoding can significantly raise the quality of the result. The picture is first encoded using a known set of criteria, and the result is evaluated to judge the success of that first approximation. A second encoding pass, following either human or machine review, then optimizes the decisions and moves the available bits to the most challenging content within each frame or sequence. This can materially improve picture quality, as seen on consumer DVDs, but at a high cost per minute of encoding.
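A minimal model of the idea, assuming the first pass simply produces a complexity score per frame and the second pass hands out a fixed bit budget in proportion to it (real two-pass encoders are far more elaborate):

```python
# Simplified two-pass allocation model, not any vendor's actual algorithm:
# pass 1 records a complexity score per frame, pass 2 divides a fixed
# overall bit budget in proportion to that score.

def first_pass(frames):
    # Stand-in for trial encoding: here "complexity" is just a number
    # supplied with each frame; a real encoder would measure it.
    return [frame["complexity"] for frame in frames]

def second_pass(complexities, total_bits):
    total = sum(complexities)
    return [total_bits * c / total for c in complexities]

frames = [{"complexity": 1.0}, {"complexity": 4.0}, {"complexity": 2.0}]
budget_per_frame = second_pass(first_pass(frames), total_bits=300_000)
print(budget_per_frame)   # the hard middle frame gets the largest share
```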

To further improve the picture, one manufacturer has developed a technique for the case where repeated generations of encode/decode cannot be avoided. It aligns the I frames in the output with those of the input, ensuring that less aggressive quantization is needed. The encoder does this by examining the statistics of each frame and looking for the signature of the repeating pattern of I, P and B frames that all encoders produce.
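The general idea can be sketched with invented per-frame bit counts: I frames normally carry far more bits than P or B frames, so their positions stand out as periodic peaks. This is only an illustration of the principle, not the manufacturer's actual method.

```python
# Rough illustration of spotting a GOP pattern from per-frame statistics.
# I frames carry far more bits than P or B frames, so they appear as
# periodic peaks. The bit counts below are invented.

frame_bits = [90_000, 20_000, 12_000, 25_000, 11_000, 13_000,
              88_000, 19_000, 14_000, 24_000, 12_000, 11_000]

threshold = 2 * sum(frame_bits) / len(frame_bits)   # crude peak detector
i_frames = [i for i, bits in enumerate(frame_bits) if bits > threshold]
print(i_frames)                          # [0, 6] -> peaks repeat every 6 frames
print(i_frames[1] - i_frames[0])         # inferred GOP length
```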

It is important to remember that MPEG is a component video system, and impairments that come from composite signals can force the encoder to work considerably harder on edges within the content. You can avoid wasting bits by using an external high-quality decoder, or by avoiding composite video altogether.

Low signal-to-noise ratios also force the encoder to work harder to distinguish the real content from the noise it is compressing. Most encoders have noise reduction algorithms, sometimes optional, which eliminate the noise, to the extent possible, before coding begins. The devil is in the details with noise reduction: not all high-frequency content is noise, and low-frequency noise is equally troublesome. As above, an external noise reduction system can materially improve the result. Several manufacturers offer MPEG preprocessing devices that perform some or all of these functions.
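As a toy illustration of the kind of preprocessing involved, the sketch below blends each pixel with its value in the previous frame only when the change is small enough to look like noise rather than motion. The threshold and blend factor are arbitrary example values, not those of any real product.

```python
# Toy temporal noise reducer of the kind a preprocessor might apply before
# encoding: blend each pixel with its value in the previous frame, but only
# when the change is small enough to look like noise rather than motion.
# Threshold and blend factor are arbitrary illustration values.

def temporal_filter(prev, curr, blend=0.5, motion_threshold=12):
    out = []
    for p, c in zip(prev, curr):
        if abs(c - p) < motion_threshold:        # small change: treat as noise
            out.append(blend * p + (1 - blend) * c)
        else:                                    # large change: keep the new pixel
            out.append(c)
    return out

previous_line = [100, 101, 99, 180, 182]
current_line  = [103, 98, 100, 120, 181]   # one pixel moved, the rest is noise
print(temporal_filter(previous_line, current_line))
```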

When several signals are combined, or multiplexed, into one bit stream, further tricks can be applied to provide additional bits to challenging content. This technique is called statistical multiplexing, and it exploits the fact that the time-varying content of multiple programs will not all be equally challenging at the same moment. If an encoder has bandwidth that is not required to maintain acceptable quality (see Figures 1 and 2), it signals the multiplexer that bits can be reassigned and throttles back its own bit rate to the extent possible. Those unused bits are added to the bit rate available to the other encoders. In the examples, both streams are intended to be encoded at a fixed rate of 4 Mbits/s, but both have considerable unused overhead most of the time and exceed their budget only occasionally. Added together, they never exceed the total available bandwidth of 8 Mbits/s, and in fact they leave space for opportunistic data transmission at the same time (see Figure 3).
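A much-simplified sketch of the accounting involved, using invented per-interval bit demands for two programs sharing an 8 Mbits/s pipe:

```python
# Simplified statistical multiplexing: two encoders report the bits they
# actually need each interval; the mux shares a fixed pipe, and anything
# left over is available for opportunistic data. Demands are invented.

PIPE = 8_000_000   # total bits per second available in the multiplex

demand_a = [3_100_000, 4_400_000, 2_800_000, 3_600_000]
demand_b = [2_900_000, 3_200_000, 4_700_000, 2_500_000]

for a, b in zip(demand_a, demand_b):
    spare = max(PIPE - (a + b), 0)
    print(f"program A {a}, program B {b}, opportunistic data {spare}")
```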

Multiplexers provide other services as well. Fundamentally, the mux is the traffic cop that turns individual bit streams on and off under the management of a control system, which can in turn be acting on the commands of an automation system. The mux does not alter the content, but it enables multiple simultaneous uses of the composite bit stream. For example, it can encapsulate IP data for transmission as part of an interactive program, add sophisticated program guide data, enable multiple language services, and manage opportunistic data transmission when the bit stream allows.

A mux is typically loaded with default settings for each service. The settings might include the guaranteed bandwidth, the allocation of statmux channels, repetition rates for required tables and program guide content, and other services (see Figures 1, 2 and 3). The software application that controls the mux must present these settings clearly and give the user feedback in concise, understandable displays, for the real work going on behind the scenes is quite complex indeed.
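Purely as an illustration, a control application might hold per-service defaults along these lines; the field names and values are invented and do not reflect any vendor's actual configuration schema.

```python
# Hypothetical example of the kind of per-service defaults a mux control
# application might hold. Field names and values are invented for
# illustration, not taken from any real product.

service_defaults = {
    "service_name": "WXYZ-DT",
    "guaranteed_bandwidth_bps": 3_000_000,   # floor the statmux may not take away
    "statmux_pool": "pool_1",                # which shared pool the service joins
    "pat_pmt_repetition_ms": 100,            # repetition rate for required tables
    "guide_repetition_ms": 500,              # program guide data repetition
    "languages": ["eng", "spa"],             # audio services carried
    "opportunistic_data": True,
}

for key, value in service_defaults.items():
    print(f"{key}: {value}")
```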

A mux may allow “remux” functions as well, with the ability to drop one program and replace it with another, and with some sophisticated technology to adjust the bit rate by changing the quantization tables. Keep in mind that once the encoder has thrown away content, nothing can restore it.

A mux may have a number of inputs, typically DVB ASI (Digital Video Broadcasting Asynchronous Serial Interface). This brings up a tangential topic that should be familiar to most readers but is often misunderstood. MPEG is a generic specification. It defines a standard, often called a toolbox, with which an end-to-end system can be designed and built. Real-world implementations have, for the most part, settled into two camps. DVB is a European consortium that standardizes the end-to-end system most often used for worldwide interchange of MPEG streams; it can be used over satellite or in a variant intended for over-the-air broadcast (DVB-T). ATSC has similarly taken plain-vanilla MPEG and customized it for terrestrial broadcast. The differences are subtle but incompatible. It is possible to build a decoder that responds to either variant, because the differences are not in the fundamental technology, except for audio coding, which is MPEG audio in the case of DVB and Dolby AC-3 in the case of ATSC systems.

Most encoders allow the user to select between ATSC and DVB outputs. It is “just software” that defines the differences, outside of audio coding, which is most often done in hardware. The biggest difference may well be in program guide and interactivity standards, where the North Americans and Europeans have not been able to find common standards. Though the debate over DTV in the US has centered on the different modulation standards DVB-T and ATSC/FCC have chosen (the infamous COFDM vs. 8VSB debate), there is significant common ground between the two systems. The fact that MPEG provides such a rich syntax allows different uses of the standard to be effective and commercially successful.

Lastly, it is valuable to note that U.S. DTH services use neither DVB nor ATSC, but rather proprietary versions of MPEG which, like it or not, are not required to match any standard. As with ATSC and DVB, the complex and open MPEG standard allows such flexibility of use. Thankfully.

John Luff is vice president of business development for AZCAR.

Send questions and comments to: john_luff@primediabusiness.com

