In a world where standards for consumer electronic products have evolved continuously in recent decades, there remains a bastion of stability: broadcasting.

Audio has evolved from mono to hi-fi to stereo to quadraphonic to surround sound. Meanwhile, AM radio continues to thrive more than 80 years after commercial service was initiated, and FM radio has been with us for more than 50 years.

The story for television broadcasting is much the same. NTSC and PAL continue to dominate the television landscape nearly five decades after the launch of color TV broadcasts. These analog video compression standards have endured the test of time, even as video acquisition, recording and display products have evolved beyond their limits.

As testament to the entrenchment of 525/625-line interlaced video, the transition to digital television has been driven primarily by the digital encoding (compression) of these legacy video formats using a standard finalized in 1995 — MPEG-2 MP@ML (Main Profile at Main Level). NTSC and PAL have evolved into digital standard-definition TV (SDTV) delivered primarily by DBS and cable. Meanwhile, the transition to digital high-definition TV (HDTV) broadcasting has languished as the consumer electronics industry has used the DTV transition to develop an HDTV beachhead via DVD and DBS. Now the cable industry is embracing HDTV as a premium niche service.

MPEG-2 codes reference frames, and the differences from predicted frames, using the Discrete Cosine Transform (DCT) applied to 8x8 blocks of samples. The DCT coefficients are then quantized. Excessive quantization distorts high-frequency detail around edges and produces blocking artifacts.
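The transform-and-quantize step described above can be sketched in a few lines of Python. This is a toy, unoptimized illustration: real MPEG-2 encoders use fast transforms and weighted quantization matrices rather than the single uniform step size assumed here.

```python
import math

N = 8  # MPEG-2 transform block size

def c(k):
    """Orthonormal DCT scale factor."""
    return math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)

def dct2(block):
    """2-D DCT-II of an NxN block."""
    return [[c(u) * c(v) * sum(
        block[x][y]
        * math.cos((2 * x + 1) * u * math.pi / (2 * N))
        * math.cos((2 * y + 1) * v * math.pi / (2 * N))
        for x in range(N) for y in range(N))
        for v in range(N)] for u in range(N)]

def idct2(coeffs):
    """Inverse 2-D DCT (DCT-III)."""
    return [[sum(
        c(u) * c(v) * coeffs[u][v]
        * math.cos((2 * x + 1) * u * math.pi / (2 * N))
        * math.cos((2 * y + 1) * v * math.pi / (2 * N))
        for u in range(N) for v in range(N))
        for y in range(N)] for x in range(N)]

def quantize(coeffs, step):
    """Uniform scalar quantization: divide by the step size and round."""
    return [[round(f / step) for f in row] for row in coeffs]

def dequantize(levels, step):
    return [[q * step for q in row] for row in levels]

# A block with a sharp vertical edge: dark left half, bright right half.
block = [[16] * 4 + [235] * 4 for _ in range(N)]

def max_error(step):
    """Worst-case pixel error after a quantize/dequantize round trip."""
    rec = idct2(dequantize(quantize(dct2(block), step), step))
    return max(abs(rec[x][y] - block[x][y])
               for x in range(N) for y in range(N))

print(max_error(4), max_error(64))  # coarser quantization -> larger edge distortion
```

With a gentle step size the edge survives the round trip almost exactly; with a coarse step the reconstructed pixels deviate visibly near the edge — the high-frequency distortion the paragraph describes.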

Since 1995, hundreds of millions of MPEG-2-enabled products have been sold. Last October DVD players passed the 100 million-unit milestone. DBS and digital cable set-top boxes account for another 100 million MPEG-2 decoders, and tens of millions of PCs can now decode MPEG-2 video streams.

So, given the historic longevity of broadcast standards, why are some people, including this author, suggesting that MPEG-2 is growing old? That NTSC and PAL compression will likely outlive MPEG-2?

Adapting to change

For decades, broadcasters have worked relentlessly to improve the delivered quality of their product, while the consumer electronics industry has done the same with television display technology. A major reason for the emphasis on evolutionary improvements in video quality was the inflexibility of analog video compression standards. The entire broadcast pipe could only be used to carry one program, but it still took many decades to reach the point where that pipe became the limiting factor in delivered image quality.

Digital video compression changed the rules of the game, despite the protests of broadcasters. In the early ‘90s the battle cry among broadcasters was: “We won't use no stinking compression.” The industry seemed oblivious to two realities:

  1. Their success was based on the use of an analog compression standard that squeezed three 6MHz (or greater) RGB signals into one 6MHz channel. And dare I mention the use of interlace, which added another 2:1 compression hit?
  2. Two-thirds of their viewers no longer relied upon terrestrial broadcast reception; they had moved on to cable in order to get improved video quality AND improved programming choice.

In 1995, DirecTV proved that MPEG-2-based digital video compression was a viable way to deliver television programming — hundreds of channels of television programming. And they tested the theory that consumers are primarily interested in improved video quality. While broadcasters toyed with the possibility of delivering digitally compressed HDTV in one 6MHz channel, DirecTV learned just how hard they could push the limits of digital compression to deliver a multiplex of programs in one 6MHz channel.

MPEG-2 compression enabled an entirely different way of looking at video quality. By removing redundancy from video streams and using prediction techniques to improve compression efficiency, it became possible, on average, to deliver what appeared to be better picture quality. In time, however, consumers learned to see the Achilles' heel of MPEG-2 compression.

With analog compression, delivered image quality is relatively constant: the amount of information in the picture varies considerably, using more or less of the channel's capacity to maintain that quality. Digital compression uses the channel far more efficiently, but it can break down when there is too much information for the allocated bit rate. Average bit rate requirements may be relatively low, but peak requirements can spike to two or three times the average when information content is high (too much fine detail or rapid action).
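The average-versus-peak behavior can be shown with toy numbers. The frame sizes below are hypothetical, chosen only to illustrate how a short burst of hard-to-code frames drives the instantaneous rate well above the mean:

```python
fps = 30  # frames per second

# Hypothetical coded frame sizes (bits) for one second of video:
# 24 easy frames, then 6 frames with fine detail or rapid motion.
frame_bits = [120_000] * 24 + [400_000] * 6

avg_rate = sum(frame_bits)           # bits delivered in one second
peak_rate = max(frame_bits) * fps    # rate needed if every frame were this hard

print(f"average {avg_rate / 1e6:.2f} Mbit/s, peak {peak_rate / 1e6:.2f} Mbit/s")
```

Here the peak works out to more than twice the average, which is why a channel sized for the average rate can fall apart on high-information content.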

When the MPEG-2 standard was created, there were significant concerns about computational complexity, especially for HDTV encoders. The standard was designed to limit the complexity of the mass-produced decoders, defining the syntax of the compressed stream to be encoded. It was assumed that encoders would evolve to improve delivered image quality, just as analog video equipment evolved to fully utilize the NTSC and PAL pipes.

And this is exactly what happened. The DBS system operators have replaced their MPEG-2 encoders many times in fewer than eight years. Each new generation has improved the delivered image quality for a given bit rate. For the most part, however, each new generation has been used to reduce the bit rate needed to deliver minimally acceptable image quality so that more programs could be delivered.

As unlikely as it sounds, it has taken only eight years to fully exploit the encoding tools in the MPEG-2 standard. In other words, it doesn't get much better than this.

Meanwhile, video compression technology has continued to evolve rapidly, driven by the need to deliver acceptable image quality at much lower bit rates via the Internet and wireless telecommunications devices. At the same time, Moore's Law has relegated the perceived complexity of MPEG-2 encoding to the scrap heap of computer history. Today's ASICs, microprocessors and memory chips provide four to five times the computational resources available for the same cost in 1995.

The factor that has not changed in such a dramatic fashion over those years is access to bandwidth. The demand for more content is growing faster than the bandwidth available to deliver it. DBS needs more capacity to deliver local-into-local broadcast programming to more markets. Cable needs more capacity to offer video-on-demand services to digital cable subscribers. And broadcasters need a business model that is competitive with the multichannel subscription services they rely on today to reach 85 percent of U.S. homes.

There are two ways for these industries to adapt to the rapid pace of change in all that is digital:

  1. Use more efficient modulation schemes that cram more bits into the same amount of spectrum.
  2. Use more efficient video compression to reduce the bit rate needed per program.

A variety of next-generation video compression algorithms are vying for the opportunity to replace MPEG-2. Proprietary codecs from Microsoft and RealNetworks have been pushing the envelope in the PC-based streaming video markets. And the Joint Video Team of ISO (MPEG) and the ITU has just finished work on a standards-based codec that will be published as MPEG-4 Part 10 (ISO) and H.264 (ITU). In March we will examine the technology behind these new codecs, the prospects for their deployment as a replacement for MPEG-2, and the proposed enhancements to the ATSC standard.

Craig Birkmaier is a technology consultant at Pcube Labs, and hosts and moderates the Open DTV Forum.
