HD systems


Facilities like those at ESPN and Turner Entertainment employ HD infrastructures and offer HD production values that, just a year or two ago, were available only to SD producers. Photo by Andy Washnik, courtesy Thomson Grass Valley.

Broadcasters have been infatuated with HDTV since the '70s. In 1987, they asked the FCC to carve out spectrum for HDTV. But, back then, practical hardware simply did not exist. Nevertheless, SMPTE slowly and methodically plugged away at creating standards that would eventually allow manufacturers to build compatible, interoperable systems. By the time the FCC created its Advisory Committee on Advanced Television Service (ACATS), SMPTE had published key scanning and hardware-interconnect standards. But implementation remained elusive. Even just a few years ago, HDTV production and broadcast systems were exotic and expensive. Cameras and lenses cost upwards of $250,000, and an hour of videotape cost the equivalent of several house payments.

How far we have come since then. This year, Sony introduced an industrial HD camera for less than the cost of many electronic field production (EFP) lenses. JVC is offering HD camcorders for consumer applications, and other camera manufacturers are sure to follow. HDTV broadcasts are widely available and often compelling. Facilities like those at ESPN and Turner Entertainment offer no-holds-barred production and transmission in HD, with production values that, just a year or two ago, were available only in standard definition.

Catalysts for change

Several catalysts prodded the change. A handful of manufacturers deserve significant credit for their dogged pursuit of the market. In 1981, Ikegami showed the first HD camera at an NHK presentation during the SMPTE Winter Television Conference in San Francisco. At that conference, HDTV production was a technological oddity. Twenty years later, the tools have matured, making video recording of motion picture production not only possible but highly desirable.

While manufacturers have done a wonderful job of creating the tools of the new industry, the members of the production community who faithfully pursued the Holy Grail may deserve the most credit. In the early years, they demonstrated creative use of a rough and immature technology. The productions they created made viewers lust for the depth and clarity of the images, despite the severe limitations of the early hardware. Francis Ford Coppola, George Lucas and other high-profile directors worked their craft at financial peril, at a time when editing rooms were barely capable of dissolves and DVE was only a wistful wish.

The FCC also deserves credit for nudging, cajoling and finally mandating fundamental change. It is true that, in 1987, more than 50 broadcast entities initiated the first push for HDTV by requesting spectrum for it. But it was not until the FCC finally granted their wish nearly a decade later that broadcasters began to realize the unprecedented and fundamental change it would bring to television. Behind the scenes of the FCC process, the ATSC took action. Despite all the flak the committee has taken for 8-VSB, the arrival of fifth-generation set-top boxes now seems to have vindicated its belief that HDTV can work.

Making it work

But, for any technology to work, there must be a fundamentally simple and affordable approach to systemization, and HDTV is no exception. SMPTE and ATSC have provided the hardware and software interoperability to make DTV, and especially HDTV, work in the real world. SMPTE 292M-1998 defines the serial digital interface that applies universally to all HDTV formats. As one might expect, given HDTV's high bit rate of 1.485 Gb/s, 292M is usable on copper only to about 100 meters, though the standard also establishes an optical interface usable to more than two kilometers.
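The arithmetic behind that 1.485 Gb/s figure is easy to check. Below is a minimal Python sketch, assuming the published raster totals of 2200×1125 samples for the 1080-line formats and 1650×750 for 720p60, with 10-bit words and 4:2:2 sampling.

```python
# A quick sanity check on the SMPTE 292M serial rate, using total raster sizes
# (including blanking): 2200 x 1125 at 30 frames/s and 1650 x 750 at 60 frames/s.

def serial_rate(samples_per_line, lines, frames_per_sec, bits=10):
    """Luma clock plus the multiplexed Cb/Cr pair (4:2:2), 10-bit words."""
    luma_clock = samples_per_line * lines * frames_per_sec   # Hz
    return 2 * luma_clock * bits                             # b/s: (Y + Cb/Cr) x 10 bits

print(serial_rate(2200, 1125, 30) / 1e9)          # 1.485   (Gb/s, 1080-line at 30 frames/s)
print(serial_rate(1650, 750, 60) / 1e9)           # 1.485   (Gb/s, 720p60)
print(serial_rate(2200, 1125, 30 / 1.001) / 1e9)  # ~1.4835 (the 59.94 Hz family)
```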

Table 1 shows the SMPTE standards and the HDTV formats they define. The table also contains a data-rate divisor (1.001) that can substantially complicate HDTV implementation. All of the standards establish scanning and interconnection for both 60 Hz and 59.94 Hz operation; conveniently, or inconveniently, the two rates are related by that divisor. If both rates coexist in the production and transmission worlds, the 0.1 percent difference in frame and line rates has pernicious effects on systemization. For example, a production might incorporate 720p cameras at both rates, since pixel counts in the image format are the same. But the time domains don't match, so one of the signals must be converted (much like standards conversion) before the two can be combined.

Thus, in SMPTE's early internal-committee discussions, especially in the Working Party on Advanced Television Production, members strongly recommended that all HDTV signals be generated using a clock locked to the NTSC subcarrier. This makes it easier to upconvert 525-line signals for HDTV productions and to downconvert HDTV material for NTSC systems. Neglect this, and two signals will drift apart in time at precisely the rate that drop-frame time code corrects for (108 frames per hour; refer to SMPTE 12M, section 4.2.2). Clearly, this is not just a frame-sync issue.
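For readers who want to see the arithmetic, here is a quick Python check of that figure, assuming nominal 30-frame counting against the actual 30/1.001 frame rate.

```python
# Rough check of the drift between the 60 Hz and 59.94 Hz (60/1.001) families
# over one hour, expressed in frames at a nominal 30 frames/s.

nominal_fps = 30.0
actual_fps = 30.0 / 1.001              # 29.97...
seconds = 3600

frames_counted = nominal_fps * seconds    # 108,000 frames if you count at 30
frames_delivered = actual_fps * seconds   # ~107,892.1 frames actually occur

print(round(frames_counted - frames_delivered))  # ~108 frames/hour, the same
                                                 # error drop-frame time code corrects
```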

Of formats and bit budgets

Everyone knows that modern HD is generally produced in one of two formats: 1080i30 or 720p60. The debate over which one is better may be as irrelevant as the debate over how many angels can dance on the head of a pin. The truth is that neither format is clearly superior in all respects. The 720p60 format has 88.9 percent as many active pixels per second as 1080i30. You can say that 720p60 puts more energy into accurately displaying temporal samples (twice the number of frames per second), while 1080i30 has more static spatial resolution. The most specious contention in this debate is that consumer monitors cannot yet display all 1920×1080 pixels. This may be true, but it also may be irrelevant. The fact is that MPEG-2 satellite broadcasts (and most terrestrial broadcasts) throw away detail that viewers cannot perceive and replicate temporal sampling using motion-estimation techniques. H.264 and WM9 may, at equivalent data rates, significantly improve decoded picture quality. But broadcasters will almost certainly spend the improved coding efficiency on fewer bits per pixel rather than on better pictures. In our industry, digital has allowed increased capacity, and that is what programmers are asking of technology. Make no mistake; quality goes up as coding becomes better. But the drive to reduce bit rates will win out over increased quality at the same cost.
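The 88.9 percent figure is simple arithmetic; a short Python check follows, counting active pixels per second for each format (1080i30 delivers a full 1920×1080 sampling grid 30 times per second).

```python
# Active pixels per second: 720p60 versus 1080i30.

p720 = 1280 * 720 * 60     # 55,296,000 pixels/s
i1080 = 1920 * 1080 * 30   # 62,208,000 pixels/s

print(p720 / i1080)        # 0.888... -> about 88.9 percent
```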

Aspect ratio issues


Table 1. SMPTE standards and the HDTV formats they define.

HDTV is much more than esoteric numbers. It's about a wider aspect ratio, better sound quality, improved color accuracy and a wider gamut. The most obvious of these improvements is probably the wider aspect ratio. Here also, technology must deal with a world in transition. Nearly all legacy material coming from 525 and 625 production retains the 4:3 (1.33:1) aspect ratio that evolved in the transition from film to electronic production. The 16:9 (1.78:1) ratio is a better match to modern film than 1.33:1. But, with filmmakers shooting in aspect ratios as high as 2.35:1, HDTV can only claim to be a more modern match to today's film productions. Remember that, about a decade and a half ago, SMPTE debated 16:9 production. The result was an extra standard-definition format and data rate intended to increase the quality of recorded and transmitted pictures. SMPTE 259M includes both 270Mb/s and 360Mb/s data rates. The intent was that 360Mb/s, using 18MHz sampling, would permit widescreen production with the same quality as the 270Mb/s rate using the 13.5MHz sampling with which we have grown quite comfortable. But, after some real-world tests using images captured both ways, the industry decided that the difference in quality was not worth the resulting complication. Keep in mind that, back then, memory was more expensive, processors were slower and digital recorders cost far more. We can only wonder what would happen if the same consideration took place today.
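For the record, both 259M rates fall straight out of the sampling arithmetic; the short Python sketch below assumes 4:2:2 component sampling at 10 bits, with each chroma channel at half the luma rate.

```python
# Where the two SMPTE 259M component rates come from.

def component_rate(luma_mhz, bits=10):
    total_samples_mhz = luma_mhz + luma_mhz / 2 + luma_mhz / 2   # Y + Cb + Cr (4:2:2)
    return total_samples_mhz * bits                              # Mb/s

print(component_rate(13.5))   # 270.0 Mb/s (4:3, or anamorphic 16:9)
print(component_rate(18.0))   # 360.0 Mb/s (the wider-sampling 16:9 variant)
```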

In any case, the result of that decision is that, for standard-definition imagery, the aspect ratio issue remains in the optical domain. In systems built on SMPTE 259M, 16:9 images are squeezed anamorphically into the same 720 horizontal samples. Aspect-ratio-aware hardware (production switchers, DVEs, graphics processors) then processes the images, and the display equipment finally stretches them back to 16:9. Fortunately, HDTV has been designed exclusively as a widescreen environment. The images are native widescreen throughout processing and distribution. Only at the boundaries, where legacy material is incorporated into or cut from an HDTV image, do we face the inevitable technical and production issues. Formatting graphics often presents the thorniest problems. For example, a left-justified graphic created for a 4:3 image, when displayed on a 16:9 screen with side panels, appears near the middle of the picture (see Figure 1). And a lower-third graphic created for a 4:3 image, if displayed with its left and right sides justified to the edges of the frame, disappears on a 16:9 screen.
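To make the geometry concrete, here is a small, purely illustrative Python helper (the function name and normalized coordinates are my own, not from any standard) that maps a horizontal position in a 4:3 frame to its position when that frame is pillarboxed inside a 16:9 raster with equal side panels.

```python
# Where 4:3 graphics land when the 4:3 image is pillarboxed in a 16:9 raster.

def map_4x3_to_16x9(x_43):
    """x_43: horizontal position in the 4:3 frame, 0.0 (left) to 1.0 (right).
    Returns the equivalent position in the 16:9 frame."""
    panel = (16 / 9 - 4 / 3) / 2 / (16 / 9)   # each side panel is 12.5% of the 16:9 width
    return panel + x_43 * (1 - 2 * panel)

print(map_4x3_to_16x9(0.0))   # 0.125 -- a left-justified 4:3 graphic sits well in from the edge
print(map_4x3_to_16x9(0.5))   # 0.5   -- centered elements stay centered
print(map_4x3_to_16x9(1.0))   # 0.875
```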

One solution in dual-format productions is to mix graphics after reformatting for display. Productions created in 16:9 HDTV might have a center cut extracted and then graphics overlaid. This type of production will likely continue for years, perhaps decades, so we will just have to get used to either formatting graphics that are pleasing on both displays (not likely) or spending the capital to switch and mix twice. One might envision a future video switcher designed to handle exactly this problem by sending crosspoint outputs to two mixers, one or both of which contain aspect ratio converters and separate keyers to handle the two formats.

Cover the gamut

Gamut and color primaries are system issues only at the interface points. Like aspect ratio, HDTV color standards were designed to be consistent worldwide (well, almost) at the time they were established. Image processors, like upconverters and downconverters, usually can convert between the different color spaces in addition to performing spatial and temporal conversions. (For a detailed explanation of these issues, see Poynton, “A Technical Introduction to Digital Video” and “Digital Video and HDTV: Algorithms and Interfaces.”)
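As a simplified illustration of one piece of that conversion, the Python sketch below re-matrixes Y'CbCr between the Rec. 601 and Rec. 709 luma coefficients. It deliberately ignores the differences in primaries and transfer characteristics, and uses normalized full-range values; Poynton's texts cover the complete treatment.

```python
# Re-matrixing Y'CbCr between Rec. 601 and Rec. 709 luma coefficients
# (normalized values; primaries and transfer functions ignored in this sketch).

REC601 = (0.299, 0.114)      # (Kr, Kb)
REC709 = (0.2126, 0.0722)

def ycbcr_to_rgb(y, cb, cr, k):
    kr, kb = k
    r = y + 2 * (1 - kr) * cr
    b = y + 2 * (1 - kb) * cb
    g = (y - kr * r - kb * b) / (1 - kr - kb)
    return r, g, b

def rgb_to_ycbcr(r, g, b, k):
    kr, kb = k
    y = kr * r + (1 - kr - kb) * g + kb * b
    return y, (b - y) / (2 * (1 - kb)), (r - y) / (2 * (1 - kr))

# Re-code a 601-coded sample into 709 coding via R'G'B':
y, cb, cr = rgb_to_ycbcr(0.8, 0.4, 0.2, REC601)
r, g, b = ycbcr_to_rgb(y, cb, cr, REC601)
print(rgb_to_ycbcr(r, g, b, REC709))
```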

Audio issues

As for audio, HDTV does not always involve surround sound, although many people assume that it does. Still, a surprising number of homes are equipped for surround. Dolby estimates that, by the end of 2001, over nine million U.S. households had surround-sound receivers. The company says that, worldwide, over 100 million households are equipped with Dolby Pro Logic II and Dolby Surround decoders. That does not necessarily mean that these households have full surround-sound speaker setups, nor that television receivers, or even DVD players, are connected to them. Nonetheless, this number constitutes a significant portion of the market. Jupiter Research found that 24 percent of U.S. online households now have digital surround-sound systems. The research firm says that extremely low-priced home-theater-in-a-box systems have expanded the market and made surround sound broadly affordable (although not necessarily profitable for manufacturers). As a result, many program producers have opted to provide full 5.1-channel surround service.

This home market penetration has had a dramatic effect on audio systems in production and broadcast plants. The most obvious effect is the need to establish calibrated 5.1-channel monitoring for quality control and mixing for air. Dolby offers white papers to guide designers in planning the mixing environment and is happy to support implementation as well (because that sells hardware and increases the penetration of licensed decoders). Discrete 5.1-channel sound can easily be transported on three AES pairs (one pair, if AC-3 encoded). Keep in mind that anything you do to one channel you must also do to all channels, because program material is spread across channels; a minor time misalignment can produce some pretty strange effects. Network distribution to your station might arrive on three AES pairs, requiring switching and mixing on all three, and keeping channel assignments straight and aligned is critical. Mixing in master control is not difficult if the hardware is designed to handle six channels for every input. There is, however, the dreaded metadata question.
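Before getting to metadata, here is a small Python sketch of the bookkeeping involved. The pairing shown is a common convention rather than anything mandated here; the point is simply that every channel must see identical processing and delay.

```python
# A common (though not universal) assignment of 5.1 channels to three AES pairs,
# and a reminder that every channel must receive the same delay.

AES_PAIRS = {
    "AES 1": ("Left", "Right"),
    "AES 2": ("Center", "LFE"),
    "AES 3": ("Left surround", "Right surround"),
}

def delay_all(channel_offsets_ms, delay_ms):
    """Apply the same delay to every channel; uneven delays smear the surround image."""
    return {ch: t + delay_ms for ch, t in channel_offsets_ms.items()}

channels = {ch: 0.0 for pair in AES_PAIRS.values() for ch in pair}
print(delay_all(channels, 5.0))   # every channel moves together
```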

Dolby AC-3, established by the ATSC as the format for DTV audio, carries several chunks of metadata that allow the home receiver to reproduce the sound as the producer and mixer intended. This metadata can be carried with the audio in the AC-3 stream, and on some VTRs as well. The proper handling of AC-3 metadata is a subject beyond the scope of this article, but Dolby offers excellent white papers on mixing, metadata and other surround-sound issues on its Web site (www.Dolby.com/tech/).

Another way to transport multichannel audio is Dolby E, which permits up to eight channels of good-quality audio, along with metadata, in one AES carrier. But encoding or decoding Dolby E is a (precisely) one-frame process. This can require some careful mapping of the video and audio processing to maintain lip sync along the way. At least two master control switchers now contain an internal Dolby E decoder, which is particularly useful when the network delivers Dolby E to the station. The lip-sync question has the greatest potential to confound an otherwise successful implementation plan.
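A rough lip-sync budget makes the point. The Python sketch below assumes the 59.94 Hz family (29.97 frames/s) and a single Dolby E decode in the audio path, with no frame-based processing in the video path; both counts are assumptions for illustration.

```python
# Lip-sync budget: if the audio path includes one Dolby E decode (one video
# frame of latency), the video path needs a matching delay.

frame_ms = 1000 * 1001 / 30000          # ~33.37 ms per frame at 29.97 frames/s

audio_path_frames = 1                   # one Dolby E decode in this sketch
video_path_frames = 0                   # assume no frame-based video processing

compensation = (audio_path_frames - video_path_frames) * frame_ms
print(round(compensation, 2))           # ~33.37 ms of video delay keeps lip sync
```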

The good news

The good news for broadcasters is that a rudimentary knowledge of 525-line digital video systems will go a long way toward understanding HDTV system implementation. The tools are remarkably similar. Surround audio is a bit more mysterious, but, with some care, it too can be understood. The best advice to those implementing HDTV in a station or elsewhere is to read extensively, invite experts to your facility to educate you, and use common sense. HDTV is still composed of pictures and sound; it's just a whole lot better.

John Luff is senior vice president of business development at AZCAR.