As a broadcast application, HDTV conversion is going to be around for the foreseeable future. Even when the transition to HD broadcasting is “complete” and SD sets become as anachronistic as black-and-white receivers are today, there will still be a need to upconvert archival material to HD and to downconvert the HD output — at least wherever the market or government regulations dictate that legacy SD receivers must continue to be served.
Today's problem for broadcasters is how and where to handle HDTV conversion in the context of an ongoing transition, where nearly every facility needs to bridge and interconnect existing islands of analog, SD, HD and file-based equipment.
Perhaps the most basic question is where in the broadcast signal path HDTV conversion should take place. Less than two years ago, many in the industry were looking to do all the production in SD and upconvert just before transmission as a relatively inexpensive approach to get up and running with HD. The problem with this approach is that SD-originated material has a limited horizontal and vertical resolution. This might not be noticeable on a traditional-size screen, but it becomes obvious as viewers upgrade to big-screen plasmas or projection TVs. Consumers are starting to demand sharper, more detailed images, and conversion prior to transmission looks less and less like a long-term option.
A better approach is to upconvert incoming feeds and signals only as required. That way, broadcasters can take full advantage of whatever HD-originated content they have and deliver as much native HD material as possible. The HD programming can then be downconverted for SD network distribution.
Figure 1. Interlaced video is made up of two fields, each containing half the lines of the image.
With this approach, it's important to invest in a good-quality downconverter, particularly where computer graphics and scrolling captions are used. Care must also be taken because some of the downconverted SD output will have originated as upconverted SD, and artifacts can compound when that material passes through a further downconversion step. An alternative is to maintain SD production alongside HD production, which easily allows for different branding on each channel.
Lower prices for HD equipment are making these scenarios more and more practical. In many cases, the premium for HD gear over its SD equivalent is relatively small, which means the cost of obtaining HD content goes down as well. HDTV cameras priced well under €5000 are now available, allowing facilities to equip their studios and crews for HD production at increasingly lower cost.
Perhaps the most important key to HDTV conversion is choosing the right technique for handling interlaced material. Solutions range from linear, through adaptive, to full motion compensation.
An interlaced video frame is made up of two fields. (See Figure 1.) Each field contains half the lines of the image, but, critically, the fields are sampled at different points in time. On stationary scenes, the fields can be superimposed to give full vertical resolution, but once motion occurs, the vertical resolution is halved. The key to achieving superior upconversion is to deal effectively with interlace and motion.
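The field structure described above can be sketched in a few lines. This is a minimal illustration, not broadcast code: each field carries every other scan line, and "weaving" the two fields back together only reconstructs the frame correctly when the scene is static, because in real video the fields are sampled roughly 1/50 s or 1/60 s apart.

```python
def split_fields(frame):
    """Split a frame (a list of scan lines) into its two interlaced fields."""
    top = frame[0::2]     # lines 0, 2, 4, ... (top field)
    bottom = frame[1::2]  # lines 1, 3, 5, ... (bottom field)
    return top, bottom

def weave(top, bottom):
    """Re-interleave two fields into a full frame. Only correct for static
    scenes, since the two fields were captured at different instants."""
    frame = []
    for t, b in zip(top, bottom):
        frame.extend([t, b])
    return frame

frame = ["line0", "line1", "line2", "line3"]
top, bottom = split_fields(frame)
assert weave(top, bottom) == frame  # static scene: weave is lossless
```

On moving material, the woven result shows the familiar "comb" artifacts, which is exactly the problem the conversion techniques below try to solve.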
Figure 2. Shown here is native de-interlacing for linear conversion. Where motion occurs, there are potential drawbacks.
As can be seen in Figure 2, the output of a typical linear approach is essentially constructed from the input. Where motion occurs, there are potential drawbacks. However, linear techniques can range in complexity, and good design can overcome some of these drawbacks. One major benefit of a linear approach is a low-delay conversion process.
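The simplest linear approach can be sketched as vertical interpolation: the missing lines of a field are filled by averaging the lines above and below. This is a hedged illustration of the principle only; real linear converters use longer, better-designed filters. Note that no motion analysis is involved, which is what keeps the delay low.

```python
def linear_deinterlace(field):
    """Expand one field (a list of rows of pixel values) to a full frame by
    interpolating each missing row from its vertical neighbours."""
    frame = []
    for i, row in enumerate(field):
        frame.append(row)
        if i + 1 < len(field):
            below = field[i + 1]
            # missing line = average of the lines above and below it
            frame.append([(a + b) / 2 for a, b in zip(row, below)])
        else:
            frame.append(list(row))  # repeat the last line at the bottom
    return frame

field = [[100, 100], [200, 200]]
full = linear_deinterlace(field)
assert full[1] == [150.0, 150.0]  # interpolated row between 100 and 200
```

The cost is visible on moving detail: the interpolated rows never recover the vertical resolution carried by the other field.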
Adaptive techniques vary in type and complexity, from global to pixel-based adaptation. Global adaptation is driven by characteristics such as whether the material is film or video, with the conversion mode varied on a field-by-field basis. Problems arise, however, when the material being converted mixes film and video. Delays in switching from one mode to another can cause a momentary loss of resolution, and if video inserts appear within film-originated material, the conversion may be non-optimum for part of the image. Both effects are disturbing to the viewer.
Pixel-adaptive techniques overcome this issue but present their own problems. The adaptation must be implemented smoothly and seamlessly; otherwise, disturbing flickering effects may appear in different areas of the picture, depending on the content. The key benefit of adaptive techniques is that they can offer significant improvements in sharpness.
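One way to picture a pixel-adaptive strategy is as a per-pixel blend between the "woven" temporal value and the "safe" spatial interpolation, steered by a crude motion measure. The sketch below is an assumption-laden simplification of the idea, not any vendor's algorithm; the smooth fade between the two modes is exactly the "seamless adaptation" the text calls for, since hard switching shows up as flicker.

```python
def adaptive_pixel(above, below, prev_same, next_same, threshold=10):
    """Choose a value for one missing pixel of the current field.

    above, below         -- spatial neighbours in the current field
    prev_same, next_same -- co-sited pixels from the adjacent fields
                            of the same parity (hypothetical inputs)
    """
    motion = abs(next_same - prev_same)      # crude per-pixel motion measure
    spatial = (above + below) / 2            # interpolated (safe) value
    temporal = (prev_same + next_same) / 2   # woven (sharp) value
    # fade smoothly from temporal to spatial as the motion measure rises
    k = min(motion / threshold, 1.0)
    return (1 - k) * temporal + k * spatial

# static area: the temporal value wins, preserving full vertical resolution
assert adaptive_pixel(100, 200, 160, 160) == 160.0
# strong motion: fall back to the spatial average
assert adaptive_pixel(100, 200, 0, 255) == 150.0
```

Even this toy version shows the trade-off: sharpness where the picture is still, graceful degradation where it moves.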
Figure 3. Shown here is optimal de-interlacing with motion compensation. This technique accurately measures motion, resulting in a significant conversion performance improvement.
Finally, we come to motion compensation. The previous approaches are all designed to get around the fact that parts of the image are moving. In these areas, the resolution is effectively halved. Without motion compensation, we don't know whether the information in the adjacent fields is static or if it is a moving object — hence, the need for complex adaption strategies to improve conversion performance. With motion compensation, we can accurately measure motion and make accurate predictions. This gives a significant conversion performance improvement. (See Figure 3.)
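The motion-measurement step behind motion-compensated conversion can be illustrated with exhaustive block matching, shown here in one dimension for brevity. This is a sketch of the general technique, not a specific product's implementation: for a block of the current field, a small range of displacements in the previous field is searched and the one with the lowest sum of absolute differences (SAD) wins. The resulting vector tells the converter where to fetch real detail from the adjacent field instead of guessing.

```python
def best_vector(current, previous, start, size, search=4):
    """Find the 1-D displacement that best aligns a block of `current`
    samples against `previous`, by exhaustive SAD search."""
    block = current[start:start + size]
    best = (float("inf"), 0)
    for d in range(-search, search + 1):
        pos = start + d
        if pos < 0 or pos + size > len(previous):
            continue  # candidate block would fall outside the picture
        candidate = previous[pos:pos + size]
        sad = sum(abs(a - b) for a, b in zip(block, candidate))
        best = min(best, (sad, d))
    return best[1]

# an edge that has moved two samples to the right between fields:
# the block in the current field matches two samples to the LEFT
# of the same position in the previous field
previous = [0, 0, 0, 9, 9, 9, 0, 0, 0, 0]
current  = [0, 0, 0, 0, 0, 9, 9, 9, 0, 0]
assert best_vector(current, previous, start=4, size=4) == -2
```

Real converters do this in two dimensions with sub-pixel accuracy and vector-field smoothing, but the principle is the same: once the motion is known, the prediction is accurate rather than adaptive guesswork.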
Another point that broadcasters need to keep in mind is that HDTV conversion is not just about the video. Multichannel audio, compressed and uncompressed formats, and closed captions all offer their own unique set of challenges.
For example, EIA-708 closed captioning for HDTV is much richer than its EIA-608 SD counterpart. When HD-originated closed-captioned material is downconverted, the conversion process must ensure that EIA-708 closed-caption data is translated correctly and not lost.
Control and monitoring, together with timecode handling, are two other issues that need to be addressed. The first is essential for broadcasters in large plants to assure the user that each unit is working correctly. The second is particularly relevant for products used in duplication services, which must meet stringent broadcast delivery requirements for correct timecode and program length.
With the multitude of standards, we must also deal with a variety of different color space environments. It is crucial that this is handled correctly so that the look of the program is not changed.
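The core of the color-space problem can be shown with the published luma coefficients: SD (Rec. 601) and HD (Rec. 709) weight red, green and blue differently, so YCbCr encoded under one standard and decoded under the other visibly shifts colors. The sketch below only computes luma from RGB under each standard; a full conversion would pass through RGB with the matching matrix on each side.

```python
REC601 = (0.299, 0.587, 0.114)     # SD (Rec. 601) luma coefficients
REC709 = (0.2126, 0.7152, 0.0722)  # HD (Rec. 709) luma coefficients

def luma(rgb, coeffs):
    """Weighted sum of (normalised) R, G, B giving luma Y."""
    r, g, b = rgb
    kr, kg, kb = coeffs
    return kr * r + kg * g + kb * b

# a saturated green patch lands on noticeably different luma values
green = (0.0, 1.0, 0.0)
assert abs(luma(green, REC601) - 0.587) < 1e-9
assert abs(luma(green, REC709) - 0.7152) < 1e-9
```

A converter that upconverts the pixels but forgets to re-matrix between 601 and 709 introduces exactly this kind of shift, which is why the look of the program changes.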
Ian Ellis is product manager, conversion & restoration for Snell & Wilcox.