New delivery technologies

New video compression technologies differ from the composite video concept. They use the MPEG-2 concept with all its permissible variations. The MPEG-2 concept results in a more efficient compression and maintains a reliable performance level at considerably reduced bit rates. In addition, the level of performance is a user choice. The MPEG-2 compression methods start with a signal conforming to the ITU-R BT.601 standard at the 4:2:2 level. The system works best if the I/O ports are SDI@270Mb/s. This would normally happen when the operational facility is fully digital.

In real life, the majority of telco service users feed and request NTSC composite analog signals to interface with their facilities. This mandates the use of analog composite NTSC-to-SDI@270Mb/s decoders at the MPEG-2 encoder input and SDI@270Mb/s-to-analog composite NTSC encoders at the MPEG-2 decoder output. This article deals with the decoding of NTSC signals and the problems it raises.

The input interface concept

Interfacing analog NTSC source signals to SDI@270Mb/s requires decoding to basic analog components and converting them to their digital equivalents as per ITU-R BT.601. While encoding components to analog NTSC is a relatively simple process, decoding back to components is difficult. The NTSC system was originally conceived as 1-way: camera to transmitter to receiver. With modern production requiring many passes through various types of production equipment, the tendency has been to operate in component form, analog or digital, to avoid concatenation effects and to encode to NTSC only once: prior to transmission. Using a decoded NTSC signal to drive an MPEG-2 system introduces unnecessary decoding artifacts. While these artifacts can be reduced, they cannot be completely eliminated.

Figure 1 shows a simplified block diagram of an NTSC B-Y/R-Y equi-band decoder. The original 1953 NTSC encoding process proposed unequal bandwidth I/Q chrominance components instead of the original, and later universally adopted, B-Y and R-Y components. The aim was to allow for a wider bandwidth, and consequently resolution, for the I component chosen to carry face-tone information. This created an unnecessary encoding and decoding complication and was quickly discarded by studio equipment and TV receiver manufacturers, which used B-Y/R-Y instead of I/Q encoding and decoding. PAL and SECAM never even considered the I/Q concept.

The decoder consists of two parts: the luminance (Y) and chrominance (C) separator, and the chrominance decoder. The basic chrominance decoder consists of two demodulators fed with two recovered subcarriers in quadrature (90° phase shift) which, if fed pure chrominance, can recover the original color information without fail. For the chrominance demodulators to operate satisfactorily, the Y/C filter must completely separate the luminance and chrominance information. This is, however, a practical impossibility. Filtering schemes range from the rudimentary to the highly sophisticated, but none achieves perfect separation.
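The quadrature demodulation step can be sketched numerically. The following Python fragment is an illustration, not broadcast-grade code: the 4x-subcarrier sample rate, the unit component scaling and the averaging "low-pass filter" are all simplifying assumptions. It feeds a pure chrominance signal built from known B-Y and R-Y values into two demodulators driven by regenerated subcarriers 90° apart, and recovers the original components:

```python
import numpy as np

FSC = 3_579_545.0              # NTSC color subcarrier frequency (Hz)
FS = 4 * FSC                   # sample rate: 4x subcarrier (a common choice)

def demodulate(chroma):
    """Recover (B-Y, R-Y) from pure chrominance by quadrature demodulation."""
    t = np.arange(len(chroma)) / FS
    # Multiply by the two regenerated subcarriers, 90 degrees apart...
    prod_sin = 2 * chroma * np.sin(2 * np.pi * FSC * t)
    prod_cos = 2 * chroma * np.cos(2 * np.pi * FSC * t)
    # ...then "low-pass filter" by averaging over whole subcarrier cycles,
    # which removes the 2*fsc products and leaves the baseband components.
    return prod_sin.mean(), prod_cos.mean()

# A test chrominance signal built from known component values
by, ry = 0.3, -0.2
t = np.arange(4 * 1000) / FS               # exactly 1000 subcarrier cycles
chroma = by * np.sin(2 * np.pi * FSC * t) + ry * np.cos(2 * np.pi * FSC * t)
by_rec, ry_rec = demodulate(chroma)        # recovers 0.3 and -0.2
```

With pure chrominance at the input, the recovery is essentially exact; the problems discussed next arise only because the Y/C separator cannot deliver pure chrominance.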

Figure 2 shows a simplified block diagram of an unsophisticated low-pass/band-pass (LP/BP) Y/C filter. Essentially, luminance information is recovered by low-pass filtering the composite NTSC signal to 3MHz, thus removing all higher frequencies and the luminance detail they carry. A 3MHz luminance bandwidth results in a horizontal resolution on the order of 240 lines per picture height (LPH), roughly equivalent to a better VHS tape deck. The chrominance information is recovered by band-pass filtering the NTSC signal to a ±600kHz spectrum around the suppressed chrominance subcarrier of 3.58MHz. What is ignored here is the fact that the spectrum around the chrominance subcarrier contains luminance information interleaved with the chrominance information. The recovered chrominance signal is, therefore, contaminated with high-frequency luminance information which, when decoded, produces “cross-color” effects resulting in spurious and flickering color displays.
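The LP/BP separation, and the cross-color leakage it cannot avoid, can be demonstrated in a few lines of Python. The ideal FFT-mask "filters" below are idealized stand-ins for real analog filters, and the 4x-subcarrier sample rate and bin-aligned test tones are conveniences for the illustration:

```python
import numpy as np

FS = 14_318_180.0          # sample rate: 4x the NTSC color subcarrier
FSC = FS / 4               # 3.579545 MHz suppressed subcarrier
N = 4096
t = np.arange(N) / FS

def yc_split(composite):
    """Crude Y/C separation: ideal LP to 3 MHz, ideal BP fsc +/- 600 kHz."""
    spec = np.fft.rfft(composite)
    freqs = np.fft.rfftfreq(N, d=1.0 / FS)
    y = np.fft.irfft(spec * (freqs <= 3e6), n=N)
    c = np.fft.irfft(spec * (np.abs(freqs - FSC) <= 600e3), n=N)
    return y, c

bin_hz = FS / N                               # FFT bin spacing
f_luma = round(1e6 / bin_hz) * bin_hz         # ~1 MHz, bin-aligned
f_detail = round(3.5e6 / bin_hz) * bin_hz     # luma detail inside the chroma band
luma = 0.5 * np.sin(2 * np.pi * f_luma * t)
detail = 0.2 * np.sin(2 * np.pi * f_detail * t)
chroma = 0.3 * np.sin(2 * np.pi * FSC * t)

y, c = yc_split(luma + detail + chroma)
# y carries only the 1 MHz luminance; the 3.5 MHz luminance detail has been
# lost from Y and lands in C instead, contaminating the recovered chrominance.
# That leakage is what produces cross-color on screen.
```

The same experiment run through a comb filter (next section) leaves the 3.5MHz detail in the luminance path, which is precisely the comb's advantage.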

Figure 3 on page 24 shows a simplified block diagram of a simple comb filter. The comb filter takes into consideration the following:

  • The chrominance subcarrier alternates its phase on a line-by-line basis as a result of its frequency being a multiple of half the horizontal scanning frequency; and
  • In most pictures, the brightness and the color information do not change abruptly from line to line.

Consequently, subtracting the delayed signal of the preceding line from the information of the present line cancels, or “combs out,” the luminance information and enhances, or “combs in,” the chrominance information. In Figure 3, a 1H (the duration of one TV line) delayed signal is subtracted from the present line, yielding “combed chrominance.” The two signals are suitably reduced in amplitude (x0.5) to recover the normal-amplitude chrominance signal. The combed chrominance is subtracted from the composite NTSC signal, yielding “combed luminance.” Because all luminance information has been removed from the chrominance signal, no cross-color effects occur. A similar, but less perceptible, “cross-luminance” effect is also eliminated. All this works if the assumptions made above hold. If, however, there are sharp transitions between lines in the same field, consecutive fields or consecutive frames, the comb filtering fails; the result, known as “hanging dots,” is caused by chrominance left in the luminance signal.
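The 1H comb arithmetic can be written out directly. In this Python sketch (the 910-samples-per-line figure assumes 4x-subcarrier sampling; the test lines are synthetic), two consecutive lines carry identical luminance and phase-inverted chrominance, so the comb separates them exactly:

```python
import numpy as np

def line_comb(cur, prev):
    """1H comb: subtract the delayed line and halve to comb out luminance."""
    c = 0.5 * (cur - prev)   # chroma adds (phase inverts line to line), luma cancels
    y = cur - c              # remove the combed chrominance from the present line
    return y, c

# Two consecutive lines: identical luminance, chrominance phase inverted
n = 910                                          # samples per line at 4x subcarrier
t = np.arange(n)
luma = 0.4 + 0.1 * np.sin(2 * np.pi * t / 91)    # slowly varying brightness
chroma = 0.25 * np.sin(2 * np.pi * t / 4)        # subcarrier at fs/4
prev_line = luma + chroma
cur_line = luma - chroma                         # phase inverted on the present line
y, c = line_comb(cur_line, prev_line)
# y equals the luminance; c equals the present line's chrominance (-chroma)
```

If `luma` differed between the two lines, the residue of that difference would appear in `c` and, after subtraction, corrupt `y` as well: the hanging-dots failure described above.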

A solution to the failure of the comb filter is to use an adaptive comb filter. The adaptive comb filter senses the failure of the comb filter and instantly switches to an LP/BP filter for the duration of the disturbance.
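A per-sample version of this switching logic might look like the following Python sketch. The threshold value, the 4-tap moving-average band-pass stand-in and the comb-failure detector are all illustrative assumptions, not a production design:

```python
import numpy as np

def bandpass_chroma(line):
    """Crude fsc-centered band-pass stand-in: composite minus a 4-tap mean.
    At 4x-subcarrier sampling, a 4-tap mean nulls the subcarrier."""
    lowpass = np.convolve(line, np.ones(4) / 4, mode="same")
    return line - lowpass

def adaptive_comb(cur, prev, threshold=0.05):
    """Use the 1H comb where adjacent lines correlate; fall back to the
    band-pass estimate for samples where the comb assumption fails."""
    combed_c = 0.5 * (cur - prev)
    # Comb-failure detector: low-frequency content of the line difference.
    # Near zero when both lines carry the same luminance; large at sharp
    # vertical transitions, where the comb would produce hanging dots.
    diff = np.abs(np.convolve(cur - prev, np.ones(4) / 4, mode="same"))
    c = np.where(diff > threshold, bandpass_chroma(cur), combed_c)
    return cur - c, c

# Demo: matching luminance on both lines, so the comb path is selected
n = 910
t = np.arange(n)
luma = 0.4 + 0.1 * np.sin(2 * np.pi * t / 91)
chroma = 0.25 * np.sin(2 * np.pi * t / 4)
y, c = adaptive_comb(luma - chroma, luma + chroma)
```

In a real chip the decision is smoothed over several samples to avoid visible switching artifacts; the hard per-sample switch here is only the simplest form of the idea.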

Chip manufacturers offer a selection of LP/BP, line-comb, field-comb and frame-comb filters, as well as combinations such as adaptive filtering oscillating between comb and LP/BP. Some of the available chips digitize the composite NTSC analog signal and carry out the filtering and the decoding in the digital domain. Field or frame combs are more efficient in operational environments where an operator can select the most satisfactory filter given the ever-varying characteristics of the image, but they are not advisable for a network distribution environment where the decoder operates unattended.

The best NTSC-decoding choice for an MPEG-2 feed is therefore an adaptive line-comb filter. A better choice still is to feed the MPEG-2 encoder with an SDI@270Mb/s signal and avoid the hassles of decoding NTSC altogether.

Michael Robin, former engineer with the Canadian Broadcasting Corporation's engineering headquarters, is an independent broadcast consultant located in Montreal, Canada. He is co-author of Digital Television Fundamentals, published by McGraw Hill.