Format scan and conversion

The original purpose of the standards converter was to address the incompatibilities between the world's multiple television standards. Dealing with standards featuring different frame and field rates, with different numbers of lines and fields in each frame, has long been an issue facing the video industry. Today, that problem is compounded by the need to convert material from interlaced to progressive with a mix of SD and HD formats, each with its own color space.

In addition to addressing these image-related factors, conversion systems now also must account for the embedded audio carried along with virtually all TV signals. Before audio was embedded in the digital signal, it remained the domain of audio engineers. However, integrating audio with digital video signals has made it hard to separate the two without the danger of introducing lip-sync errors. Consequently, format conversion must be considered both an audio and video issue.

Standards conversion workflows

An investment in a high-quality format and standards conversion solution is valuable whether implemented in a facility that performs live event transmission or mastering and duplication. Within the live transmission chain, there is no time for manual intervention, and the standards converter must be able to provide a high-quality picture, regardless of the circumstances. Any errors introduced during conversion will find their way into the end product.

The quality requirements of a system used in post-production and mastering and duplication facilities are high as well. Clients often want to send the finished program to multiple markets, and this requires the facility to convert material to the appropriate standards, corresponding frame and line rate, scan (interlaced or progressive), and so on. In this type of workflow, the quality of the standards converter affects not only the quality of the end product, but also the cost-effectiveness of the project.

Saving money with a less-expensive converter that offers many operational modes may be appealing. However, the time and labor spent fixing the end product can quickly erode any cost savings. Manual intervention and quality assurance are expensive, so conversion must be performed right the first time; ideally, the converter has only one operational mode, i.e., “On.” Post and duplication facilities cannot afford rejection of the converted product, which could damage both the company's bottom line and its reputation. A robust standards converter that provides highly automated operation and sophisticated image and audio processing is usually the better choice.

Image processing technologies

The most important thing to know in purchasing a standards or format converter is the type of underlying image processing technologies being used. There are several approaches to deinterlacing and frame rate conversion. Let's look at them along with their respective strengths and weaknesses.

Early conversion solutions relied on a linear filter, combined in some cases with motion-adaptive technology that changes how the output image is constructed based on the presence of motion. The most basic linear deinterlacing solution simply discards all of the odd or all of the even fields, effectively halving the vertical resolution of the image.

Adding a degree of sophistication to this process, motion-adaptive techniques keep both fields in those parts of the picture where there is no motion, but fall back to half-resolution spatial interpolation in parts of the picture with movement. Like purely linear techniques, this process leads to image softness, blurring and apparent motion judder.
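The weave-or-bob decision at the heart of motion-adaptive deinterlacing can be sketched in a few lines of Python with NumPy. This is a minimal illustration under assumed conventions (the function name, the per-pixel difference measure and the threshold are all hypothetical), not any vendor's algorithm:

```python
import numpy as np

def motion_adaptive_deinterlace(prev_frame, cur_field, field_parity, threshold=10.0):
    """Fill the missing lines of an interlaced field.

    Static areas: 'weave' the line from the previous frame (full resolution).
    Moving areas: 'bob' by averaging the lines above and below (half resolution).
    cur_field is a frame-height array in which only every other row is valid.
    """
    h, w = cur_field.shape
    out = cur_field.astype(np.float64).copy()
    # rows missing from this field (the opposite parity)
    for y in range(1 - field_parity, h, 2):
        above = out[y - 1] if y > 0 else out[y + 1]
        below = out[y + 1] if y + 1 < h else out[y - 1]
        bob = 0.5 * (above + below)                 # spatial candidate (half res)
        weave = prev_frame[y].astype(np.float64)    # temporal candidate (full res)
        motion = np.abs(weave - bob)                # crude per-pixel motion measure
        out[y] = np.where(motion < threshold, weave, bob)
    return out
```

Where the difference between the temporal and spatial candidates is small, the full-resolution weave survives; where it is large, the half-resolution average takes over, which is exactly the source of the softness described above.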

Motion estimation provides an alternative method for converting interlaced video to progressive. There are three primary technologies used to estimate movement: block matching, gradient techniques and phase correlation.

Block matching and gradient techniques are the easiest to implement in hardware- and software-based solutions. However, both share one detrimental side effect: their performance degrades in the presence of noise and changing luminance levels. Noise is present in all signals, of course, and luminance changes are nearly as common in television programming as movement itself.

One example of luminance level changes is a football kicked in late afternoon through both sun and shadows across the field. Luminance levels change significantly as the ball transitions from sunlight to shade. Other examples include models walking on a catwalk as flash photos are taken and actors walking the red carpet as paparazzi take pictures. These types of scenes can dramatically upset both block matching and gradient techniques.

Phase-correlated motion estimation

An alternative to block matching and gradient techniques is phase correlation, which is based on the principle that a displacement in the time domain corresponds to a phase shift in the frequency domain. Because it operates on phase rather than amplitude, phase correlation offers good immunity to changes in both luminance and noise.

Implementing phase-correlated motion estimation requires sophisticated algorithms and a high degree of processing power. A Fourier transform breaks the video down into a series of sine waves, providing the phase of each, so motion can be measured from the phase information of successive images. Spectral analysis of two successive fields and subtraction of the individual phase components yields phase differences that, when subjected to an inverse Fourier transform, produce a correlation surface with peaks corresponding to the motion between successive images.
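The transform-subtract-invert sequence described above can be sketched with NumPy. This is an illustrative single-stage, whole-image version (the function name is an assumption; a real converter works on local regions and in multiple stages to reach sub-pixel accuracy):

```python
import numpy as np

def phase_correlate(f1, f2):
    """Estimate the (dy, dx) shift between two images via phase correlation."""
    F1 = np.fft.fft2(f1)
    F2 = np.fft.fft2(f2)
    cross = F1 * np.conj(F2)            # phase difference per frequency bin
    cross /= np.abs(cross) + 1e-12      # normalise: keep phase, discard amplitude
    surface = np.fft.ifft2(cross).real  # correlation surface
    peak = np.unravel_index(np.argmax(surface), surface.shape)
    # wrap peaks in the upper half of the surface back to negative displacements
    dy = peak[0] if peak[0] <= f1.shape[0] // 2 else peak[0] - f1.shape[0]
    dx = peak[1] if peak[1] <= f1.shape[1] // 2 else peak[1] - f1.shape[1]
    return dy, dx, surface[peak]        # peak height hints at reliability
```

Because the cross-power spectrum is normalized to unit magnitude, a global gain or offset change between the two fields barely moves the peak, which is the luminance immunity described above.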

By using multiple stages, it's possible to derive a motion vector to sub-pixel resolution. This level of accuracy provides for a system that is most able to recreate the movement and changes in a given scene.

The same technique can be used to perform accurate frame-rate standards conversion, as both processes must be able to recreate the position of images at any point in time. When converting video from 50Hz to 60Hz, motion compensation is used to replace every 50 input frames with 60 output frames.

A common misconception regarding standards conversion is that 10 frames are either added or subtracted to achieve the correct frame rate. Not so. Every frame in the converted output is synthesized from scratch, so to speak. Because each frame-rate standard samples different points in time within the same one-second interval, phase correlation technology measures the motion between two inputs that straddle the desired output field and then scales the motion vectors accordingly. As a result, an entirely new set of frames is generated accurately.
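The straddle-and-scale arithmetic can be made concrete. In a 50Hz-to-60Hz conversion, each output frame lands at a temporal phase between two input frames, and that phase is the factor by which the measured motion vectors are scaled (the function and variable names below are illustrative, not any product's API):

```python
from fractions import Fraction

def output_sample_positions(in_rate=50, out_rate=60, n_out=6):
    """For each output frame, find the two input frames that straddle it
    and the fractional phase used to scale the measured motion vectors."""
    positions = []
    for n in range(n_out):
        t = Fraction(n, out_rate) * in_rate  # output time in input-frame units
        prev = int(t)                        # earlier straddling input frame
        phase = t - prev                     # 0..1: motion-vector scaling factor
        positions.append((n, prev, prev + 1, float(phase)))
    return positions
```

Note that only one output frame in every six coincides with an input frame; the other five must be synthesized at fractional phases, which is why no simple add-or-drop scheme can work.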

Processing challenges in motion estimation

In a motion-compensated standards converter, the interfield interpolation axis is not aligned with the time axis in the presence of motion. (See Figure 1 on page 22.) In practice, the interpolation axis is skewed by using the motion vectors to shift parts of the source fields. The displacement is measured in pixels, and the value is divided into an integer part (the nearest whole number of pixels) and a fractional part (the subpixel shift). Pixels from input fields are stored in RAM, which the interpolator addresses to obtain input for filtering.

The integer part of the shift is simply added to the RAM address so that the pixels from the input field appear to have been shifted. The vertical shift changes the row address, and the horizontal shift changes the column address. Address mapping, commonly used in DVEs, moves the image with pixel accuracy. The subpixel shift is then used to control the phase of the interpolator. Combining address mapping and interpolation in this way allows image areas to be shifted by large distances with exceptional accuracy.
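A software sketch of this integer/fractional split, with NumPy's np.roll standing in for the remapped RAM addresses and a two-tap linear interpolator standing in for the subpixel filter (real designs use longer polyphase filters; the function name is hypothetical):

```python
import numpy as np

def motion_shift(field, vector):
    """Shift an image along a motion vector with sub-pixel accuracy.

    Integer part of each component: pure address shift (np.roll emulates
    remapped RAM addresses). Fractional remainder: sets the phase of a
    simple two-tap linear interpolator.
    """
    dy, dx = vector
    iy, fy = int(np.floor(dy)), dy - np.floor(dy)
    ix, fx = int(np.floor(dx)), dx - np.floor(dx)
    shifted = np.roll(field, (iy, ix), axis=(0, 1))  # whole-pixel address shift
    nb_x = np.roll(shifted, 1, axis=1)   # neighbour one pixel further along x
    nb_y = np.roll(shifted, 1, axis=0)   # neighbour one pixel further along y
    nb_xy = np.roll(nb_y, 1, axis=1)
    # bilinear blend: interpolator phase set by the fractional remainders
    return ((1 - fy) * (1 - fx) * shifted + (1 - fy) * fx * nb_x
            + fy * (1 - fx) * nb_y + fy * fx * nb_xy)
```

With a purely integer vector the result is a plain address remap; any fractional remainder blends the two nearest whole-pixel candidates, mirroring the two-stage hardware structure described above.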

What makes the quality of motion estimation so critical is the behavior of the output when the converter makes a mistake in creating an intermediate field. Say a camera pan causes a person to move from left to right; the motion compensation system must locate that person at a point in time never actually captured by the camera. If the technology moves the head and body differently, for example, the human eye and brain will register at some level that something's not right. Thus, a minute error in motion estimation, even in a single output picture, is enough to create a significant disturbance for the viewer.

It is widely understood that fast and complex motion can be a challenge for conversion, but in practice, the speed of that movement can help mask the effects of poorly performed motion compensation. (See Figure 2 on page 26.)

The motion estimator's ability to manage still pictures is also important. While converting this type of material seems an easier task, some technologies turn stills into pictures with parts in constant, spurious motion. Even seemingly benign images, such as a still shot of a building exterior, can go to pieces, with the windows appearing to move.

Moving roller credits are a particular challenge. It takes sophisticated motion estimation to account for the movement of small objects within a picture. Phase-correlated motion compensation can enable high-quality de-interlacing as well as precise, clean frame-rate conversion even for complex graphics, fast-motion sports, film and variable speed camera outputs.

The mark of exceptional motion compensation is not just higher accuracy in creating pictures, but also the way in which it makes mistakes. A mathematical byproduct of phase correlation is a reliability indicator that can tell the system when it is working effectively and when it must tread more carefully. This information provides a graceful fallback mechanism for concealing the errors the system will inevitably make.
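One plausible form for such an indicator, sketched here as an assumption rather than any vendor's method, is the sharpness of the dominant peak on the correlation surface, used to cross-fade toward a safer non-motion-compensated result when confidence drops:

```python
import numpy as np

def reliability(surface):
    """Confidence from a phase-correlation surface: a single sharp peak means
    trustworthy vectors; a flat or multi-peaked surface means tread carefully.
    Measured here as the ratio of the highest peak to the runner-up."""
    flat = np.sort(surface.ravel())[::-1]
    return float(flat[0] / (flat[1] + 1e-12))

def blend_fallback(mc_frame, linear_frame, confidence, lo=1.5, hi=3.0):
    """Graceful fallback: cross-fade from the motion-compensated result toward
    a safe linear interpolation as confidence drops below the hi threshold."""
    w = np.clip((confidence - lo) / (hi - lo), 0.0, 1.0)
    return w * mc_frame + (1 - w) * linear_frame
```

The thresholds here are arbitrary; the point is that the fallback is gradual, so a momentary loss of vector confidence produces softness rather than a visible tearing artifact.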

Added image and audio processing

When motion-compensated standards conversion became available, phase correlation was so effective that often the only residual artifact for the viewer, or clue for the downstream broadcaster, was that cuts were no longer clean. Since then, technologies have been developed to ensure clean video transitions between scenes and programs. Some solutions allow operators to choose the field dominance of the converter output; prior to this development, converters scrambled field dominance, whether or not it was correct on the source.

The problem with fluctuating field dominance is that it becomes difficult to edit programs. It is also an issue with international program exchange. Post-conversion master editing becomes tricky when the field dominance isn't consistent.

Because most content is compressed somewhere within the delivery chain, whether on DVD or via a broadcast medium, fluctuating field dominance is a problem here as well. It makes it difficult for a downstream compression system to insert a single clean I-frame. The efficiency of both workflows, and the quality of the end product, can be compromised when standards conversion doesn't provide clean transitions.

Consider pixel accuracy

Although the number of pixels involved in SD and HD conversion differs, the problems remain the same. If material is shot with an interlaced camera, so that successive fields sample different instants in time, the only transparent way to convert between formats is to use motion estimation to measure the movement between fields and compensate for its effects.

The first step is to use motion estimation to nullify the effects of any movement and to make sure that pixels within the input frame are aligned in time. Within this deinterlacing process, it is the motion estimator's job to deliver motion information that can be used to near-perfectly compensate for movement.

Another issue accompanying modern standards conversion is aspect ratio. While most of Europe watches television in a widescreen SD format, the transition to a 16:9 aspect ratio is still relatively new to most U.S. broadcasters. This means that production companies and broadcasters must effectively deal with at least two aspect ratios. Fortunately, most standards converters made today include built-in aspect ratio conversion with various preset and user-definable modes.
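The geometry behind such presets reduces to simple arithmetic. Here is a hedged sketch of one common preset, fitting the source inside the destination raster and padding with pillarbox or letterbox bars (square pixels are assumed for simplicity; the function name is illustrative):

```python
from fractions import Fraction

def fit_aspect(src_w, src_h, dst_w, dst_h):
    """Scale the source to the largest size that fits inside the destination
    raster, preserving its aspect ratio. Returns the active image size and the
    symmetric bar widths (pillarbox in x, letterbox in y) needed to pad it."""
    scale = min(Fraction(dst_w, src_w), Fraction(dst_h, src_h))
    out_w = int(src_w * scale)
    out_h = int(src_h * scale)
    return out_w, out_h, (dst_w - out_w) // 2, (dst_h - out_h) // 2
```

Other presets crop instead of padding, or apply a non-linear horizontal stretch, but all are variations on the same scale-and-offset calculation.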

The audio side

Dealing effectively with audio also has become the standards converter's responsibility. It is imperative that standards converters accommodate 16 channels of audio, or eight AES pairs, and be able to resample audio and perform sample rate conversion from the input rate to the output rate. The Dolby audio standard is used extensively throughout the broadcast industry for multichannel surround. This means handling up to 16 channels filled with Dolby E, discrete 5.1 audio, an additional stereo mix and perhaps a second language or soundtrack information.

Dolby E brings its own special requirements to conversion: because it is locked to the incoming video frame rate, it must be decoded, recoded and relocked to ensure that the audio can be re-edited downstream without corrupting the Dolby E signal. Any standards converter you select should be able to compensate for video delay introduced during conversion, as well as peripheral delays, and should account for lip-sync errors on the incoming program on a per-channel basis.

Continued product development

Investing in standards conversion technology is an important decision in the life of a broadcast, post or duplication facility. Advanced solutions today include comprehensive film tools for 23p, 24p, 25p, 30p and sF formats, along with 3Gb/s capabilities to handle 1080p. Standards conversion platforms are now capable of operating in either the hardware or the software domain.

Realize that the issues of standards conversion do not disappear in a file-based workflow. Content must remain at the correct frame rate through the entire workflow, right to the end of the consumption chain. With the shift to software-based platforms and increasing work in HD, fast and effective motion compensation will depend on manufacturers' ability to maintain accurate motion estimation. This development will likely leverage hardware acceleration to turbocharge software-based solutions, enabling efficient, file-based interchange of content.

David Tasker is head of technical sales for Snell & Wilcox.