Business Models: Improving video quality

As new broadcast services battle for the same fixed transponder bandwidth, improving quality at low bit rates has developed into a fertile area for technological advances – and has become a primary concern for service providers. Lowering the bit rate of video services makes space available for new, advanced services. Finding ways to do so without sacrificing video quality is the new “holy grail” for compression scientists. There are now a number of techniques that pre-processing and compression software can apply directly to each individual program to achieve the best quality within the available bandwidth. These content-based techniques analyze various aspects of the content’s creation to efficiently apply compression to different scenes.

Pre-processing techniques

The following pre-processing steps can be applied sequentially to condition the video and optimize compression.

Film mode.

To translate first-generation film material into video, the telecine process converts the 24 film frames into 48 fields, then duplicates 12 of the fields to come up to the required 60 video fields per second. The film-mode process, often referred to as “3:2 pull-down reversal” or “inverse telecine,” automatically detects and eliminates any duplicate fields. This method can improve compression efficiency by up to 20 percent compared with video-originated material, and is one of the most widely applied pre-processing techniques today. Detecting “bad edits” that may have been created during the post-production process reveals where the 3:2 sequence has been interrupted. This allows the pre-processing activity to drop out of film mode when necessary and re-enter it when a consistent 3:2 sequence is again detected.
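A minimal sketch of the cadence-removal idea (in Python, assuming fields arrive as flat lists of luma samples; `fields_equal`, `inverse_telecine`, and the `tol` tolerance are illustrative names, not any vendor's API):

```python
def fields_equal(a, b, tol=0):
    """Treat two fields (flat lists of luma samples) as duplicates if every
    sample differs by at most `tol` (the tolerance absorbs source noise)."""
    return all(abs(x - y) <= tol for x, y in zip(a, b))

def inverse_telecine(fields, tol=0):
    """Drop the repeated fields introduced by 3:2 pull-down.

    In a clean 3:2 cadence the duplicated field matches the same-parity
    field two positions earlier, so 60 fields/s collapse back toward the
    48 fields (24 frames) of the film source.
    """
    kept = []
    for i, f in enumerate(fields):
        if i >= 2 and fields_equal(f, fields[i - 2], tol):
            continue  # cadence repeat detected -- nothing new to encode
        kept.append(f)
    return kept
```

A static scene also yields matching fields, so a production detector would confirm the repeating five-field pattern before locking into film mode – and drop out of it at a bad edit, as described above.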

Scene-change detection.

Carefully allocating the available bit rate to either picture detail or motion improves the quality of MPEG-2 compression. A scene change in a program results in drastic content change between adjacent frames, mimicking the effect of significant motion. As a result, the compression engines reduce the bit rate assigned to picture detail, which reduces the instantaneous picture quality and often consumes more of the available bandwidth than needed.

The scene-change-detection process eliminates these problems by storing several video frames in memory to compare to adjacent frames. The process can then detect a scene change in time to gracefully close one MPEG-2 group of pictures and build an anchor frame for the next sequence. This process allows the software to allocate more bits to the new anchor frame, preserving the quality of the next picture sequence.
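The frame-comparison step can be sketched as follows; the flat-list frame representation and the `threshold` value are assumptions for illustration:

```python
def frame_diff(a, b):
    """Mean absolute luma difference between two frames (flat lists)."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def detect_scene_changes(frames, threshold=40.0):
    """Return the indices where a new scene appears to start.

    A frame-to-frame difference far beyond normal motion marks a cut; an
    encoder holding a few frames in memory can use the flag to close the
    current group of pictures and code the flagged frame as a fresh
    anchor (I) frame.
    """
    return [i for i in range(1, len(frames))
            if frame_diff(frames[i - 1], frames[i]) > threshold]
```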

Fade and flash detection.

Similar to scene changes, fades and flashes can also be detected and processed to avoid unnecessary artifacts. Fades and flashes change all luminance values over a number of frames, mimicking the effect of significant motion. By using a multi-frame memory, the software can examine picture sequences to detect fades or flashes and allocate more of the available bit rate to picture quality.
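One way to sketch the detection logic, again assuming frames as flat lists of luma samples, with an illustrative `min_step` ramp threshold:

```python
def mean_luma(frame):
    """Average luma of a frame (flat list of samples)."""
    return sum(frame) / len(frame)

def detect_fade_starts(frames, window=5, min_step=2.0):
    """Return indices where a fade appears to begin: average luminance
    moving steadily in one direction for `window` consecutive frames.
    (A flash would instead show as a one- or two-frame luminance spike.)"""
    lumas = [mean_luma(f) for f in frames]
    starts = []
    for i in range(len(lumas) - window + 1):
        steps = [lumas[j + 1] - lumas[j] for j in range(i, i + window - 1)]
        if all(s >= min_step for s in steps) or all(s <= -min_step for s in steps):
            starts.append(i)
    return starts
```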

Noise reduction through filtering.

Filtering processes, either temporal (between frames) or spatial (within a frame), can reduce noise that the encoder might mistake for extensive picture detail or motion. Noise removal significantly improves compression efficiency, allowing the encoder to focus on the meaningful portions of the content.

Motion-compensated temporal filtering is a sophisticated process for removing source video noise between adjacent frames in moving pictures. By tracing the motion of each pixel between frames, the process appropriately filters according to the related motion vector. This process removes random and impulse noise in moving objects, preserving the detail of the object and further improving compression efficiency.
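A toy illustration of the idea, assuming a per-pixel motion field supplied as a dictionary (real systems derive block-based vectors from motion estimation):

```python
def mc_temporal_filter(curr, prev, motion, strength=0.5):
    """Blend each pixel of `curr` with the pixel it came from in `prev`,
    following its motion vector, so noise averages out along an object's
    trajectory instead of smearing across it.

    `curr`/`prev` are 2-D lists of luma values; `motion[(y, x)]` is the
    (dy, dx) displacement of pixel (y, x) relative to the previous frame,
    i.e. the pixel previously sat at (y - dy, x - dx).
    """
    h, w = len(curr), len(curr[0])
    out = [row[:] for row in curr]
    for y in range(h):
        for x in range(w):
            dy, dx = motion.get((y, x), (0, 0))
            py, px = y - dy, x - dx
            if 0 <= py < h and 0 <= px < w:  # skip pixels entering the frame
                out[y][x] = (1 - strength) * curr[y][x] + strength * prev[py][px]
    return out
```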

Adaptive spatial filtering removes source video noise between adjacent pixels (picture elements) within a frame. This process works on a pixel-by-pixel basis to remove random and impulse noise, filtering each pixel according to the values of its surrounding pixels and carefully preserving the edges of any perceived object. Typically, the service provider can adjust the strength of this filter to match the desired encoding quality, balancing picture sharpness with perceived digital artifacts.
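A simplified sigma-style filter illustrates the principle; the 3x3 neighbourhood, `sigma` threshold, and `strength` blend are assumed parameters, not a specific product's controls:

```python
def adaptive_spatial_filter(frame, sigma=20, strength=1.0):
    """Edge-preserving noise filter: replace each pixel with the average of
    those 3x3 neighbours whose values lie within `sigma` of the centre
    pixel, so flat areas are smoothed while edges (large value jumps) are
    left alone.  `strength` blends the result with the original, mirroring
    the operator-adjustable filter strength described above."""
    h, w = len(frame), len(frame[0])
    out = [row[:] for row in frame]
    for y in range(h):
        for x in range(w):
            c = frame[y][x]
            near = [frame[j][i]
                    for j in range(max(0, y - 1), min(h, y + 2))
                    for i in range(max(0, x - 1), min(w, x + 2))
                    if abs(frame[j][i] - c) <= sigma]
            out[y][x] = (1 - strength) * c + strength * sum(near) / len(near)
    return out
```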

Compression techniques

After pre-processing and noise reduction, the video source material is ready to move through the compression process. Compression techniques such as motion estimation, statistical multiplexing and dual-pass compression produce the final MPEG-2 stream.

Motion estimation.

Motion estimation is the most critical and compute-intensive step in the video compression process. Up to 80 percent of the redundancy in moving pictures is a result of the temporal correlation between the current picture frame and elements of past or future encoded frames. The accuracy of the motion-estimation process affects both the final picture quality and the efficiency of the compression process. Any improvement made to this process is of great value. Motion-estimation techniques differ chiefly in their “search range” and search strategy. (The search range is the number of pixels surrounding the target pixel that the software analyzes to determine the exact motion path of the target pixel.)
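An exhaustive block-matching search – the simplest possible strategy – can be sketched like this; the block size, search range, and sum-of-absolute-differences cost are illustrative choices:

```python
def sad(frame, ref, by, bx, ry, rx, bs):
    """Sum of absolute differences between the bs x bs block of `frame` at
    (by, bx) and a candidate block of `ref` at (ry, rx)."""
    return sum(abs(frame[by + j][bx + i] - ref[ry + j][rx + i])
               for j in range(bs) for i in range(bs))

def estimate_motion(frame, ref, by, bx, bs=2, search=2):
    """Slide the block over a +/- `search`-pixel window in the reference
    frame and return the (dy, dx) displacement with the lowest SAD.
    Widening `search` finds faster motion at a steep computational cost --
    hence the interest in smarter search strategies."""
    h, w = len(ref), len(ref[0])
    best = (float("inf"), (0, 0))
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ry, rx = by + dy, bx + dx
            if 0 <= ry <= h - bs and 0 <= rx <= w - bs:
                best = min(best, (sad(frame, ref, by, bx, ry, rx, bs), (dy, dx)))
    return best[1]
```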

Statistical multiplexing.

Statistical multiplexing is based on the principle of a fair distribution of quality across all the video streams within an MPEG-2 multiplex. An examination of the statistics of full-motion video reveals that it spends most of its time in less complex scenes that respond well to digital compression techniques, and only briefly experiences scene changes or other forms of rapid movement. This provides an incentive for pooling the available bandwidth and treating multiple programs as a group. Statistical multiplexing can dynamically allocate the group’s bandwidth where it is needed most, improving the quality of those pictures that are experiencing rapid movement or contain significant picture detail.

In a statistical multiplex, each service operates at a variable bit rate. Individual services make demands on the group’s bandwidth based on the complexity of the video. The encoder’s multiplexer receives these bandwidth requests from each member of the group and determines the maximum picture quality that the available group bandwidth can sustain. It then allocates a bit rate to each service to achieve that level of quality. The dynamic allocation of variable bit rates ensures that bandwidth is not wasted on less complex scenes, allowing for higher-quality complex scenes. Statistical multiplexing enhances the quality of the entire multiplex and enables service providers to achieve higher levels of multichannel service.
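A proportional-allocation sketch of the idea (the complexity measure, pool size, and per-service floor are all assumptions for illustration):

```python
def statmux_allocate(complexities, pool_kbps, floor_kbps=500):
    """Split a fixed transponder pool across services in proportion to each
    service's instantaneous complexity estimate, with a floor so no stream
    is starved.  Complex scenes bid more and receive more; easy scenes
    release bits back to the group.  Assumes the pool exceeds the floors."""
    total = sum(complexities)
    spare = pool_kbps - floor_kbps * len(complexities)
    return [floor_kbps + spare * c / total for c in complexities]
```

In practice the multiplexer re-runs an allocation like this continuously as each service's complexity changes frame by frame.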

Dual-pass encoding.

Repeating the process via dual-pass encoding further improves statistical multiplexing. During the first pass, the encoding system determines the precise bandwidth requirements of each stream, rather than just the instantaneous demands of a frame from each program. It then uses that information to accurately allocate the bandwidth on the second pass.
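The two passes can be sketched as follows, with per-frame complexity numbers standing in for the measurements the first pass would gather:

```python
def dual_pass_allocate(frame_complexities, total_bits):
    """Two-pass rate-control sketch: the first pass measures every frame's
    complexity across the whole stream, so the second pass can budget bits
    in proportion to actual need rather than reacting frame by frame."""
    # Pass 1: gather statistics over the entire sequence.
    total_complexity = sum(frame_complexities)
    # Pass 2: allocate the overall budget proportionally.
    return [total_bits * c / total_complexity for c in frame_complexities]
```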

Combining all of the compression techniques mentioned above in a tight feedback loop further enhances the encoding process. For instance, if a single program is consuming an inordinate amount of bandwidth, the statistical multiplexer can instantaneously modify the filtering values for that stream – or for a number of other, less critical streams – to optimize the performance of the entire multiplex.

Only a year or two ago, it appeared that there was little opportunity to squeeze more bits out of the multichannel compression process. In that short period of time, however, scientists and engineers have made major advances in compression efficiency to meet the increasing demands on transponder usage and the competitive requirements of service providers. Now, a new generation of cost-effective, high-speed media processors allows the real-time execution of an increasing array of compression and pre-processing techniques – and marks yet another conquest in compression scientists’ continuing pursuit of their grail.

Martin J. Stein is senior marketing director and Mark Schaffer is senior product manager of Motorola Broadband Communications Sector, Satellite & Broadcast Network Systems Division in San Diego.