Digital video quality control

In today’s sophisticated digital broadcast chain, maintaining the quality of the received sound and picture has become more challenging than ever. As new receiving devices come into use, the variety and sophistication of the decoders expands, and with it the likelihood of decode errors, caused by the encoding process or even by the elementary files themselves, rises quickly. New testing methods are required to monitor and test the variety of signals, signal paths and files in use today.

Quality control

Testing your signal from the studio to the viewer is essential for quality control. As the number of programs transmitted increases, there are fewer personnel to monitor them, and without the proper monitoring equipment, operators cannot detect many of the problems that arise before they are seen on screen. Every receiving device, from a 50in HD flat screen to a cell phone-sized mobile device, must decode the signal you broadcast and do it without fault.

Testing within the studio is the first step in the process of detecting errors before they make it to your viewers’ screens. Starting with the digital video that is routed within your plant, we will follow the signal path as it makes its way to the viewer.

SDI (Serial Digital Interface) can be tested in a variety of ways that reveal the integrity and quality of the signal. Today’s digital waveform monitors and rasterizers analyze the SDI signal and display the results in ways older waveform monitors never could. In the analog world, the parameters to be monitored were frequency response, timing and amplitude; the SDI signal has many more critical parameters that affect the quality of the picture it carries.

Today’s digital video monitoring equipment can display an overwhelming amount of data about the selected signal; what follows is a short list of the most important parameters.

SDI testing & monitoring

One of the most basic tests is for EDH (Error Detection and Handling) errors. EDH is a simple CRC (cyclic redundancy check) calculated over the data in one frame and carried at the beginning of the following frame. If the checksum the receiver calculates and the one the EDH carries do not match, there is an error. The EDH standard also incorporates a flag that indicates whether the signal was received with an EDH error already detected upstream. This means the signal can be traced back up its path until the incoming-error flag disappears, which identifies the next piece of equipment downstream as the source of the EDH error.
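
For readers who like to see the mechanics, here is a minimal Python sketch of that comparison. A generic CRC-16 routine stands in for the checkword the EDH standard (SMPTE RP 165) actually defines, and the check_frame helper and its flag handling are hypothetical illustrations, not a real monitor’s behavior.

    # Sketch of an EDH-style check: recompute a checkword over the frame data,
    # compare it with the received checkword, and honor an "error detected
    # upstream" flag. CRC polynomial, seed and layout are illustrative only.

    def crc16_ccitt(data: bytes, poly: int = 0x1021, crc: int = 0xFFFF) -> int:
        """Bitwise CRC-16 used here as a stand-in for the EDH checkword."""
        for byte in data:
            crc ^= byte << 8
            for _ in range(8):
                crc = ((crc << 1) ^ poly) if (crc & 0x8000) else (crc << 1)
                crc &= 0xFFFF
        return crc

    def check_frame(frame_data: bytes, received_checkword: int, upstream_flag: bool) -> str:
        """Classify one frame roughly the way a monitor's error log might."""
        if crc16_ccitt(frame_data) != received_checkword:
            # If the upstream flag is set, earlier equipment already reported the fault.
            return "error detected upstream" if upstream_flag else "EDH error introduced here"
        return "ok"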

EDH errors can have many causes; most will not affect the received video, although some will. When an audio embedder is used, it adds data to the SDI signal, and if a new EDH is not generated, the original EDH being passed on will be in error and flagged at the next device, even though there is no fault in the signal. The same thing happens if other ancillary data (closed captioning, time code, etc.) is embedded within the SDI signal and no new EDH, or a faulty one, is generated. Of course, the EDH fault can also be caused by a transmission error that makes the receiver miss bits within the SDI signal, but finding out which requires further digging.
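
Continuing the sketch above (and reusing its crc16_ccitt helper), the snippet below uses made-up payload bytes to show why an embedder that does not regenerate EDH produces a false error at the next device.

    # Inserting ancillary data changes the bits the checkword covers, so unless
    # the embedder regenerates EDH, the next device reports a (false) error.
    frame = bytearray(b"...active video payload...")
    checkword = crc16_ccitt(bytes(frame))            # EDH generated upstream

    frame += b"[embedded audio / captions]"          # embedder adds ancillary data

    print(crc16_ccitt(bytes(frame)) == checkword)    # False -> flagged as an EDH error
    checkword = crc16_ccitt(bytes(frame))            # regenerating EDH clears the false alarm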

To determine whether it is a true bit error (the most serious case), the next step is to check the amount of jitter in the SDI signal. Although jitter might be thought of as a problem that only shows up on very long cable runs, it can also be caused by faults in the system, from poor installation to equipment failures. The specifications for jitter are strict: no more than 3.7ns for SD and 673ps for HD, one unit interval at the respective bit rate. Some equipment presents a numeric readout of the amount of jitter, while other units display an eye diagram that overlays the pulses within the SDI signal. A clean, sharp display indicates a lack of jitter, while a fuzzy or blurred display shows excessive jitter.
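
The unit-interval arithmetic behind those figures is easy to verify directly. The sketch below uses only the 270Mb/s and 1.485Gb/s serial bit rates and a hypothetical measured jitter value.

    # One unit interval (UI) is the duration of a single bit on the serial link.
    SD_BIT_RATE = 270e6       # SMPTE 259M SD-SDI, bits per second
    HD_BIT_RATE = 1.485e9     # SMPTE 292M HD-SDI, bits per second

    def unit_interval(bit_rate_hz: float) -> float:
        return 1.0 / bit_rate_hz

    def jitter_in_ui(measured_seconds: float, bit_rate_hz: float) -> float:
        """Express a measured jitter value as a fraction of one unit interval."""
        return measured_seconds / unit_interval(bit_rate_hz)

    print(f"SD unit interval: {unit_interval(SD_BIT_RATE) * 1e9:.2f} ns")    # ~3.70 ns
    print(f"HD unit interval: {unit_interval(HD_BIT_RATE) * 1e12:.0f} ps")   # ~673 ps
    print(f"500 ps of jitter on HD = {jitter_in_ui(500e-12, HD_BIT_RATE):.2f} UI")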

As stated before, ancillary data can be a source of EDH errors, but it can also contain errors of its own that will not affect the EDH. Today’s monitoring equipment provides the means for checking this ancillary data for checksum and parity errors, although it cannot tell you whether the data itself is correct.
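
A simplified illustration of the sort of checks involved is sketched below. The parity and checksum rules follow the general pattern of SMPTE 291M ancillary packets, but the word handling is simplified and the function names are invented for this example.

    # Simplified check of an ancillary (ANC) packet: verify even parity on the
    # 10-bit header words and recompute the 9-bit checksum. Consult SMPTE 291M
    # for the real packet format; this is only an approximation of the idea.

    def has_valid_parity(word: int) -> bool:
        """Header word: b8 = even parity over b0..b7, b9 = NOT b8."""
        b8 = (word >> 8) & 1
        b9 = (word >> 9) & 1
        parity = bin(word & 0xFF).count("1") & 1
        return b8 == parity and b9 == (b8 ^ 1)

    def checksum_ok(did: int, sdid: int, dc: int, udw: list[int], cs_word: int) -> bool:
        """9-bit sum of DID, SDID/DBN, DC and user data words, with b9 = NOT b8."""
        total = sum(w & 0x1FF for w in [did, sdid, dc, *udw]) & 0x1FF
        b9 = (((total >> 8) & 1) ^ 1) << 9
        return cs_word == (total | b9)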

Cable length was never monitored in the past, but with SDI it can prove to be a critical parameter and can lead to EDH errors. When an SDI signal is received, it must be equalized to ensure proper decoding, and the amount of equalization is related to cable length. Cable runs longer than the equalizer can compensate for, or excessive loss of signal amplitude, will cause errors. Alarms can be set in today’s monitoring equipment to notify the operator if a signal exceeds a preset cable-length limit.
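
Conceptually, the alarm is just a threshold on the receiver’s cable-length estimate. The sketch below uses placeholder loss-per-metre and limit values rather than figures for any real cable or monitor.

    # Estimate cable length from the loss the equalizer had to make up, then
    # alarm when the estimate exceeds an operator-set limit.
    def estimated_cable_length_m(measured_loss_db: float, loss_db_per_m: float) -> float:
        return measured_loss_db / loss_db_per_m

    def cable_length_alarm(length_m: float, limit_m: float) -> bool:
        """True when the estimated run exceeds the preset limit."""
        return length_m > limit_m

    # Example with placeholder values: 24 dB of loss on cable rated 0.1 dB/m, 200 m limit.
    length = estimated_cable_length_m(24.0, 0.1)   # 240 m
    print(cable_length_alarm(length, 200.0))       # True -> raise the alarm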

One of the more serious errors that can be checked involves the TRS (timing reference signal), the reference words used to ensure the EAV (end of active video) and SAV (start of active video) are in their correct places. If an error occurs in them, the associated video lines will be misplaced in time. TRS-ID (timing reference signal identification) is used only in composite digital video signals, where it identifies horizontal and vertical sync. Errors in the TRS-ID will cause problems in reconstructing the analog video signal timing.
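
In component SDI, the TRS is the four-word sequence 3FF 000 000 XYZ, with the XYZ word carrying the F, V and H flags (H distinguishes EAV from SAV). The sketch below parses that sequence; the protection bits and their error correction are omitted for brevity, and the function name is invented for this example.

    # Locate and interpret a component-SDI TRS in a stream of 10-bit words.
    def parse_trs(words: list[int], i: int):
        """Return (F, V, is_eav) if a TRS starts at index i, else None."""
        if i + 3 >= len(words) or words[i:i + 3] != [0x3FF, 0x000, 0x000]:
            return None
        xyz = words[i + 3]
        f = (xyz >> 8) & 1   # field bit
        v = (xyz >> 7) & 1   # vertical blanking bit
        h = (xyz >> 6) & 1   # 1 = EAV, 0 = SAV
        return f, v, h == 1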

Gamut errors occur when the video signal falls outside the limits of the standard SDI encoding parameters, that is, when it exceeds the color space limits. NTSC, SMPTE 259M (SD-SDI) and SMPTE 292M (HD-SDI) all have different color gamuts, or color spaces, and all of them leave room for the signal they carry to exceed that color space. To limit the video to the range of colors the final monitor can display, it must be kept within the legal limits of its particular color gamut (color space). In the 10 bits used to quantize the video components in digital video, there is headroom and footroom built in. When the luminance or chroma components exceed the legal levels, a gamut error is flagged, because out-of-range levels can cause loss of picture information and distortion in the displayed image.
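
At its simplest, a component gamut check is a range comparison against the legal 10-bit code values (64 to 940 for luma, 64 to 960 for chroma); the sketch below flags any sample outside them.

    # Flag samples whose 10-bit luma or chroma code values fall outside the
    # legal range; the extreme codes are reserved for TRS/ANC signalling.
    Y_MIN, Y_MAX = 64, 940      # 10-bit luma: black .. 100% white
    C_MIN, C_MAX = 64, 960      # 10-bit Cb/Cr, zero chroma = 512

    def gamut_errors(samples):
        """samples: iterable of (y, cb, cr) 10-bit code values; returns indices in error."""
        errors = []
        for n, (y, cb, cr) in enumerate(samples):
            if not (Y_MIN <= y <= Y_MAX) or not (C_MIN <= cb <= C_MAX) or not (C_MIN <= cr <= C_MAX):
                errors.append(n)
        return errors

    print(gamut_errors([(64, 512, 512), (960, 512, 512)]))  # [1] -> luma above legal white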

Another type is the composite gamut error, in which the component levels are analyzed to see whether they would exceed the (NTSC) color space once converted to composite analog video. In an all-digital plant, this check is probably not required, but in a mixed analog/digital plant, it can prove to be very important because digital-to-analog conversions are common.
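
A rough sketch of the idea follows: when the signal is encoded to composite, the subcarrier swings above and below the luma level, so the check compares luma plus and minus the chroma amplitude against limits. The IRE scaling and the 110/-20 IRE limits used here are placeholders, not values taken from any standard or product.

    # Conceptual composite gamut check: peak excursions are roughly
    # luma +/- chroma amplitude once the subcarrier is added.
    import math

    def composite_excursions_ire(y_ire: float, cb: float, cr: float, gain: float = 1.0):
        """Return (peak, trough) in IRE; cb/cr are chroma components scaled to IRE by 'gain'."""
        chroma_amp = gain * math.hypot(cb, cr)
        return y_ire + chroma_amp, y_ire - chroma_amp

    def composite_legal(y_ire: float, cb: float, cr: float, upper: float = 110.0, lower: float = -20.0) -> bool:
        peak, trough = composite_excursions_ire(y_ire, cb, cr)
        return peak <= upper and trough >= lower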

Most of these errors can be corrected with a digital video legalizer, which is basically a modern-day processing amplifier (proc amp). A legalizer can correct gamut errors, ensure that the TRS is correct and generate a new EDH. It can also pass or blank out any ancillary data, thus passing on a fully legalized serial digital video signal within set parameters.
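
The gamut-correction part of that job reduces, in its crudest form, to clamping each component back into its legal range, as in the sketch below. A real legalizer does this far more gracefully (soft clipping and filtering rather than hard limits) and also handles the EDH and ancillary-data chores described above.

    # Crude illustration of gamut correction: hard-clamp 10-bit components
    # back into their legal ranges.
    def legalize(y: int, cb: int, cr: int):
        def clamp(v, lo, hi):
            return max(lo, min(hi, v))
        return clamp(y, 64, 940), clamp(cb, 64, 960), clamp(cr, 64, 960)

    print(legalize(980, 30, 512))   # -> (940, 64, 512)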

Next time

The next “Transition to Digital” will cover the issues of file testing, stress testing and ASI versus SMPTE 310 and 8-VSB.
