In part one of this series, the basics of monitoring the parameters of the digital video contained within an SDI signal, as well as testing the components of SDI itself, were covered. All of those tests were carried out on the SDI signal while in service. This time, the topic is the actual path an SDI signal travels, which involves stress testing the SDI signal path when it is out of service.
SDI path testing and the cliff effect
In an analog plant, a color bar test signal is run through a piece of equipment and its output is observed on a waveform monitor. The signal's sync level, white level and chroma level could all be measured to assure that everything was working correctly. If the sync tips showed overshoots or rounding, or chroma was attenuated, these errors would indicate a problem somewhere in the video's path. In a digital plant, an SDI signal is passed through a chain of equipment (DAs, switchers, reclocking DAs, etc.), and when it arrives at the test equipment's input, it is checked for errors. If it passes, the signal is assumed to be good and no other checking is needed. But is this true? While the video and SDI parameters may pass all the tests, the actual path may be on the verge of failure. In other words, the operator needs to know how close the signal path is to that digital cliff before the signal falls off it. This is where stress testing comes in.
The only relevant SDI stress test is the pathological test pattern, also known as an SDI check field. This test signal stresses the SDI receivers by introducing long sequences of ones and zeros while adding a DC component, all of which stress the capabilities of any SDI receiver.
SDI data is transported as a series of pulses, and a clock is required to accurately read them. Because there is no separate clock signal, the clock is recovered from the edges of the pulses themselves. SDI also uses an NRZI (non-return to zero inverted) format, in which a pulse's edge, not the voltage level, indicates a one. (See Figure 1.) In an all-black (all zeros) picture, this would produce long sequences of zeros with no edges to recover the clock from; it would also introduce a large DC component, making it difficult for the receiver to recover the pulses (SDI is all AC coupled). To alleviate this, the bits representing video are scrambled to ensure there are no long sequences of zeros. The pathological test thus presents a situation that will never happen in a real-world SDI signal, but it demonstrates the ability of the receiver to handle these signal extremes. When applying the pathological test signal, look for EDH errors, because these are the first indication of lost bits from the equipment or path under test. (See Figure 2.)
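The role of edges in clock recovery can be illustrated with a minimal sketch of NRZI encoding (this models only the line code, not SDI's scrambler or electrical layer):

```python
def nrzi_encode(bits, level=0):
    """NRZI: a '1' toggles the line level, a '0' leaves it unchanged.
    The receiver recovers data from transitions (edges), not from
    absolute voltage levels."""
    out = []
    for b in bits:
        if b:
            level ^= 1  # transition -> logical one
        out.append(level)
    return out

# A long run of zeros produces no edges at all -- nothing for the
# receiver's clock-recovery circuit to lock onto:
flat = nrzi_encode([0] * 16)
assert len(set(flat)) == 1  # the line never changes state

# Alternating line states from a run of ones give maximum edge density:
busy = nrzi_encode([1] * 8)  # -> [1, 0, 1, 0, 1, 0, 1, 0]
```

Scrambling the payload keeps the transmitted bit stream edge-rich; the pathological pattern is crafted so the scrambler output still contains long flat runs.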
The most effective stress test that can be performed on an SDI system is to add a length of cable to the path and observe for errors. This is called cable length stress testing, and when combined with the pathological test signal, it becomes a very meaningful indication of how close, or far, your system is from that cliff. The added cable tests how much headroom the SDI path has by dropping the level of the signal and forcing the equalizer in the receiver to work harder. There are no firm guidelines for the amount of cable, but a good rule of thumb for SD-SDI is that if adding 150ft (50m) to the path causes no errors, the path is healthy and will not fall off that cliff. For HD-SDI, the added length is 60ft (20m). If errors do occur, then the cable path could be too long for the signal, or it could have been installed incorrectly, such as a bend that's too tight or other damage to the cable. It could also be that the SDI transmitter is failing.
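The headroom reasoning can be sketched as a back-of-the-envelope loss calculation. The attenuation figures below are illustrative placeholders, not values from any specific cable; a real check would use the datasheet loss for the installed cable at the signal's half-clock frequency:

```python
def added_loss_db(length_ft, loss_db_per_100ft):
    """Extra attenuation introduced by a stress-test cable of the given
    length, given the cable's quoted loss per 100 ft."""
    return length_ft / 100.0 * loss_db_per_100ft

# Illustrative (assumed) datasheet-style figures; HD loses more per foot
# because attenuation grows with frequency:
SD_LOSS_PER_100FT = 1.0   # dB near 135 MHz (half of 270 Mb/s), assumed
HD_LOSS_PER_100FT = 2.5   # dB near 750 MHz (half of 1.485 Gb/s), assumed

sd_margin_used = added_loss_db(150, SD_LOSS_PER_100FT)  # 150 ft SD test
hd_margin_used = added_loss_db(60, HD_LOSS_PER_100FT)   # 60 ft HD test
```

If the path still runs error-free with that extra attenuation, the receiver's equalizer has at least that much margin before the cliff.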
A good studio configuration would be to add 150ft (60ft for HD) of cable between two ports on an SDI patchbay, making a quick test of any part of the system easy to perform. (See Figure 3.)
Video server file testing
One of the best ways to assure the quality of digital video is to test it before it’s even played out. Testing and verifying video files has many advantages, the main one being the assurance that the file and all its parameters are what is expected before air. In today’s ever-changing world, many stations may be receiving video files rather than videotapes, and if the quality control system only consists of someone watching a playback of the file, then much can be missed that can affect the on-air playback.
The parameters to be monitored include: correct file format for the intended use; file parameters; metadata; gamut for the video format used; frame rate; frame size; compression type; audio levels; presence of test tones; duration compared to metadata; encode errors; and many others. The point is that as stations rely more on file-based video sources, it becomes even more important to be sure that what is stored will be usable when it comes time for on-air playback.
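A minimal sketch of this kind of parameter check follows. The field names and the expected profile are assumptions for illustration, not any real server's schema; in practice the parameters would come from a file-inspection tool rather than a hand-built dictionary:

```python
# Hypothetical playout profile -- the values a server expects (assumed).
EXPECTED = {
    "container": "MXF",
    "frame_rate": "29.97",
    "frame_size": "1920x1080",
    "codec": "MPEG-2",
}

def check_file(params):
    """Compare a file's reported parameters against the expected profile.
    Returns a list of (field, expected, found) mismatches; empty means OK."""
    return [(k, v, params.get(k)) for k, v in EXPECTED.items()
            if params.get(k) != v]

clip = {"container": "MXF", "frame_rate": "25",
        "frame_size": "1920x1080", "codec": "MPEG-2"}
for field, want, got in check_file(clip):
    print(f"{field}: expected {want}, found {got}")
```

A clip flagged this way (here, a 25fps file in a 29.97fps plant) can be rejected or transcoded long before air.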
In most server-based facilities, several different types of storage systems are in use, and while specifics may vary, there are three basic storage areas. First is offline or archival storage, where data tapes are used for long-term retention in either a manual or automated retrieval system. Second is nearline storage, consisting of large hard disk RAID (redundant array of inexpensive disks) arrays where newly arrived files, finished projects from the NLE (nonlinear editing) systems and files transferred from the archive are kept on a temporary basis. These files are then transferred either to the archive for long-term storage or to the online system. Third is online storage, which is part of the video servers themselves; only the files to be played on-air in the next few hours or days are kept here. For video network protection, a gateway computer can be used to quarantine any files brought in from the outside, where they can be checked for viruses or other corruption before being moved to nearline storage.
The best points to examine the files on the aforementioned system are at the point of ingest and after they are transferred to nearline storage. This would catch any problems before the files are moved to long-term storage in the archive and before they are moved to online storage for playback. In this way, each file would be tested at least twice: once before it moves to the archive system, and again as it is retrieved from the archive for on-air playback. (See Figure 4.)
Sources of problems
Files from your own NLE station may contain several parameters that are out of tolerance even though the video looked fine when played back in the edit bay. Effects software may expand the color and/or luminance gamut to process a desired effect and then return a file that is outside the legal color gamut. This is similar to looking at graphics on an RGB monitor and then seeing different colors on-air after the graphics have been encoded to NTSC, because of the different color spaces and gamuts of RGB and NTSC. Metadata may be incorrect, and the video servers may not recognize the file. As file-based systems become more complex, correct metadata will also become more important, because it informs the system that it has the correct file for playback.
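The RGB-versus-component gamut mismatch can be made concrete with a small sketch: a Y'CbCr sample whose components are each individually in range can still map to an RGB value outside 0..1. The conversion coefficients are the standard Rec. 601 ones; the tolerance parameter is an assumption for illustration:

```python
def ycbcr_to_rgb(y, cb, cr):
    """Y' in 0..1, Cb/Cr in -0.5..0.5, Rec. 601 coefficients."""
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return r, g, b

def is_rgb_legal(y, cb, cr, tol=0.0):
    """True if the sample maps inside the RGB unit cube (within tol)."""
    return all(-tol <= c <= 1.0 + tol for c in ycbcr_to_rgb(y, cb, cr))

# Bright luma with fully saturated Cr: red channel lands above 1.0,
# so the sample is out of RGB gamut even though Y', Cb, Cr are in range:
print(is_rgb_legal(0.9, 0.0, 0.5))   # False
print(is_rgb_legal(0.5, 0.0, 0.0))   # True (neutral gray)
```

A file-QC tool applies checks like this across every pixel, which is exactly what a human watching a playback monitor cannot do reliably.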
The same holds true for files transferred from outside sources; once again, just playing back the file will not provide the information needed to ensure there are no problems during on-air playback. As the number of channels that broadcasters feed increases, from over-the-air (OTA) DTV and mobile/handheld (M/H) DTV to Web streaming, all of these feeds require different files with different parameters. Unless tested, there is no way to be sure they are in the correct file format and legal until a problem shows up on-air.
Several companies manufacture systems that will automatically examine files on video servers and then report back on what they find. The type of information returned varies, but they all check the video and file parameters.
Next time, Transition to Digital will cover how to test for quality control after the signal has left the plant.