As the television broadcasting industry makes its transition from analog to digital technology, the traditional means of ensuring broadcast quality and reliability are no longer adequate. Fundamental differences between analog and digital architectures introduce system behaviors that demand a new monitoring approach, with new procedures and tools.
Just as with analog broadcasting, digital broadcasters need to monitor their signals to anticipate problems.
Analog television signals represent video and audio as a continuous range of voltage values that can assume an infinite number of states. Imperfections in the distribution or transmission channels can produce noticeable errors in the picture or sound, but there is some tolerance for these errors. Although quality steadily declines with increased degradation, the content remains intelligible even in the presence of substantial errors.
The virtue of analog technology is that experienced broadcast engineers can recognize the onset of channel impairment simply by viewing the television broadcast. They can diagnose the type and degree of impairment and take corrective action before quality degrades to an unacceptable level. For more precision and repeatability, engineers can turn to analog monitoring instruments such as waveform monitors and vectorscopes. These tools look at baseband waveforms and quantify the amplitude and phase characteristics of the signal components.
Digital television signals present a new and different challenge to broadcast producers and engineers who must ensure broadcast quality. Digital video and audio information is delivered as a finite set of discrete values. Digital content is relatively impervious to minor imperfections in the channel; picture and sound quality remain acceptable until the impairment level reaches a certain threshold. Beyond this point, the “cliff effect” occurs: signal quality drops off drastically, or the picture disappears altogether from the viewer’s screen.
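The cliff effect follows largely from forward error correction: the decoder corrects every error up to its capacity, then fails abruptly. As a hedged sketch (not a model of any specific receiver), consider a DVB-style Reed-Solomon(204,188) outer code that corrects up to eight byte errors per packet. The probability of an uncorrectable packet is a binomial tail, and it jumps from negligible to near-certain over a small change in raw channel quality:

```python
from math import comb

def uncorrectable_packet_prob(byte_error_rate: float,
                              packet_bytes: int = 204,
                              correctable: int = 8) -> float:
    """Probability that a packet contains more byte errors than the
    RS decoder can correct (binomial tail over the packet)."""
    p_ok = sum(comb(packet_bytes, k)
               * byte_error_rate ** k
               * (1 - byte_error_rate) ** (packet_bytes - k)
               for k in range(correctable + 1))
    return 1.0 - p_ok

# The raw channel degrades gradually, but packet loss hits a cliff:
for ber in (0.005, 0.01, 0.02, 0.04, 0.08):
    print(f"raw byte error rate {ber:.3f} -> "
          f"uncorrectable packets {uncorrectable_packet_prob(ber):.6f}")
```

A sixteenfold increase in raw errors here takes the viewer from an essentially perfect picture to near-total packet loss, which is why watching the picture gives no early warning.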
In the digital broadcast environment, engineers cannot detect the onset of channel degradation by watching the broadcast; they can only react to severe quality problems once they appear — when it is too late.
Digital broadcasters need monitoring approaches that let them anticipate problems, just as they could with their analog broadcast content. They need to address channel degradation before it leads to noticeable quality problems.
A new generation of digital monitoring instruments has emerged to answer this need. These confidence-monitoring tools help broadcasters achieve the same level of confidence in digital television that they achieved in analog. Ultimately, digital technology can deliver a better product to end users. Confidence monitoring helps the technology live up to that promise.
Digital technology is the foundation for the convergence of video, voice and data networks. Digital telecommunication-network operators gain new sources of revenue by offering distribution services to broadcasters, and broadcasters can use these services to reduce operating expenses.
These arrangements introduce additional transitions in the distribution chain. When content passes from one broadcaster to another, the responsibility for quality control goes with it. Both broadcasters must rely on contractual quality of service (QoS) obligations to preserve the content on its way to end users. Clearly, this complicates the process of maintaining quality.
Digital convergence enables broadcasters to take a new approach to system management. It allows them to adopt and adapt the centralized monitoring and management model that has worked so efficiently in the telecommunications industry. New digital video broadcast management systems will rely on network-capable confidence-monitoring devices that can report networkwide status and send alarms to a central video network operations center.
The digital cliff effect, the increased number of transmission handoffs, and new centralized management approaches are influencing the requirements for confidence-monitoring systems in digital television. The latest generation of tools integrates the solutions for all of these challenges.
One of the biggest architectural differences between an analog television system and its digital equivalent is the “layered” architecture that is central to the entire digital distribution/transmission chain.
In an analog television system, distribution and transmission channels are, in effect, a series of analog signal-processing steps. But, with digital technology, broadcasters can use signal- and data-processing techniques to improve the quality of their product and the efficiency of their networks. Hence, distribution and transmission channels in digital television systems contain sequences of signal-processing and data-processing steps.
Figure 1. This diagram shows the three layers of digital television broadcast.
Figure 1 shows the layered digital-system model. Table 1 summarizes the signal- and data-processing steps for these layers.
Quality-control challenges within the layers
So far we have shown that the digital video broadcast architecture is made up of three layers. In each layer, signal- and data-processing steps can add errors.
From source to consumer, program content may move through each of the three system layers several times. Transitions between layers can dramatically alter the nature of the digital information. For example, the uncompressed digital video data in the formatting layer is entirely different from the compressed digital video in the compression layer. The additional processing needed to move across layers increases the probability of quality errors at these transitions.
In the formatting layer, broadcasters must accommodate the variety of new formats for both standard- and high-definition digital television. As with analog television, they need to ensure correct colorimetry and verify conformance to standards. In addition, they may need to convert from one format to another, as in downconverting HD content for broadcast on an SD system. These format conversions can introduce quality errors. Lastly, separate processing of digital video and audio can lead to synchronization problems.
Table 1. This is a summary of the signal- and data-processing steps involved in each of the three layers of digital TV.
In the compression layer, broadcasters must confront an entirely new architecture that differs dramatically from analog television. Compression introduces new types of quality defects such as blockiness, in which portions of the picture break up into visible rectangular blocks. Errors can occur during the complex process of multiplexing programs and system information into a single datastream. Errors in timing and synchronization parameters can compromise the decoding process and lead to noticeable content-quality errors.
In the distribution layer, broadcasters encounter familiar RF technology in the transmission networks. But these systems use different modulation techniques and offer new challenges in understanding coverage and interference problems. For internal distribution, broadcasters increasingly rely on telecommunication technology, which introduces problems with latency, packet loss and synchronization.
Compounding all these potential problems, errors in one layer can cause errors in a different layer and obscure the original error source. For example, blockiness errors can arise from problems in the compression layer, or as a byproduct of uncorrected bit errors in the receiver (distribution layer). There is no visible difference in the appearance of these errors, despite their different origins.
As always, the way to handle a big, complex challenge is to split it into smaller, more manageable pieces. Electronic repair technicians will attest to the effectiveness of this method. Fortunately, the digital video architecture lends itself to this approach. By monitoring the behavior of each layer separately, it is possible to control the quality of the video signal as it progresses through the system, preventing the buildup of errors. This practice, which is rapidly gaining acceptance in the digital broadcast industry, is known as distributed multilayer confidence monitoring.
Figure 2. This figure shows how you might deploy multilayer confidence monitoring in your broadcast system.
Distributed multilayer confidence monitoring
A well-integrated confidence-monitoring system can deliver both quality-control and system-management functions:
- Layer-specific probes and tools detect the errors before they propagate from layer to layer.
- Multilayer monitoring makes it possible to quickly trace a problem to its root cause within a specific layer.
- Extended monitoring capability alerts engineers to system degradations before they become quality problems.
- Network control supports system-management strategies.
Let’s take a look at how you might deploy multilayer confidence monitoring in your broadcast system. Figure 2 shows a very simplified version of a digital terrestrial television broadcast facility and the potential monitoring points, which include:
- MPEG monitoring at the multiplexer output to detect data rate problems, protocol errors or errors in inserting PSIP information
- MPEG monitoring at the other end of the studio-transmitter link to detect PCR jitter problems
- RF monitoring at the transmitter site to detect degradations in transmitter performance
- Waveform monitoring, picture quality monitoring or A/V delay monitoring at master control to detect potential errors before you compress, multiplex and transmit.
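The PCR-jitter check in the list above can be reduced to a simple idea: PCR stamps are samples of the encoder’s 27 MHz system clock, so if packets arrive when their PCR values say they should, the stamps and the arrival times advance in lockstep. A minimal first-order sketch (a real monitor uses PLL-style clock recovery, and the capture values below are hypothetical):

```python
PCR_HZ = 27_000_000  # MPEG-2 system clock frequency

def pcr_jitter(samples):
    """Given (arrival_time_s, pcr_ticks) pairs for one program, return
    per-sample jitter in seconds relative to the first PCR. Positive
    means the PCR ran ahead of the arrival clock; negative means the
    packet arrived late."""
    t0, p0 = samples[0]
    return [(p - p0) / PCR_HZ - (t - t0) for t, p in samples]

# Hypothetical captures: PCRs 100 ms apart, but the third packet
# arrives 1 ms late, so the third sample shows about -1 ms of jitter.
caps = [(0.000, 0), (0.100, 2_700_000), (0.201, 5_400_000)]
jit = pcr_jitter(caps)
```

Sustained jitter beyond a few hundred nanoseconds can defeat clock recovery in receivers, which is why it is worth measuring at the far end of the studio-transmitter link rather than only at the multiplexer.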
Layer-specific monitoring probes
In a confidence-monitoring system, each monitoring device acts like a probe, monitoring quality at a particular point and layer in the distribution or transmission chain. No single tool can span all of the layers and processes, but some tools offer powerful networking and integration features to aggregate the information from diverse probes. Each layer has its own specific toolset.
At the formatting layer, digital waveform monitors help broadcasters detect quality problems. Like their analog counterparts, these probes monitor characteristics of the digital video signal. Other formatting layer tools include digital audio monitors, picture-quality monitors for detecting blockiness and other picture impairments, and probes for detecting audio/video delay.
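Picture-quality monitors of the kind just mentioned often estimate blockiness without a reference copy of the picture. One common family of approaches compares pixel differences that fall on the 8-pixel DCT block grid with differences elsewhere; the toy metric below is a hypothetical illustration of that idea, not any vendor’s algorithm:

```python
def blockiness(frame, block=8):
    """No-reference blockiness estimate for a 2D list of luma values:
    ratio of the mean horizontal pixel difference at 8-pixel column
    boundaries to the mean difference elsewhere. Roughly 1.0 for
    natural content; it rises when artifacts align with the block
    grid."""
    edge, inner = [], []
    for row in frame:
        for x in range(1, len(row)):
            d = abs(row[x] - row[x - 1])
            (edge if x % block == 0 else inner).append(d)
    return (sum(edge) / len(edge)) / max(sum(inner) / len(inner), 1e-9)

# A frame of flat 8-pixel-wide bands scores high; a smooth ramp ~1.0.
banded = [[(x // 8) * 10 for x in range(32)] for _ in range(4)]
smooth = [list(range(32)) for _ in range(4)]
```

Because the metric needs only the decoded picture, a probe of this kind can sit anywhere in the chain, which also illustrates the isolation problem discussed later: a high score alone does not say which layer caused it.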
At the compression layer, processing must adhere to MPEG-2 standards. Broadcasters need MPEG protocol monitors capable of detecting problems in the basic MPEG processing, as well as the additional processing defined in the DVB, ATSC or ISDB broadcasting standards based on MPEG. The signal should emerge from the compression layer fully compliant with MPEG requirements and any contractual QoS policies.
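The most basic of these MPEG protocol checks come from the priority-1 indicators of ETSI TR 101 290, starting with transport-stream synchronization: every 188-byte transport packet must begin with the sync byte 0x47. A minimal sketch of that single check:

```python
TS_PACKET = 188
SYNC_BYTE = 0x47

def check_ts_sync(stream: bytes):
    """Return byte offsets of transport packets whose first byte is
    not 0x47 -- the sync-byte checks underlying the TS_sync_loss and
    sync_byte_error indicators of ETSI TR 101 290 (priority 1)."""
    return [off
            for off in range(0, len(stream) - TS_PACKET + 1, TS_PACKET)
            if stream[off] != SYNC_BYTE]
```

A real MPEG monitor layers many further checks on top of this (continuity counters, PAT/PMT repetition rates, PCR accuracy), but they all follow the same pattern of testing the stream against a rule and reporting offsets of violations.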
At the distribution layer, broadcasters need probes to detect quality problems in a variety of distribution and transmission channels. Probes in this group include devices to monitor RF transmissions in DVB, ATSC or ISDB formats. They also include probes for monitoring information sent through either cable or fiber telecommunication networks.
Each layer has its own unique probing and analysis solutions. Since errors can arise in any layer, all three layers must be probed. Tracking only the compression layer, for example, will detect problems in that layer, but those problems might have originated elsewhere. There is no way to determine a problem’s source unless each layer is monitored separately.
To gain a complete picture of system quality, and to quickly detect and isolate quality problems, broadcasters must rely on multilayer confidence-monitoring solutions.
Extended monitoring capability
A confidence-monitoring system is a valuable asset for broadcasters. It helps them efficiently locate, analyze and correct problems that affect video quality. Basic confidence-monitoring probes within the system track a small set of key quality parameters and provide an “indicator light” (actually a selection of screen displays and alerts) to tell the broadcaster when something has gone wrong.
But basic confidence-monitoring probes do not offer a complete solution. What is missing? Most importantly, the information needed to prevent objectionable quality problems in the content. Confidence-monitoring processes must detect and display the small clues that predict an imminent fall from the digital cliff. Moreover, the processes must support a proactive approach to quality control with timely alerts, alarms and documentation. This situation calls for extended confidence-monitoring probes. These tools use more advanced analysis to make additional measurements of quality parameters.
To understand the distinction between basic and extended monitoring tools, consider RF transmission monitoring. Basic RF confidence monitors measure the bit-error rate (BER) of the signal. BER will remain low — and apparently safe — until the transmission approaches the digital cliff. But the BER increases dramatically as the transmission falls off the cliff. This gives broadcasters only slightly more time to react than they would have if they watched the transmission on a picture monitor.
Extended RF confidence monitors add more detailed measurements such as modulation-error ratio or error-vector magnitude. These qualitative measures decline more gradually as system performance degrades. The monitoring tool notes this decline and sends an alarm to the engineer in time to make adjustments or seamlessly transition to backup systems.
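The modulation-error-ratio measurement works by comparing each received constellation point with its ideal position: MER(dB) = 10·log10(Σ|ideal|² / Σ|error|²). A minimal sketch, using a handful of hypothetical QPSK samples (a real instrument averages over many thousands of symbols after equalization):

```python
from math import log10

def mer_db(received, ideal):
    """Modulation error ratio in dB from paired complex constellation
    samples: ideal-signal power over error-vector power."""
    sig_power = sum(abs(i) ** 2 for i in ideal)
    err_power = sum(abs(r - i) ** 2 for r, i in zip(received, ideal))
    return 10 * log10(sig_power / err_power)

# Hypothetical QPSK symbols, each displaced slightly from its ideal
# position -- the errors are tiny, so MER is high (about 29 dB here),
# and it slides downward smoothly as noise grows.
ideal = [1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]
rx = [1.05 + 1j, -1 + 0.95j, -1.05 - 1j, 1 - 0.95j]
```

Because MER degrades in proportion to the channel rather than collapsing at a threshold, an alarm set a few dB above the failure point gives engineers real lead time.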
Network control and system management
System management concerns are a part of any confidence-monitoring strategy. The “system” may extend beyond the master network facility and reach out to far-flung regional distribution centers, as Figure 3 shows. Broadcasts sent to these regional centers via airwaves may need to be monitored to ensure that the program material reaches end users in good, unimpaired condition.
Figure 3. A distributed network benefits from distributed monitoring. Status data and alarms go to a central network operations center through the Internet.
In another scenario, the master facility may receive contribution feeds over a telecommunication network. Because a third party (the network operator) is involved, there will be contractual QoS levels that must be verified. The broadcaster may install confidence-monitoring probes at the network operator’s points of presence to assist in this process.
Both scenarios call for flexible networking capability in the probing tools. If these features include Internet connectivity, they offer a ready-made solution for monitoring distributed sites no matter where they are located. They can easily report status and alarm conditions to the master location. An RF monitor, for example, could quickly notify engineers of increases in BER at a transmission site hundreds of miles away.
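At its core, such a remote probe is a loop that samples its measurements, compares them against alarm thresholds, and posts a report to the network operations center. The sketch below shows only the report-building step; the site name and parameter names are hypothetical, and the transport (HTTP, SNMP trap, etc.) is deliberately omitted because it is site-specific:

```python
import json

def probe_status(site: str, measurements: dict, limits: dict) -> str:
    """Build a JSON status/alarm report for the network operations
    center. `limits` maps a parameter name to its alarm threshold;
    any measurement exceeding its limit is flagged as an alarm."""
    alarms = [name for name, value in measurements.items()
              if value > limits.get(name, float("inf"))]
    return json.dumps({"site": site,
                       "measurements": measurements,
                       "alarms": alarms,
                       "status": "ALARM" if alarms else "OK"})

# Hypothetical example: BER at a remote transmitter crosses its limit.
report = probe_status("tx-site-12", {"ber": 2e-4}, {"ber": 1e-4})
```

Keeping the report format uniform across probe types is what lets a central management system aggregate status from waveform, MPEG and RF probes alike.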
Form factor and cost are also important considerations in a confidence-monitoring system. Large, card-modular solutions are appropriate in central facilities with a large number of signals and multiplexers. These full-featured tools are usually configurable for every conceivable monitoring need. Smaller, more specialized, single-channel probes are usually preferred for work in remote locations such as transmitter sites. And smaller yet are the handheld confidence-monitoring probes that fulfill the needs of installation and maintenance teams. Although handheld probes are more limited in functionality, they too may offer networking features to connect the technician with the central plant.
The concept of extended, multilayer, distributed monitoring with appropriately scaled tools is the key to ensuring competitive quality levels in tomorrow’s all-digital broadcast networks.
Digital broadcast technology is changing the way the industry creates, processes and distributes content. At the same time, certain characteristics of digital architecture, particularly the digital cliff effect, are influencing the monitoring methods broadcasters use to ensure quality throughout their systems.
Increasingly, broadcasters are discovering that multilayer confidence monitoring is the best insurance against unforeseen failures in digital broadcast systems. They are installing layer-specific probes to deliver timely details about the behavior of a signal as it passes through the formatting, compression and distribution layers.
New types of monitoring probes are arriving with extended monitoring capabilities to help broadcasters proactively address performance degradations before they become quality problems. And networking features such as automated status reporting and alarms are enabling broadcasters to develop systemwide monitoring and management procedures.
All this monitoring and management goes on behind the scenes, as it should. The critical result is that end users receive content that lives up to the promise of digital television broadcast technology.
Greg Hoffman is a product marketing manager for Tektronix.