A cable company says your signal is horrible, and that is why it looks like a 1950s station when its customers tune in. What do you do? It is tempting to say, "It looks good leaving here." I remember experienced engineers at a station where I worked long ago declaring the problem to be "west of Denver," a phrase left over from the days when microwave circuits converged at AT&T Long Lines offices in major hubs, which made it possible to isolate a fault to a region but often not to a specific location. In a world where mountains of data can be gathered in seconds from seemingly impossible locations (falcon cams come to mind), is it not possible to remotely monitor and diagnose problems? Of course it is, given the right tools, sufficient bandwidth, a clear definition of the data or pictures you want to track, and a sense of how much that information is worth.
Limitations of monitoring
Remote monitoring comes with limitations, particularly if the public network is used to transport the data. The most serious issue over a WAN, particularly the public Internet, is latency: the delay between the moment an event or sample happens and the moment its report arrives at the remote monitoring point. When the Internet is busy, particularly during peak hours after school lets out, latency generally increases and throughput on any last-mile link decreases. As long as latency stays low and throughput stays high enough for quality (no pun intended) monitoring, there is no problem using a public circuit. Security is a separate concern, but if you carry the traffic over a secure VPN, you will likely be safe from digital prowlers. Video monitoring, however, is a bandwidth hog, and a few missed frames here and there can be a major operational headache. For instance, how do you determine that the monitored video is not dropping frames when it is the monitoring circuit itself that is failing? Second, to reduce bandwidth and survive links with packet jitter (variation in latency), we often choose lower-bandwidth codecs, which usually take longer to encode the content. That makes monitoring a live feed more difficult, akin to watching a satellite feed of the signal, and the latency can be far worse than a satellite's quarter second. There are low-latency options, which can be strategically important in cases like remotely monitoring master control output in a centralized operations model.
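A simple way to decide whether a public circuit is good enough for monitoring is to collect round-trip-time samples (as a ping would) and compute average latency and jitter from them. The sketch below does exactly that; the sample values and the 150 ms/30 ms thresholds are hypothetical, not figures from any standard.

```python
# Sketch: judging a monitoring circuit from round-trip-time samples.
# RTT values and thresholds here are hypothetical.
from statistics import mean, stdev

def link_health(rtt_ms, max_latency_ms=150.0, max_jitter_ms=30.0):
    """Return (avg_one_way_latency, jitter, usable) for RTT samples in ms.

    One-way latency is approximated as half the round trip; jitter is
    taken as the standard deviation of the samples.
    """
    latency = mean(rtt_ms) / 2.0
    jitter = stdev(rtt_ms)
    return latency, jitter, latency <= max_latency_ms and jitter <= max_jitter_ms

# A quiet circuit vs. a congested, peak-hour one:
quiet = [40.0, 42.0, 41.0, 39.0, 43.0]
busy = [80.0, 190.0, 65.0, 240.0, 120.0]
print(link_health(quiet))  # low latency, low jitter: usable for monitoring
print(link_health(busy))   # erratic: the monitoring circuit itself is suspect
```

The point of the second sample set is the one raised above: when the circuit's own jitter is this bad, missed frames tell you nothing about the signal being monitored.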
Where the monitoring circuit is only intended to show a representation of the content and quality can be lower, low latency is easier to achieve. Using fewer samples (CIF, Q-CIF), reducing audio bit depth below 16 bits per sample and using long-GOP coding all reduce bandwidth. Insidiously, long-GOP coding actually increases latency, because the encoder must buffer frames before it can code them. As we move inexorably toward all-HD content in distribution and delivery to everything except cell phones, picking the right codec and securing sufficient bandwidth become more complicated.
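The sample-count savings are easy to put numbers on. This back-of-the-envelope sketch compares uncompressed bandwidth for full-resolution 625-line SD against CIF and Q-CIF, assuming 8-bit 4:2:0 video (1.5 bytes per pixel) at 25 frames/s; any real codec reduces all three figures enormously, but the ratios between them hold.

```python
# Sketch: why fewer samples mean less bandwidth. Uncompressed 8-bit 4:2:0
# video at 25 fps; the frame sizes are the standard SD/CIF/Q-CIF rasters.
def video_mbps(width, height, fps, bytes_per_pixel=1.5):
    return width * height * bytes_per_pixel * fps * 8 / 1e6

sd = video_mbps(720, 576, 25)     # full-resolution SD
cif = video_mbps(352, 288, 25)    # CIF: roughly a quarter of the samples
qcif = video_mbps(176, 144, 25)   # Q-CIF: roughly a sixteenth

print(f"SD {sd:.0f} Mb/s, CIF {cif:.0f} Mb/s, Q-CIF {qcif:.0f} Mb/s")
```

Note that nothing in this arithmetic captures the long-GOP penalty: GOP structure changes latency, not raw sample count.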
There are monitoring options that can use features added to otherwise ordinary products, like distribution amplifiers. The features can include the obvious parameters, such as the presence or absence of sound and picture, captions and levels. These remote monitoring probes often return thumbnails of the video at a low bit rate, along with simple alarms.
A more powerful strategy is to combine the basic monitoring functions with an extended capability that takes signals requiring more thorough evaluation, as in the case of an alarm condition, and switches them to a streaming media encoder or a device with more complete analysis capability. Remote monitoring might thus be attached to a routing switcher, giving access to many different points in the system. This requires good knowledge of the system being remotely monitored, as well as a remote control capability. This monitoring by exception has been appropriately termed "lean back, lean forward" by one manufacturer. The operator normally watches a generalized screen showing status on many devices. When a failure occurs, the device of interest is brought forward, and the operator leans in to see the detail and work the problem. When monitoring and control are tightly integrated with a sophisticated multi-image display system, this can be particularly effective at providing both the overview and the tools necessary to fully understand a failure.
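The logic of monitoring by exception is simple enough to sketch. Here, basic probes report a pass/fail status for many channels, and any channel that alarms is switched to the single deeper-analysis path. The channel names are invented, and `route_to_analyzer()` stands in for a real routing-switcher control command.

```python
# Sketch of "lean back, lean forward": simple probe status for many
# channels; alarming channels get routed to the full analyzer.
# route_to_analyzer() is a hypothetical stand-in for a router crosspoint command.
def route_to_analyzer(channel):
    # In a real system this would command a routing switcher crosspoint.
    return f"router: {channel} -> analyzer input"

def lean_forward(probe_status):
    """Return routing commands for every channel whose probe alarmed."""
    return [route_to_analyzer(ch)
            for ch, ok in sorted(probe_status.items()) if not ok]

status = {"WXYZ-1": True, "WXYZ-2": False, "WXYZ-3": True}
print(lean_forward(status))  # ['router: WXYZ-2 -> analyzer input']
```

The "lean back" state is the empty result: when every probe reports healthy, nothing is routed and the operator watches the overview only.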
A clever use of good technology is to connect a modest multi-image system to a streaming encoder to bring back many signals all at once. Quality will be lower than individual signals, but if you have control over the multi-image system, you can switch it into a single channel mode when appropriate. The ability to get an overview of a remote system is particularly valuable in centralized operations.
There are less expensive options: stand-alone modules that peek at signals and report the results over low bandwidth. This is great for looking at your signal at a distant cable headend, or at the other end of an STL, and it can use a dial-up or Internet connection when one is available.
Don't forget that today you might see no picture at all at the output of a compressed link or off-air receiver even though the data is arriving just fine. MPEG transport-stream analysis is another tool that can be crucial to troubleshooting a device you can't get in front of. Analyzing the stream's syntax and statistics leads to an effective understanding of problems on compressed links. Combined with information about the transport medium itself (remote ATSC signal analysis, for instance), this gives you a complete set of tools that allows a technician with a good understanding of the symptoms to diagnose problems properly without leaving home.
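One of the simplest checks such an analyzer performs is worth seeing concretely. MPEG-2 transport-stream packets are 188 bytes long, begin with the sync byte 0x47, carry a 13-bit PID across bytes 1 and 2, and carry a 4-bit continuity counter in the low nibble of byte 3; a gap in that counter for a PID means packets were lost in transit. The sketch below checks only those two things and ignores refinements a real analyzer handles (duplicate packets, adaptation-field-only packets, PCR timing).

```python
# Sketch: minimal MPEG-2 TS integrity check (sync byte + continuity counter).
def ts_errors(data):
    errors = []
    last_cc = {}
    for i in range(0, len(data) - 187, 188):
        pkt = data[i:i + 188]
        if pkt[0] != 0x47:                      # sync byte missing
            errors.append(f"sync loss at packet {i // 188}")
            continue
        pid = ((pkt[1] & 0x1F) << 8) | pkt[2]   # 13-bit PID
        cc = pkt[3] & 0x0F                      # 4-bit continuity counter
        if pid in last_cc and cc != (last_cc[pid] + 1) % 16:
            errors.append(f"continuity error on PID {pid}")
        last_cc[pid] = cc
    return errors

# Three synthetic packets on PID 0x100: counters 0, 1, then a jump to 3.
def pkt(pid, cc):
    return bytes([0x47, (pid >> 8) & 0x1F, pid & 0xFF, 0x10 | cc]) + bytes(184)

stream = pkt(0x100, 0) + pkt(0x100, 1) + pkt(0x100, 3)
print(ts_errors(stream))  # ['continuity error on PID 256']
```

This is exactly the case described above: the data can be "arriving" at the receiver while errors like these quietly ruin the decoded picture.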
Lastly, SNMP offers non-picture data that fills in the rest of the picture. Knowing the status of fans, power supplies, disk systems and even the air conditioning in the remote facility helps you make sense of a complex, confusing system you can't put your hands on.
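Turning those readings into an at-a-glance facility status is a small job once the polling is done. Real polling would use `snmpget` or an SNMP library against each device's MIB; the sketch below skips that and just evaluates already-polled values against thresholds. The reading names, values and limits are all hypothetical.

```python
# Sketch: condensing SNMP-style facility readings into alarms.
# Reading names and thresholds are hypothetical; real values would come
# from polling each device's MIB.
THRESHOLDS = {
    "fan1.rpm":      lambda v: v > 1000,            # fan still spinning
    "psu1.volts":    lambda v: 11.4 <= v <= 12.6,   # nominal 12 V rail
    "room.temp_c":   lambda v: v < 30,              # air conditioning holding
    "disk.free_pct": lambda v: v > 10,              # disk system has headroom
}

def facility_alarms(readings):
    """Return the names of every reading outside its threshold."""
    return sorted(name for name, v in readings.items()
                  if name in THRESHOLDS and not THRESHOLDS[name](v))

readings = {"fan1.rpm": 2400, "psu1.volts": 12.1,
            "room.temp_c": 34.5, "disk.free_pct": 42}
print(facility_alarms(readings))  # ['room.temp_c'] -> the AC has failed
```

An alarm like the air-conditioning failure above often explains the picture problems that follow it, which is the whole value of collecting non-picture data.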
John Luff is a broadcast technology consultant.
Send questions and comments to: firstname.lastname@example.org