Facility monitoring using SNMP

As a facility grows and its equipment multiplies, its infrastructure becomes large and decentralized. Thus, it faces an increasing need to monitor the equipment through an easy-to-use, integrated, central system.

As broadcast facilities grow in size from an individual building to geographically separate locations, broadcasters must be able to monitor the system in its totality from a central location. To understand and operate increasingly complex infrastructures, a modern broadcast engineer must be a video engineer, audio engineer, network system administrator, computer programmer and support technician. Finding such qualified and experienced personnel can be difficult and expensive. A facility-monitoring system that offers ease of use to personnel of all skill levels could circumvent this problem.

Equipment using Simple Network Management Protocol (SNMP) can enable broadcasters to monitor a large, decentralized broadcast infrastructure through an easy-to-use, integrated central system.

Many equipment manufacturers are incorporating SNMP capabilities into their products, and offer data sheets and manuals describing these capabilities. Many also offer facility-monitoring control applications that run on a variety of computer platforms and operating systems.

What is SNMP?

Simple Network Management Protocol is built around three components: a set of agents, a management-information base (MIB) and a network-management station. An agent is a program that resides in a device and monitors its operation. The MIB defines the structured data records in which the agent stores this information. A central network-management station monitors and displays the status of the network devices. A command, control and monitoring (CCM) network facilitates communication among these three components. The SNMP monitoring application, using the CCM network, periodically polls devices for their status. If a device develops an error, the resident agent sends an alarm message (a trap) to the monitoring application. (For more information regarding SNMP, refer to IETF RFC 1155, RFC 1157 and RFC 1213.)
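The division of labor among agent, MIB and management station can be sketched as a toy model. Everything here is illustrative: real SNMP identifies MIB objects by OID and carries GET requests and traps over UDP, while this sketch only shows the roles. The device name "VTR-1" and the object name "opStatus" are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Simplified stand-in for an SNMP agent resident in a device."""
    device: str
    mib: dict = field(default_factory=dict)    # object name -> current value
    traps: list = field(default_factory=list)  # registered trap receivers

    def poll(self, obj):
        # Answer a periodic GET poll from the management station.
        return self.mib.get(obj)

    def set_status(self, obj, value):
        # Update a MIB object; push a trap when it enters an error state.
        self.mib[obj] = value
        if value == "error":
            for sink in self.traps:
                sink(self.device, obj, value)

alarms = []
agent = Agent("VTR-1", {"opStatus": "ok"})
agent.traps.append(lambda dev, obj, val: alarms.append((dev, obj, val)))

print(agent.poll("opStatus"))          # routine poll: "ok"
agent.set_status("opStatus", "error")  # fault occurs -> trap fires
print(alarms)
```

Polling and traps are complementary: polling gives the station a regular picture of the whole network, while traps deliver faults immediately without waiting for the next poll cycle.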

SNMP was born in the IT world. But equipment-monitoring and topology-mapping techniques designed for telco and computer network environments do not easily fit the broadcast infrastructure. Currently, there is no system available that offers a complete network-monitoring solution for broadcasters. Therefore, it is up to broadcasters to manage the design and implementation of such a system.

Design and implementation

An SNMP monitoring system comprises four main functional areas: facility modeling, dynamic signal-path monitoring, fault detection and corrective action. To develop a plan for monitoring the entire facility through SNMP from a central location, begin by investigating requirements in these areas.

Facility modeling. Understanding the complexities of a broadcast infrastructure can be difficult. The monitoring system can use physical and logical drawings to create a conceptual model. The resulting model can help you document and manage elements such as signal flow through equipment, wire-run lists and database models.

Figure 1. In an SNMP-based monitoring system, a graphics workstation can interconnect with a facility’s devices and resources to serve facility modeling.

The monitoring system can generate on-screen facility-infrastructure diagrams by importing CAD drawings, media-network topology maps and architectural drawings. It should document equipment locations in the facility, generate essence flow-tracing diagrams and identify control-room sources and destinations. Figure 1 shows a simplified block diagram of how a graphics workstation can interconnect with the facility’s devices and resources to serve facility modeling. Remember that each manufacturer has developed an SNMP implementation particularly for its own equipment. To make all the features of each vendor-specific monitor-and-control application available in a single monitoring system, you have to develop a single, user-friendly, infrastructure- and device-specific GUI.

Dynamic signal-path monitoring. The system must dynamically monitor the sources and destinations of all broadcast equipment so it can trace the signal path through any resource in the entire broadcast infrastructure. But essence is often converted from its native format into a file, so the monitoring system must trace the essence in various formats through both the traditional broadcast infrastructure and the media network. Also, it must update the media-network topology as routing tables change. The system can perform these tasks through MIB updates and subsequent SNMP agent reporting to the central monitoring database. The monitoring application must incorporate this information into its database. By generating a block diagram of facility resources, the system can facilitate auto-tracing the flow of a signal to a trouble spot. Figure 2 shows an example of signal tracing in the monitoring system.

Figure 2. In this example of signal tracing, we can describe the router connection as S1 = RT1(I_0, O_1) and the signal path as S1 = RT1(I_0, O_1) -> ED1(I_1, O_1) -> RT2(I_1, O_0). The trace of this signal is S1 = ING1(I_0) -> media network -> PLO1(O_0). The route of the file through the MN by IP addresses is S1 = ING1(I_0) -> -> -> -> -> PLO1(O_0).
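The auto-tracing described above can be sketched as a walk over the facility model. This is a minimal illustration, assuming a routing table of the kind shown in Figure 2; the device names (RT1, ED1, RT2) and port labels mirror the figure's notation but the data structures are invented for the example.

```python
# Which downstream input each device output feeds (physical wiring).
ROUTES = {
    ("RT1", "O_1"): ("ED1", "I_1"),
    ("ED1", "O_1"): ("RT2", "I_1"),
}

# Which output each device currently routes a given input to (device state,
# which in a real system would come from MIB updates via SNMP).
STATE = {
    ("RT1", "I_0"): "O_1",
    ("ED1", "I_1"): "O_1",
    ("RT2", "I_1"): "O_0",
}

def trace(device, inp):
    """Follow a signal hop by hop from (device, input) to its final device."""
    path = []
    while True:
        out = STATE.get((device, inp))
        path.append(f"{device}(I={inp}, O={out})")
        if out is None or (device, out) not in ROUTES:
            return path
        device, inp = ROUTES[(device, out)]

print(" -> ".join(trace("RT1", "I_0")))
```

Because the trace consults live routing state rather than static drawings, it stays correct as operators re-route signals, provided the MIB updates reach the central database.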

Fault detection. Many manufacturers are replacing RS-422 ports with RJ-45 connectors in their new products. Over a LAN, these connections allow software applications to monitor and control the equipment, and allow SNMP agents to monitor devices. If an error condition occurs, an agent can inform the network-management station and, if necessary, trigger one or more alarms.

If the monitoring system can access “rundown” information, it will be able to check for all necessary program elements (video, audio, CC, data, etc.) as they exit a program control room. Similarly, access to the automation “playlist” will allow the monitoring system at the master control room to verify compliance with traffic commitments, regulatory requirements and technical specifications. It can ensure that what gets to transmission is what you intend to air.
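The rundown check reduces to comparing the elements the rundown requires against the elements actually detected on the outgoing feed. A minimal sketch, assuming illustrative element names; a real system would read the required set from the rundown and the detected set from SNMP-reported signal presence:

```python
# Elements the rundown says this program must carry (example values).
REQUIRED = {"video", "audio", "closed_captions", "data"}

def missing_elements(detected):
    """Return the required program elements absent from the monitored output."""
    return sorted(REQUIRED - set(detected))

# CC is absent from the control-room output -> flag it before air.
print(missing_elements({"video", "audio", "data"}))  # ['closed_captions']
```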

Of course, to get programs to air, you must ensure that routers, servers, automation and other equipment remain operational. You can do this by designing the system to periodically monitor applications to see if they are up and running and communicating properly with the equipment. Such applications must verify transfers of program files to the primary and backup playout servers and report the status of the transfers to the central monitoring station. This will confirm that the system redundantly stores program material, and prevent “black holes.” The system must also monitor individual computer configurations for compliance so that no one can install software on a machine without authorization. SNMP can facilitate these monitoring requirements.
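Periodic application monitoring amounts to a heartbeat check: if an application has not reported within its window, raise an alarm. A hedged sketch, where the application names, timestamps and timeout are invented; in practice the timestamps would come from SNMP polls or agent reports.

```python
import time

HEARTBEAT_TIMEOUT = 30.0  # seconds without a report before alarming (example)

def stale_applications(last_heartbeat, now=None):
    """Return applications whose last report is older than the timeout."""
    now = time.time() if now is None else now
    return sorted(app for app, ts in last_heartbeat.items()
                  if now - ts > HEARTBEAT_TIMEOUT)

# Illustrative last-report timestamps for three critical applications.
beats = {"automation": 100.0, "router_ctl": 95.0, "playout": 40.0}
print(stale_applications(beats, now=120.0))  # ['playout']
```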

Corrective action. In broadcasting, on-air support responsibilities are distributed across a number of departments and numerous staff members. Broadcasters must manage the level of privileges that each staff member has to monitor and control each resource according to the staff member’s job function and department responsibilities. In the IT world, access privileges are strictly controlled based on user accounts. But broadcasters must be able to do whatever is necessary to keep the facility on air in an emergency — without the interference of login-restricted access.

Intelligent signal-path tracing allows the monitoring application to prioritize alarms. This prevents operators from being overwhelmed by cascaded phantom alarms and allows them to find the origin of a problem condition. Pop-up dialog boxes can advise operators of the proper procedure to resolve a problem, warn them about the consequences of various actions, and let them know whom they should notify. Intelligent signal-path tracing also supports automated activation of e-mails, beepers and trouble-ticket initiation.
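Alarm prioritization via signal-path tracing can be sketched as follows: of all the devices alarming on a traced path, the most upstream one is the likely origin, and alarms downstream of it are treated as cascades. The path and device names are illustrative, not from any real facility model.

```python
# Upstream-to-downstream order of a traced signal path (example devices).
PATH = ["RT1", "ED1", "RT2", "PS1"]

def root_cause(alarming_devices):
    """The most upstream alarming device on the path is the likely origin."""
    for device in PATH:
        if device in alarming_devices:
            return device
    return None

def suppressed(alarming_devices):
    """Alarms downstream of the root cause are treated as phantom cascades."""
    origin = root_cause(alarming_devices)
    if origin is None:
        return set()
    return set(alarming_devices) - {origin}

print(root_cause({"RT2", "ED1", "PS1"}))  # ED1 is the origin
print(suppressed({"RT2", "ED1", "PS1"}))  # RT2 and PS1 are cascades
```

The operator then sees one actionable alarm instead of three, and the suppressed alarms can still be logged for the trouble ticket.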

Knowledge-based diagnostic capabilities can facilitate “auto failover” selection of an alternate guard-signal route and keep the station on the air. Intelligent learning capabilities that develop a knowledge base of past problems and resolutions can allow a monitoring system to suggest corrective actions based on previously successful corrective actions.

The SNMP-driven monitoring application can “virtually” guide on-duty personnel to the point of failure on the display at the central monitoring station. The operator can then select the indicated device and quickly determine its status, reroute the signal, switch to backup equipment or make adjustments. If necessary, the application can direct the operator to the appropriate physical location to correct the problem, not just by a Floor 2, Room 3, Rack 102 paradigm, but through a visual map derived from the facility model.

Figure 3a. In this graphical representation of signal flow, the heavy blue line illustrates the route the SDI has taken through the infrastructure. Note that the arrow now points out of the MN cloud to the GWS, representing a file transfer to the GWS. We can describe the trace of an SDI essence as it is ingested through Router 1, transferred as a file over the media network to the graphics workstation, and transferred as SDI to Router 2 as S1 = RT1(I_1,O_6) -> ING1(I_0) -> -> GWS1(O_1) -> RT2(I_1, I_0).

It’s a lot easier to get where you need to be if the shortest route is plotted for you. Wireless capabilities and handheld devices would allow personnel untethered access to the monitoring system. In a difficult situation, an engineer can diagnose and isolate a problem by querying for signal traces of essence transfers or by probing system equipment and verifying its operating condition. Figure 3a is an example of a query by signal. Figure 3b shows the resultant trace of multiple essences to a single “program” (i.e., a query by program).
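Plotting the shortest route through the facility is a graph search over the facility model. A hypothetical sketch: rooms and corridors form a graph, and breadth-first search finds the fewest-hop path from the operator's location to the failed rack. The floor plan and location names are invented for the example.

```python
from collections import deque

# Adjacency of rooms and corridors from the facility model (illustrative).
FLOORPLAN = {
    "control_room": ["corridor_a"],
    "corridor_a": ["control_room", "rack_room_1", "corridor_b"],
    "corridor_b": ["corridor_a", "rack_room_2"],
    "rack_room_1": ["corridor_a"],
    "rack_room_2": ["corridor_b"],
}

def shortest_route(start, goal):
    """Breadth-first search for the fewest-hop path through the facility."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in FLOORPLAN.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # unreachable

print(shortest_route("control_room", "rack_room_2"))
```

The same route can then be pushed to a handheld device so the engineer navigates by the visual map rather than a rack-number paradigm.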


The infrastructure for each facility will vary greatly. A small operation with one or two racks in a single room may only need a block diagram showing device interconnection and signal flow. A simple audio buzzer or indicator-lamp alert system with minimal GUI-accessible features would suffice. By contrast, a large facility with numerous racks and equipment rooms, program-control rooms, master-control rooms, and studios needs to have a central monitoring station and instant fault analysis, notification, isolation, location and resolution. Otherwise, ascertaining the status of any resource and maintaining the infrastructure would be virtually impossible. This could jeopardize the facility’s ability to ensure that it gets its programs to air.

Figure 3b. In this query by program, we can describe the trace of the two SDI essence signals through the infrastructure, mixed to a single SDI signal for air in downstream keyer 1 as Air Signal = S1 + S2, S1 = RT1(I_1,O_1) -> GWS1(I_1, O_1) -> PS1(I_1, I_0), and S2 = RT1(I_8,O_6) -> ING1(I_0) -> -> PLO1(O_0) -> PS1(I_6, I_0).

Managing change. Facilities are often upgraded, so there should be a relatively painless way to update the monitoring system’s database without bringing the system down. There are no built-in maintenance windows in a round-the-clock operation. MIB updating and SNMP reporting to the central management station can help automate this task.

Managing the project. The amount of information communicated by each device is relatively small, but the total amount of SNMP traffic on a LAN can raise congestion issues and affect overall media-network performance. Make sure space is available for SNMP-dedicated network switches in equipment racks and for additional CCM LAN cabling in cable trays.

Although this is a broadcast-oriented monitoring system, the merging of traditional broadcast engineering with media technologies, networking and computer applications requires a breadth of expertise that is nearly impossible to find in one person. Thus it is necessary to assemble a team consisting of broadcast engineers, network architects and computer system administrators to successfully implement a complete infrastructure-encompassing SNMP monitoring system.

Failsafe backup of control LAN. It is important to design LAN equipment-control capabilities in such a way that, if you lose LAN connectivity, you can still get your program(s) to air. For example, operators must be able to access equipment front panels to manually select routing functions and adjust signal levels; they must not be locked out by an SNMP network failure. Also, it is important to provide backup routing of all necessary essence (program elements) to circumvent the media network and stay on the air in case the media network experiences a catastrophic failure.

All together now

An integrated package using SNMP as described above will allow technical personnel to monitor and control the essence, media-network and application layers of the facility’s infrastructure through a single GUI from a central management station. Such a system is necessary to meet the monitoring demands of a growing broadcast facility.

Philip J. Cianci is a broadcast media technology engineer at ESPN.
