Developments in automation
Dennis Raymond (left), WTTW-TV's engineer in charge of broadcast operations, performs program ingest and preps video clips using Sundance Digital’s FastBreak Automation system, while broadcast technician Barry Blue (right) operates the main switcher for air operations.
The way we were
Automation seems to be a hot topic in our industry. But it's not a new topic. In fact, the first commercially developed automation system may have been created in 1968 by Broadcast Computer Services (BCS) and deployed at KOOL-TV in Phoenix. The system was an outgrowth of an automated billing system that BCS was developing. It used punch cards for data input, which was not unusual in those days.
In the early 1970s, taped commercials aired from quad VTRs. Much content still played from film chains called islands (perhaps so named because they were huge and needed stable metal plate “islands” beneath them to keep them in alignment). Prerolls were set by an offset on the command and by parking the tape or film several seconds back from the start of media. An instant-start film chain was a luxury, and only the AMPEX AVR-1/ACR-25 quad recorders had near-instant start (about a quarter of a second, nominally five frames).
Displays for the automation systems of that period were on ASCII terminals, offering character-mode displays, nominally 80 columns by 25 lines. The computers were often DEC or IBM systems, and programming them was a bit of an art. Suffice it to say that hard disks, serial control protocols, graphical interfaces and ubiquitous device control were a long way in the future.
Clearly, the roots of automation are still quite relevant today. It is just as true now as it was then: Broadcasters' automation needs are unique and much more complex than those of most businesses. The root of the complexity is the requirement that things happen in a deterministic manner with absolute precision. We expect no less from an automation system than we would from a human pushing the buttons in master control. The difference, of course, is that the human can assess issues using many sources of data and can process that data rapidly to make decisions about what to do when things go wrong. A machine can only process data by rules set in advance and is unlikely to have the same range of options as a human doing the same job.
Jose Estrada, operating engineer at KTVK-TV in Phoenix, checks logs against Sundance Digital's Titan.
We've come a long way
The result is that we demand highly reliable automation systems that can control a wide variety of devices using a wide range of features to create complex programs. Such devices include program and interstitial sources, routers, master control switchers, keyers, DVEs, still stores, character generators, audio clip players, VTRs, satellite receivers and others.
Computers today have come a long way since the early '70s, so it's not surprising that the range of automation options has expanded proportionately. From the small number of automation pioneers in the '70s, the field has grown to an array of manufacturers offering hardware and software solutions at a range of prices. Systems now can control anywhere from one to hundreds of channels. In the case of cable ad insertion, the number of controlled outputs can reach into the thousands using the SCTE 35 ad-insertion protocol. SCTE 35 is intended to allow a program distributor to issue splice commands to remote edge servers or other commercial-insertion systems by embedding the commands in the distributed signal. Essentially, this approach constitutes a large distributed automation system in which the central site has no direct, real-time control or status of the changes made at the local level.
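To make the idea concrete, here is a minimal sketch of the kind of information such a splice command carries. The field names loosely follow the SCTE 35 splice_insert command, but this is an illustrative data structure only, not a compliant encoder; a real implementation would serialize these fields into the bit-exact table format defined by the standard and embed it in the transport stream.

```python
from dataclasses import dataclass

# Illustrative sketch only: field names loosely follow SCTE 35 splice_insert,
# but this is not a bit-exact, standards-compliant encoder.

@dataclass
class SpliceInsert:
    splice_event_id: int          # unique ID for this break
    out_of_network: bool          # True = leave the network feed (start local break)
    pts_time_90khz: int           # splice point as a 90 kHz presentation timestamp
    break_duration_90khz: int     # planned break length, also in 90 kHz ticks
    avail_num: int = 1            # which avail within the break
    avails_expected: int = 1      # total avails expected in the break

def splice_for_break(event_id: int, seconds_from_now: float,
                     break_seconds: float, current_pts: int) -> SpliceInsert:
    """Build a cue for a local break starting seconds_from_now after current_pts."""
    ticks = int(seconds_from_now * 90_000)
    return SpliceInsert(
        splice_event_id=event_id,
        out_of_network=True,
        pts_time_90khz=(current_pts + ticks) % (1 << 33),   # PTS wraps at 33 bits
        break_duration_90khz=int(break_seconds * 90_000),
    )
```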
Device control
Similar paradigms exist in the broadcast industry as well. Some networks use remote splicing and/or insertion regularly to distribute content, allowing regional stations to insert commercials at the time of air. This avoids the need to distribute many feeds of the same program with slightly different interstitial content. These networks distribute commands within the compressed MPEG stream to edge servers, where the action takes place. In more traditional control, a closed loop is established, with commands sent out to the far end. Status and, presumably, monitoring are returned to the central site, where as-run logs are reconciled with the traffic system.
Until now, device control has been relegated almost entirely to RS-422 control circuits. Over short distances, the communication is highly reliable and offers sufficient speed for nearly any transaction. A few automation companies make products that speak to a small number of controlled devices, mostly video servers, using TCP/IP over Ethernet.
Moving to more generalized network control is extremely attractive. The physical layer is simple, and the control dialect can expand to include complicated commands that require specific acknowledgement/negative acknowledgement (ACK/NAK) messages from the controlled device. It can also show status, which is cumbersome to manage over asynchronous circuits.
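As a rough illustration of what that closed-loop style of network control might look like, the sketch below sends a single command over TCP and blocks for an ACK or NAK reply. The line-oriented protocol, host name and port here are entirely hypothetical; real control dialects define their own framing, command sets and status reporting.

```python
import socket

def send_command(host: str, port: int, command: str, timeout: float = 0.5) -> bool:
    """Send one command and wait for an ACK/NAK reply (hypothetical line protocol)."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall((command + "\r\n").encode("ascii"))
        reply = sock.recv(256).decode("ascii", errors="replace").strip()
    if reply.startswith("ACK"):
        return True
    if reply.startswith("NAK"):
        return False            # device refused the command; the caller decides what to do
    raise RuntimeError(f"unexpected reply: {reply!r}")

# Example (hypothetical device and command set):
# ok = send_command("server1.local", 5150, "CUE SPOT1234 PORT2")
```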
Recently, OmniBus and Pinnacle suggested to SMPTE that the Media Object Server (MOS) communication protocol could be the basis for a rich language for sending complex control messages to a wide range of network-aware devices. MOS is an XML-based approach to communication between newsroom automation systems and controlled devices. The concept is that the controlled devices would include a MOS-compliant software interface that would allow them to listen to the dialect for a generalized type of device (character generator, teleprompter, video server, etc.).
This would simplify developing control systems and controlled devices because both ends would be writing to a known specification that supports common features and requests. Pinnacle and OmniBus say that MOS could offer a standard method of communication for a broad range of automation-related needs.
Figure 1. When controlling a device over a LAN using the MOS communication protocol, the process for a command that schedules an event, called a Type-S (schedule) event, might look like this.
Figure 1 shows the process for a command that schedules an event, called a Type-S (schedule) event. Figure 2 shows the process for a command for an event that the automation system requires to be executed immediately, called a Type-N (now) event.
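The exact message layout would be defined by the proposed automation profile, so the following is only a hypothetical sketch of the idea: an XML command carries either a scheduled execution time (a Type-S event) or an instruction to execute immediately (a Type-N event), and the controlled device would acknowledge it. The element names and values are invented for illustration and are not actual MOS schema.

```python
import xml.etree.ElementTree as ET
from typing import Optional

def build_control_message(device_id: str, command: str,
                          exec_time_utc: Optional[str] = None) -> bytes:
    """Build a hypothetical MOS-style control message.

    exec_time_utc set   -> Type-S (scheduled) event
    exec_time_utc None  -> Type-N (execute now) event
    Element names are invented for illustration; they are not real MOS schema.
    """
    root = ET.Element("mos")
    ET.SubElement(root, "mosID").text = "automation.main"
    ET.SubElement(root, "deviceID").text = device_id
    event = ET.SubElement(root, "controlEvent",
                          type="S" if exec_time_utc else "N")
    ET.SubElement(event, "command").text = command
    if exec_time_utc:
        ET.SubElement(event, "execTime").text = exec_time_utc
    return ET.tostring(root, encoding="utf-8", xml_declaration=True)

# Scheduled (Type-S) cue vs. immediate (Type-N) take:
# build_control_message("server1", "CUE SPOT1234", "2004-06-01T17:59:55Z")
# build_control_message("mc-switcher", "TAKE PGM")
```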
MOS proponents say that it offers precisely the kind of device independence that might allow automation to talk to a class of devices in a known dialect without having to write new interfaces to each and every device as manufacturers release (and later modify) them.
This would spare device manufacturers from having to debug interfaces against a large number of automation suppliers. End users might be freed from the need to get certification from the manufacturers of all of their controlled devices before upgrading their automation system. Indeed, if the final system works as well as might be expected, an end user might see the day when he or she could change automation vendors and know that all of the system's devices will still talk to the new system.
That is, perhaps, a lofty goal. But the use of standards in communications between automation and devices has worked well in the past. This was the case when the industry broadly adopted the Louth (now Harris Automation) VDCP protocol for communication with video servers. Just as VDCP opened up a more level playing field and allowed a modicum of standardization, MOS might permit enhanced functionality.
The proposal suggests that a new profile be established in the MOS protocol to support automation-specific uses. In the past, SMPTE has attempted to create generalized protocols for such uses. Indeed, the ES Buss protocol standardized communications over RS-422 interfaces. ES Buss was never accepted industrywide, though some manufacturers have used it as the basis of their own machine control topology. This time, it seems much more likely that SMPTE can work with MOS because of the installed base of MOS solutions already deployed in a segment of the broadcast industry.
It is also clear that some vendors decided long ago that the services such an approach could offer are so attractive that they warranted proprietary development. A number of vendors have designed device controllers that use IP communication to send commands to localized device-control engines, which take the IP-based commands and translate them into each device's native control protocol.
Figure 2. The process for a command that the automation system requires to be executed immediately, called a Type-N (now) event, might look like this.
These interfaces can maintain synchronization using NTP services, even allowing for local-time offsets to ensure that the commands are executed in a deterministic manner over wide-area control networks. The proposal to use MOS as the basis for communication can move these stand-alone control interfaces inside the controlled device, which would then receive standardized commands directly, without translation from IP to RS-422.
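A sketch of the timing side of that idea, assuming the hosts are already disciplined by NTP: the controller sends each command with an absolute UTC execution time, and the local interface simply waits until its own clock reaches that instant, applying any configured local-time offset. The function and parameter names are illustrative, not drawn from any particular product.

```python
import time
from datetime import datetime, timezone, timedelta

def execute_at(exec_time_utc: datetime, local_offset: timedelta, action) -> None:
    """Run `action` when the local clock reaches the target instant.

    Assumes the host clock is already synchronized via NTP; local_offset covers
    any deliberate station-local offset (e.g., delayed regional playout).
    """
    target = exec_time_utc + local_offset
    while True:
        remaining = (target - datetime.now(timezone.utc)).total_seconds()
        if remaining <= 0:
            break
        time.sleep(min(remaining, 0.005))   # sleep in small slices near the deadline
    action()

# execute_at(datetime(2004, 6, 1, 18, 0, 0, tzinfo=timezone.utc),
#            timedelta(hours=-1), lambda: print("take break 1"))
```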
This recent push toward network control shows just how pervasive networks and standardized approaches have become in automation systems. Databases were once proprietary and specific to each installation. Now, off-the-shelf databases such as SQL Server hold the playlist and logs for all content of which the automation system is aware. This not only simplifies the job of the code writers developing a system, but it also allows backup and restoration plans to use common IT techniques, software and hardware. This enables local support personnel who are familiar with the products to assist in maintaining broadcast automation with less training specific to a “broadcast-only” product.
Conversely, if a station would prefer that mission-critical systems such as automation be supported by personnel specific to broadcast operations, it is possible to obtain training from a much broader range of sources than just the automation provider.
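As a simple illustration of how ordinary relational tools can hold this data, the sketch below builds a bare-bones playlist and as-run schema using SQLite. A production system would typically run on a server-class database such as the SQL Server example mentioned above, and real schemas carry far more metadata (segment timing, reconciliation keys and so on); the table and column names here are invented for illustration.

```python
import sqlite3

# Bare-bones sketch of a playlist/as-run schema; real automation schemas are far richer.
conn = sqlite3.connect("automation_demo.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS playlist (
    event_id    INTEGER PRIMARY KEY,
    channel     TEXT NOT NULL,
    house_id    TEXT NOT NULL,            -- content identifier from traffic
    air_time    TEXT NOT NULL,            -- scheduled start, ISO 8601
    duration_s  REAL NOT NULL,
    status      TEXT DEFAULT 'scheduled'  -- scheduled / cued / aired / missing
);
CREATE TABLE IF NOT EXISTS as_run (
    event_id     INTEGER REFERENCES playlist(event_id),
    actual_start TEXT NOT NULL,
    actual_dur_s REAL NOT NULL
);
""")
conn.execute("INSERT OR REPLACE INTO playlist VALUES (?, ?, ?, ?, ?, ?)",
             (1001, "CH1", "SPOT1234", "2004-06-01T18:00:00", 30.0, "scheduled"))
conn.commit()
```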
The user interface

The user interface is a critical part of the modern automation system. The amount of information that master control operations must have available is staggering. Once a log is processed from traffic and converted for air use, it is likely to have a number of other lists associated with it beyond the air log. These include material dub (content that must be dubbed into a server and prepped for air), satellite record, missing material and purge lists, and perhaps others specific to manufacturers. The lists must be presented in a way that tells the operator what information is critical at any one moment. A spot that is unavailable might not scroll onto the screen until minutes before air, though it may have shown up in numerous other places: on dub lists, missing-material lists and so on. The GUI should present the critical information at all times and flag exceptions in clear and concise ways.
Increasingly, broadcast automation systems are handling more than one channel. Many stations run local news operations on a cable channel, a second station in a duopoly or LMA, or additional channels in a DTV multiplex. The GUI must present multiple channels of information without overloading the operator. The ability to use a second monitor, or to “drill down” when exceptions are noted, is critically important. For instance, the normal display may show multiple horizontal moving timelines monitoring several channels. The display would highlight a segment that has not been found or has failed to cue for air, perhaps in yellow an hour before air and in red 10 minutes before it is due to play to air. The operator would move to a separate screen to get more detail and solve the problem. Or, he or she might ask another operator to help from another workstation if the workload is too high.
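The escalation logic just described is easy to picture as a small rule: given how long until an event airs and whether its material is ready, return the severity the GUI should display. The thresholds below simply mirror the one-hour/ten-minute example in the text and would, in practice, be operator-configurable.

```python
def event_severity(seconds_to_air: float, material_ready: bool) -> str:
    """Return the display severity for a playlist event (thresholds are illustrative)."""
    if material_ready:
        return "normal"
    if seconds_to_air <= 10 * 60:      # missing with ten minutes to air: urgent
        return "red"
    if seconds_to_air <= 60 * 60:      # missing with an hour to air: warning
        return "yellow"
    return "normal"                    # plenty of time left; shows on dub/missing lists

assert event_severity(45 * 60, material_ready=False) == "yellow"
assert event_severity(5 * 60, material_ready=False) == "red"
```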
Integrating automation with other products in master control could spare operators from having to monitor many screens for potential sources of problems. Integrated monitoring systems using flexible monitor matrices can reconfigure the display to suit the current operational status of many devices.
Take, for instance, a server that has failed to cue a spot. The server might report the exception not only to the control loop connected to automation but also to the master control monitor system, perhaps flashing an icon on the screen to get the operator's attention. It might offer the choice to change to a backup port. The monitor system can allow the operator to drill down to the problem without having to use a windowed computer interface and without needing to know how many products the GUI displays.
The automation monitor screen would be but one of perhaps several tools available to the operator to help solve complex problems. GUI versions of control panels for the master control switcher, high-level block diagrams with hot spots where he or she can drill down to another screen, and other techniques can allow the operator to combine many screens and enhance his or her ability to run the master control.
The operating system

Any discussion of master control and automation would be incomplete without at least a passing reference to the computer operating system. Most of us have had experiences with the “blue screen of death,” and the last place you want to see it is in the automation system. Older systems based on Windows 95/98, OS/2 or other operating systems that are not designed to run real-time, mission-critical processes clearly are not wise choices for modern, complex automation systems. Unix, Linux and the latest Windows XP products offer much better protection from that kind of excitement. A reboot might be a simple cure, but when it takes the automation system offline for 10 minutes, life can be quite challenging.
There are things you can do to ensure that uptime exceeds downtime. Keep general office systems on separate computers and off mission-critical networks for automation, router control and other related systems. Use strict version control and don't allow automatic updates to the operating system. You might find the automation company has not even tested current patches for conflicts with its application(s). And never update software on one device in master control (servers, routing, etc.) without checking with the automation company about the new version. The companies are usually clear about what they have certified, and unilateral changes are just not the way to start a good day. Always check first.
The next five years

In the next five years, expect tight integration with asset management, archive, and MXF transfer from spot- and program-delivery services, with full metadata support and export to PSIP. Automation is rapidly moving from a machine-control engine to facility management and automated, unattended operational capabilities.
Automation has come a long way in the last five years. Systems are more stable, cost less to maintain and offer advanced features and dazzling user interfaces. Carefully reviewing the growth strategy for your system with potential automation vendors will help keep smiles on everyone's faces and keep make-goods a minor headache.
John Luff is senior vice president of business development at AZCAR.