The perfect broadcast server

Here are some thoughts on the “perfect broadcast server” compiled from discussions I have had with people over the years. If you agree with what is written here, or especially if you disagree, send me an e-mail at brad_gilmer@primediabusiness.com.


Figure 1. Systems that run video through the CPU may suffer from poor performance.

Some of the things here are contradictory, but in a perfect world we could have anything we want, even if some of those wants conflict.

The server should support digital interfaces including SDI, SDTI, ASI and AES audio. The server should support analog video and audio as well, although this could be handled by external conversion. The requirement for SDI and SDTI is clear. ASI is being used both as an STL interface and as a distribution interface. AES is the standard audio format. The perfect server would have anywhere between one and 20 video channels, with most users happy with four. Likewise, audio requirements are all over the map, but seem to settle somewhere between eight and 16 channels.

In an ideal world, the server would support all video formats, including MPEG TS (long and short GOP), native DV and so on. Many people have told me that they would like a server to take in a piece of video and play it back in a wide range of video formats, performing the conversion on the fly. People running multi-language facilities have asked for track patching, the ability to switch any audio channel to any output in real time.
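
As a simple way to picture track patching, here is a minimal sketch of a patch table that maps each physical audio output to whichever source track should feed it; an operator or automation system could repatch any output on the fly. The track names and channel counts are invented for illustration.

```python
# A toy model of track patching: a patch table mapping each physical audio
# output to the source track that should feed it. Track names and channel
# counts are invented for illustration.
patch = {
    1: "english_left",
    2: "english_right",
    3: "spanish_left",
    4: "spanish_right",
}

def repatch(patch_table, output_channel, source_track):
    """Switch one output to a different source track in real time."""
    patch_table[output_channel] = source_track

repatch(patch, 3, "french_left")    # e.g. swap outputs 3/4 to the French mix
repatch(patch, 4, "french_right")
print(patch)
```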

Major advances have been made in the area of networking, and broadcasters are chomping at the bit to move forward. The server should be Ethernet-friendly, both for file transfer and for monitoring and control. There should be support for standardized implementations of Simple Network Management Protocol (SNMP). SNMP and the monitoring software that goes with it should not only tell us that a server or part of a server has died; it should also alert us that a problem is developing. Users are asking for more control of servers over Ethernet. As an example, setup parameters could be established over Ethernet rather than requiring separate RS-422 connections to each server.

Some servers are used in an automated ingest environment. If an error occurs during ingest, there is no way to know until the ingested material hits air. One idea I heard was to provide a file analyzer that detects problems in the incoming file; I would extend this to silence and video-breakup detection as well. If the file is delivered in MPEG or some other format, it would be good to have an optional analyzer that could detect file format problems. This is above and beyond the error correction algorithms employed during transmission.
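
To make the file-analyzer idea concrete, here is a minimal sketch, in Python, of an ingest-time check for MPEG transport streams: it verifies 188-byte packet alignment, the 0x47 sync byte and per-PID continuity counters. The file name and the reporting style are illustrative assumptions, not any vendor's actual tool.

```python
# Minimal ingest-time analyzer sketch for MPEG transport streams.
# Checks packet alignment, sync bytes and continuity counters only;
# a real analyzer would also resync after errors, allow the single
# duplicate packet the standard permits, and inspect PSI tables.
import sys

TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47
NULL_PID = 0x1FFF

def analyze_ts(path):
    problems = []
    last_cc = {}                      # PID -> last continuity counter seen
    with open(path, "rb") as f:
        data = f.read()

    if len(data) % TS_PACKET_SIZE:
        problems.append("file size is not a multiple of 188 bytes")

    for i in range(0, len(data) - TS_PACKET_SIZE + 1, TS_PACKET_SIZE):
        pkt = data[i:i + TS_PACKET_SIZE]
        if pkt[0] != SYNC_BYTE:
            problems.append(f"lost sync at packet {i // TS_PACKET_SIZE}")
            break
        pid = ((pkt[1] & 0x1F) << 8) | pkt[2]
        has_payload = pkt[3] & 0x10   # adaptation_field_control payload bit
        cc = pkt[3] & 0x0F
        if pid != NULL_PID and has_payload:
            if pid in last_cc and cc != (last_cc[pid] + 1) % 16:
                problems.append(
                    f"continuity error on PID 0x{pid:04X} "
                    f"at packet {i // TS_PACKET_SIZE}")
            last_cc[pid] = cc
    return problems

if __name__ == "__main__":
    for issue in analyze_ts(sys.argv[1]):
        print(issue)
```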

Fibre Channel is great, but with 40-Gig Ethernet on the horizon, and Gig-E available today, servers should support transfers via either medium. The network architecture should be fault tolerant with dual networking cards configured in redundant nodes or rings. Operators should be able to disconnect one network without taking down all the servers. The network should automatically reroute around the segment that is down. The system should allow users to take advantage of off-the-shelf protocols and tools such as FTP to access and administer their systems.
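
As a simple illustration of the off-the-shelf-tools point, the sketch below pushes a clip to a server using nothing more exotic than Python's standard ftplib. It assumes the server exposes a plain FTP service; the host name, credentials and file names are placeholders, and a production transfer would add retries and checksum verification.

```python
# Sketch: deliver a clip to a server over plain FTP using the standard library.
# Host, credentials and file names are placeholders.
from ftplib import FTP

def send_clip(host, user, password, local_path, remote_name):
    ftp = FTP(host)
    try:
        ftp.login(user, password)
        with open(local_path, "rb") as clip:
            # 1 MB blocks keep the transfer efficient on a Gig-E link
            ftp.storbinary("STOR " + remote_name, clip, blocksize=1024 * 1024)
    finally:
        ftp.quit()

send_clip("playout-server.example.com", "ingest", "secret",
          "promo_0412.mxf", "promo_0412.mxf")
```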

There is a lot of talk in the industry these days regarding metadata. GXF, MXF, AAF and proprietary formats move information along with the basic video and audio. As networking becomes more popular, users naturally see the need for an “electronic tape label” that describes the information being transferred in the file. Users in the post-production environment need a more full-featured information set for exchanging data such as layering, composition and effects. Users are now asking for metadata support in servers.
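
As a toy illustration of the “electronic tape label,” the snippet below writes a small metadata sidecar next to the essence file. The field names are invented for illustration; real interchange would carry this information in GXF, MXF or AAF structures rather than ad hoc JSON.

```python
# Sketch: a minimal "electronic tape label" written as a JSON sidecar.
# All field names and values are invented for illustration.
import json

label = {
    "title": "Evening News Open",
    "material_id": "NEWS-OPEN-0412",     # hypothetical house identifier
    "duration_frames": 450,
    "frame_rate": "29.97",
    "video_format": "MPEG-2 4:2:2P@ML, 50 Mb/s",
    "audio_channels": 8,
    "start_timecode": "01:00:00;00",
}

with open("news_open_0412.json", "w") as sidecar:
    json.dump(label, sidecar, indent=2)
```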


Figure 2. High-performance video servers typically direct “reads” and “writes” to and from storage systems without passing through the server’s central CPU.

The server should support faster-than-real-time transfer and multiple-stream transfer over the same medium. As networks get faster, users are asking for connection-based virtual channels from one server to another. Material can be streamed between servers as fast as the network will allow, or it can be streamed across a preset number of “pipes,” or virtual connections, within the network. There should be a user-definable priority scheme for controlling the bandwidth allotted to each client, both for network file transfers and for total server capacity.
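
One way to picture such a priority scheme is a token-bucket limiter that caps how fast a given client can pull material off the server. The sketch below is a simplified Python illustration; the rates, file paths and the idea of capping a low-priority archive transfer are assumptions on my part.

```python
# Sketch: a token-bucket limiter that throttles one client's transfer rate
# so playout channels keep the rest of the pipe. Rates and paths are made up.
import time

class TokenBucket:
    """Allow up to `rate` bytes per second, with short bursts up to `burst`."""
    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def consume(self, nbytes):
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return
            time.sleep((nbytes - self.tokens) / self.rate)

# e.g. cap a low-priority archive copy at 10 MB/s
bucket = TokenBucket(rate_bytes_per_s=10_000_000, burst_bytes=2_000_000)
with open("archive_dump.mxf", "rb") as src, \
        open("/mnt/nearline/archive_dump.mxf", "wb") as dst:
    while True:
        chunk = src.read(1_000_000)
        if not chunk:
            break
        bucket.consume(len(chunk))
        dst.write(chunk)
```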

Everything should be modular — power supplies, CPU, I/O, fans, disks, you name it. Filters should be easily accessible without removing any screws. Speaking of screws, the server should be designed so that everything can be swapped out without having to remove a large number of screws. Parts should be hot-swappable and interchangeable with other servers. Mid-plane designs with modules that plug into both the front and the back of the server seem to be popular with users. The server must be designed so that it can withstand the rigors of a mobile service. It should allow for addition of storage quickly and easily. The storage should be available to a large number of I/O units and available to other servers without bottlenecks. Fault-tolerant everything. Need I say more?

There seem to be two distinct camps regarding manipulation of video and audio once it is inside the server. Some people want a no-frills bit bucket. Others want to be able to manipulate the material with video effects, cuts, audio fades, simple wipes and so on. (Squeezeback, where the video is reduced in size during closing credits so a promo can play alongside it, is one common requirement.) In some applications, MPEG splicing and logo insertion are required.

The server should be a general-purpose IT-type server with SAN or NAS (pick your flavor) storage. When it comes to operating systems, some users have firm convictions, but the majority of people are just looking for a stable, off-the-shelf system that is easily maintained by their staff. They really do not care what OS is used; they just want it to work. The server should pipe I/O directly to storage without going through the CPU unless absolutely necessary.

The system should allow the use of message-based middleware for control and integration in large facilities. There is a strong requirement for fully disclosed and well-documented APIs and control interfaces so that users can smoothly integrate a server into their plant.
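
To show what a message-based control interface might look like, here is a hypothetical sketch that sends a self-describing JSON command to a server's control port. The port number, verb names and field layout are all invented; the point is that a documented, text-based message is far easier to integrate than an undocumented binary protocol.

```python
# Sketch: a self-describing JSON control message sent to a server's control
# port. Port, verbs and fields are hypothetical, not any real server's API.
import json
import socket

def send_command(host, port, verb, **params):
    message = json.dumps({"verb": verb, "params": params}) + "\n"
    with socket.create_connection((host, port), timeout=5) as conn:
        conn.sendall(message.encode("utf-8"))
        # A sketch only: a real client would read until a full reply arrives.
        return conn.recv(4096).decode("utf-8")

# e.g. cue a clip on channel 2 from automation or a monitoring dashboard
reply = send_command("server-1.example.com", 5250,
                     "cue", channel=2, clip_id="PROMO-0412", offset_frames=0)
print(reply)
```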

The server should have a reasonable cost basis, allowing users to start small and grow large in a reasonable, cost-justifiable way. The market will naturally support different servers at different price points, with different features, and users will purchase the server that makes sense for their operation. The reality is that many users purchase a smaller server and then expand it as their needs grow. Users are particularly irritated when they try to grow a small server only to find that storage costs are irrationally high, especially when their buddy next door just bought a 100GB disk drive for $100. There may be valid reasons why this comparison does not hold, but manufacturers must be aware that server pricing will always bear some relationship to commodity storage costs, and users resent it when storage pricing for broadcast servers bears little resemblance to what they can buy off the shelf.

And finally, the perfect server should have a Mean Time Between Failure (MTBF) of infinity, be 1RU high, generate no heat, have no moving parts and cost four dollars.

Brad Gilmer is president of Gilmer & Associates, executive director of the AAF Association and technical moderator for the Video Services Forum.

Send questions and comments to: brad_gilmer@primediabusiness.com
