The evolution of video servers

The introduction of information technology (IT) into the traditional broadcast facility allowed television's associated workflows to become even more integrated — and thus, more efficient. With this IT-fueled growth in networkability and processing power, the video server emerged as the preeminent component within the broadcast television facility.

We are now on the cusp of the latest evolutionary leap — a platform that combines the file-based workflow functionality of the video server with the ability to host previously discrete media applications in a single, compact chassis. Along with reducing the hardware cost and complexity of the broadcast facility, this platform provides the infrastructure to add new media applications, processes and services with the ease of installing software-enabled functionality. Integrated applications also reduce the costs of installation, operation and maintenance.

The rise of server-centric workflows

Until the introduction of the video server, most broadcast facilities built their workflows around the capabilities of their routers. In VTR/router-based infrastructures, a large router managed the flow of content throughout the facility, and components were discrete and connected via baseband. Moving content from one part of the facility to another often meant that tapes had to be physically copied, or dubbed. In a fully implemented video server model, in which a storage area network (SAN) becomes the facility nucleus, all users gain simultaneous access to the same material, so the whole concept of dubbing tape evaporates.

The ability to share media over a SAN reduced baseband routing requirements and enabled new server-centric workflow models. Furthermore, network access to media as files allowed the efficient implementation of new services with minimal incremental staffing requirements. The “IT-ization” of the broadcast facility was now fully under way.

As technology advanced and products evolved, external applications and processes that were once discrete were incorporated into the core server and master control platforms. Simpler applications and processes, such as play-to-air sequencers and proc amps, were the first candidates for integration. This assimilation was usually accomplished by adding special-purpose hardware and software to the platforms. With fewer independent components, routing requirements were greatly reduced. Generally, every function built into the server reduced the load on a routing switcher by three ports.

Just as production processes became more streamlined, the storage infrastructure improved in bandwidth and capacity. New capabilities emerged, such as SAN-based editing, which offered specialized environments such as television newsrooms the ability to get stories to air faster than ever before.

CPU-based processing capabilities of the platforms continued to increase, and more complex components became integrated as software features. Enhanced processing capability added multiviewer monitoring to routers, and allowed the integration of master control applications. It also brought the ability to integrate up/down format conversion and aspect ratio conversion to servers.

Benefiting most from Moore's Law are CPU- and storage-based components, and the video server is no exception. Dual-core processors dramatically multiplied software-processing capability. The newfound processing power allowed codecs to move from hardware ASICs on add-in cards to CPU-hosted real-time processes. Compression-format flexibility was now possible, and new compression standards, such as XDCAM and DV100, could be retrofitted as software upgrades.

What's more, as multicore CPUs and graphics processing units (GPUs) are added, the server platform becomes powerful enough to host many more applications in the broadcast chain, and a new platform is born. (See Figure 1 on page 71.) The addition of previously discrete processes such as channel branding dramatically improves workflow. These user applications essentially become thin clients, and the need for routing erodes further.

The evolution of the video server into this new platform not only revolutionizes the approach to facility design, but also streamlines operations. Smaller facilities can be built around one or two (for redundancy) platforms with internal storage, while larger facilities can take advantage of the SAN architecture.

The platform is designed around the latest computer architecture, taking advantage of 64-bit multicore processors, high-performance GPUs and 8GB of 800MHz memory, and using the PCI Express (PCIe) bus for I/O. Attached via the PCIe bus is a two-input/four-output HD-SDI interface card. All media operations are performed by the host CPU/GPU combination. Because all media operations are internally hosted, careful attention must be paid to processing capacity and memory-bandwidth resources. An appropriate approach is to create a resource budget and allocate it among all facility operations.
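As a rough sketch, the resource-budget idea described above might be modeled as follows. All operation names and cost figures here are hypothetical illustrations, not actual platform specifications:

```python
# Hypothetical sketch of a platform resource budget: each media operation
# declares its CPU and memory-bandwidth cost, and the budget refuses to
# admit any operation that would exceed total capacity.

class ResourceBudget:
    def __init__(self, cpu_percent=100, mem_bandwidth_gbps=12.8):
        self.cpu_free = cpu_percent          # remaining CPU headroom (%)
        self.mem_free = mem_bandwidth_gbps   # remaining memory bandwidth
        self.allocations = {}

    def allocate(self, operation, cpu, mem):
        """Reserve resources for an operation; reject if over budget."""
        if cpu > self.cpu_free or mem > self.mem_free:
            return False
        self.cpu_free -= cpu
        self.mem_free -= mem
        self.allocations[operation] = (cpu, mem)
        return True

budget = ResourceBudget()
budget.allocate("media_engine", cpu=90, mem=10.0)  # the dominant consumer
budget.allocate("branding", cpu=5, mem=1.0)
ok = budget.allocate("multiviewer", cpu=10, mem=1.0)  # exceeds remaining CPU
```

Admission control of this kind is what keeps an internally hosted platform from overcommitting itself when new software-enabled functions are added.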

The shift to IT-centric infrastructures has gone further than just accelerating product integration through processing power increases.

The actual architecture of applications is also shifting toward the IT client/server model. Recognizing the workflow benefits of device “virtualization,” application development is shifting to the application server/thin client model. The platform architecture accommodates this change by dividing core processes into four sections:

  • The media engine: The media engine performs all real-time media operations from ingest to playout. Typically, it can consume up to 90 percent of the platform's resources.
  • Drivers and services: This section handles the real-time control of the media engine, allowing for both externally (e.g. Ethernet, RS-422) and internally hosted control. Also at this level is the external media file interface that supports file and data wrapping and unwrapping operations.
  • Media application servers: Application servers are implemented as real-time components designed to decouple the human operator interface from the media platform. This not only allows for secure, remote operation, but also improves reliability, because most software bugs typically reside in user interface code. To further improve reliability, application servers such as automation are designed to continue to run against an internal schedule even in the event of communication loss to the automation client.
  • Client applications: These can run either locally or as Ethernet clients. This is particularly beneficial to media platform development, where the real-time heavy lifting of media handling can be separated from the user interface (UI). By moving the UI from the real-time application platform to attached clients, additional focus can be placed on reliability and security.
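The automation behavior described above can be illustrated with a minimal sketch. The class and its interface are hypothetical stand-ins, not the platform's actual API; the point is that playout advances against the internal schedule whether or not the automation client is reachable:

```python
class AutomationServer:
    """Sketch of an application server that keeps playing to an internal
    schedule even when the automation client connection drops."""

    def __init__(self, schedule):
        self.schedule = list(schedule)   # ordered playout events
        self.client_connected = True
        self.played = []

    def client_update(self, new_schedule):
        # Schedule changes from the client are accepted only while the
        # client is reachable; otherwise the internal copy stands.
        if self.client_connected:
            self.schedule = list(new_schedule)

    def tick(self):
        # Playout always advances against the internal schedule,
        # regardless of client connectivity.
        if self.schedule:
            self.played.append(self.schedule.pop(0))

srv = AutomationServer(["promo", "show_a", "break", "show_b"])
srv.tick()                     # plays "promo"
srv.client_connected = False   # simulate loss of the automation client
srv.tick()                     # playout continues with "show_a"
srv.tick()                     # and "break"
```

Because the server holds its own copy of the schedule, a failed client or severed network link degrades monitoring and control, not the on-air output.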

For redundancy's sake, client applications are able to log on and concurrently control primary and backup media platforms.
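In outline, such a client simply mirrors every control command to both platforms, so either can take over on failure. This is an illustrative sketch under assumed names, not the product's control protocol:

```python
class PlatformClient:
    """Hypothetical thin client that mirrors each control command to a
    primary and a backup media platform."""

    def __init__(self, *platforms):
        self.platforms = platforms

    def send(self, command):
        # Stand-in for issuing the same command over separate network
        # control connections; here each "platform" is just a command log.
        for platform in self.platforms:
            platform.append(command)

primary, backup = [], []
client = PlatformClient(primary, backup)
client.send("cue clip_42")
client.send("play")
```

Because both platforms receive an identical command stream, the backup stays frame-accurate with the primary and a switchover requires no resynchronization step.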

Conclusion

Taken individually, the integration of graphics into server playout and the addition of a built-in multiviewer are notable accomplishments. Taken together, the hosting of multiple applications on a single platform truly represents the next milestone in equipment design, one that will ultimately change the way broadcasters build their facilities. Put simply, the platform improves workflow efficiency throughout the entire broadcast chain, allowing new broadcast channels to be added with almost no additional operating cost.

Todd Roth is vice president of technology for NEXIO server systems at Harris.