The State of the Media Server

Tapeless, file-based workflow is now a reality, one that depends heavily on server platforms that have become integral elements of media organizations. Video servers and storage technologies have evolved to the point where they are employed across multiple sectors of the overall workflow. Whether a server system is composed of purpose-built hardware or assembled from commodity IT products, it must meet certain critical design and functionality requirements. Among them are the ability to ingest material once at the front end of the workflow and then, throughout the rest of the workflow, maintain a file-based operations model clear through to play-out, distribution and archive. The components necessary to meet these objectives include codecs, networked storage and management, and applications that handle the user interfaces and processing parameters needed to manipulate all of the essence in the file domain.

Other elements are equally important to the growth of the businesses that servers must support. Servers must offer a high level of scalability, with an easy path to extensibility, as facilities continue to grow their creation and delivery of media for broadcast and content packaging. This includes the ability to scale the number of simultaneous baseband ingest channels, as well as to offer unrestricted access into and out of the storage platform—whether serving files to the play-out side, to the content distribution and repackaging side, or to native post-production processes—without necessarily having to externally convert (wrap, transcode or transrate) the essence.

Additional levels of file-based activity extend the platform through the integration or management of media under the direction of a newsroom system and/or a media asset management (MAM) system. The MAM system may, in turn, need to work in direct conjunction with a facility automation system, feeding file-based content to remote systems or to news and production platforms with minimal latency or human intervention. This near-virtual environment allows content acquired from any source to be used by any entity and released in any form to a widening variety of destinations, both internal and external to the organization.

Throughout the workflow, and as program preparation is completed, finished files must be registered, tagged with metadata categorizing them by rights, ownership and description, then placed into the system where they can be archived and later accessed through convenient user interfaces that provide for program play-out via the appropriate server components. Ultimately, all of these components must deliver the reliable performance and day-to-day stability that are critical to maximizing quality and services, through a modular design with flexible bandwidth and storage capacity. Today's server platforms can provide the organization with an easily extensible foundation for future growth.
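The registration step described above can be sketched in miniature. The example below is a hypothetical, deliberately simplified illustration of cataloging a finished file with rights, ownership and descriptive metadata before archive; real MAM schemas are far richer, and the field names and values here are assumptions, not taken from any particular product.

```python
from dataclasses import dataclass, field, asdict
from datetime import date

# Hypothetical, minimal registration record for illustration only.
@dataclass
class AssetRecord:
    file_name: str
    rights: str          # e.g., licensing window or territory
    owner: str           # rights holder
    description: str     # human-readable program description
    registered: str = field(default_factory=lambda: date.today().isoformat())

def register_asset(catalog: dict, record: AssetRecord) -> None:
    """Place the finished file's record into the archive catalog,
    keyed by file name so play-out components can look it up later."""
    catalog[record.file_name] = asdict(record)

catalog = {}
register_asset(catalog, AssetRecord(
    file_name="EP101_final.mxf",
    rights="US broadcast, 2-year window",
    owner="Example Studios",
    description="Series pilot, 48-minute cut",
))
print(catalog["EP101_final.mxf"]["owner"])  # → Example Studios
```

In practice the catalog would live in a database behind the MAM, but the shape of the record—file identity plus rights, ownership and description—is the part the article's workflow depends on.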


With a continued emphasis on IP, server platforms now deal as much with file activities as with ingest and play-out. This has created new challenges for the supporting infrastructures that interconnect servers with peripheral components, including data networks, Ethernet switches and Fibre Channel or IP storage networks. Content systems must now support HD and SD video distribution over TCP/IP networks, delivering multiple simultaneous streams of live, scheduled and on-demand video to multiple clients through a centralized content distribution platform managed by automated and manual menu-driven user interfaces. All the while, these systems must also account for the "green" environments into which they are placed.

The challenges ahead for tomorrow's server platforms will focus more on content management than on the technologies of ingest, encoding and play-out; the industry has largely completed its work on these latter activities. Still, there is nothing "traditional" about the facility of tomorrow. With a continued focus on doing more with less, server platforms now need capabilities that allow internal alteration of the file structure to prepare content for other purposes and packages. Content files may now be encoded for adaptive bitrate streaming (ABR), which readies the file sets for on-demand distribution over public networks (for example, via Silverlight) and for other services (e.g., UltraViolet). Files may require bitrate adjustment (transrating) for applications such as digital signage, mobile devices or other apps. Many servers may eventually be fitted with built-in analytical tools that assure compliance with the business parameters outlined by so-called third-party "workflow engines": order-entry processors that prepare and deliver files based upon the "user orders" they receive. These engines may incorporate self-healing activities that analyze a file, make appropriate corrections, log and report those corrections, then send the file on to the next stages of the workflow.
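The core idea behind adaptive bitrate streaming can be sketched briefly: the same program is encoded at several bitrates (a "ladder"), and the client continually picks the highest rendition its measured network throughput can sustain. The ladder values and the safety margin below are hypothetical illustrations, not drawn from any particular service.

```python
# Hypothetical bitrate ladder: one rendition of the same program
# per entry, in kilobits per second.
LADDER_KBPS = [400, 800, 1500, 3000, 6000]

def select_rendition(throughput_kbps: float, safety: float = 0.8) -> int:
    """Return the bitrate of the highest rendition that fits within
    a safety fraction of the measured throughput; fall back to the
    lowest rendition when even that does not fit."""
    budget = throughput_kbps * safety
    usable = [r for r in LADDER_KBPS if r <= budget]
    return usable[-1] if usable else LADDER_KBPS[0]

print(select_rendition(2500))  # → 1500 (budget is 2000 kbps)
print(select_rendition(300))   # → 400  (falls back to lowest rung)
```

Transrating, as the article uses the term, is the server-side counterpart: producing those additional bitrate versions from an existing file rather than re-encoding from the original source.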


Companies are now beginning to see the need to manage their digital assets for a wider range of uses and applications. As mentioned earlier, servers increasingly use commodity IT hardware. This growing trend can be seen in server products employed for transmission systems (e.g., the Omnibus iTX platform), for enterprise-wide MAM (e.g., Dalet Digital Media), and certainly for video-on-demand or streaming applications (e.g., Digital Rapids). Tighter integration of these platforms is possible through open application interfaces such as those that end users, in concert with the AMWA, have developed (AS-02 and AS-03). In these models, constraints are placed on the MXF file format to provide improved "application-specific" interoperability using a standards-based approach.

Finally, edit-in-place servers and storage platforms seem to be coming into their own. Users appear less satisfied with proprietary implementations that limit collaborative workflows; they want to continue using the edit packages on which they are trained, without having to routinely make hardware changes to improve productivity. It seems inevitable that sharing content between platforms will migrate toward platforms sharing storage resources, which could fundamentally change the environment of video and media servers in the not-too-distant future.

Karl Paulsen is senior consulting engineer with Diversified Systems and regular columnist for TV Technology.
