Originally featured on BroadcastEngineering.com
The evolution of the video server
The server is the heart of file-based operations.

The video server is one of the key enabling technologies behind the move from videotape recording to file-based production. The server provides the bridge between content in the digital domain and real-time video. The device has come a long way from its early days as a VTR replacement, and it now forms an essential component in the carriage of video from acquisition to air.


Figure 1. The use of data compression gave video servers higher capacity.

The roots of the video server can be traced back to the digital disk recorder (DDR). DDRs allowed editors in a linear suite to bounce effects back and forth without adding tape generations, which in the analog days meant a loss of quality. Capacity was limited to minutes, but that was easily sufficient as a temporary store for short effects clips. The video was usually handled uncompressed, sampled according to Rec. 601.

The breakthrough was to use video compression and to leverage the ever-increasing capacity of hard drives. Compression gave higher storage capacity, measured in hours, and eased internal data read/write speeds to the disks. (See Figure 1.) The Tektronix Profile and the Hewlett-Packard MediaStream were successful early examples from the mid-1990s. The former used motion JPEG compression, and the latter used MPEG-2.
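Some back-of-the-envelope arithmetic shows why compression was the breakthrough. The figures below are illustrative assumptions, not specifications: roughly 270 Mb/s for an uncompressed Rec. 601 serial stream versus a 25 Mb/s compressed recording, on a 9 GB drive typical of the mid-1990s.

```python
def hours_of_storage(disk_gb, video_mbps):
    """Hours of video that fit on a disk at a given bit rate."""
    disk_bits = disk_gb * 8 * 1e9            # decimal gigabytes to bits
    seconds = disk_bits / (video_mbps * 1e6)
    return seconds / 3600

# Assumed round numbers: ~270 Mb/s for uncompressed Rec. 601 serial video,
# 25 Mb/s for a compressed server format, a 9 GB drive of the era.
print(f"9 GB, uncompressed 270 Mb/s: {hours_of_storage(9, 270):.3f} h")
print(f"9 GB, compressed 25 Mb/s:    {hours_of_storage(9, 25):.3f} h")
```

At these assumed rates a single drive goes from under five minutes of uncompressed video to nearly an hour compressed, which is the step change from effects store to program store.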

Since then, the servers have evolved so that the video server is now more of an edge device providing an I/O port between real-time streaming video and the file domain.

The video server

The challenge for the early video server designers was to build a device that would interface between the bursty random access of the drive and the constant flow of video frames. When recording, the video input must be converted to a stream of data and recorded to disk without any loss of video data.

Conventional computing was best effort: writing to disk simply took as long as it took. Playback was no easier; the server had to create a continuous stream of video with no dropped frames. Video demands a deterministic system, driven by frame sync.

Even a low-cost DVR can achieve this today, but in the 1990s there were many obstacles to overcome. Standard operating systems like Windows were not deterministic, so specialist real-time operating systems were used to manage the video data path, with Windows providing a management overlay. That has since changed, and servers today run on Windows, Linux and Mac OS.
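The difference between best-effort and deterministic playback can be sketched in a few lines. This is a minimal illustration, not a real server's data path: `read_frame` is a hypothetical stand-in for a disk read, and the loop paces output against absolute deadlines rather than sleeping a fixed interval after each frame, so scheduling jitter cannot accumulate.

```python
import time

FRAME_RATE = 25                  # PAL: a new frame is due every 40 ms
FRAME_PERIOD = 1.0 / FRAME_RATE

def read_frame(n):
    """Hypothetical stand-in for a disk read; a real server
    prefetches frames into RAM buffers well ahead of air."""
    return f"frame-{n}"

def play_frames(count):
    """Pace output against absolute deadlines, not sleep-after-work."""
    played, late = [], 0
    start = time.monotonic()
    for n in range(count):
        frame = read_frame(n)                 # must finish before the deadline
        slack = (start + n * FRAME_PERIOD) - time.monotonic()
        if slack > 0:
            time.sleep(slack)                 # hold the frame until its tick
        elif slack < -0.005:                  # missed by more than 5 ms
            late += 1                         # a broadcast server must never drop here
        played.append(frame)
    return played, late

frames, late = play_frames(5)
print(frames, "late:", late)
```

On a general-purpose OS, `sleep` only promises to wake no earlier than requested, which is why the early servers pushed this loop into a real-time kernel or dedicated hardware.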

Commercials to program playout

The servers found an immediate application in the playout of commercials. Constantly repeating playout caused wear and tear on VTRs, and expensive library management systems were needed to store and load the tapes that played the spots.

The move to play out programs as well as commercials from disk required much higher disk capacity. With programs some 60 times the duration of a typical spot, manufacturers had to wait for disk capacities to increase. In the late 1990s, server manufacturers started to add ports for external drive arrays to extend capacity.

The servers developed more along the lines of conventional IT storage, NAS and SAN, but with the addition of the real-time video I/O ports. A broadcaster could look at storing many days of programming on the disk arrays and seriously consider retiring those old robotic cart machines. It was at this point the terms “tapeless” or “file-based” came into common use.

The addition of a storage network also facilitated the sharing of files across transmission servers. As broadcasters used dual servers to provide redundancy, one file could be mirrored across the air servers. For multichannel operators, commercials could easily be copied across many air servers.

Evolution of mass storage

The early servers used parallel SCSI interfaces to the additional storage. Fibre Channel and, later, Ethernet have replaced the earlier technology, following the move by the IT sector. Although the NAS and SAN are the mainstay of storage, very large broadcast media storage systems need the advantages of active storage typified by Harmonic's MediaGrid and EMC's Isilon. Such nearline storage provides lower-cost mass storage for the program library, leaving the traditional video servers as the real-time edge devices.

As it became common to back the video server with large storage arrays, the function of the server specialized. Instead of a single server with input and output, much like the VTR before, servers became dedicated to output — the air or transmission server — and the ingest server. The latter added features like simultaneous encoding of a broadcast-resolution file and a low-resolution proxy for browse viewing. The ingest servers still retain video outputs, as an important function is to view files while they are being written. In news, this is to speed workflows, and in general playout applications it can be for QC.


Specialization continued with the development of the production server. Increased performance from the servers' component parts means that higher bit rates can be used. Instead of the 25Mb/s or lower bit rate typical for playout, acquisition codecs can be used, allowing live recording from cameras to files of sufficient resolution for editing.

The production server supports features not required for playout, notably slow-motion playback of two- or three-phase recordings from slo-mo cameras.

In sports and studio production alike, the video server has replaced expensive banks of VTRs recording iso feeds, along with the attendant time-consuming ingest into the editing systems. It has increased flexibility and eased the production of fast-turnaround shows like reality TV. Live shows can now feature packaged highlights, something only possible with servers; the videotape workflow was too time-consuming.



Figure 2. The video server has evolved into a complete storage system to support broadcast operations.

Freed from the capital and maintenance costs of cart machines, the video server has provided broadcasters the ideal platform for multichannel delivery. Time-delayed channels are just another output port on the server system. It has given broadcasters the means to build subchannels that use multiple plays of the same programming, akin to the near-VOD movie channels.

Many broadcasters need to process incoming programs to create trails as an essential part of their channel branding and promotion. This has led to the video server architecture evolving into a complete playout system to support ingest, editorial compliance, promo creation and playout. (See Figure 2.) Clips can be edited in place on the central storage (nearline) or downloaded to local storage in the NLE workstation, with the finished results copied back to the nearline server ready for transmission.

For all the advances in disk capacity, it still makes for more efficient operations to use a low-bit-rate long-GOP codec for the air servers, easing the network requirements for file transfer and making for more efficient use of storage. However, for program preparation, it is preferable to use a master format, like AVC-I. The playout system may well run with two codecs, so transcoding will form an essential process in the system.
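The trade-off between a long-GOP air format and an I-frame mastering format is easy to quantify. The bit rates below are illustrative assumptions, not figures from the article: roughly 8 Mb/s for a long-GOP playout codec against 100 Mb/s for AVC-I, with an assumed 70 percent effective throughput on a Gigabit Ethernet link.

```python
def gigabytes(hours, mbps):
    """Storage needed for a given duration at a given video bit rate."""
    return hours * 3600 * mbps * 1e6 / 8 / 1e9

def transfer_minutes(size_gb, link_mbps, efficiency=0.7):
    """File-transfer time on a link at an assumed real-world efficiency."""
    return size_gb * 8e9 / (link_mbps * 1e6 * efficiency) / 60

# Assumed rates: 8 Mb/s long-GOP for air, 100 Mb/s AVC-I for mastering.
print(f"24 h at 8 Mb/s long-GOP:  {gigabytes(24, 8):.0f} GB")
print(f"24 h at 100 Mb/s AVC-I:   {gigabytes(24, 100):.0f} GB")
print(f"1 h of AVC-I over GigE:   {transfer_minutes(gigabytes(1, 100), 1000):.1f} min")
```

Under these assumptions the mastering format needs more than twelve times the storage and network capacity, which is why the playout system runs two codecs and makes transcoding an essential process.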


Newsroom playout is a special case, where the schedule is a constantly changing rundown, and each clip is manually cued rather than running against the clock. However, the requirements are not dissimilar from a playout server. The source of the clips may be an ingest server, recording agency feeds or incoming lines. News clips will also come from SNG feeds — again, needing an ingest server — or local camera memory cards.

Unlike playout, all news clips will be edited, and edit-in-place on the central storage array is the most common process. Finished clips can then be transferred to the air server via instructions from the newsroom computer system (NRCS).

Graphics server


Figure 3. Modern computer processors give the server the power to perform additional processes, including mixing live video and keying graphics.

As computing power has increased, it has become possible to add functionality to the video server. Graphics is one area where low-cost commodity components have transformed what can be achieved within the server. The GPU provides huge graphics processing power at negligible cost. Several functions within master control can potentially be collapsed into a single appliance. This reduces system complexity and, with that, overall cost. It is possible for the server to perform squeezebacks and picture-in-picture effects, add graphics, captions and EAS, potentially saving a string of 1RU boxes downstream. (See Figure 3.) The channel-in-a-box adds further functionality to the video server with the integral playlist automation.
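At its core, a picture-in-picture effect is just scaling one frame and compositing it over another. The toy sketch below illustrates the idea on grayscale frames held as lists of pixel rows; all the function names are hypothetical, and a real server would use filtered scaling on a GPU rather than crude decimation on a CPU.

```python
def make_frame(width, height, value):
    """A toy grayscale frame: rows of pixel values."""
    return [[value] * width for _ in range(height)]

def shrink(frame, factor):
    """Crude downscale by pixel decimation (a real device would filter
    to avoid aliasing); keeps every factor-th row and column."""
    return [row[::factor] for row in frame[::factor]]

def picture_in_picture(background, inset, x, y):
    """Composite the inset frame over the background at (x, y)."""
    out = [row[:] for row in background]          # leave the input untouched
    for dy, row in enumerate(inset):
        for dx, px in enumerate(row):
            out[y + dy][x + dx] = px
    return out

program = make_frame(16, 9, 0)        # "program" output, black
camera  = make_frame(16, 9, 255)      # second source, white
out = picture_in_picture(program, shrink(camera, 4), x=11, y=1)
```

A squeezeback is the same operation with the program video itself as the inset, which is why both effects fall naturally out of one compositing stage in the server.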

Broadcast infrastructure

The video server long ceased to be a stand-alone appliance like a VTR. Connected to a storage network and controlled by digital asset management, editing systems and playout automation, the server forms part of the broadcast infrastructure in an end-to-end process from ingest to multidevice delivery. The ease with which the server can be controlled and the media transferred around the storage network lies at the heart of file-based operations.


The video server has come a long way from a simple replacement for the VTR. Although in essence still a port between the real-time SDI video and the file domain, the server has enabled the entire file-based ecosystem at the heart of broadcast operations. That adds many new facilities just not possible with tape, like browse, rough-cut editing, read during record, and simple copying and wide-area transfer.

The video server added broadcast facilities to an IT platform: video reference, SDI I/O and real-time operation. Of necessity, the device needs video cards to provide these features. However, the video reference can be replaced by network time protocol. Already the output stream is carried over ASI, not SDI, in many applications, and IP is becoming more popular. The vastly increased processing power and bus speeds mean that deterministic operation can be guaranteed from commodity platforms. Another development is the SSD, which is finding application in acquisition and, at the other end of the chain, in cable head-ends. Although the hard drive still has many advantages, the SSD is closing the gap in many applications and is starting to show up in video servers.

As videotape fades to a memory, the ingest server becomes less useful. Programs arrive as files via fiber or hard drive. The only function for the ingest server is to record live feeds. Advances in the carriage of video over IP threaten even that function. Once every camcorder had a VTR mechanism; now there is just a slot for a memory card. Will the video server survive, or will it become another version of the streaming server?

David Austerberry is editor of Broadcast Engineering World Edition.
