

Imagine a television station that shoots on Panasonic P2 or Sony XDCAM. That station could potentially broadcast without using videotape at any stage in its production and transmission pipeline. Traditionally, video storage meant an archive or library of videocassettes. The drive for process efficiencies means that broadcasters are now looking to handle video as files rather than as videotape. As a consequence, video storage is undergoing a step change.

The ideal video storage system would be reliable and scalable, with high packing density. It would sport high data transfer rates and facilitate collaborative workflows. And it would cost less than using videotape.

The classic islands for video files have been postproduction and playout. All other broadcast processes — such as acquisition, distribution and archive — have relied on videotape. The need to repurpose content for delivery over new platforms like mobile and the Web means that broadcasters are rebuilding their workflows from the ground up in order to lower operating costs.

Advances in storage devices and architectures now make it feasible to run a station without a single videotape on the premises. File-based broadcasting claims to leverage the low cost of commodity IT systems. This is partially true, but storage for video has a different set of requirements than typical IT files, such as a database record, or an e-mail or office document.

A 110-minute movie archived with MPEG-2 I-frame compression at 50Mb/s gives a file size of about 50GB, with an HD version being much larger. That is 1 million times the size of a typical word processor file, so it is no surprise that regular IT storage is not going to be suitable. But it is not just file sizes. Broadcasters need many special features in order to get the optimum performance from a storage network. One is support for containers or wrappers. Another is read-while-write. A third is partial restore.
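As a back-of-envelope check, the size of a constant-bit-rate recording is simple arithmetic. A minimal sketch (the 50GB quoted above presumably adds audio essence and wrapper overhead to the video figure computed here):

```python
# Size of a constant-bit-rate recording: duration x bit rate.
def file_size_gb(duration_min: float, bitrate_mbps: float) -> float:
    """Essence size in decimal gigabytes for CBR material."""
    bits = duration_min * 60 * bitrate_mbps * 1e6
    return bits / 8 / 1e9

# 110-minute movie at 50 Mb/s MPEG-2 I-frame:
print(f"{file_size_gb(110, 50):.0f} GB")  # prints 41 GB (video essence only)
```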

File storage for video

Video files are usually containers that store interleaved video and audio essence, or possibly references to external atomic audio and video files. The containers may also carry metadata: structural metadata such as time code, or production data that is essential for controlling operations. Video storage systems must be designed to handle these special features that set video apart from general office files. For example, storage may need to be MXF-aware.

A big issue with video operations is read-while-write. Small office files are written so quickly that we can wait for the process to finish before opening the file. In broadcast, we are used to video streaming through real-time connections. A waveform monitor or picture monitor can be used to view the content during the transfer.

A file cannot normally be viewed until the index is written, the final operation during a move or copy as the file is closed. In newsroom operations, it is essential to read files while they are being written. During long-form program ingest, checking the output as it is written confirms that the operation is proceeding correctly. What if the source VTR had a head clog? The operator would be unaware until the completion of the ingest, a process that may take over an hour. Read-while-write is taken for granted with video systems, and file-based storage must support a similar facility.
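On the file side, the behavior can be sketched as a reader that follows a growing file, in the spirit of tail -f. The function and its parameters are illustrative, not any product's API; a production system would also be container-aware, since the index of an MXF file may not exist until the writer closes it:

```python
import time

def follow_growing_file(path, chunk=65536, poll=0.5, idle_limit=10.0):
    """Yield chunks of a file while another process is still writing it.
    Stops after the file has been quiet for idle_limit seconds.
    Illustrative only: a real system would watch for the writer closing
    the file rather than relying on a timeout."""
    offset, idle = 0, 0.0
    with open(path, "rb") as f:
        while idle < idle_limit:
            f.seek(offset)
            data = f.read(chunk)
            if data:
                offset += len(data)
                idle = 0.0
                yield data  # hand essence to a decoder or monitor here
            else:
                time.sleep(poll)
                idle += poll
```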

Partial restore

Another video issue is partial restore. Often, only a short excerpt of a file is needed. Consider the creation of a trailer: two to three minutes of material is all that is needed to cut a promo for a one-hour program. A conventional IT system would recall the entire file from the tape archive to disk in order to access the short clips. Not only does this take a long time, but it also ties up a data tape drive; a tape library needs additional drives to recall data that is not required, which is an unnecessary cost. Broadcasters need a partial restore in which the required section of the file is defined by viewing a browse-resolution proxy and marking the in and out time codes for the promo edit. The SMPTE time-code values must then be mapped to tape locations so that only that part of the file is restored to disk and efficient use is made of the available drives. Such functionality is only found in video-centric archive controllers.
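For constant-bit-rate, I-frame-only essence, the mapping from time code to byte range is linear, which is what makes a partial restore cheap. A sketch under that assumption (a real archive controller would consult the file's index table; the names here are illustrative):

```python
def tc_to_frames(tc: str, fps: int = 25) -> int:
    """Non-drop SMPTE time code HH:MM:SS:FF to a frame count."""
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def byte_range(tc_in: str, tc_out: str,
               bitrate_bps: int = 50_000_000, fps: int = 25):
    """Byte span covering [tc_in, tc_out) when every frame is the same
    size and independently decodable (CBR, I-frame only)."""
    frame_bytes = bitrate_bps // 8 // fps
    return (tc_to_frames(tc_in, fps) * frame_bytes,
            tc_to_frames(tc_out, fps) * frame_bytes)

# Three minutes for a promo, instead of recalling the whole program:
start, end = byte_range("00:10:00:00", "00:13:00:00")
print(end - start)  # prints 1125000000, i.e. about 1.1 GB restored
```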

Disk advances

One of the recent changes in disk technology has been the serial interface. Parallel SCSI has been the workhorse of editing systems, with Fibre Channel used for high-performance applications. In an effort to lower manufacturing costs and increase performance, a serial replacement for parallel SCSI, Serial Attached SCSI (SAS), has been developed. Extending parallel interconnections to higher transfer speeds is limited by data skew as the data word travels along the ribbon cable: variations in cable dimensions, stemming from manufacturing tolerances, lead to slight differences in propagation rates between conductors. The need to recover a single word from the skewed data limits cable length and transmission clock rates.

Moving to serial interconnects avoids these problems, although matching the throughput of a parallel bus means raising the link clock rate by a factor of the word length. Early SAS products, at 3Gb/s, were no faster than Ultra-320 SCSI, but future developments should take speeds up to 12Gb/s. Apart from the potential speed increase, the serial connector is much smaller than the old parallel connectors, a big advantage when constructing dense drive arrays.

Similar to SAS is SATA, the serial implementation of the ATA/IDE interface used for desktop PC drives. Current products support a 3Gb/s interface speed. As SAS/SATA is a point-to-point interconnection, bandwidth is not shared, unlike the multidrop bus used in parallel SCSI arrays.
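The difference matters most for per-drive bandwidth. A rough comparison, assuming 8b/10b coding on the serial links (the figures are ballpark, not vendor specifications):

```python
def shared_bus_mb_s(bus_mb_s: float, drives: int) -> float:
    """Bandwidth each drive sees on a multidrop parallel bus."""
    return bus_mb_s / drives

def p2p_mb_s(link_gb_s: float) -> float:
    """Usable bandwidth of a dedicated serial lane (8b/10b: 10 bits/byte)."""
    return link_gb_s * 1000 / 10

# Eight drives sharing an Ultra-320 bus vs eight 3Gb/s SAS/SATA lanes:
print(shared_bus_mb_s(320, 8))  # prints 40.0 (MB/s per drive)
print(p2p_mb_s(3))              # prints 300.0 (MB/s per drive)
```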

Enterprise-class drives are now available with a choice of Fibre Channel, SCSI, SAS or SATA interfaces. It is possible to mix interconnection technologies, so a RAID controller may use Fibre Channel for external connections to the network and the lower-cost SATA for local links to the drives.

Network capacity

Just as disk capacities increase, network speeds improve. Gigabit Ethernet can now be found in domestic networks, but for video applications, 10GigE represents the current implementation of the technology. Fibre Channel offers 4Gb/s data rates, but real-time video is moving to 3Gb/s for 1080P50 and 1080P60. Data interconnections are only just keeping pace with typical video data rates for uncompressed HD, 2K and 4K files.

The saving grace for many systems is that compression is used. With the majority of multichannel systems still running SD, the real-time video data rates are more likely to be 50Mb/s, which gives current data networks plenty of capacity.
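A quick capacity check bears this out. A sketch, with the 70 percent usable-throughput figure an assumed allowance for protocol overhead and headroom:

```python
def streams_per_link(link_gbps: float, stream_mbps: float,
                     usable: float = 0.7) -> int:
    """Real-time streams that fit on a network link, allowing for
    protocol overhead and headroom (usable fraction is an assumption)."""
    return int(link_gbps * 1000 * usable // stream_mbps)

print(streams_per_link(1, 50))   # prints 14 -- GigE carries a dozen-plus SD channels
print(streams_per_link(10, 50))  # prints 140
```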

Storage networks

Classic storage networks used the network attached storage (NAS) or storage area network (SAN) architecture. Both feature a small number of processing heads and a large number of drives. The processors manage the indexing of files and run the filing system.

The NAS is lower cost and simple to administer, but it does not scale and suffers from a bottleneck at the network interface. Adding one or more NAS appliances to build higher capacity introduces file management issues, as a project can expand to span several drive arrays. The SAN manages a single pool of storage, but is generally more expensive to purchase and requires skilled personnel to maintain.

Many SAN products only support one operating system, but for television production, it is essential to have cross-platform support for the client devices (for Mac OS, Windows and UNIX) and a shared file system.

To meet the cost and performance requirements of broadcasters, storage vendors have looked to new architectures that can avoid some of the drawbacks of the NAS and SAN. As processors have gotten cheaper and more powerful, it is now possible to disperse processors throughout the disk system, moving from dumb or passive disk arrays to “smart” or active arrays.

Active arrays boast features like fast rebuild after drive failure. A conventional RAID array is at risk after a drive failure and replacement. If a second drive fails, there could be potential data loss. Active storage can use more sophisticated data protection strategies, a consequence of the processors being dispersed throughout the storage system.
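The single-parity risk is easy to see in miniature. In a RAID-style scheme, one parity block protects a stripe: any one lost block can be rebuilt by XORing the survivors, but a second loss during the rebuild window is unrecoverable, which is why active arrays layer on stronger protection. A toy sketch:

```python
def xor_blocks(blocks):
    """XOR equal-length byte blocks together (RAID-style parity)."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

stripe = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(stripe)

# Lose the middle block; rebuild it from the survivors plus parity:
rebuilt = xor_blocks([stripe[0], stripe[2], parity])
assert rebuilt == stripe[1]
# Lose two blocks, and there is no longer enough information to recover.
```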

Data tape

Contrary to rumors, tape is not dead. For video archives, data tape has many advantages over spinning disks. Although disk capacities are increasing every year, tape technology is keeping pace. As green issues come to the fore, tape has a big advantage because it only consumes power when access is needed. Disks need power to keep spinning and to run air-conditioning to remove the generated heat. Recent analyses show typical costs of tape libraries are about one-tenth that of disk storage systems.

Tape cannot provide the instant and random access of disk arrays, but used together in nearline and archive configuration, tape and disk can provide the functionality that broadcasters need, while controlling cost.

Just as videotape has evolved through many formats, from 2in quad through to HDCAM-SR, data tapes take many forms. Current formats include Linear Tape-Open (LTO), SAIT and SDLT. Fourth-generation LTO-4 represents the current state of LTO development, with a roadmap leading to a future capacity of 3.2TB and transfer rates up to 270MB/s, eight times the data rate of SDI.

Some may ask: “Why not use videotape as the archive?” A single LTO-4 data tape cartridge with 800GB capacity can store about 35 hours of 50Mb/s MPEG-2 recordings, a much higher packing density than the videocassette. In addition, the LTO-4 cartridge volume is one-quarter that of a 1/2in videocassette (23cm3 versus 92cm3).
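The 35-hour figure follows directly from capacity over bit rate; a minimal check:

```python
def hours_per_cartridge(capacity_gb: float, bitrate_mbps: float) -> float:
    """Playing time that fits on a cartridge at a given essence rate."""
    seconds = capacity_gb * 8e9 / (bitrate_mbps * 1e6)
    return seconds / 3600

print(f"{hours_per_cartridge(800, 50):.1f} h")  # prints 35.6 h for LTO-4 at 50 Mb/s
```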

Broadcasters who want to build secure archives can take advantage of two features supported by the latest LTO products. The LTO-4 generation has introduced tape drive encryption, which simplifies the systems needed to prevent unauthorized access to archives. LTO-3 provides for Write Once, Read Many (WORM) support, again providing extra security and proof against problems like viruses and tampering.

Tape technology does have issues. It is difficult to reuse tapes; deleted files cannot simply be overwritten with new data as they can be on a disk drive. Data is generally appended (an assemble edit in videotape terminology), so large archives with much deleted material use tape inefficiently. The simplest way to defragment the storage is to copy the scattered files to fresh cartridges.

Data tapes differ from videotape in the write process. Videotape streams to the tape and covers tape defects by error concealment on playback. A data tape drive can check each recorded block and rerecord it if there is an error. To a high level of accuracy, what you record is what is played back. Data tape can be losslessly duplicated, whereas videotape suffers a small generational loss.
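That bit-exactness can be verified in practice by comparing checksums of source and copy after a migration; a sketch (the file names are hypothetical):

```python
import hashlib

def sha256_of(path: str, chunk: int = 1 << 20) -> str:
    """Streaming SHA-256 of a file, suitable for multi-gigabyte essence."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

# After copying to a fresh cartridge or a new tape generation:
# ok = sha256_of("master.mxf") == sha256_of("copy.mxf")
# No videotape dub can offer this bit-for-bit proof of identity.
```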

This fact underlies some of the benefits of a data tape library. A file can be QC'd and written to the archive in the knowledge that it can be restored in several years' time in its original form; a videotape must go through another QC stage on playback. A data tape library can be migrated to a future generation of format by a simple copy command, whereas dubbing videotape to a new format is a labor-intensive operation, frequently requiring video signal restoration with attendant quality losses.


As television moves away from videotape, some working practices will have to change. In the past, there has always been the comfort factor that if something goes drastically wrong with disk systems, the original videotapes can be reingested.

In the future, that may not be possible. That means media and entertainment companies will have to take the same precautions as banks to prevent data loss. Although some broadcasters do have very strict policies, others still have lax procedures. Losing a single tape is rarely a catastrophe, but getting a virus in a video storage network could be. Central storage represents a single point of failure if not rigorously designed with security and disaster recovery in mind.

Broadcast storage architectures

In the days of tape, it was common to use two qualities: a broadcast format for the transmission path and VHS for viewing. File-based video storage can be far more granular, with a mix of resolutions and storage media. (See Figure 1 on page 36.)

An optimized system could have six or more layers, including:

  • Uncompressed video on high-performance RAID for nonlinear editing
  • Lightly compressed video, again for editing (Avid DNxHD, DV25, Apple ProRes)
  • I-frame MPEG for transmission masters and archiving, stored on nearline disk and data tape
  • Long-GOP MPEG for playout files for efficient use of playout servers
  • Time-code indexed browse for rough cut editing
  • Low-resolution browse for general viewing.
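In configuration terms, the layers above amount to a tier map. An illustrative sketch with representative bit rates (the tier names and figures are hypothetical, not any vendor's product tiers):

```python
# Map each quality layer to its media type and a rough essence rate (Mb/s).
STORAGE_TIERS = {
    "uncompressed_edit":   ("high-performance RAID", 1500),  # assumed rate
    "mezzanine_edit":      ("FC/SAS array",           220),
    "transmission_master": ("nearline disk + tape",    50),
    "playout":             ("playout server",          15),
    "rough_cut_browse":    ("SATA array",               2),
    "viewing_browse":      ("SATA array",               1),
}

def media_for(tier: str) -> str:
    """Where a given quality layer lives in this hypothetical plant."""
    return STORAGE_TIERS[tier][0]

print(media_for("playout"))  # prints playout server
```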

These different video formats are stored on a mix of disk and tape systems. High-cost servers provide ultra-reliable, frame-accurate performance for playout. High-speed, high-bandwidth arrays serve editing functions. Lower-cost SATA arrays can provide nearline storage of work in progress, while cost-effective tape provides the archive and, at a remote site, disaster recovery.


File-based video storage is fundamental to achieving the cost savings necessary as broadcasters try to create more programs for less money. It should be clear that any real broadcast video storage system is going to be a hybrid that uses different subsystems to meet the various requirements of postproduction, playout, archive, new media distribution and back-office browsing.

Several vendors now offer storage designed for the needs of the broadcast community, so it is becoming possible to build a tapeless station that provides the reliability and resilience once provided by videotape.