Since the invention of the hard disk there have been predictions that linear media, including videotape, will eventually disappear. Not even a decade ago, Tektronix heralded the arrival of what we now call video servers with a demonstration of the Profile. The Profile was a platform that Tektronix intended others to use for their own software and hardware, with the device acting as the video recorder in a variety of roles.
Initially, the video server market was severely constrained by limited disk capacity and the high bit rates needed for quality video reproduction. As a result, the first target applications were quite limited as well. In an interesting twist of technical progress, the first video servers often worked as cache recorders on the outputs of large robotic linear tape libraries. Until disks attained high capacity and low cost, servers remained a narrowly applied technology: early servers could store only a couple of hours of quality video, about the same as videotape.
Soon 4GB drives arrived, followed quickly by 9GB drives, 18GB drives and larger. Today, servers are delivered with at least 50GB drives in arrays that can quickly reach terabyte sizes. Within the next year, drive sizes will exceed 100GB in commercial use.
Clearly, servers have penetrated areas that few would have thought of 10 years ago. Professional video servers play music videos in clubs, and they are used in homes to time-shift programming under the control of sophisticated automation systems. The last 18 months have seen the introduction of the PVR (personal video recorder), a consumer video server with embedded applications that allow time-shift recording of hours of MPEG-encoded media stored on a hard disk, in what can best be described as an appliance.
A review of the technology quickly shows that servers are simply computers with:
- relatively conventional operating systems (NT, Unix, VxWorks and Mac);
- relatively conventional hard drives from a range of manufacturers;
- disk I/O controllers for RAID and non-RAID arrays (or single drives); and
- I/O cards supporting analog and digital interfaces.
There was a sentiment openly touted by more traditional server manufacturers that the computing industry did not support the unique requirements of real-time, isochronous delivery of video that broadcasters need. That may well have been true to a degree at one time. Now a number of familiar names in the computing industry supply hardware with superior networking capabilities and the deterministic delivery that replicates the performance of a VTR or a more traditional video server. It remains to be seen how quickly these computer industry companies will learn to market to the video industry, but their scale will make them formidable competitors.
At one time there was a distinct division between video servers and editing systems. Now, as the market has begun to recognize that the need to store final product on videotape is less important for some applications, the lines between the two principal market segments that use digital video on disk have begun to blur. Server manufacturers have begun to supply editing applications for news. With cross-platform compatibility between the compression used in editing systems and that used in video servers, it is entirely practical to look at the interconnected web of servers and edit stations as a seamless whole. Unfortunately, there are still barriers to implementing such cross-application systems without significant expertise and likely a healthy dose of assistance from the manufacturers involved or specialists in integration.
As part of an editing environment, video servers can offer some features that simple editing systems cannot easily replicate. Servers are optimized for recording and playback of high-quality pictures. But if two copies are recorded, one at high bandwidth and one at low bandwidth, both full-quality playback and low-quality browsing can be done against one integrated database of video clips. If the server ingests remote feeds, for instance, producers can view those assets immediately from network-attached workstations. Decisions about cutting a news story can be passed along as an e-mail or cut list to an editor, who can assemble the cut story in a nonlinear playlist without destroying the original media. The media likely does not need to be copied at all; it can be played directly from the original clips as fragmented, nonlinear media. Perhaps a cut copy and a low-resolution proxy copy will be made, so that the large quantity of ingested but redundant material does not need to be kept for archive purposes.

Just as the outtakes from a motion picture may well run 12 to 20 times the length of the final cut, broadcast news creates a similar shooting ratio. In broadcast sports production, it is common to make a "melt" reel, with the few minutes of truly important archive material copied off to a source tape for future use. In the same manner, the archive in the tapeless facility may contain only a few minutes of useful material, with the hours of video shot on the courthouse steps without any useful action deleted.
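The nonlinear playlist described above can be sketched in a few lines of code. This is a minimal illustration, not any vendor's actual format: the `Clip`, `Segment` and `Playlist` names are hypothetical, and a real system would reference timecode rather than seconds. The key idea it shows is that the cut story is only a list of in/out references into the original ingested media, so assembling it copies no frames.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Clip:
    media_id: str      # identifies the ingested media file on the server
    duration: float    # seconds of recorded material

@dataclass(frozen=True)
class Segment:
    clip: Clip
    in_point: float    # seconds into the original clip
    out_point: float

class Playlist:
    """A cut story: ordered references into original media, no copies."""

    def __init__(self):
        self.segments = []

    def add(self, clip, in_point, out_point):
        if not (0 <= in_point < out_point <= clip.duration):
            raise ValueError("in/out points fall outside the recorded clip")
        self.segments.append(Segment(clip, in_point, out_point))

    def duration(self):
        # Total running time of the cut story, computed without
        # touching the media itself.
        return sum(s.out_point - s.in_point for s in self.segments)

# A producer marks two moments in an hour-long ingested feed:
raw = Clip("remote-feed-0314", 3600.0)
story = Playlist()
story.add(raw, 120.0, 135.0)   # 15-second sound bite
story.add(raw, 900.0, 912.0)   # 12-second cutaway
print(story.duration())        # 27.0 seconds on air
```

Because the playlist holds only references, the same original clips can back both the full-bandwidth playout copy and the low-resolution browse proxy.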
Servers consume storage voraciously, and there are a number of ways to attach storage to the I/O engines. One approach builds a disk array separate from the operating system's storage and gives a special-purpose controller exclusive access to the media disks. Or, as is sometimes the case with DV I/O cards, the media may be part of the space the operating system directly controls. Giving the operating system access may increase the range of capabilities for archiving and networking stored media.
Many manufacturers have created shared storage concepts where the media is placed in a pool of network-attached storage. The media drives might be on the same network as other network resources and workstations or on a separate and distinct media-only network where only appliances that need to access the media files directly are resident. Making the media available to a broad range of applications and hardware enables play-to-air and editing stations to share the resources in a peer-to-peer configuration. Management of the media may well become complex in such an environment.
When the media is no longer needed in the near-line and online operations, it can be pushed to an archive device under control of software that keeps track of the location of the assets and facilitates calls to the media library. Often called asset management, this process is one of the least understood pieces of the growing universe of server and media management devices. In an interesting twist of fate, the cheapest storage for the data that represents the media is again linear tape, but tape not designed for linear playback. These data tapes are cousins of linear videotape, but distinct in that they offer none of the stunt modes, editing features, control panels or monitor outputs of more conventional digital video recorders. Rather, these computer "back-up tapes" are intended for random access to file-based media. Indeed, such a drive is useless unless under the control of software that loads and accesses the files.
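At its core, the asset-management function the text describes is bookkeeping: software that knows which storage tier each asset currently occupies and brokers moves between them. The sketch below is a deliberately simplified illustration under that assumption; the `AssetManager` class and its tier names are hypothetical, and a real product would also track tape IDs, file offsets and rights metadata.

```python
class AssetManager:
    """Tracks where each media asset lives and brokers tier moves."""

    TIERS = ("online", "nearline", "archive")

    def __init__(self):
        self._location = {}   # asset id -> current tier

    def ingest(self, asset_id):
        # Newly recorded media lands on online disk storage.
        self._location[asset_id] = "online"

    def archive(self, asset_id):
        # Push to the data-tape library; playout now needs a restore.
        self._location[asset_id] = "archive"

    def restore(self, asset_id):
        # Recall from the tape library back to online disk for playout.
        if self._location.get(asset_id) != "archive":
            raise KeyError(f"{asset_id} is not in the archive")
        self._location[asset_id] = "online"

    def locate(self, asset_id):
        return self._location[asset_id]

mgr = AssetManager()
mgr.ingest("melt-reel-0042")
mgr.archive("melt-reel-0042")
print(mgr.locate("melt-reel-0042"))   # archive
mgr.restore("melt-reel-0042")
print(mgr.locate("melt-reel-0042"))   # online
```

The point of the abstraction is that playout and editing applications ask the asset manager where media is rather than caring whether it is on disk or tape.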
Another topology for server storage is termed clustering by one manufacturer. In this approach, media and I/O are distributed in a homogeneous network that allows a server network to extend into the wide area network (WAN). Control of the network of clusters can allow local automation of an I/O channel and network-wide manipulation of the location and use of the media at the same time. An output in one city could well be playing, over the WAN, media located in another city.
With the ability to build huge and inexpensive arrays of disks with protection from the failure of a single drive, we now see server implementations with hundreds of hours of high-bandwidth video online. Archives have dropped in cost and are rising in storage density as well, with tens of terabytes quite normal in small archives (a terabyte of disk space is just twenty 50GB disk drives, costing perhaps $25,000). Petabytes are commercially available (a petabyte is a thousand terabytes, or a million gigabytes). At 8Mb/s, a terabyte represents nearly 280 hours of content, and a petabyte of video would play for more than 30 years. It is not difficult to see how this affects the decision to build facilities with huge online libraries. Nor is it hard to see how this moves video into the province of the computer manufacturers, who have vast experience controlling the complex arrays of devices that keep such data online.
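The playback-time figures above are simple arithmetic, and they are easy to verify. The short check below assumes decimal units (1 TB = 10^12 bytes, as drive makers quote capacity) and a constant 8 Mb/s bit rate; at other bit rates or binary units the numbers shift proportionally.

```python
# Worked check of the storage arithmetic: capacity in bytes times 8
# gives bits, divided by the bit rate gives seconds of playback.

BITRATE_BPS = 8_000_000          # 8 Mb/s video
TERABYTE = 10**12                # bytes, decimal convention
PETABYTE = 1000 * TERABYTE

def play_seconds(capacity_bytes, bitrate_bps=BITRATE_BPS):
    return capacity_bytes * 8 / bitrate_bps

tb_hours = play_seconds(TERABYTE) / 3600
pb_years = play_seconds(PETABYTE) / (3600 * 24 * 365.25)

print(f"1 TB at 8 Mb/s: {tb_hours:.0f} hours")    # 278 hours
print(f"1 PB at 8 Mb/s: {pb_years:.1f} years")    # 31.7 years
```

At 8 Mb/s a terabyte plays for one million seconds, a conveniently round number: just under 278 hours, or about eleven and a half days of continuous content.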