Selecting a digital media platform for the capture and storage of moving image content has moved well beyond the task of simply picking a “format” and assuming it will satisfy the majority of your media needs. Not too long ago, when videotape was the mainstream medium, there were only a handful of choices available. Today, as digital media storage moves away from videotape and toward solid-state, optical and spinning magnetic media, users find themselves continually making new and evolving choices about the media storage platforms their businesses depend upon.
With the multitude of solutions available, consideration must now be given to every element in the technical and production workflow chain. Beginning with the camera (i.e., the image-capturing device) and its associated storage media, the decision process takes on a multidimensional perspective whose outcome may be unpredictable. For example, coupled with the choice of imager are decisions related to the codec selection, bit rate and bit depth, whether to use long-GOP or I-frame-only encoding, and a desire that the wrapper and coding formats will be extensible to the workflow most closely associated with both the operation’s production and its distribution requirements.
MORE COMPLEX CHOICES
The growing proliferation of media storage products is evidence that change is continual and choices are becoming more complex. Users faced with making storage platform decisions should first determine where and for what that media storage platform is to be used. If for electronic newsgathering and editing purposes, then the harmonization of the capture, edit and near-term release systems is critically important to establishing a smooth and consistent workflow. Components of the camera’s media storage system (e.g., P2, XDCAM or DV) must match the ingest system. The ability to quickly transfer image files, metadata and proxies (when available) from the original storage media to a secondary storage platform is essential to minimizing production workflow bottlenecks. File-based workflows typically depend upon a tight integration of these elements early in the production process. Files associated with the near-term editorial platform, once completed, then move from a pure production domain to a distribution domain, and later to long-term preservation of the assets on an archive platform.
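The offload step described above is commonly paired with checksum verification so the camera media can be safely cleared afterward. As a minimal sketch (the function name, paths and choice of MD5 are illustrative assumptions, not any vendor's actual transfer tool):

```python
import hashlib
import shutil
from pathlib import Path

def offload_clip(src: Path, dest_dir: Path) -> str:
    """Copy one clip from camera media to secondary storage, then
    verify the copy against the original with an MD5 checksum.
    (Hypothetical workflow step; real ingest tools vary.)"""
    dest = dest_dir / src.name
    shutil.copy2(src, dest)  # copy2 preserves file timestamps
    src_sum = hashlib.md5(src.read_bytes()).hexdigest()
    dest_sum = hashlib.md5(dest.read_bytes()).hexdigest()
    if src_sum != dest_sum:
        raise IOError(f"checksum mismatch offloading {src.name}")
    return dest_sum
```

Verifying before erasing the original card is what lets the file copy, rather than the physical media, become the master.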
Establishing a baseline coding strategy is crucial to the extensibility of the media content throughout the enterprise. Decisions on image quality, speed of access throughout the system, storage capacity demands, and usability of the ancillary information (i.e., metadata and proxies) are made here. While it is arguable that most file formats can be transformed—that is, unwrapped, recoded to another format, and then rewrapped—this requires time, storage, hardware and software. Any extra steps that inhibit the ability to move rapidly from one stage of production to another bring on additional unwanted costs. Therefore, examine each step and every element involved in the entire process prior to deciding on the camera capture and short-term media storage platforms, and before choosing a file-based nonlinear editing system.
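The storage cost of that unwrap/recode/rewrap cycle is easy to underestimate, since the source and the recoded copy must coexist while the transform runs. A rough estimator, using illustrative (not vendor-specified) essence bit rates:

```python
def recode_overhead_gb(duration_min: float, src_mbps: float,
                       dst_mbps: float) -> float:
    """Temporary storage (GB) needed to hold both the source clip and
    its recoded copy during a format transform. Bit rates are
    illustrative video-essence figures, not exact file sizes."""
    seconds = duration_min * 60
    src_gb = src_mbps * seconds / 8 / 1000  # megabits -> gigabytes
    dst_gb = dst_mbps * seconds / 8 / 1000
    return src_gb + dst_gb

# A 30-minute clip recoded from 50 Mb/s to 100 Mb/s ties up
# roughly 34 GB until the source copy can be released.
print(round(recode_overhead_gb(30, 50, 100), 2))
```

Multiply that by a day's worth of material and the appeal of choosing compatible formats up front, rather than transforming later, becomes clear.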
THE PRIMARY FOCUS
When the media storage platform has negligible interaction with other storage platforms such as news editorial or live production, and will mainly be employed for ingesting live feeds, videotape transfers or other baseband video/audio signals, then the primary focus becomes server input and output channelization, the coding of that content, and the capacity and bandwidth (or “throughput”) of the physical storage media itself. In this form of ingest/playout application, it is assumed that the videoserver system will be used mainly for real-time baseband playout. Here, storage capacity must be sufficient to handle the library of content used in routine broadcast transmission applications.
However, the storage system bandwidth must be capable of exchanging data among many sets of encoders and decoders, often simultaneously. Storage bandwidth governs the flow of these very large sets of high data rate signals on a completely random basis. And, yes, it is indeed possible to architect a system with sufficient storage capacity but insufficient bandwidth to deliver multiple sets of high-definition data streams into or out of the system.
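The capacity-versus-bandwidth distinction above can be shown with simple arithmetic. The sketch below sizes both dimensions for a hypothetical transmission server; the 50 Mb/s stream rate and the 2x headroom factor for random access are illustrative assumptions:

```python
def storage_plan(library_hours: float, stream_mbps: float,
                 simultaneous_streams: int) -> tuple[float, float]:
    """Rough sizing for an ingest/playout server: capacity is driven
    by the content library, bandwidth by concurrent codec traffic.
    Headroom factor and rates are illustrative assumptions."""
    capacity_tb = library_hours * 3600 * stream_mbps / 8 / 1e6
    # Random access across many concurrent streams needs headroom
    # well above the raw sum of the stream rates.
    bandwidth_mbps = simultaneous_streams * stream_mbps * 2.0
    return capacity_tb, bandwidth_mbps

# 500 hours of 50 Mb/s HD content is only about 11 TB of capacity,
# yet serving 16 concurrent streams demands sustained throughput
# on the order of 1.6 Gb/s.
print(storage_plan(500, 50, 16))
```

A disk array can easily meet the 11 TB figure while falling short of the sustained random-access throughput, which is exactly the capacity-rich but bandwidth-starved trap described above.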
At one end of the platform spectrum, various manufacturers offer single-purpose, dedicated videoserver platforms that typically provide one to four channels of I/O and employ integrated, low-cost RAID disk storage. These “VTR-replacement” solutions generally include HD and SD I/O with VTR-like control, but storage is often limited to the disk drives mounted in the chassis. Any expansion will more than likely be restricted to a limited number of external NAS devices. File interchange may be, at best, an FTP-like function, often only between like products. These systems are not intended to perform the duties of large-scale server deployments sharing a centralized storage architecture.
At the other end of the media storage spectrum, a shared or centralized storage environment amalgamates all active content served from local appliance-like servers, modular encode/decode units, or devices on its own dedicated media server network. In this environment, a few to several dozen sets of encode/decode engines exchange file data either on a Fibre Channel-based SAN (storage area network) or on a NAS (network-attached storage) model. This architecture is more in line with what one finds in large broadcast or MVPD centers that share content among many services, and rely entirely on a tapeless environment once the content is ingested into the system. Because this architecture is modular, it provides better expansion in both I/O and storage capacity; much wider bandwidth, enabling high-data-rate transfers between storage and coding engines; and practical extension and usability of the entire media storage platform across the enterprise.
ON A WHOLE OTHER LEVEL
Tackling the storage and serving requirements for live applications, production and post production takes the media server and storage platform concepts to a whole other level. The demands placed upon these platforms are quite different from those on a dedicated ingest-only or play-to-air (otherwise known as a transmission server) system. While production serving systems can certainly be built out of mainstream transmission server components, many find that the issues surrounding compatibility between ingest, editorial/NLE systems, file-based graphics and release platforms create workflow problems that restrict basic production-level performance. Furthermore, unless all the components are matched precisely, bottlenecks will occur in the production process that cannot be tolerated, particularly in a live production application.
When the media platform is for on-demand release, as in NVOD (near-video-on-demand) or true VOD, or when there are no requirements for baseband audio and video, the tide changes even more dramatically. VOD platforms will handle anywhere from a single stream to thousands of simultaneous streams. Service providers’ VOD equipment must be capable of delivering pre-encoded, IP-based files conformed to specific formats; once those files enter the network, no element of a baseband audio/video stream remains. Bandwidth management, backchannel ordering instructions, and traffic and billing subsystems all become integral elements in what is essentially a unicast or multicast streaming media model.
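In the unicast case, the bandwidth-management problem reduces to a stream budget per network link. A minimal sketch, where the per-stream rate and the 70 percent utilization ceiling (headroom for backchannel and management traffic) are illustrative assumptions:

```python
def max_unicast_streams(link_gbps: float, stream_mbps: float,
                        utilization: float = 0.7) -> int:
    """How many pre-encoded unicast VOD streams fit on one link,
    holding back headroom for backchannel ordering and management
    traffic. The 70% utilization ceiling is an assumption."""
    usable_mbps = link_gbps * 1000 * utilization
    return int(usable_mbps // stream_mbps)

# A 10 Gb/s link at 70% utilization carries 875 streams at 8 Mb/s.
print(max_unicast_streams(10, 8))
```

Multicast changes the arithmetic entirely, since one stream serves many subscribers, which is why the delivery model drives the platform design.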
With a steadily growing number of “white box” computer solutions offered as alternatives to purpose-built videoservers, a looming question remains: “Will media servers and storage platforms become indistinguishable from mainstream data-centric servers?” Some believe so, and are already implementing them in VOD, transmission and disaster recovery systems.
A note of caution: using conventional data-centric platforms for handling moving media content is not for the faint of heart. Today, most systems deployed for production, news and mainstream call-letter broadcast environments still rely on purpose-built, specialty videoserver and storage platforms, which remain the first choice. We’ll have to see where the trend goes as video and IT systems continue their convergence.
Karl Paulsen is a SMPTE Fellow and an SBE Life Certified Professional Broadcast Engineer. Contact him at firstname.lastname@example.org