[Figure: Content Lifecycle Storage Workflow]

Media servers, storage and networking technologies for television systems have historically followed in the footsteps of IT systems. The latest global revolution in IT storage is now driving content flow for television systems, and as a result, our industry finds itself continually exploring new dimensions in physical storage and storage management philosophies.
Increasing deployments of server technologies across all aspects of the broadcast business are spurring new growth in how media is captured, transferred, manipulated, played out and stored. Most recently, storage and media management are being tasked with addressing the inevitable march toward high-definition programming and content. HD puts new demands on servers, editing systems, storage and network infrastructures. This newest evolution will again change the dynamics of the industry as early adopters once again become the guinea pigs for workflow and process change.
To validate these up-and-coming solutions, one must understand the workflow and the systems that enable those processes. How content makes its way through a system, referred to as the "content lifecycle," is affected by how material is captured, on what media, where it is stored during the production and editorial process, and finally, how it moves on to playback and long-term retention.
The content lifecycle begins at the moment of image capture and ends when all instances of the content are erased or destroyed (including the database records related to that content). All the places in between contain the variables that make your operations simple or complex.
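The stages of that lifecycle can be sketched as a simple state machine. This is a minimal illustration only; the stage names and transitions below are assumptions chosen to match the article's description, not an industry standard.

```python
# A minimal sketch of the content lifecycle as a state machine.
# Stage names and transitions are illustrative assumptions.

LIFECYCLE = {
    "captured":   ["ingested"],
    "ingested":   ["editing", "archived"],
    "editing":    ["finished"],
    "finished":   ["played_out", "archived"],
    "played_out": ["archived"],
    "archived":   ["purged"],   # purging also removes related database records
    "purged":     [],           # end of life: all instances erased
}

def advance(stage: str, next_stage: str) -> str:
    """Move a piece of content to the next stage, validating the transition."""
    if next_stage not in LIFECYCLE[stage]:
        raise ValueError(f"illegal transition: {stage} -> {next_stage}")
    return next_stage

# Walk one clip from capture to end of life.
stage = "captured"
for step in ["ingested", "editing", "finished", "played_out", "archived", "purged"]:
    stage = advance(stage, step)
```

Modeling the stages explicitly is one way to see where "the places in between" accumulate: every extra state means another storage location and another management decision.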
Television news is one particular segment of the media industry that is moving away from videotape capture and on to solid-state, removable spinning magnetic disk, or optical media formats. As it does, its storage requirements and associated media management become a little more challenging. The content lifecycle must now be modified to handle the random-access nature of these field storage devices, altering workflow from field capture to editing to play-out.
In the new domain, field-acquired content must move as quickly as possible to the next storage medium, forcing new demands on different resources. One impact is on the storage systems, now tasked with a universal scheme of accessible storage focused on a common centralized storage model.
To news people, turnaround is of high importance. Thus, a media transfer mechanism that can ingest, move, catalog and store field-acquired content from "field memory" to the "central store" is essential to avoiding the higher costs of field memory chips or discs. Such transfer systems must be readily available, allow memory to dump to stores in a rapid fashion, and must be of sufficient size and quantity to handle the volumes of content for what is, today, an undefined period of time.
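The ingest-move-catalog-store step described above can be sketched as follows. This is a hedged sketch under stated assumptions: the paths, the catalog structure and the checksum-before-erase policy are all illustrative, not a description of any particular vendor's transfer system.

```python
import hashlib
import shutil
from pathlib import Path

# Sketch of one field-to-central transfer: copy the clip, verify the copy,
# catalog it, then free the (expensive) field memory for reuse.
# Paths and catalog fields are illustrative assumptions.

def ingest_clip(field_file: Path, central_store: Path, catalog: dict) -> Path:
    """Move one field-acquired clip into the central store and catalog it."""
    dest = central_store / field_file.name
    shutil.copy2(field_file, dest)
    # Verify the transfer before erasing the field medium.
    src_sum = hashlib.md5(field_file.read_bytes()).hexdigest()
    dst_sum = hashlib.md5(dest.read_bytes()).hexdigest()
    if src_sum != dst_sum:
        dest.unlink()
        raise IOError(f"checksum mismatch for {field_file.name}")
    catalog[field_file.name] = {"path": str(dest), "bytes": dest.stat().st_size}
    field_file.unlink()  # reclaim the field memory chip/disc
    return dest
```

The verify-then-erase ordering reflects the economics in the text: field memory is costly, so it must be freed quickly, but never before the central copy is known good.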
In the historical context of television news, the rapid-fire agenda of "shoot, edit, air" puts many complications on the content lifecycle. Solid-state and directly removable media aid in making media move faster, but require another set of intermediate systems to support the stages between shoot and edit.
While prescreening of raw content helps to narrow down what is transferred, it isn't always practical. Pretransfers of raw edit decision lists to the central transfer point, coupled with prescreening, can alleviate some of the transfer bottlenecks, but once that raw content begins its assimilation into the production process, another set of storage and media management requirements takes its place.
Central servers now need to accommodate not only the raw footage, but eventually all the finished stories and substories. Transferring content from a central store to a local workstation is time-consuming and impractical. The ultimate goal is to provide sufficient connectivity and bandwidth to edit from the central store, online and collaboratively (i.e., multiple editors accessing the same raw content to create two or more stories).
Media management tools required to manipulate content must be intuitive and effective. Content must be available to everyone, so networking becomes a critical factor: at rush hour, you cannot afford to have the servers, storage silos and the network slow to a crawl.
A lingering lack of faith and trust in the reliability and accessibility of disk-based storage platforms, and their associated networks, still forces organizations to offload to videotape for airplay, backup or protection. At peak times, overloading the server responsible for play-out with editors rushing to complete the stories that air in the next five minutes can be disastrous. Hence, finished stories are either cached to another server for play-out, or copied to videotape and rushed to the dusty (or is it trusty) old videotape transport.
At the other end of the spectrum, on top of storage capacity and media movement issues, is the management of backup and protection procedures. The practices for addressing what content is backed up, where and what it is backed up to, and how long it is retained seem to have no consistency from one organization to another.
In the content lifecycle, managing completed stories so that content is retained for the inevitable return of that same subject is part of the process called "archiving." As television journalism invariably creates some of the largest volumes of individual sets of content, each with its own, often intangible, value to the organization, this archive process is becoming an essential element of daily operations.
Editorial storage requirements grow in response to increased crews shooting more material for new and different shows.
Re-editing of stories for different purposes further gobbles up storage and network bandwidth. At a daily operations level, raw footage and finished stories are sometimes managed solely on the basis of the physical amount of available space on the servers. Once that storage reaches capacity, news departments return to the old standby, videotape, which allows them to offload material and make space for the next sets of stories and raw material. Then the cycle starts again.
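That capacity-driven offload cycle can be sketched in a few lines. The threshold, the clip records and the oldest-first policy below are illustrative assumptions; real newsroom systems weigh editorial value as well as age.

```python
# Sketch of the capacity-driven offload cycle: when the central store
# nears capacity, the oldest material is offloaded (historically to
# videotape) to make room. Threshold and clip records are assumptions.

def offload_oldest(clips, capacity_bytes, threshold=0.9):
    """Offload the oldest clips until usage drops below the threshold.

    clips: list of dicts with 'name', 'bytes' and 'age_days'
    returns (kept, offloaded)
    """
    used = sum(c["bytes"] for c in clips)
    remaining = sorted(clips, key=lambda c: c["age_days"], reverse=True)
    offloaded = []
    while remaining and used > threshold * capacity_bytes:
        victim = remaining.pop(0)   # oldest clip goes to tape first
        used -= victim["bytes"]
        offloaded.append(victim)
    return remaining, offloaded
```

Managing purely by free space, as the text notes, is what drives the cycle: nothing in this policy asks whether the offloaded footage will be needed again, which is exactly the gap archiving is meant to fill.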
New systems in the marketplace alleviate the dependency on legacy technologies to satisfy long-term storage requirements. In our next installment we will look at some practical methods for retaining content at a global level for the facility. These new bridges between storage systems, including news and play-to-air, are helping make the content lifecycle more manageable.