How Networking Changes Storage Structures

The end of the analog broadcast era signals opportunities for broadcasters to add services that will force infrastructure changes beyond those just recently made to support digital television.

Enabling new program services while maintaining a lean operating environment requires advances in the content delivery chain, from ingest through transmission.

Video servers built around dedicated codecs in multiple frames have served the television industry well for at least a full decade, but a new trend is changing all that. The most significant technological driving force that is enabling this change is the evolution of the network.

Networking technologies continue to change the video server model. Recent video server offerings tend to be compact systems comprising one to several "channels," each housed in a self-contained chassis, some with integral storage and some with external storage. The channel may be an input port, an output port, or both, and is reconfigurable according to the server's application at a given time.

A channel may contain at least an encoder/decoder pair; the aggregate group of channels is called a "server I/O engine." Server channels are connected to a networked set of expandable storage subsystems configured as nearline, online or archive libraries.

A growing trend is storage housed in a separate environment from that of the server I/O engine. The driver behind this trend is larger, faster and more affordable data pipes that enable higher-performing networks. The predominant networking path is now Gigabit Ethernet, a technology poised to shape media server and storage architectures for some time to come.

FASTER GIG-E

As Gig-E develops from 1 to 2 to 10 Gbps (with 100 Gig-E no longer just a pipe dream), the ability to move large contiguous sets of media files between devices is pushing media content distribution into a new dimension.
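To put rough numbers on that claim, the sketch below estimates how long a one-hour program takes to move as a flat file at 1, 10 and 100 Gbps. The 50 Mbps program rate and the 70 percent usable-throughput allowance are illustrative assumptions, not figures from this article.

```python
# Back-of-the-envelope transfer times for a one-hour program moved as a file.
# The 50 Mbps program rate and 70 percent usable-throughput figure are
# illustrative assumptions only.

file_size_bits = 50e6 * 3600      # one hour at ~50 Mbps -> roughly 180 gigabits
usable_fraction = 0.7             # allowance for protocol and disk overhead

for link_gbps in (1, 10, 100):
    throughput_bps = link_gbps * 1e9 * usable_fraction
    minutes = file_size_bits / throughput_bps / 60
    print(f"{link_gbps:>3} Gig-E: ~{minutes:.1f} minutes")
```

Even at 1 Gbps with generous overhead, the hour-long program moves in a few minutes; at 10 Gbps and beyond, transfer time all but disappears as a scheduling constraint.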

With connectivity headed beyond warp speed, what might this again do to the server engine itself? In fact, video server technologies are adapting to these new highways by reducing complexity, increasing performance and shrinking in size. They are migrating from fixed-in-silicon solutions to reconfigurable, software-based adaptive codecs housed on commercial off-the-shelf, IT-centric server platforms that often include integral processors fed via external video I/O interfaces.

Fig. 1: Media server system conceptual block diagram
Fig. 1 depicts elements of how the new media server of this mid-decade transition might look once these enabling factors are readily available.

NEW INTERFACES

On the left is a conventional SMPTE 259/292 baseband A/V input adaptor, with the metadata encapsulated and embedded in the SDI data stream. The signal may be baseband, content packets over SDTI, or some other as-yet-undefined raw data delivered over IP.

Current legacy architectures for real-time video will force BNC-based interfaces for as long as live, isochronous delivery of high-bandwidth, high-bit-rate SD and HD video exists.

However, the A/V interface of the future could predictably change from in-the-box I/O to something more aligned with a plug-in transceiver, like the Gigabit interface converter (GBIC) or host bus adapter.

Where this model deviates from a principally mainstream video server architecture is shown in the left-hand block of the ingest cache server.

The video interface essentially takes digital bits through an IP or data formatter and translates them to the native storage format of the integral cache disk. It is here that the Material eXchange Format (MXF) could preserve interoperability between the storage platform, the play-out server and the outside world.

This integral cache acts like a holding tank, a short-term depository while the network is readied for a transfer from the ingest cache to the larger central store (shown as common NAS storage). Transport of data occurs over a Gigabit Ethernet connection via FTP.
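As a minimal sketch of that cache-to-central-store move, the Python snippet below pushes one clip over FTP using the standard ftplib module. The host name, credentials and paths are placeholders, not part of any real system described here.

```python
# Minimal sketch: move one clip from the ingest cache disk to the NAS central
# store over FTP on the Gigabit Ethernet connection. Host, credentials and
# paths are hypothetical placeholders.

from ftplib import FTP

def push_to_central_store(local_path: str, remote_name: str) -> None:
    """Send one cached clip from the ingest cache to the central store."""
    with FTP("central-store.example.net") as ftp:    # hypothetical NAS address
        ftp.login(user="ingest", passwd="secret")    # placeholder credentials
        with open(local_path, "rb") as clip:
            # STOR streams the file; the ingest cache keeps its copy until the
            # content-movement policy allows it to be purged.
            ftp.storbinary(f"STOR {remote_name}", clip)

# push_to_central_store("/cache/ingest/clip0042.mxf", "clip0042.mxf")
```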

Policies set in the content movement management profiles establish how fast the data needs to migrate to and from the central store. Transfers can happen at any speed over the network, depending upon demand for the content or the traffic on the network at a given time.

This model essentially takes the real-time nature of transporting video content (one of the more complex issues in server design) and leaves it at the edge of the network. Data can trickle to storage or traverse the network at many times real time; it all depends upon when or where that content is next needed, and is set per the operational policies established by the trafficking and scheduling systems.
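A hedged illustration of how such a policy might translate into a transfer rate is sketched below. The policy fields, the 1 Gbps burst cap and the thresholds are invented for illustration and do not reflect any particular trafficking or scheduling system.

```python
# Illustrative sketch of a content-movement policy choosing how fast a clip
# should migrate to the central store: trickle when there is time, burst when
# playout is near. All fields and thresholds are invented for illustration.

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class MovePolicy:
    needed_at: datetime     # next scheduled use, from traffic/scheduling
    duration_s: float       # program running time
    size_bits: float        # file size on the ingest cache

def target_rate_bps(policy: MovePolicy, now: datetime) -> float:
    """Pick a transfer rate based on how soon the content is needed."""
    window = (policy.needed_at - now).total_seconds()
    if window <= policy.duration_s:
        # Playout is imminent: move as fast as an assumed 1 Gbps share allows.
        return 1e9
    # Otherwise spread the transfer across the available window (a trickle).
    return policy.size_bits / window

policy = MovePolicy(
    needed_at=datetime.now() + timedelta(hours=4),
    duration_s=3600,
    size_bits=50e6 * 3600,
)
print(f"target rate: {target_rate_bps(policy, datetime.now())/1e6:.0f} Mbps")
```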

Since high-performance storage is no longer needed in large quantities, the costs and physical sizes of the servers could be radically reduced. And as new encoding formats such as H.264, VC-1 and JPEG 2000 develop, the software-based nature of the ingest cache server should adapt well into the future.

Furthermore, should new technologies render the present ingest server less effective, the upgrade path becomes more affordable and the server itself more extensible.

This ingest cache model becomes the front-end of the video-media server system, providing a variety of additional uses whether directly connected to a near-term store, a large central storage system, a WAN or even a direct-to-library archive.

On the right side of the diagram, a similar COTS server with its I/O-interfaces stages ready-for-transmission material in another form of short-term cache.

SHORT-TERM STORAGE

Data from the central store migrates to this cache over the network in sufficient time for a scheduled playback from the play-out server. The silicon buffers on this server handle anomalies in the network transmission path, receive the data, cache it to a mirrored set of modest-sized hard disks, and tell the management systems that the transfer is complete and the content is ready for transmission.
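The sketch below illustrates the timing arithmetic implied by "in sufficient time": given an air time and a file size, it computes the latest moment the transfer can begin. The 700 Mbps usable network rate and the five-minute safety margin are assumptions for the example, not system specifications.

```python
# Pre-staging check: latest moment a central-store-to-playout-cache transfer
# can start so the clip is fully cached, with margin, before air. The network
# rate, margin and air time are assumed values for illustration.

from datetime import datetime, timedelta

def latest_transfer_start(air_time: datetime,
                          size_bits: float,
                          net_bps: float = 700e6,              # assumed usable Gig-E rate
                          margin: timedelta = timedelta(minutes=5)) -> datetime:
    """Latest start time that still completes the transfer before air."""
    transfer = timedelta(seconds=size_bits / net_bps)
    return air_time - transfer - margin

air = datetime(2006, 6, 1, 20, 0)                              # hypothetical air time
print(latest_transfer_start(air, size_bits=50e6 * 3600))
```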

Content may remain on this play-out server for only a short period of time, depending upon when it is needed in the program schedule, and is then purged.

The play-out server could disconnect from the high-speed network and operate in a standalone configuration, provided timing and scheduling information has been properly sent to the system in advance.

Conventional Video Disk Control Protocol (VDCP) commands would cue, start, stop or recue clips without further interaction. Network-based automation control protocols could ultimately replace VDCP, reducing physical connections to an RJ-45 wiring structure plus a BNC cable for real-time video.
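What such a network-based control path might look like is sketched below as simple cue/start/stop messages carried as JSON over TCP. The message format, port number and command names are invented for illustration; they do not represent VDCP or any published automation protocol.

```python
# Hypothetical sketch of network-based transport control replacing RS-422/VDCP
# wiring: one JSON command per message over a plain TCP connection. Format,
# port and command names are invented for illustration only.

import json
import socket

def send_command(host: str, command: str, clip_id: str, port: int = 5250) -> None:
    """Send one transport command to a play-out server over the plant LAN."""
    message = json.dumps({"cmd": command, "clip": clip_id}).encode() + b"\n"
    with socket.create_connection((host, port), timeout=2.0) as sock:
        sock.sendall(message)

# A cue followed by a start, over the same wiring that carries file transfers:
# send_command("playout-1.example.net", "cue", "clip0042")
# send_command("playout-1.example.net", "start", "clip0042")
```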

When, not if, this concept is fully developed, additional feature sets could certainly be added to the ingest or play-out servers. Applications could include real-time video-based content processing, format translation or other forms of compression for different delivery platforms, increasing the capabilities of the media server and reducing the complexities in the broadcaster's delivery or transmission chain. The reality is here, the acceptance is coming, and the future of the video/media server is once again about to change dramatically.

Karl Paulsen

Karl Paulsen is the CTO for Diversified, the global leader in media-related technologies, innovations and systems integration. Karl provides subject matter expertise and visionary perspective related to advanced networking and IP technologies, workflow design and assessment, media asset management, and storage technologies. Karl is a SMPTE Life Fellow, an SBE Life Member and Certified Professional Broadcast Engineer, and the author of hundreds of articles focused on industry advances in cloud, storage, workflow, and media technologies. For over 25 years he has continually featured topics in TV Tech magazine, penning the magazine's Storage and Media Technologies and Cloudspotter's Journal columns.