Here are two approaches to shared file-based workflow storage.
Whether you are managing operations at a call-letter station with a half-dozen edit stations or at a broadcast giant with hundreds of editors, architecting the storage that supports your facility’s file-based workflows is a business-critical task. Video editing places higher demands on storage than any other file-based application, requiring streaming performance for large files that ranges from 3.5MB/s per stream for SD to 165MB/s per stream for uncompressed HD. And today’s higher-resolution formats demand even more from storage systems, with 4K requiring 1210MB/s per stream, 7.3X more throughput than HD.
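The per-stream figures above can be sanity-checked with simple arithmetic. This is a rough sketch using only the rates quoted in the text; real rates vary with codec, bit depth and frame rate:

```python
# Per-stream bandwidth figures quoted in the text (MB/s).
SD_STREAM = 3.5        # SD stream
HD_STREAM = 165        # uncompressed HD stream
UHD4K_STREAM = 1210    # uncompressed 4K stream

# 4K demands roughly 7.3x the throughput of uncompressed HD.
ratio = UHD4K_STREAM / HD_STREAM
print(f"4K vs HD throughput ratio: {ratio:.1f}x")
```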
Traditionally, this level of performance could only be met by high-performance disk storage directly attached to the editing workstation. The downfall of directly attached storage (DAS) is that it silos content onto individual computers. (See Figure 1.) Sharing large media files between editors or moving content to the next step in the workflow requires manually copying files across the network, or resorting to the “sneakernet” solution of copying content to removable media to move it along the workflow. The result is expensive: Duplicate copies of large files consume twice the storage capacity, and waiting for file transfers reduces the productivity of highly paid editors.
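A back-of-the-envelope calculation makes the cost of that waiting concrete. The 500GB project size and the ~110MB/s effective 1GbE transfer rate below are assumptions chosen for illustration, not figures from the text:

```python
# Hypothetical example: copying a project between DAS silos over a
# standard office network. Assumed numbers: a 500 GB project moved
# over 1GbE at an effective ~110 MB/s (wire speed minus overhead).
project_gb = 500
effective_mbs = 110  # assumed sustained transfer rate, MB/s

seconds = project_gb * 1000 / effective_mbs
print(f"Editor waits roughly {seconds / 3600:.1f} hours per copy")
```

And because the file now exists on both machines, the facility is also storing, and paying for, twice the capacity for the same content.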
High-performance shared storage systems were designed to solve this problem, and today there are two principal options: shared Storage Area Network (SAN) file systems and scale-out Network Attached Storage (NAS) file systems. (See Figures 2 and 3.) Based on fundamentally different architectures, each offers advantages and disadvantages that should be carefully considered before choosing a storage system for your workflow.
Block-based storage over Fibre Channel
With a SAN, a pool of high-performance storage is divided and allocated to individual servers. Users and applications can only access storage through allocated servers. This works well for databases, but not for media workflows where files are shared by teams working on different workstations.
Shared SAN file systems break down these silos by adding file-system functionality without placing a file server in the data path. (See Figure 4.) Access to data on the shared volumes is carefully controlled to protect data integrity, often by a separate metadata server that manages file locking, space allocation and access authorization. Placing this server outside the data path, instead of between the client and the storage, eliminates a potential bottleneck and improves the overall performance of the storage solution.
To deliver blocks over a network fast enough, SANs also use a storage-specific network standard, Fibre Channel, with its own dedicated switches, cables and protocol. The Fibre Channel protocol delivers SCSI commands between the server and the SAN’s disk systems just as it would a locally attached disk.
File-based storage over Ethernet
NAS devices are purpose-built file servers designed to make sharing files between individuals and groups more efficient and secure. Since a NAS device is connected to all the desktops, workstations and servers on a standard Ethernet network, the NAS controls data access by managing user privileges, file locking and other security measures.
NAS devices provide data access to clients running different operating systems through a file-sharing protocol such as Network File System (NFS) or Common Internet File System (CIFS). Because NAS devices use the existing Ethernet network, instead of requiring a special Fibre Channel storage network, deploying NAS storage is generally less expensive than a comparable SAN solution.
The downside of presenting data through a file system layer is that NFS and CIFS, the most commonly used protocols, are not optimized for large streaming files like video. Also, the file data is transferred over Ethernet, a packet-based protocol that tolerates latency in delivery, unlike Fibre Channel, which was designed to deliver blocks of streaming data with low latency. Newer standards like 10Gb Ethernet approach SAN network speeds, but do not improve latency.
Faster file-system performance
One approach to boosting NAS performance is clustered or distributed file systems, often combined with a building-block storage architecture referred to as “scale-out” storage. (See Figure 5.) The clustered file system distributes data across the nodes of the scale-out storage, which spreads the data access load across more processors and more I/O connections.
By aggregating I/O across a cluster of nodes, each with its own network connection, scale-out NAS greatly improves performance over traditional NAS. However, even industry-leading solutions top out around 400MB/s for a single data stream, whereas SAN solutions can deliver the 1.6GB/s required for editing uncompressed video at 2K resolution or greater.
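The gap is easy to quantify using the two figures just cited. A minimal sketch comparing the single-stream ceiling of scale-out NAS against the quoted requirement for uncompressed 2K editing:

```python
# Figures quoted in the text (MB/s): the single-stream ceiling cited
# for scale-out NAS vs. the rate required for >=2K uncompressed editing.
NAS_SINGLE_STREAM_MAX = 400
REQUIRED_2K_UNCOMPRESSED = 1600

shortfall = REQUIRED_2K_UNCOMPRESSED - NAS_SINGLE_STREAM_MAX
print(f"Scale-out NAS falls {shortfall} MB/s short per 2K stream")
```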
Best of both worlds?
Another approach, often referred to as “unified storage,” is to support both file-based NAS and block-based SAN access from the same storage system. Unified storage allows facilities to consolidate their storage and eliminate storage silos, giving applications the choice of attaching to storage using either IP or Fibre Channel protocols, depending on the applications’ performance requirements.
Some unified storage systems layer a NAS file system on top of SAN storage, offering true SAN-level performance for block-level access. Other vendors have modified their NAS platform to allow block-mode access through iSCSI, an Ethernet-based protocol. (See Figure 6.) While iSCSI has made performance gains due to faster 10Gb Ethernet networks and beyond, it hasn’t been widely adopted for video editing workflows.
One unified storage approach that works well for file-based workflows is to layer a NAS file system on top of a SAN, but replace the standard NFS or CIFS protocols with one optimized for streaming media, such as Quantum’s proprietary DLC protocol. (See Figure 7.) This architecture lets facilities build a media workflow that aligns performance needs with the cost of the storage infrastructure.
In an ideal storage strategy, a single storage pool is shared throughout the workflow, but accessed according to the performance-vs.-cost requirements of each workflow application.
Fibre Channel access
To meet the high-performance storage demands of full-resolution video content, a SAN with Fibre Channel connections should be deployed for video editing workstations, ingest and playout servers, and any other workflow operation that requires the 700MB/s per-user read or write performance needed to stream files at 2K resolution or above. With a SAN solution that includes a shared file system, files can easily be shared between editors or between steps in the workflow.
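That 700MB/s per-seat figure drives SAN sizing. A hypothetical sketch of how many concurrent editing seats a given aggregate bandwidth supports; the 8GB/s aggregate is an assumption for illustration, not a figure from the text:

```python
# Hypothetical sizing sketch: concurrent 2K editing seats on a SAN.
# The per-seat rate comes from the text; the aggregate bandwidth of
# the SAN disk systems is an assumed example value.
PER_EDITOR_MBS = 700       # per-seat read/write rate for 2K+
aggregate_mbs = 8000       # assumed aggregate SAN bandwidth, MB/s

seats = aggregate_mbs // PER_EDITOR_MBS
print(f"Concurrent 2K editing seats supported: {seats}")
```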
High-speed Ethernet access
Facilities with farms of transcoding or rendering servers that require streaming performance of approximately 70MB/s to 110MB/s should look to high-performance storage that offers high-speed Ethernet access, either through scale-out NAS or block-level storage with a specialty NAS protocol optimized for streaming media.
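The same kind of arithmetic sizes the network side of a transcode farm. The ~1100MB/s effective throughput per 10GbE storage connection below is an assumption for illustration; the per-server rate is the upper figure quoted above:

```python
# Hypothetical sketch: transcode servers per 10GbE storage link.
# Assumes ~1100 MB/s effective throughput per 10GbE connection and
# the upper per-server streaming rate quoted in the text.
PER_SERVER_MBS = 110       # per-server streaming rate, MB/s
link_effective_mbs = 1100  # assumed effective 10GbE throughput

servers = link_effective_mbs // PER_SERVER_MBS
print(f"Transcode servers per 10GbE link: {servers}")
```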
Standard Ethernet access
Producers and other staff who primarily access low-resolution proxies, images, scripts and other text documents can connect to the storage from their desktops through the standard NFS or CIFS protocols over standard Ethernet connections of 1GbE or less.
Finally, high-performance disk systems are sold in both NAS and SAN configurations. Often it is the access mechanism, meaning the network type, protocol and number of connections, that drives the overall performance. The smartest, easiest-to-manage, lowest-overall-cost solutions offer shared storage that lets facilities choose the network and protocol that make the most sense for each workflow activity, from high-speed SAN over Fibre Channel to standard NAS over Ethernet.
—Janet Lafleur is the StorNext product marketing manager, Quantum.