Networking, storage and multiplatform production

Network and storage resources can be divided into those that support production and those that are used for playout. In production scenarios, unlike playout, real-time presentation of the accessed media is not a strict performance requirement.

Content transfer and asset management system performance depends on file compression efficiency, disk access speed, network bandwidth, router latency and the number of terminal nodes (users). A centralized storage area network (SAN) will make content management easier in some respects, but the demands on the supporting media transfer network will increase, because higher data transfer rates are needed to get full performance from the centralized storage.

A network topology designed for storage access in non-real-time, single-platform production may fail to function as needed for parallel production in support of multichannel distribution. Trying to remove bottlenecks caused by multiple concurrent storage access transactions by installing high-performance, high-availability storage is useless if the network doesn’t support sufficient data transfer speeds. The solution is in the engineering; each production workflow must be analyzed to develop a storage and network architecture that can support parallel production needs.

Perhaps a distributed storage system of production continents is a better solution than a centralized SAN. It all depends on workflow. Platform-specific network traffic can be limited to clearly defined domains, but total storage requirements will increase because each continent supporting a distribution channel will have its own storage system. This brute-force method requires managing numerous copies of the same content, one in each continent.

OSI layer issues

A three-level network topology consisting of core, distribution and edge layers uses Open Systems Interconnection (OSI) layer 2 switching at full wire speed for the core and edge. With non-blocking layer 2 switches, all ports run at wire speed, so a GigE interface truly runs at 1Gb/s. Even non-blocking switches, however, introduce variable latencies depending on how many packets or frames are being buffered, so there is no guarantee about when packets will arrive at a node (networked device).
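A rough feel for how much latency that buffering adds can be had from the serialization delay alone. The sketch below is a back-of-the-envelope estimate, not a measurement; the frame size and queue depths are illustrative assumptions.

```python
# Rough estimate of per-hop latency at a non-blocking GigE switch port.
# Link rate, frame size and queue depths are illustrative assumptions.

LINK_RATE_BPS = 1_000_000_000   # GigE wire speed
FRAME_BYTES = 1_500             # typical MTU-sized Ethernet frame

def serialization_delay_us(frame_bytes: int, link_bps: int = LINK_RATE_BPS) -> float:
    """Time to clock one frame onto the wire, in microseconds."""
    return frame_bytes * 8 / link_bps * 1e6

def queueing_delay_us(frames_ahead: int, frame_bytes: int = FRAME_BYTES) -> float:
    """Added latency when frames_ahead frames are already buffered on the egress port."""
    return frames_ahead * serialization_delay_us(frame_bytes)

if __name__ == "__main__":
    base = serialization_delay_us(FRAME_BYTES)
    for depth in (0, 10, 100, 1000):
        print(f"{depth:5d} frames queued -> {base + queueing_delay_us(depth):10.1f} us")
```

At 1Gb/s an MTU-sized frame takes about 12 microseconds to serialize, so a backlog of a thousand buffered frames adds roughly 12 milliseconds, which is why wire-speed switching alone does not guarantee arrival times.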

But things are different when packets reach the distribution level. The distribution level between the core and the edge operates at OSI layer 3 and is connectionless and dynamic. This is where packet routing decisions take place and where indeterminate transfer times are introduced. Multiple protocols, routing table updates and heartbeats that are acceptable on normal IT networks slow down packet transfers and are unacceptable on broadcast media networks.

Every active routing protocol will slow down network transfer rates, so the router must be locked down by turning off, or not activating, all unneeded protocols. An alternative is to use static routes. They require manual intervention to add more devices, but give the design team, rather than the device vendor, control of the packet routing.
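Conceptually, static routing amounts to a hand-maintained forwarding table that the design team controls completely. The minimal sketch below, using Python's standard ipaddress module, shows the longest-prefix-match decision a router makes against such a table; the prefixes and next-hop addresses are hypothetical.

```python
# Minimal longest-prefix-match lookup over a hand-maintained static route table.
# All prefixes and next-hop addresses are hypothetical examples.
import ipaddress

STATIC_ROUTES = {
    "10.10.0.0/16": "10.0.0.1",    # e.g. production SAN segment (hypothetical)
    "10.10.20.0/24": "10.0.0.2",   # e.g. ingest edge segment (hypothetical)
    "0.0.0.0/0": "10.0.0.254",     # default route (hypothetical)
}

def next_hop(destination: str) -> str:
    """Return the next hop for the most specific (longest) matching prefix."""
    dest = ipaddress.ip_address(destination)
    best = None
    for prefix, hop in STATIC_ROUTES.items():
        net = ipaddress.ip_network(prefix)
        if dest in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, hop)
    if best is None:
        raise LookupError(f"no route to {destination}")
    return best[1]

print(next_hop("10.10.20.7"))   # -> 10.0.0.2 (the /24 wins over the /16)
print(next_hop("192.168.1.5"))  # -> 10.0.0.254 (falls through to the default)
```

The table never changes unless an engineer changes it, which is exactly the determinism, and the maintenance burden, that static routing trades for.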

Media IT

Media network design requires special consideration and alternative techniques. Generic IT methodologies that plug devices into any physically convenient switch or router and rely on VPN, VLAN and other network defining techniques can result in non-deterministic data transfer.

The use of static routing will facilitate the highest data rates, and, if properly designed, will deliver near 100 percent deterministic performance. Multiprotocol Label Switching (MPLS) circumvents router latency with techniques that enable layer 3 routing to approach layer 2 switching in performance. In addition, Quality of Service (QoS) packet tagging allows routers to prioritize packet dropping, if necessary. This helps get critical content where it is needed when it is needed.
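On the sending side, QoS marking is typically just a matter of setting the DSCP bits in each packet's IP header. The snippet below is a minimal illustration using the standard socket option on platforms that expose IP_TOS; the address, port and DSCP class are assumptions, and whether the marking is honored depends entirely on how the routers along the path are configured.

```python
# Mark outgoing UDP packets with a DSCP value so QoS-aware routers can
# prioritize them (and drop them last) under congestion.
import socket

DSCP_EF = 46                 # "Expedited Forwarding", commonly used for real-time media
TOS_BYTE = DSCP_EF << 2      # DSCP occupies the upper six bits of the TOS/DS field

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_BYTE)

# Hypothetical media endpoint; routers along the path decide how (or whether)
# to honor the marking when queues fill and packets must be dropped.
sock.sendto(b"payload", ("192.0.2.10", 5004))
```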

Convergence is the updating of router and switch routing tables when there is a change in the network, and its speed depends on the routing protocols in use. Many standard routing protocols are virtually useless in media routing. For example, Routing Information Protocol (RIP) sends out routing table updates every 30 seconds, so content transfers may be fine until the next update, when network congestion may cause packets to be dropped.

Even protocols that send updates only when the network changes, such as link-state protocols or Enhanced Interior Gateway Routing Protocol (EIGRP), may disrupt packet transfers at the moment consistent performance is needed most: in the event of a device failure.

The addition of storage nodes will require routing table updates across the entire network topology for each active protocol. Some protocols can limit this problem. Open Shortest Path First (OSPF) divides an autonomous system (AS), a group of routers that exchange information using a common link-state protocol, into hierarchical areas. As resources are added to one of these areas, or domains, only the routing tables within that area must be updated, which speeds convergence.
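The benefit of that scoping can be pictured with a toy model: a change is flooded only to routers in the affected area, so only their tables are recomputed. The area and router names below are hypothetical, and the model deliberately ignores the summaries an area border router advertises into the backbone.

```python
# Toy illustration of area scoping: a topology change only triggers routing
# table recomputation for routers inside the affected area.
# Area and router names are hypothetical.

AREAS = {
    "area0-backbone": ["core-1", "core-2"],
    "area1-production": ["edge-prod-1", "edge-prod-2", "edge-prod-3"],
    "area2-playout": ["edge-play-1", "edge-play-2"],
}

def routers_to_update(changed_area: str) -> list[str]:
    """Only routers inside the affected area rebuild their routing tables."""
    return AREAS.get(changed_area, [])

if __name__ == "__main__":
    # Adding a storage node in the production area leaves playout routers untouched.
    print(routers_to_update("area1-production"))
```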

To the desktop

Another point to consider is that the convenience of centralized storage is offset by the higher network bandwidth required to meet the needs of multiple users. For example, centralized asset management systems use proxies and thumbnail images that can be viewed from anywhere within an organization, which implies that they will be transferred or streamed over the corporate network.

From a network bandwidth capacity perspective, allowing for contention and assuming 50 percent bandwidth use, a layer 3 GigE distribution-level link may support 500 concurrent 1Mb/s streams, at least theoretically.
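The arithmetic behind that figure is straightforward; the sketch below simply restates it, carrying over the 50 percent utilization ceiling and the 1Mb/s proxy stream rate as working assumptions.

```python
# Back-of-the-envelope stream capacity for a GigE distribution link.
# The 50 percent utilization ceiling and 1 Mb/s proxy stream rate are the
# same working assumptions used in the text.

LINK_MBPS = 1_000        # GigE
UTILIZATION = 0.50       # headroom for contention and other traffic
STREAM_MBPS = 1          # one proxy/browse stream

def max_concurrent_streams(link_mbps: float = LINK_MBPS,
                           utilization: float = UTILIZATION,
                           stream_mbps: float = STREAM_MBPS) -> int:
    return int(link_mbps * utilization // stream_mbps)

print(max_concurrent_streams())  # -> 500
```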

But keep in mind that all other network traffic, such as e-mail, Internet access, office applications and content file transfers, may be traveling on the network at the same time. As contention builds, the network gets progressively slower. Under these circumstances, GigE is not very fast, and content delivery begins to sputter.

A network design may work fine under normal working conditions, but a standard process in commissioning a system is to stress the network until it fails. It is better to do this before finalizing the network design than to have the infrastructure crash when it is needed most.
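A minimal way to begin that kind of stress testing is a traffic generator that saturates a path and reports the sustained rate. The sketch below runs against a local sink so it is self-contained; pointing several copies of the sender at real hosts on the media network, alongside normal traffic, is the sort of load that exposes a design's limits. Host, port and duration are illustrative assumptions, and a purpose-built load tool would normally be used in practice.

```python
# Minimal throughput stress sketch: saturate a TCP path and report the rate.
# Host, port and duration are illustrative; the sink here is local so the
# script is self-contained.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 50007
CHUNK = b"\x00" * 65536
DURATION_S = 5

def sink() -> None:
    """Accept one connection and discard everything it sends."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((HOST, PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    while conn.recv(65536):
        pass
    conn.close()
    srv.close()

def blast() -> None:
    """Send as fast as possible for DURATION_S seconds and report throughput."""
    sock = socket.create_connection((HOST, PORT))
    sent, start = 0, time.time()
    while time.time() - start < DURATION_S:
        sock.sendall(CHUNK)
        sent += len(CHUNK)
    sock.close()
    print(f"sustained {sent * 8 / (time.time() - start) / 1e6:.0f} Mb/s")

threading.Thread(target=sink, daemon=True).start()
time.sleep(0.2)   # give the sink time to start listening
blast()
```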

Characteristics of a routable network system that must be considered when choosing a routing protocol include:

  • Optimality: Metrics used by a routing protocol must be relevant and produce acceptable file transfer performance.
  • Simplicity: Low protocol overhead (the additional bytes added to each payload) and efficient use of router resources help maintain a stable and reliable network.
  • Robustness: The routing protocol must remain stable under failure and load while still meeting the file transfer requirements. A flat network, a hierarchical network or a combination of the two may be best.
  • Convergence: The time it takes to update routing tables and calculate new link metrics affects whether availability and service-level requirements can be met.
  • Flexibility: Algorithms used by the routing protocol must adapt to the changing dynamics of the network.

Don’t be satisfied if a network design initially appears to be performing acceptably. The capabilities of the design will not be tested until more storage, routing devices and workstations are added to the network.

A paradigm shift

Because storage and distribution are interdependent in a media network, the two are being merged into one structure. High-performance storage is incorporating ASIC-based hardware routing functionality. Another methodology replaces Fibre Channel interconnections with storage blades that use rack backplane interconnection. Some vendors offer total solutions that support SDI and AES3 I/O connected to scalable storage systems, yet these systems are designed for production that supports single-channel distribution.

Researchers and developers continue to explore methodologies to improve storage network systems. New technologies such as holographic storage, interface techniques like InfiniBand and network technologies like 10GigE are decreasing access time and file transfer latency in all media network topologies. One day, quantum computing will be the norm.

Perhaps what is needed is not just a better implementation of the same physical technologies, but a paradigm shift that will adapt to the new technologies on the commercialization-commoditization horizon.

People will continue to communicate and receive information through an increasing array of technologies. Soon, broadcasters will have more platforms than ever to maintain. Production networks installed today will have to be designed with enough agility to adapt to the needs of parallel production for multichannel delivery now and in the future.