Fibre Channel Switched Architectures

The power of high-speed interconnections for video servers continues to shape the architecture of modern network-based storage systems. As the use and deployment of these systems grow, protecting the stored assets from disruptive events becomes a paramount issue, and an important part of meeting the needs of a continuous, full-time operation. A variety of techniques are available for protecting the components in a network-based storage architecture. Some of these approaches grew out of the natural course of storage expansion. Other methodologies came from the inherent need to increase system bandwidth, which grew from a few hundred megabits per second in the early SCSI-only arrays to thousands of megabits per second in the Gigabit Ethernet and one- and two-gigabit Fibre Channel domain.

As the requirement for increased storage is met by adding groups of higher-capacity disk arrays, the storage network architecture must also scale to direct and handle the demands of the server engines themselves. The techniques employed in broadcast video servers are similar to those used in data-centric storage arrays, but with one critical, oft-repeated requirement: the system must maintain the capability to deliver large, continuous blocks of isochronous data to real-time decoders without interruption or corruption. This is the key, and single greatest, difference between streaming data over wide area networks and the delivery (and recording) of continuous moving video images from a storage platform.

Video server manufacturers approach storage, storage networking, and the encoding or decoding of the media in similar fashions. Setting aside for the moment the issues of file formats and interchange, this installment will focus mainly on the methods and means of connection between the server chassis, or engine, and the storage subsystems. Built from carefully qualified, openly available components, the storage systems behind these ever-larger (and ever-busier) video servers use similar interconnection methods, but with specialized applications that allow the manufacturer to tailor the drive arrays to the specific needs or architecture of the system they are delivering.

PROTECTING YOUR ASSETS

Facilities migrating from a dependence on videotape for program content and commercial playback to a fully integrated video server architecture give considerable thought to the protection of their assets. In the pre-video-disk days of videotape, broadcasters generally used a second, or even a third, copy to protect their pre-recorded programming for the air chain; they would double-record the program and even dual-roll the playback from two transports at the actual time of air. Provided the media wasn't damaged, videotape transports permitted the physical exchange of media between them, a practical and relatively simple means of protection. Mission-critical time-delayed material was generally recorded on a similar backup transport, or in some cases on a secondary 3/4-inch or VHS transport. The costs for a second transport were reasonable, and in most cases justifiable.

Comparatively, video server protection schemes are built on a different model. Fundamentally, it is impractical to physically exchange the media between sets of arrays, or to move the disk drives from one server platform to another, when something fails. Furthermore, moving a single hard disk drive in an array from one storage chassis to another is a pointless exercise, given the general incompatibility of the file structures on each set of independent arrays. It is instead through the principles of networking, as they come into play in the video server system, that the benefits of redundancy and protection are realized.

The first and most basic form of protection in a storage system is handled by the Redundant Array of Independent (or Inexpensive) Disks configuration, whether that be a RAID 1, RAID 3 or RAID 5 implementation. In RAID 1 the data is simply mirrored to a second set of drives; in the parity-based levels, protection is achieved by striping data over multiple drives and distributing the parity data among them. The loss of one drive, or in some RAID configurations even two drives, becomes less of a problem than the loss of a single transport or the destruction of a sole videotape master. In addition, given these basic principles of RAID for protection in the media server, other advantages are achieved, including increased bandwidth, greater system throughput and minimal requirements for immediate human intervention.
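For readers who like to see the principle rather than take it on faith, the short sketch below illustrates the XOR parity idea behind RAID 3/5-style protection: one parity block is computed across a stripe of data blocks, and any single lost block can be rebuilt from the survivors. The drive count and block contents are arbitrary illustrations, not anything tied to a particular product.

```python
# Minimal sketch of XOR parity striping (RAID 3/5-style single-drive protection).

def make_stripe(data_blocks):
    """Append a parity block computed as the byte-wise XOR of the data blocks."""
    parity = bytearray(len(data_blocks[0]))
    for block in data_blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return data_blocks + [bytes(parity)]

def rebuild(stripe, lost_index):
    """Reconstruct one missing block by XOR-ing all surviving blocks together."""
    rebuilt = bytearray(len(stripe[0]))
    for idx, block in enumerate(stripe):
        if idx == lost_index:
            continue
        for i, b in enumerate(block):
            rebuilt[i] ^= b
    return bytes(rebuilt)

# Four data "drives" plus one parity "drive"
stripe = make_stripe([b"AAAA", b"BBBB", b"CCCC", b"DDDD"])
assert rebuild(stripe, 2) == b"CCCC"   # the lost block is recovered exactly
```

The same XOR relationship is what lets a degraded array keep serving data while a failed drive rebuilds in the background.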

The converse is also true: as bandwidth increases, so does protection. To increase bandwidth, several system parameters are extended. First, we add faster drives in more efficient storage arrays. Second, we spread the data over multiple arrays to increase access while simultaneously increasing storage capacity. Third, we provide a means to exchange that data between data stores and server engines at high transfer rates. Fourth, we add protective measures so that when a key element fails, the balance of the system fails over, or a secondary system automatically intervenes and provides a means for recovery.
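A rough back-of-envelope sketch shows why the second step matters. The per-stream and per-array figures below are illustrative assumptions, not vendor specifications; the point is simply that spreading playout across more arrays multiplies the available headroom over the channel demand.

```python
# Illustrative arithmetic: aggregate demand of N channels versus the headroom
# gained by striping across more arrays. All rates are assumed for the example.

STREAM_MBPS = 50            # e.g. one 50 Mbit/s program stream (assumed)
CHANNELS = 16               # simultaneous record/playout channels
ARRAY_MBPS = 800            # usable sustained throughput per array (assumed)

required = STREAM_MBPS * CHANNELS          # total demand in Mbit/s
for arrays in (1, 2, 4):
    available = ARRAY_MBPS * arrays
    print(f"{arrays} array(s): {available} Mbit/s available, "
          f"{available / required:.1f}x the {required} Mbit/s demand")
```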

With the introduction of Gigabit Ethernet and Fibre Channel switches, that third element -- "a means to exchange data between stores and server engines" -- has become a smoother and more universally accepted process to implement. Again, it is the principles of networking that are playing a significant role in the development of the broadcast video server for multiple purposes and large-scale system architectures.

Fibre Channel, a channel/network standard, is now used by nearly all broadcast video server manufacturers. Fibre Channel has extended beyond the single arbitrated loop, which moved data at faster than real time between one or more server chassis. Fibre Channel and Gigabit Ethernet are both opening the neck of the bottle, allowing more data to flow and better control of that data. As video server systems scale from medium to large, many now use 1 and 2 Gbps Fibre Channel switches as the data channel manager connecting multiple server engines and their associated storage arrays. To manage the growing speed and amount of data flowing between devices, one or more Fibre Channel switches may be employed. A managed switching system both improves bandwidth management and provides a higher level of system protection.
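To put those link rates in perspective, 1 and 2 Gbps Fibre Channel use 8b/10b encoding, which works out to roughly 100 and 200 MBytes/s of usable payload per link. The quick calculation below, which assumes an illustrative 50 Mbit/s per-channel stream rate, shows how many real-time streams a single link can carry in round numbers.

```python
# Rough sketch: usable FC payload per link versus real-time stream demand.
# Payload figures are the commonly quoted approximations for 1G/2G FC; the
# per-stream bit rate is an assumption for illustration.

FC_PAYLOAD_MBYTE_S = {"1G FC": 100, "2G FC": 200}   # approximate usable rates
STREAM_MBIT_S = 50                                  # assumed per-channel rate

for link, mbyte_s in FC_PAYLOAD_MBYTE_S.items():
    streams = (mbyte_s * 8) / STREAM_MBIT_S
    print(f"{link}: roughly {streams:.0f} x {STREAM_MBIT_S} Mbit/s streams per link")
```

In practice the deliverable stream count is lower once protocol overhead, seek behavior and protection traffic are accounted for, which is precisely why the switch fabric is managed rather than left to contend for itself.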

Once thought of as expensive and difficult to configure and manage, the Fibre Channel switch is rapidly becoming a mainstay of the broadcast video server architecture.

For scalability of storage and system bandwidth, the FC-switched architecture provides much better overall performance, especially when 2 Gbps Fibre Channel products are used.

Fig. 1: Switched Fibre Channel Protected Server Architecture
Fig. 1 depicts a multiserver, multi-I/O design utilizing a scalable storage array structure interconnected through a pair of redundantly configured Fibre Channel switches. The components consist of two server engines (identified as A and B), which can be operated in either a mirrored or an independent structure; a set of storage arrays (F); and a pair of Fibre Channel switches (C and D).

When the facility operates with a protected set of video outputs, the outputs of server B are configured to match those of server A. In detail, a primary "A" playout channel and a secondary "B" playout channel both play back the same data to two different decoder outputs (in two separate chassis), whose baseband outputs feed digital video routers or master control switchers. In operation, two decoders in two separate chassis work in parallel, each pulling the same data (video clips) from the same set of common storage arrays (shown as F). The "A" playout channel is synchronized with the "B" playout channel, allowing an instantaneous switch to the opposite output from the other server engine should the primary server output fail. In Fig. 1, only a single common, but well-protected, storage system is used.
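The sketch below illustrates the mirrored A/B playout idea in miniature: both decoders roll the same clip, frame-synchronized, so the downstream router can cut to the "B" output the instant the "A" output fails. The class and function names are hypothetical stand-ins for illustration, not any vendor's control API.

```python
# Hypothetical sketch of mirrored A/B playout with a router "take" on failure.

class Decoder:
    """Stand-in for one playout decoder channel in a server chassis."""
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def cue_and_roll(self, clip_id, start_frame):
        print(f"{self.name}: rolling {clip_id} from frame {start_frame}")

def mirrored_playout(primary, backup, clip_id, start_frame, router_take):
    # Both chassis cue and roll the same clip from the common storage pool,
    # frame-synchronized, so either output is valid at any instant.
    for decoder in (primary, backup):
        decoder.cue_and_roll(clip_id, start_frame)
    # If the primary output drops, the downstream router takes the backup.
    active = primary if primary.healthy else backup
    router_take(active.name)

a = Decoder("Server A, channel 1")
b = Decoder("Server B, channel 1")
a.healthy = False                      # simulate a failed primary output
mirrored_playout(a, b, "PROMO_0042", 0,
                 router_take=lambda src: print(f"Router: take {src}"))
```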

Operations that do not require protected playout could also utilize the same decoder/server arrangement; but instead of parallel, synchronized operation in two chassis, there are now twice as many outputs available for the playout of different video streams to different program channels.

Each of the server engines (A and B) has a set of dual FC-ports, which are connected to the FC-switches, shown at points C and D. For protection purposes, should one FC-switch fail, or an FC data port on a server fail, there is a cross connection to the other switch (shown as E) that maintains a seamless "transfer of operations" between the storage systems, server chassis or FC-switches. The crossover channel between the two FC-switches serves two purposes: first, it acts as a protective measure should either of the switches fail; and second, it provides the connection between the two server engines and the data storage arrays via redundant paths.

Each storage array also has two sets of Fibre Channel I/O-ports that connect to each of the FC-switches as shown. Should a port interface fail on any storage array, the opposite path immediately picks up the data and routes it to or from the respective FC-switches and on to the server engines. Management software regulates the flow of data and actions of the FC-switches in various demand or protection modes.
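A minimal sketch of this dual-path behavior, under assumed names and greatly simplified logic, looks something like the following: I/O is attempted on the preferred path through one switch, and transparently retried on the alternate path through the other switch if the first reports a fault. Real multipathing software is considerably more involved (load balancing, path health probing, queue management), but the failover principle is the same.

```python
# Illustrative sketch of dual-path failover across the redundant FC switches
# (C and D in Fig. 1). Class and path names are hypothetical.

class FCPath:
    def __init__(self, name):
        self.name = name
        self.online = True

    def transfer(self, block_id):
        if not self.online:
            raise IOError(f"{self.name}: link down")
        return f"block {block_id} via {self.name}"

def dual_path_io(primary, alternate, block_id):
    """Try the primary path first; fail over transparently to the alternate."""
    for path in (primary, alternate):
        try:
            return path.transfer(block_id)
        except IOError:
            continue
    raise IOError("both fabric paths failed")

path_c = FCPath("port 1 -> switch C")
path_d = FCPath("port 2 -> switch D")
path_c.online = False                       # simulate a switch or port fault
print(dual_path_io(path_c, path_d, 1001))   # block 1001 via port 2 -> switch D
```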

PEACE OF MIND

Providing this level of switched management produces an extra level of reliability, and it provides a path for serviceability while minimizing downtime. Individual system components can be removed and/or replaced without unduly harming overall operations. This concept can also be employed in complex news editing and program ingest/playout systems. Such extensions include another complete "mirror" of this system interconnected via FC-switches, redundant storage arrays, and streaming to a remote location via a gateway for disaster recovery.

As broadcast video server systems are proposed for integration into the facility, their structure must be designed to meet the workflow methods of the operations in which the systems will be used. In order to achieve maximum performance from the system, several operating parameters must be optimized. The planning of these larger-scale systems is complex and specialized, and generally specific to the products and techniques used by the broadcast video server manufacturer.

Optimizing the performance and management of these advanced 2 Gbps storage area network fabrics is just one of the issues associated with the construction and integration of the modern broadcast video server environment. Other issues that need to be considered as these high-speed networking components are configured for their applications include reducing congestion, increasing data availability, aggregating switched links (referred to as trunking), and maximizing data transfer rates.
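As a simple illustration of the trunking idea, several inter-switch links can be aggregated into one logical path whose capacity is roughly the sum of its members; the link count and rate below are assumptions for the example only.

```python
# Simple illustration of link aggregation (trunking) between two FC switches.
# Values are assumed for the example, not drawn from any particular product.

ISL_RATE_GBPS = 2.0          # one 2G FC inter-switch link
links = 4                    # ISLs aggregated into a single logical trunk
print(f"Trunked capacity: about {ISL_RATE_GBPS * links:.0f} Gbit/s "
      f"across {links} links")
```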

In a later issue, this column will further explore the rationale behind the selection and uses of the Fibre Channel switch, including how the WAN is being used for remote location and disaster recovery.

Karl Paulsen

Karl Paulsen is the CTO for Diversified, the global leader in media-related technologies, innovations and systems integration. Karl provides subject matter expertise and innovative visionary futures related to advanced networking and IP technologies, workflow design and assessment, media asset management, and storage technologies. Karl is a SMPTE Life Fellow, an SBE Life Member and Certified Professional Broadcast Engineer, and the author of hundreds of articles focused on industry advances in cloud, storage, workflow, and media technologies. For over 25 years he has continually featured topics in TV Tech magazine, penning the magazine's Storage and Media Technologies and Cloudspotter's Journal columns.