The High Performance Serial Bus Architecture

The emphasis on high-definition production, distribution and play-out is presenting another set of evolutionary challenges to the video server marketplace. Up until the last few years, the focus was on HD encoding methodologies and how bridge components in existing systems would support the transition. With the presumption that videotape is no longer the answer for HD, the world of files on disks needs a solution that can address an order-of-magnitude increase in storage and, in turn, a requirement for a massive amount of bandwidth. Whether that server is a nonlinear editing platform or a mission-critical play-to-air system, storage and system throughput must now step up several notches to move material from disks to decoders and processors.

SYSTEM GROWTH


Figure: Potential architectures for InfiniBand allow specialized storage systems to integrate with Fibre Channel, NAS/iSCSI and Ethernet, as well as other networking topologies. These systems can then coexist in a diverse system.

Server and drive array vendors have steadily moved toward architectures that can supply the appropriate mix of throughput and storage capacity to meet the needs of users. Disk drives have achieved those capacities, and their data rates support the new endeavors; but that by itself is not enough to complete the equation. The systems must grow to support the backplanes and interconnection schemes in the array chassis, as well as the network interfaces between the stores and the servers. Consideration must be given to supplying enough horsepower to move multiple sets of 200-plus Mbps data between subsystems without faltering.

So the manufacturers of video and media server platforms are once again looking at proven network-centric architectures as a solution to their issues. We’ve seen the growth in serial drive technologies, so it now makes sense to look at another form of bus technology, also serial, as a solution. A recent, though not brand-new, technology called InfiniBand, pioneered in late 1999 and introduced to the marketplace in 2001, is finding favor in addressing the mass storage requirements of large data stores, especially for transactional data and media.

InfiniBand is both an architecture and a specification for data flow between storage I/O devices and processors. Its application is not restricted to storage and servers; it may also be found as an internal bus that could eventually replace the existing Peripheral Component Interconnect (PCI) shared-bus approach used in personal computers and servers alike. Storage system data interchanges face the same issue as the internal bus of a computer or server: the amount of data flowing between internal components has increased to the point where the existing bus system becomes a bottleneck. Internal bus systems typically send data in parallel, usually 32 bits at a time and in more recent systems 64 bits, across a data path called a “backplane bus.” Data flow must be continually regulated and timed so that all bits arrive at precisely the same instant. Placing multiple data transactions onto the same shared bus forces wait periods, reclocking and wholesale traffic problems.
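To put a rough number on that bottleneck, the back-of-the-envelope sketch below uses the classic 32-bit, 33 MHz PCI figures (standard published values, not drawn from this article) and compares the bus’s total shared bandwidth against the 200-plus Mbps streams a media server has to move.

```python
# Peak bandwidth of a classic shared parallel bus vs. what one HD stream needs.
# PCI figures are the standard 32-bit/33 MHz values, used here for illustration.
pci_bus_width_bits = 32          # conventional PCI: 32-bit parallel data path
pci_clock_hz = 33_000_000        # 33 MHz bus clock
pci_peak_bps = pci_bus_width_bits * pci_clock_hz   # ~1.06 Gbps, shared by all devices

hd_stream_bps = 200_000_000      # one "200-plus Mbps" media stream

print(f"PCI peak: {pci_peak_bps / 1e9:.2f} Gbps (shared by every device on the bus)")
print(f"Streams that fit, best case: {pci_peak_bps // hd_stream_bps}")
```

Even in the best case, only a handful of such streams fit on that shared path before contention sets in.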

Enter InfiniBand: here the data path becomes a bit-serial bus with much higher bandwidth and data flow. This approach is not unlike what evolved in the 1980s as D1 and D2 digital video. For digital video to be successful, it had to move from an 8-bit parallel architecture and interconnect (using twelve-and-a-half-pair cables on 25-pin sub-D connectors) to serial digital video transported over coaxial cable. That technology ultimately permitted bit-serial data rates to reach upwards of 3 Gbps by employing serial, instead of parallel, methodologies. These principles are in part what make data interchange on InfiniBand work, but here is where the similarities end.
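As a rough illustration of what that shift to serial transport bought, the short calculation below reproduces the familiar SDI line rates from their parallel word rates. The figures (270 Mbps SD-SDI, 1.485 Gbps HD-SDI, 2.97 Gbps 3G-SDI) are standard published values rather than numbers taken from this article.

```python
# Rough arithmetic: serializing a parallel digital video interface.
# Figures are standard SMPTE line rates, used here only for illustration.

sd_words_per_sec = 27_000_000          # SD component video: 27 Mwords/s parallel clock
bits_per_word = 10                     # 10-bit samples on the serial interface

sd_sdi_bps = sd_words_per_sec * bits_per_word   # 270 Mbps (SD-SDI)
hd_sdi_bps = 1_485_000_000                      # 1.485 Gbps (HD-SDI)
three_g_sdi_bps = 2 * hd_sdi_bps                # 2.97 Gbps ("3G" SDI)

print(f"SD-SDI:  {sd_sdi_bps / 1e6:.0f} Mbps")
print(f"HD-SDI:  {hd_sdi_bps / 1e9:.3f} Gbps")
print(f"3G-SDI:  {three_g_sdi_bps / 1e9:.2f} Gbps")
```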

InfiniBand operates much like a packet-based network architecture in which the serial bus simultaneously carries multiple channels of data through multiplexing. Furthermore, InfiniBand can be described as an I/O network, one that treats the bus itself as a switch because the traffic carries control information used to determine the route a message follows to its destination address. This approach allows the nodes on the network to be arranged in a star, one that extends a broader reach to hosts and storage devices alike.
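A toy model of that idea appears below: each packet carries its own control information (modeled here as a destination local identifier, the kind of address an InfiniBand link-layer header carries), and a switch forwards it with a simple table lookup. The class and field names are illustrative only, not an actual InfiniBand implementation.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    dest_lid: int      # destination local identifier carried in the packet header
    payload: bytes     # a slice of the larger "message" being transported

class Switch:
    """Toy model of a switched I/O fabric: forwarding is driven by the
    control information carried in each packet, not by a shared bus."""
    def __init__(self, forwarding_table):
        # maps a destination LID to an output port number
        self.forwarding_table = forwarding_table

    def forward(self, packet: Packet) -> int:
        return self.forwarding_table[packet.dest_lid]

# Example: a host, a drive array and a media port hang off one switch (a star).
switch = Switch({1: 0, 2: 1, 3: 2})   # LID -> output port
pkt = Packet(dest_lid=2, payload=b"video essence chunk")
print("forward to port", switch.forward(pkt))
```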

For storage systems, this approach allows multiple reads and/or writes to be performed without having to wait for data to settle or run slower, as on a parallel bus architecture. Think of the bus as a set of eight-lane highways that must cross at an intersection without a stop light. Each vehicle (i.e., the packet) tries to get through the intersection, but is hampered by the chance that it will collide with another vehicle as it crosses. Everyone must slow down to squeeze through the open slots between the vehicles. If you could line all the vehicles up in series, spaced equally and running at a prescribed rate, the likelihood that every vehicle gets through the intersection increases, and each could now drive through at 55 mph knowing it would not hit another. As the volume of data moving through the system increases, the ability to transport that data serially scales many fold compared with the outdated parallel approach. This is precisely why serial technologies are finding their way into all corners of electronics.

MORE ADVANTAGES

Another advantage of InfiniBand is that it uses a 128-bit addressing scheme based on Internet Protocol Version 6. This allows an almost limitless amount of device expansion and gives rise to multiple sets of disk drives operating on a single bus architecture, resulting in much higher performance and increased bandwidth for faster delivery to peripheral systems.
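For a sense of the scale a 128-bit address space provides, the snippet below uses Python’s ipaddress module; the sample address is made up purely for illustration and is written in IPv6 notation, the same form an InfiniBand global identifier (GID) takes.

```python
import ipaddress

# A 128-bit address space, as used by IPv6 and by InfiniBand GIDs.
total_addresses = 2 ** 128
print(f"{total_addresses:.3e} possible addresses")   # ~3.403e+38

# A hypothetical GID written in IPv6 notation (illustrative value only).
gid = ipaddress.IPv6Address("fe80::0002:c903:0001:a2b3")
print("example GID:", gid)
```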

InfiniBand supports multiple memory areas, each of which can be addressed by both storage devices (disks, arrays, solid-state memory) and processors (servers, hosts and media ports). As described in the intersection example, InfiniBand transmits data in packets that together form a communication called a “message.” The transmission of a message may take several forms: a remote direct memory access (RDMA) read or write operation, a channel send or receive message, a reversible transaction-based operation, or even a multicast transmission. This opens the door to several applications all riding on the same serial fabric.
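A minimal sketch of those message forms, using illustrative names rather than a real verbs API, might look like this:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Operation(Enum):
    """The forms a message transmission may take on the fabric."""
    RDMA_READ = auto()       # remote direct memory access read
    RDMA_WRITE = auto()      # remote direct memory access write
    SEND = auto()            # channel send
    RECEIVE = auto()         # channel receive
    ATOMIC = auto()          # reversible, transaction-style operation
    MULTICAST = auto()       # one-to-many transmission

@dataclass
class Message:
    operation: Operation
    packets: list            # a message is carried as a sequence of packets

    def size_bytes(self) -> int:
        return sum(len(p) for p in self.packets)

msg = Message(Operation.RDMA_WRITE, packets=[b"\x00" * 4096] * 4)
print(msg.operation.name, msg.size_bytes(), "bytes")
```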

In similar fashion to mainframe computer applications, a transmission begins or ends with a channel adapter. Each processor (i.e., server) employs a host channel adapter, and each peripheral device (i.e., a storage system) has a target channel adapter, similar to the concepts in Fibre Channel. As communication begins, these pairs of adapters exchange information that sets up message routing, opens the corridors for the data to flow, and ensures that both security and a given quality-of-service level are met.
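A simplified model of that exchange is sketched below. The parameters it swaps (local identifiers, queue pair numbers, a service level) mirror real InfiniBand concepts, but the functions and flow are a toy illustration, not the actual connection-management protocol.

```python
from dataclasses import dataclass

@dataclass
class AdapterInfo:
    """Parameters one channel adapter advertises to its peer during setup.
    Field names mirror real InfiniBand concepts, but this is a toy model."""
    lid: int            # local identifier assigned on the fabric
    queue_pair: int     # queue pair number the peer should address
    service_level: int  # requested quality-of-service level

def establish(host: AdapterInfo, target: AdapterInfo) -> dict:
    """Pair a host channel adapter with a target channel adapter."""
    return {
        "route": (host.lid, target.lid),
        "queue_pairs": (host.queue_pair, target.queue_pair),
        # both sides must agree on a service level before data flows
        "service_level": min(host.service_level, target.service_level),
    }

hca = AdapterInfo(lid=1, queue_pair=0x12, service_level=4)   # the server side
tca = AdapterInfo(lid=7, queue_pair=0x31, service_level=2)   # the storage side
print(establish(hca, tca))
```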

The advent of powerful new multicore CPUs, blade architectures and server virtualization increases the demand for storage, as well as the need to reduce power consumption. These factors continue to place increased demands on I/O solutions. InfiniBand offers 20 Gbps host connectivity with 60 Gbps switch-to-switch linking, and measured end-to-end latencies on the order of 1 µs. InfiniBand further offers an easy path to redundancy, featuring fully redundant I/O fabrics, automatic path failover and link-layer multipathing, enabling the highest levels of availability.
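As a back-of-the-envelope check on what that headroom means for media, the sketch below divides a 20 Gbps host link among the 200-plus Mbps streams mentioned earlier. The 8b/10b coding efficiency is an assumption about the link’s line coding, and real payload rates would also depend on packet overhead.

```python
# Back-of-the-envelope: how many 200 Mbps media streams fit on one host link?
link_signaling_gbps = 20.0        # 4X host connectivity cited above
coding_efficiency = 8 / 10        # 8b/10b line coding (assumption; packet overhead
                                  # would reduce the usable figure further)
stream_mbps = 200.0               # a single "200-plus Mbps" media stream

usable_gbps = link_signaling_gbps * coding_efficiency
streams = int(usable_gbps * 1000 // stream_mbps)
print(f"~{usable_gbps:.0f} Gbps usable -> roughly {streams} concurrent streams")
```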

In the media and storage domain, InfiniBand is currently employed by recognized manufacturers that specialize in very large storage systems for nearline and online applications. The fabric can be found directly in the drive array as its backplane, and also as an interconnection scheme between multiple sets of discrete data stores. InfiniBand is the high-performance fabric of choice for data store systems employing hundreds of independent drives, enabling both high performance and massive storage complements.

Karl Paulsen

Karl Paulsen is the CTO for Diversified, the global leader in media-related technologies, innovations and systems integration. Karl provides subject matter expertise and innovative visionary futures related to advanced networking and IP technologies, workflow design and assessment, media asset management, and storage technologies. Karl is an SMPTE Life Fellow, an SBE Life Member and Certified Professional Broadcast Engineer, and the author of hundreds of articles focused on industry advances in cloud, storage, workflow, and media technologies. For over 25 years he has continually featured topics in TV Tech magazine, penning the magazine’s Storage and Media Technologies and Cloudspotter’s Journal columns.