Karl Paulsen /
Meeting the Demands of Networking Infrastructures
The business of managing information technology continues to spawn tactical debates on all fronts. The emergence of so many storage and networking infrastructures demands a plethora of new storage management tools and techniques.
Today we might view Fibre Channel as the prominent method of interconnection between storage devices and compute subsystems. On the horizon is a multitude of other implementation techniques better suited to solving the cross-mix of connectivity requirements we will undoubtedly see.
Over the years of storage system development, we have watched the physical interfaces between devices evolve. In times gone by, only parallel SCSI channels connected a small number of SCSI devices through a short leash of 25-pin or Centronics connectors. Today, utilizing Fibre Channel technology, storage interconnections have expanded to hundreds of high-bandwidth devices supporting on the order of 100 MBps of traffic over great distances.
Supporting devices have expanded from a relatively small number of SCSI disk drives to complete storage subsystems, individual storage devices and server systems - built around the Fibre Channel topology into what is now becoming known as a storage area network, or SAN. The SAN is a focused effort to segregate server applications and services from storage-related activities.
The foundation of video and file server technology lies at the root of what is happening in information technology. The method and direction of growth in servers for media applications stems from what has been, or is being, developed for compute technology.
As we've watched Fibre Channel and Gigabit Ethernet evolve, we're about to see IEEE-1394 (FireWire) and USB (universal serial bus) become players in the media server domain. Even now, both IEEE-1394 and USB have strong connections to the personal computer and to household interfaces (VTRs, camcorders, DVD, set-top boxes). We expect that these high-speed, low-cost interfaces will eventually find their paths into professional media servers of some flavor.
Before we move into our general discussion of backup and protection for storage management, let's take a look at some of the terminology associated with this relatively new technology as it is wrapped around storage networking.
The device whose fundamental purpose is the persistent storage and delivery of data is referred to as a storage element. Devices such as discrete disk drives, drive arrays (RAID or otherwise), tape drives, automated tape libraries, file servers and the like all fall into the category of storage elements.
From this definition, we need next to define the process of creating and using networks whose main purpose is data transfer between and among storage elements and compute systems. This process includes administration, installation, creation or setup as well as the use of these networks. We wrap these processes around a single, yet wide-in-scope, term called storage networking.
A network whose primary purpose is the transfer of data between computer systems and storage elements and among storage elements is called a storage area network, or SAN. The SAN consists of a provision for physical connections, called the communications infrastructure, and a layer that manages these connections between the storage elements, called the management layer.
The SAN can be built on different network topologies. For example, a Fibre Channel SAN and an Ethernet SAN are both possible. When defining the SAN, it is suggested that the qualifier (Fibre Channel, Ethernet, etc.) always be part of the phrase, because each has different physical and interface specifications.
Furthermore, the definition of SAN is different from the "network," which connects just the computers together. Recall that the principal activity of the SAN is access and management of the storage elements, and not the means for communicating between the computers.
If you look back over the past few months of TV Technology, this column has focused on the hottest new topic for servers called clustering. One of the components that can be used to enable storage clusters is indeed storage area networking.
SANs make it possible to share heterogeneous storage resources with heterogeneous systems. SANs allow the consolidation of storage resources. SANs treat consolidated storage as a network in itself, as opposed to interconnecting multiple independent storage elements that operate discretely through their respective server or computer systems.
One of the chief advantages of the SAN is its ability to separate the application traffic from the storage traffic. SANs focus attention on what storage elements do best - collecting, moving and storing data. SANs also permit the scaling of computer and storage resources independent of each other. For example, a large organization might incorporate multiple video servers - each with only a few I/O channels, and only a relatively small amount of local storage per server.
The entire system, however, is linked through the discrete servers, over Fibre Channel, to a master library or centralized storage array. Any individual server could be upscaled (more storage or I/O) without affecting the other servers. In turn, the centralized storage library could grow without affecting any of the remote servers.
This concept covers one of the principles behind storage area networking, but still relies on the server as the gateway to the enterprisewide storage system. There are two other principles associated with storage area networking. One refers to the storage element that connects directly to a SAN and is called the SAS or SAN-attached storage.
SAS AND ITS SUBSET
The SAS provides data access services in the form of files, databases or blocks to the storage subsystems. A subset of the SAS, which relates just to file access services, is called network-attached storage (NAS). When a NAS operates in a mode that implements file services - consisting of an engine and one or more devices on which data is stored - it is referred to as a NAS storage element.
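The "engine plus storage devices" structure of a NAS storage element can be sketched in a few lines of code. This is a deliberately simplified illustration of the concept, not any vendor's implementation; all class names, the round-robin allocation and the sample file name are invented for the example.

```python
# Minimal sketch of a NAS storage element: an "engine" that exposes
# file-access services over one or more underlying storage devices.
# All names here are illustrative, not a vendor API.

class BlockDevice:
    """A trivially simple storage device: numbered blocks of bytes."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}

class NASElement:
    """The NAS 'engine': maps file names onto blocks on its devices."""
    def __init__(self, devices):
        self.devices = devices
        self.catalog = {}      # file name -> (device, block number)
        self.next_block = 0

    def write_file(self, name, data):
        # Round-robin files across devices; real engines use far
        # richer allocation and redundancy policies.
        dev = self.devices[self.next_block % len(self.devices)]
        dev.blocks[self.next_block] = data
        self.catalog[name] = (dev, self.next_block)
        self.next_block += 1

    def read_file(self, name):
        dev, block = self.catalog[name]
        return dev.blocks[block]

nas = NASElement([BlockDevice("disk0"), BlockDevice("disk1")])
nas.write_file("clip001.mxf", b"video essence...")
print(nas.read_file("clip001.mxf"))
```

The point of the sketch is the division of labor: clients deal only in file names, while the engine alone knows which device and block actually hold the data.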
SANs also enable centralized management of distributed storage resources - plus they provide fault-tolerant data access. The SAN's extensions, SAS and NAS, provide for high data availability as well as shared data access and shared storage resources across heterogeneous computer systems.
Any of these SAN concepts may become elements of the enterprisewide media server solution over the coming months and years. One can visualize where SANs might be headed by looking at facilities already using multiple flavors of server-based media distribution.
Currently we find that broadcast automation systems and their peripheral components (including third-party software-based archive applications) are providing most of the applications and data management layers of the media infrastructure for on-air purposes.
Newsroom computer systems, coupled with server arrays and storage subsystems, are providing their own flavor of applications and data management, specifically to their product arena. Non-linear editors and other standalone systems work both independently and collectively within the facility - interfacing at both the data and baseband layers.
And finally, there are the larger-scale archive manager applications that are providing the gateway between many of these flavors of automation and computer systems, and their respective storage systems.
Today, the applications and processes behind data management on an enterprise basis remain, for the most part, specific to each device and subsystem. Each vendor has its own schemes of data management, which in turn must manage each subvendor's particular peripheral device and interface.
Sharing resources on a network among elements remains a complex task - sometimes reduced to the lowest common denominator, which is the physical tape copy that must be "sneaker-netted" to another ingest point in the system. The long-term goal would certainly be to eliminate the dependence on physical tape, baseband distribution and human beings - and head toward digital media transport over a homogeneous connected network.
Enterprise backup and protection are two of the structures within data management that are addressed best by a systemwide storage area network. According to industry work groups focused on this task, for the first time storage networking technology will begin providing some of the toolsets necessary to implement enterprisewide management for backup and protection applications. Over time, the applications implemented as a result of SAN development may translate directly into the broadcast media management domain.
While in broadcast we may never get away from the "I want my own protection copy" concept so common in videotape-based operations, manual implementation of server protection and backup is becoming a thing of the past. Organizations know that porting these "manual" concepts to the thousands of clips and programs that are or will be stored on media servers or their data tape backups just isn't practical from both physical and personnel perspectives.
The toolsets we will be discussing cover three areas strictly associated with data and data storage. These areas - the movement, classification and organization of storage - when implemented properly and efficiently, will reduce the total cost of ownership for both small-scale and large-scale media management systems.
NOT A SIMPLE MATTER
The analysis necessary for cost-effective implementation is not a simple matter. Today, because of the variety of physical media storage devices and the lack of industry standards in place, each business must be looked at on a case-by-case, product-by-product basis. This analysis is not only complicated - for the most part, it can also be completely inaccurate.
So far we have been describing the broad needs of the organization only in a general sense. One of the steps in determining how to implement enterprise storage management is to look at some specific areas within the storage management cloud.
Let's first examine what it takes for enterprisewide backup utilizing storage area network technology. So there is no misunderstanding, we will continue to use the term "data" to mean media in digital form as it applies to video, audio, text and even metadata.
When users are actively sharing data, whether over a large-scale system or in small groups, protecting the integrity of that data for backup is not only important, it is also complicated. The level of complexity could be most easily controlled if all the data is kept on similar storage devices shared over common computer platforms or architectures, utilizing the same applications across the entire enterprise. But that is not the real world.
For the facility that starts from ground zero, it is certainly possible to find a single source manufacturer that can provide all the needed components for an entire system. Some organizations have taken this approach initially. However, dealing with the continual changes of technology and the propensity for subgroups within organizations to want to expand "outside the loop" forces the issue of a more robust and standards-driven interface between elements and entire systems.
The ability to share both resources and data on an enterprisewide basis is essential to maximizing efficiency and minimizing costs. With that in mind, we'll look inside the storage area network concept to see how sharing, copying, protection and device addressing are evolving.
While active data sets are being shared, they must also be protected without locking out other users and/or applications. There are numerous software applications that can provide for what is called snapshot/checkpoint capability. There are also storage subsystem manufacturers that are implementing this capability integrally with their RAID controllers.
Each implementation of snapshot/checkpoint utilizes a different application program interface (API), requiring either a translator or some other method to cross-platform-share each respective method over the enterprise. This is just one area being investigated with an eye toward a standard, so that all RAID and backup software manufacturers can meld their particular approaches universally across the enterprise.
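The translation problem described above can be made concrete with a small sketch: two hypothetical RAID controllers expose snapshot/checkpoint through incompatible APIs, and a thin adapter presents one common interface to backup software. The vendor classes and method signatures here are entirely invented for illustration.

```python
# Two hypothetical, incompatible vendor snapshot APIs.
class VendorARaid:
    def create_point_in_time_copy(self, volume):
        return f"A-snap:{volume}"

class VendorBRaid:
    def checkpoint(self, vol_id, consistent=True):
        return {"vol": vol_id, "consistent": consistent}

class SnapshotAdapter:
    """The 'translator': one common snapshot call over dissimilar APIs.

    Enterprise backup software codes against snapshot() alone and
    never needs to know which controller sits underneath.
    """
    def __init__(self, controller):
        self.controller = controller

    def snapshot(self, volume):
        if isinstance(self.controller, VendorARaid):
            return self.controller.create_point_in_time_copy(volume)
        if isinstance(self.controller, VendorBRaid):
            return self.controller.checkpoint(volume, consistent=True)
        raise TypeError("unsupported controller")

for controller in (VendorARaid(), VendorBRaid()):
    print(SnapshotAdapter(controller).snapshot("vol7"))
```

A published standard would eliminate the adapter layer entirely; until then, every backup vendor must maintain a translation like this for each RAID controller it supports.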
A more uniform approach to attaching devices (such as tape backups) to the storage area network is also needed. Currently, tape devices for backup and restoration are connected to Fibre Channel via a FC-SCSI bridge. Individual vendors each have their own independent means of addressing the SCSI devices attached to this FC-SCSI bridge.
Many of the devices use autoconfiguration schemes and third-party software management for control. In the search for a universal solution, a common device-addressing scheme is just another area that is being studied by working groups so that the connection of devices through bridges will be simplified.
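To see why a common device-addressing scheme matters, consider that a tape drive behind an FC-SCSI bridge is really identified by several nested coordinates. The sketch below shows one possible uniform convention; the "bridge:bus:target:lun" layout is an assumption made up for the example, not a published standard.

```python
from collections import namedtuple

# One possible uniform address for a device behind an FC-SCSI bridge:
# which bridge, which SCSI bus on that bridge, which target ID, and
# which logical unit. This layout is illustrative, not a standard.
DeviceAddress = namedtuple("DeviceAddress", "bridge bus target lun")

def parse_address(text):
    """Parse a 'bridge:bus:target:lun' string such as '2:0:5:0'."""
    bridge, bus, target, lun = (int(part) for part in text.split(":"))
    return DeviceAddress(bridge, bus, target, lun)

# Without a shared convention, each vendor invents its own encoding;
# with one, any backup application could address any tape device
# behind any bridge the same way.
addr = parse_address("2:0:5:0")
print(addr)   # DeviceAddress(bridge=2, bus=0, target=5, lun=0)
```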
As dependence upon data systems grows, CPU utilization and network traffic escalate. As this growth continues, fewer resources remain for user applications and system processes. Today, the tasks of moving data between storage devices and data tape for backup require that the data move through and under the direction of the server.
Most of these data transfers are copy functions that replicate the data from one storage element to another. A feature that is being explored for the universal implementation of this data replication requirement is called extended copy. This feature will intentionally bypass the applications and services server and work directly between storage elements. The SAN appears to be a logical candidate for this feature as it is specifically designed for storage activities - not for applications and services.
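The difference between the two data paths can be sketched conceptually. In a server-mediated copy, every byte crosses the server's memory on its way from source to destination; with extended copy, the server only issues a request and a copy agent in the SAN moves the blocks device-to-device. The classes below are invented for illustration; the real mechanism is a SCSI-level third-party copy command, not Python objects.

```python
class StorageElement:
    """A stand-in for any SAN storage device: numbered blocks of bytes."""
    def __init__(self):
        self.blocks = {}

def server_mediated_copy(src, dst, block_ids):
    """Traditional path: every block is read into host memory,
    then written back out, consuming server I/O and CPU."""
    moved = 0
    for b in block_ids:
        data = src.blocks[b]          # crosses the server
        dst.blocks[b] = data
        moved += len(data)
    return moved                       # bytes that transited the server

def extended_copy(src, dst, block_ids):
    """Third-party path: the server only builds the request; a copy
    agent inside the SAN moves blocks directly between devices."""
    request = {"source": src, "dest": dst, "blocks": block_ids}
    for b in request["blocks"]:        # executed by the SAN, not the host
        request["dest"].blocks[b] = request["source"].blocks[b]
    return 0                           # no payload bytes transit the server

src, dst = StorageElement(), StorageElement()
src.blocks = {0: b"x" * 512, 1: b"y" * 512}
print(server_mediated_copy(src, StorageElement(), [0, 1]))  # 1024
print(extended_copy(src, dst, [0, 1]))                      # 0
```

Both calls leave identical data on the destination; the difference is entirely in how many bytes the server itself had to handle, which is precisely the resource drain the off-load discussion below addresses.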
NORMAL NETWORK OFF-LOAD
When the duties of backup, copy and movement are off-loaded from the normal "network" activities and placed into the hands of the SAN, the traditional server would be freed for applications and services, opening up more capabilities on a systemwide basis. Furthermore, because data movement need not consume server resources, system efficiency will rise.
These are just some of the topics being explored by working groups within manufacturers and other standards bodies - aimed at developing more universal approaches to device and element connectivity. In the broadcast and media domain, working groups and standards bodies continue to advance in the direction of specific media needs related to this industry.
Universal server language or protocols, along with standardized metadata definition sets, are being developed today and will meld with the work done at the information technology industry level to create better and more-uniform implementations of media server systems.