Storage technology: SANs and NAS

Users always want more and faster storage

In the beginning there was SAN.

The two primary types of storage in broadcast and production environments are storage area networking (SAN) and network attached storage (NAS). Each provides unique benefits and drawbacks. Let’s first look at SAN technology.

SAN is a server technology that allows the separation of storage from processing and I/O. A SAN typically connects hard disks, tape drives and other peripherals to a host server. It also allows users to connect multiple servers to the same storage peripheral. SAN software can provide elaborate monitoring, backup and load-balancing functions.

SANs are often used to create a pool of virtual storage that a server treats as though it were local. A SAN can comprise local storage on a number of machines, centralized storage or a combination of both.

Unlike a traditional network, a SAN does not involve file transfer; nor does it involve connecting to a remote drive on a server. Instead, a SCSI channel is mapped across the network to the remote device, making the host server think that the storage peripheral is directly attached. For this reason, the server treats the storage just as if it were hard-wired to its peripheral interface. A SAN typically operates separately from a local-area network (LAN) so storage-related functions do not slow LAN traffic.


Figure 1. A SAN separates computing and I/O functions from storage itself.

SAN basics
A SAN consists of three basic components: an interface, interconnects and a protocol. The interface can be the small computer systems interface (SCSI), the enterprise system connection (ESCON) or Fibre Channel (FC). The interconnects can be switches, gateways, routers or hubs. The protocol, such as IP or SCSI, controls traffic over the access paths that connect the nodes. These three components, plus the attached storage devices and servers, form the SAN. While SANs support a number of interfaces, Fibre Channel (both Fibre Channel Arbitrated Loop (FC-AL) and switched Fibre Channel fabrics) dominates SAN implementations due to its flexibility, high throughput (up to 2Gb/s) and inherent fault-tolerant paths. See Figure 1.

Think of a SAN as a high-performance network on the “other side” of a server. Networks provide the connectivity between a server and remote workstations. A SAN provides connectivity between servers and storage. The purpose of a SAN is to separate computing and I/O functions from the storage itself. Once the storage is separate from the processor, multiple processors or servers can access a pool of common storage, and additional disk storage can be added without having to add processors.

A key benefit of a large SAN system is that users can access the same data at (almost) the same time. This improves workflow and efficiency. In a news environment, multiple editors can access the same raw footage to create different packages. In broadcast playout applications, the same content can play out of multiple servers to multiple channels simultaneously.

Layer by layer
Figure 1 illustrates a simplified SAN solution employing Fibre Channel. The application makes storage requests of the operating system and the operating system handles the details. When an application makes a storage-related request, the operating system communicates with the RAID controller through a Fibre Channel-switched network, typically referred to as Fibre Channel fabric, using standard SCSI commands.

The SCSI drivers are responsible for generating SCSI software commands, not actual SCSI physical connections. This is an important distinction. While SCSI commands are still sent across the network, using Fibre Channel-switched fabric eliminates the limitations of needing to physically connect SCSI hardware to the remote storage.
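To make that distinction concrete, here is a minimal Python sketch (not tied to any particular SAN product) of the kind of SCSI command the drivers generate: a 10-byte READ(10) command descriptor block. The LBA and block count shown are arbitrary example values.

```python
import struct

def read10_cdb(lba: int, num_blocks: int) -> bytes:
    """Build a SCSI READ(10) command descriptor block (CDB).

    Layout (10 bytes): opcode 0x28, flags, 4-byte logical block
    address (big-endian), group number, 2-byte transfer length
    (big-endian), control byte.
    """
    return struct.pack(">BBIBHB", 0x28, 0x00, lba, 0x00, num_blocks, 0x00)

# Example: read 8 blocks starting at logical block 2048.
cdb = read10_cdb(lba=2048, num_blocks=8)
```

In a Fibre Channel SAN, a CDB like this is encapsulated in a fabric frame rather than driven onto a parallel SCSI bus; the command itself is unchanged, which is exactly why the drivers need not care about the physical transport.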

A Gigabit Linking Unit (GLU), Fibre Channel switch and Fibre Channel RAID controller comprise the SAN. The GLU is similar to a network interface card (NIC) in an Ethernet system. It provides the physical and electrical interface to the Fibre Channel fabric. Once the SCSI commands reach the RAID controller, the controller saves or retrieves the data from the storage system. From this point on, communication between the controller and the physical drives is typically SCSI. Because the controller is usually co-located with the disk drives, SCSI connection limitations are generally not a problem.


Figure 2. NAS storage makes the same files available to different operating platforms.

Network attached storage (NAS)
A key benefit of NAS is that users can share files on a common server, even if they are using workstations with different operating systems, which is common in broadcast environments. Figure 2 illustrates a typical NAS configuration. In the past, it was hard to find a storage server that spoke several different protocols. Now these devices are almost commodity products, and are available at amazingly low prices.

Parlez-vous protocol X?
If a NAS device is to interface with different kinds of workstations, it must emulate different protocols and network file systems. Example protocols include TCP/IP, NetBEUI, IPX and AppleTalk.

TCP/IP stands for Transmission Control Protocol/Internet Protocol, the de facto standard for network communications in most facilities. A wide variety of devices speak it, and it is the protocol suite used across the Internet.

NetBEUI, pronounced net-booey, is short for NetBIOS Extended User Interface. This is a protocol used by Windows systems, frequently in peer-to-peer networking environments where routing and direct connection to the Internet are not required. IPX stands for Internetwork Packet eXchange, and was first widely deployed with NetWare networks. AppleTalk is a networking protocol frequently used with Apple Macintosh computing platforms. With these terms out of the way, let’s look at how a NAS can be used to create a familiar operating environment for your users.

Network file systems
Users (your editors, journalists and BOC operators) are accustomed to common interfaces in which folders represent directories on a storage system. Behind the scenes, however, a file system has to tell the computer how those directories are organized and where files are located, and it has to do so for multiple operating systems.

In order for a workstation to be able to read and write files on a remote system, the workstation and the server must have a common understanding about how directories and files are organized. Protocols provide a conventional set of rules for this organization. Some common network file systems include NFS, CIFS and AFP.

Network File System
NFS, or Network File System, is UNIX-based; it allows a workstation to attach shared portions of a disk on an NFS server to its local file system. Users can navigate to these directories just as they would a local directory. NFS is a client/server system: the server grants access to its local file system by answering queries and executing commands from the client.
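As an illustration only (a toy mount table in Python, not a real NFS client), the sketch below shows the idea: paths under a mount point resolve to a directory exported by a remote server, while everything else stays local. The server name and export path are invented for the example.

```python
# Toy mount table: local mount point -> (NFS server, exported directory),
# roughly what mounting nfs-server:/export/media at /mnt/media sets up.
MOUNTS = {"/mnt/media": ("nfs-server", "/export/media")}

def resolve(path: str):
    """Map a local path to (server, remote_path) if it falls under an
    NFS mount point; otherwise report it as local."""
    for mount_point, (server, export) in MOUNTS.items():
        if path.startswith(mount_point + "/"):
            return server, export + path[len(mount_point):]
    return "local", path

print(resolve("/mnt/media/raw/clip001.mov"))
# → ('nfs-server', '/export/media/raw/clip001.mov')
```

The editor browsing /mnt/media never sees the translation; that transparency is the whole point of NFS.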

The client makes the remote server look as if it were attached to the local file system. After all, users don’t care where a file is physically stored; they just need to be able to access it.

NFS uses remote-procedure calls (RPCs), and every RPC has a parameter that can be used to authenticate the sender. The server administrator can add an additional layer of security to the system by requiring the use of a particular authentication system such as Kerberos. Kerberos is an open-system network authentication protocol designed to provide strong authentication for client/server applications by using secret-key cryptography.
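Schematically, each RPC carries its authentication data alongside the program and procedure it wants executed. The Python sketch below models that structure loosely; the field names follow the ONC RPC convention only approximately, and the sample values (NFS program number 100003, a LOOKUP-style procedure, UNIX-style credentials) are illustrative.

```python
from dataclasses import dataclass, field

AUTH_NONE, AUTH_SYS = 0, 1   # authentication "flavors" a call can carry

@dataclass
class RpcCall:
    xid: int           # transaction ID; matches a reply to its request
    prog: int          # program number (100003 identifies NFS)
    proc: int          # which remote procedure to execute
    cred_flavor: int   # which authentication system the sender is using
    cred_body: dict = field(default_factory=dict)

# An example call carrying traditional UNIX-style credentials.
call = RpcCall(xid=1, prog=100003, proc=4,
               cred_flavor=AUTH_SYS,
               cred_body={"uid": 1000, "gid": 1000})
```

A server configured to require a stronger system such as Kerberos would simply reject calls whose credential flavor it does not accept.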

Common Internet File System
Common Internet File System, or CIFS, is based on Microsoft’s Server Message Block (SMB) protocol. It is used by Windows to share files and printers. CIFS specifies access to shared files and directories using the convention file://myserver.com/home/ftp/pub. A server parsing this request would understand that the client is asking for access to the directory /home/ftp/pub on the server myserver.com.
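The parsing step is simple to demonstrate; in Python, for instance, the standard library’s urlparse splits the article’s example into its server and directory parts:

```python
from urllib.parse import urlparse

# Split the example share path into the server and the directory on it.
url = urlparse("file://myserver.com/home/ftp/pub")
server, directory = url.netloc, url.path
print(server, directory)   # myserver.com /home/ftp/pub
```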

AppleTalk Filing Protocol
Apple has its own way of doing things, and if your applications require access to or from Apple equipment, you have to speak its language. AFP stands for AppleTalk Filing Protocol. A non-Apple network can only access data from an AppleShare file server by first translating into AFP.

Okay, with two tutorials on networking under our belt, next time we’ll step back and see just how we’re going to implement all this digital technology in our facilities. The digital transition begins.

Want to learn more? Check out these additional resources

Network-attached storage in an Exchange environment
By Windows IT Pro
www.win2000mag.com/Articles/Index.cfm?ArticleID=9018.

Building a Storage Area Network
By Network Computing magazine
www.networkcomputing.com/1109/1109ws1.html.

Comparing Storage Area Networks and Network Attached Storage
By Brocade
www.brocade.com/san/white_papers/pdf/SANvsNASWPFINAL3_01_01.pdf.
