
Storage Architecture Continues to Evolve

In the information age of the 21st century, few things are changing faster than the means and methods by which information is stored and distributed. A revolution is under way in storage architectures, marked by continual growth in bandwidth and data rates for both storage devices and the network topologies that connect them.

The evolution of the network over the past decade began in earnest in 1992, when shared 10baseT topologies delivered data rates hovering just under 5 MBps. Network speeds stayed relatively flat between 1992 and 1995, even with the rather disappointing 1994 promises of FDDI, until switched 100baseT networks came to life.

By 1994, data throughputs had just barely moved up to around the 10 MBps range. After 1996, network architectures began to change much more aggressively, and by early 1998 the gigabit switched network catapulted data throughputs to the 100 MBps area.


The need for faster networks became most evident as the development of faster storage interfaces addressed the demand for more storage. In 1992 the predominant storage interface was SCSI, with speeds just slightly under 15 MBps. By 1994, fast-wide SCSI had emerged, moving disk bandwidth forward to 20 MBps.

Ultra SCSI doubled that bandwidth to 40 MBps in 1996, and within two more years Fibre Channel emerged, matching storage bandwidth to network data rates at just under 100 MBps aggregate.

It is at this point that we observe network speeds have increased, and will continue to increase, at a faster rate than storage bandwidth, unless some revolutionary technology we have not yet seen comes in to make liars out of all of us. We recognize now that the next step for storage interfaces beyond Fibre Channel will be 200 MBps, while in similar time frames network architectures have already shown close to 1,000 MBps possible over 10-gigabit Ethernet (10 GbE).
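The divergence described above is easy to check from the nominal rates alone. The following is back-of-the-envelope arithmetic only: the figures are nominal wire rates as commonly quoted, not measured throughput, and real networks lose a substantial fraction to protocol overhead.

```python
# Network rates are usually quoted in megabits per second (Mbps), while the
# storage-interface figures in this article are in megabytes per second (MBps).
# Converting the network generations to MBps makes the divergence explicit.

network_mbps = {"10baseT": 10, "100baseT": 100, "GbE": 1_000, "10 GbE": 10_000}

def to_MBps(megabits_per_second):
    """Nominal wire rate in Mbps -> MBps (8 bits per byte, no overhead)."""
    return megabits_per_second / 8

for name, rate in network_mbps.items():
    print(f"{name:>8}: {to_MBps(rate):7.1f} MBps nominal")
```

At 10 GbE, the nominal 1,250 MBps (roughly 1,000 MBps after overhead) sits well above the 200 MBps projected for the next Fibre Channel step, which is the gap the article is pointing at.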

The evolution of storage architectures is pushing the storage model out of the application server domain and onto its own networked storage domain (see storage architecture diagram). In earlier times, the model was referred to as server-centric, whereby the file system resided on the application server and the data was stored away from the server/file system on JBOD (just-a-bunch-of-disks).

The storage interface method of choice was either SCSI or Fibre Channel. More recently, within the past 2 to 3 years or so, the storage-centric model has taken over, whereby server and file system remain commingled and the SCSI/Fibre Channel storage interface is coupled to the larger storage system, typically under RAID control.


Under the storage-centric model, usable brute force storage can still grow quite large, but the processes of moving, copying and protecting that data still depend too greatly upon the application server/file system for manipulation and processing. The result is that server performance suffers whenever file system activities become intense.

As we reach forward, the newest model in the continual evolution is the network-centric architecture. Finally, the file system can decouple itself from the application server and move closer to where its real importance is: on the storage system proper. Now, in the newest of the three models, we find the application server connecting to the file system and RAID storage system over faster, more efficient topologies. The application server, or more properly multiple application servers, communicate with shared storage over a modern high-speed network at rates from 100baseT to GbE.

At this point, the network becomes the platform and in turn, independence between servers and storage types is possible.

The goals of a networked storage architecture comprise an open platform approach including file-level access (as opposed to block I/O), intelligent systems (ones that are not device-dependent), true "plug-and-play" protocols (eliminating complex integration) and intelligent data management and content delivery performed at the protocol level (not brute-force block I/O). The promoters of open storage networking state that once the concepts become standards, are fully adopted and finally deployed, we will realize such benefits as unlimited scalability, better accessibility and ease of file recoverability, heterogeneous single-copy data sharing and flexible network topologies.
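The file-level versus block-I/O distinction above can be sketched in a few lines. This is an illustrative sketch only, not any particular product's API; the paths and offsets are hypothetical (a real block device would be something like /dev/sda on Linux).

```python
import os

# Block I/O: the client addresses raw sectors by number. The file system
# that maps names to blocks must therefore live on the client side, which
# is what ties the server-centric and storage-centric models to the server.
def read_block(device_path, block_number, block_size=512):
    fd = os.open(device_path, os.O_RDONLY)
    try:
        os.lseek(fd, block_number * block_size, os.SEEK_SET)
        return os.read(fd, block_size)
    finally:
        os.close(fd)

# File-level access: the client names a file and an offset; the storage
# system's own file system resolves the name to blocks. This is what lets
# the network-centric model move the file system onto the storage side.
def read_file(path, length, offset=0):
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(length)
```

The design point is that with file-level access, the device-dependent details stay behind the protocol boundary, which is what makes device-independent, "plug-and-play" storage plausible.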


We will be hearing shortly of a next-generation storage networking concept. Already, in the compute-centric world, we have been introduced to Storage over Internet Protocol (SoIP). The high-speed native IP SAN (storage area network) is now here. IP SAN promises performance equal to or better than Fibre Channel SANs. With 10-gigabit Ethernet on the horizon, a tenfold increase will also be expected, with chipsets already produced and products due in the first quarter of 2001.

Proponents have already demonstrated that using IP significantly reduces the complexities of deployment and scalability. IP SAN also retains compatibility with the installed base of SCSI and Fibre Channel devices.
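That compatibility claim rests on carrying unmodified SCSI commands inside an IP transport. The sketch below is purely illustrative: the header layout is made up for this example and matches no shipping product, though it is loosely in the spirit of the SCSI-over-IP framing work under way in the IETF.

```python
import struct

def frame_scsi_command(cdb: bytes, lun: int, data_length: int) -> bytes:
    """Wrap an unmodified SCSI command descriptor block (CDB) in an
    illustrative, hypothetical IP-SAN header for transport over TCP/IP.
    Header: opcode (1 byte), LUN (1 byte), CDB length (2), data length (4)."""
    if len(cdb) > 16:
        raise ValueError("CDB longer than 16 bytes")
    header = struct.pack(">BBHI",
                         0x01,        # opcode: "SCSI command" (invented value)
                         lun,
                         len(cdb),
                         data_length)
    return header + cdb.ljust(16, b"\x00")   # pad CDB field to 16 bytes

# Example: a 6-byte READ(6) CDB for logical block 0, one 512-byte block.
packet = frame_scsi_command(bytes([0x08, 0, 0, 0, 1, 0]),
                            lun=0, data_length=512)
# header (8 bytes) + padded CDB field (16 bytes) = 24-byte payload
```

Because the CDB travels through untouched, the device at the far end sees ordinary SCSI, which is why the installed base of SCSI and Fibre Channel hardware remains usable.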

When the idea of SoIP takes hold, and hardware is deployed, the existing private network will realize untold advantages in distributed storage and server topologies. For media-centric facilities to realize the true values in centralization of assets and operations, a networked approach to storage may be necessary.

Advantages such as consolidating assets at a central location, with a secondary site for disaster recovery and backup, will potentially reduce a facility's dependence on multiple recordings and, in turn, the need for redundant or troublesome legacy hardware.

The storage network architecture is already being implemented in campus-like organizations, and some service providers have taken the concept to the metropolitan-area-network level as well. Extending this model over a wide area network may not be particularly cost-effective today, but as hardware costs fall and business models force operational cost reductions, the interconnection of facilities may eventually make much better all-around sense than it does today.


Centralized asset management and operations require a high degree of case-by-case analysis, not only of hardware and operations costs, but also of the philosophy of implementation.

The technological developments for extending this model to media and video servers still have a number of hurdles to cross. When the network storage concepts move out of the facility and closer to the point of use, principles such as edge servers, regionalized store-and-forward, and interactive video-on-demand servers will become more practical and make better economic sense.

It is no surprise that some cable company MSOs who are already invested in the topology see the concepts of network storage as a potential for these advanced services. With fiber-optic interfaces in place between distributed operations centers, some dedicated and some shared, one of the more costly segments of the distribution model is ready and primed for the next generation of media delivery.

Regardless of who actually implements these technologies and for what purpose, one thing for certain still seems to follow: We continue to observe how the computer-centric operations’ technologies find their way into the media-centric world of the broadcaster and direct-to-home distribution services. Don’t be surprised to see some of the concepts we’ve discussed employed in real media-related products in the next year to year and a half.