Omneon MediaGrid

The fundamental thinking behind MediaGrid makes excellent sense: by splitting the file locating and storage/retrieval functions, Omneon appears to have solved the access delays and bottlenecking common to many video archiving systems. Meanwhile, the file-slicing approach is an elegantly simple way to enhance system robustness and increase access speeds, while making the inevitable swapping-out of dead HDDs as painless as possible.

For years, media storage and archiving has dawdled along on technology’s trailing edge. While video acquisition and editing zoomed into the nonlinear digital universe, broadcasters’ archives remained on tape, stored on whatever shelf space was at hand. As for finding archived clips after the fact? Good luck with that: in stations where time was tight and resources even tighter, the reference library might be nothing more than a three-ring binder with handwritten notes.

In recent times, video storage and archiving has started to catch up, first with tapes being stored in computer-accessed robotic cabinets, and then with the development of hard disk drive (HDD) systems such as Avid Unity ISIS (Infinitely Scalable Intelligent Storage). However, with a top non-infinite storage limit of 64 TB as of press time (really 32 TB, since ISIS mirrors its files for redundancy’s sake), even this innovative product hasn’t solved the issues of ongoing video archiving and access.

Which brings us to Omneon’s MediaGrid Active Storage System, a radical new approach to digital media storage and access that is being unveiled at NAB2006. Currently being installed by Turner UK to store video for its 15 European/Middle Eastern channels, Omneon MediaGrid is designed to allow true infinite scalability for increasing video storage capacity on the fly; reliable, redundant and fast access to video files by simultaneous multiple users even during individual HDD failures; and the ability to hot-swap new HDDs into the system without time-wasting rebuilding and integration.

Seriously: based on its design and architecture, Omneon’s MediaGrid should be able to fulfill all of these promises, and more.

MEDIAGRID’S ARCHITECTURE
“Omneon has become well known for its media servers,” says Geoff Stedman, the company’s vice president of marketing. “But over the years, many of our customers have been asking us for an archiving product, to pick up where our media servers leave off.

“So we decided to enter the digital media storage market, but not just by making a copy of what already exists,” he continues. “Instead, we wanted to develop a solution that solved the problems currently associated with storing video, such as poor or nonexistent metadata for finding desired clips, slow access through tape-based systems and network bottlenecks caused when more than one person tries to find the same video file. That’s what led us to develop the MediaGrid approach.”

Omneon’s MediaGrid approach can deliver where others fail thanks to a number of design features. First and foremost is the company’s decision to separate the function of searching for content (using MediaGrid ContentDirectors) from the process of storing and retrieving it (using MediaGrid ContentServers). By doing so, Omneon has not only sped up the process of storing and retrieving files, but also eliminated the need for RAID storage arrays and made its storage system self-healing.

It’s time to drag a metaphor out of a hat (which is itself a mixed metaphor!) to explain how MediaGrid does this. In conventional computer-based video archiving systems, retrieving a file is like going to a store where the sales clerk retrieves bread, milk and eggs for you after you request them. Because the clerk is involved in the entire process, time is lost as he races around the store grabbing one item after another, then brings them to you. Imagine that the clerk is a video archiving system’s search/retrieval engine, and you get the picture.

“In contrast, at the MediaGrid store there are multiple clerks, all taking a portion of your order and retrieving the items you need,” Stedman says. “Each clerk takes your shopping list, checks the store’s database to see which aisles the bread, milk and eggs are stored in and then runs directly to the aisle to get what you need. There’s one clerk for each aisle, so getting items located across three aisles takes a third of the time it would take for one person to get all three items. In this way, overall service is sped up, especially because you are getting three clerks to do your work for you.”

To use the proper terms, the MediaGrid ContentDirector is the database that stores the metadata on your station/network’s video library. When you look for a file, the ContentDirector knows where it is, and connects you to it before moving on to the next request. The file itself resides on a series of MediaGrid ContentServers, each server holding 2 TB of data using four 500 GB SATA HDDs (a second option is also available that holds 12 TB of data using twenty-four 500 GB HDDs), and able to move data in and out at 2 Gbps via two 1 Gbps Ethernet ports. Your workstation connects directly to the ContentServers using the instructions provided by the ContentDirector, thus eliminating the bottleneck that would occur if the ContentDirector acted as an intermediary.
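The division of labor described above can be sketched in a few lines of Python. All class and method names here are illustrative assumptions, not Omneon's actual API; the point is simply that the directory answers "where?" while the data flows straight from server to client.

```python
# Illustrative sketch of the ContentDirector/ContentServer split.
# All names are hypothetical; they are not Omneon's actual API.

class ContentDirector:
    """Holds only metadata: which server stores which file."""
    def __init__(self):
        self.locations = {}          # file name -> server holding it

    def register(self, name, server):
        self.locations[name] = server

    def locate(self, name):
        # Returns a handle to the server; the Director never touches the data.
        return self.locations[name]

class ContentServer:
    """Stores and serves the actual bytes."""
    def __init__(self):
        self.store = {}

    def put(self, name, data):
        self.store[name] = data

    def get(self, name):
        return self.store[name]

# The client asks the Director *where*, then reads *directly* from the
# server, so the Director never becomes a data bottleneck.
director = ContentDirector()
server = ContentServer()
server.put("promo.mxf", b"...video bytes...")
director.register("promo.mxf", server)

data = director.locate("promo.mxf").get("promo.mxf")
```

Because the directory hands back only a location, many clients can stream from many servers at once without funneling their traffic through a single box.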

A SLICE IS NICE!
Here’s where the Omneon MediaGrid approach gets really cool: rather than storing each file as a single unit mirrored on another drive for redundancy, MediaGrid automatically cuts files into equal-sized 8 MB data ‘slices’. These slices are automatically distributed across the system’s ContentServers, as are identical slice copies. Since the copies are stored on different HDDs, the result is automatic file redundancy and massive file access bandwidth.
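A minimal sketch of this slicing-and-scattering idea follows. The function names and the simple round-robin placement policy are our assumptions for illustration; the fixed 8 MB slice size is the figure given above.

```python
# Hypothetical sketch of MediaGrid-style slicing. The round-robin
# placement policy is our invention; only the 8 MB slice size comes
# from the article.

SLICE_SIZE = 8 * 1024 * 1024   # MediaGrid's 8 MB slice size

def slice_file(data, slice_size=SLICE_SIZE):
    """Cut a byte string into fixed-size slices (last one may be short)."""
    return [data[i:i + slice_size] for i in range(0, len(data), slice_size)]

def distribute(slices, servers, replication=2):
    """Place each slice and its copies on `replication` distinct servers."""
    placement = {}                 # slice index -> server indices holding it
    n = len(servers)
    for i, s in enumerate(slices):
        targets = [(i + r) % n for r in range(replication)]
        for t in targets:
            servers[t].append((i, s))   # each copy lands on a different server
        placement[i] = targets
    return placement
```

With three servers and a replication of two, losing any single server still leaves a complete copy of every slice reachable elsewhere.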

“Great,” you may say. “But why does file-slicing make a difference?”

Reason One: let’s say you’re searching for a video clip made up of Slices 1, 2 and 3. For some reason, whether because someone else is accessing the file or the HDD it’s stored on has been pulled, the original of Slice 2 isn’t available on ContentServer 1. No problem: The ContentDirector will automatically route you to the Slice 2 copy on ContentServer 2, or even the second Slice 2 copy on ContentServer 3. Of course, all of this will happen in the background, so that you won’t even notice it happening. All you’ll know is that the complete video file turned up on your workstation as requested.
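The failover just described can be sketched as follows. The placement table and server contents are a made-up toy example; the mechanism, try each replica in turn and return the first one that answers, is the point.

```python
# Hypothetical failover sketch: slice replicas live on different servers,
# so if one server is offline the reader falls back to a replica
# transparently.

# slice index -> server ids holding a copy
placement = {0: [0, 1], 1: [1, 2], 2: [2, 0]}
# server id -> {slice index: bytes}
servers = {
    0: {0: b"slice0", 2: b"slice2"},
    1: {0: b"slice0", 1: b"slice1"},
    2: {1: b"slice1", 2: b"slice2"},
}

def fetch_slice(index, offline=frozenset()):
    """Return the first reachable copy of slice `index`."""
    for sid in placement[index]:
        if sid not in offline:
            return servers[sid][index]
    raise IOError(f"no reachable replica of slice {index}")

# Server 1 is down, yet the full clip still assembles from the survivors:
clip = b"".join(fetch_slice(i, offline={1}) for i in range(3))
```

From the client's point of view nothing happened: the clip arrived whole, just as the article describes.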

Reason Two: by increasing the ‘Replication Factor’ for a specific video file, the MediaGrid system will produce more copies of each slice automatically. This means that there will be many more slices available for simultaneous users to access, should the file be in high demand. By allowing multiple ContentServers to work together to serve up data to a client’s workstation, the client reaps the benefit of all that aggregate bandwidth. And when the story truly becomes yesterday’s news? Again, no problem: simply lower the Replication Factor, and the additional slices in storage can be overwritten with new content.
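In code, raising or lowering a Replication Factor amounts to growing or shrinking each slice's list of holders. This is a sketch under our own naming assumptions; "Replication Factor" is the article's term, everything else here is illustrative.

```python
# Sketch (hypothetical names) of changing a file's Replication Factor:
# more copies of every slice for hot content, surplus copies released
# when demand fades so their space can be overwritten.

import itertools

def set_replication(placement, server_ids, factor):
    """Grow or shrink each slice's replica list to `factor` servers."""
    for index, holders in placement.items():
        if len(holders) < factor:
            # add copies on servers that don't yet hold this slice
            spare = (s for s in server_ids if s not in holders)
            holders.extend(itertools.islice(spare, factor - len(holders)))
        else:
            # drop surplus copies; that storage is now free for new content
            del holders[factor:]
    return placement
```

Note that lowering the factor does not delete anything immediately; as the article says, the surplus slices simply become overwritable.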

Reason Three: slicing-and-dicing video files eliminates the need to use RAID arrays. Instead, when an HDD fails, you just replace it with another, and the MediaGrid ContentDirector does the rest. This happens because the ContentDirector is constantly ensuring that the specified number of copies of each slice is always available on the system. Should the second copy of Slice 2 disappear because its drive has failed, the ContentDirector automatically generates a new slice copy on another drive. In this way, the system maintains itself.
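The self-healing loop can be sketched in the same toy terms: prune dead servers from each slice's holder list, then re-copy any slice that has fallen below its required replica count. The names are ours, not Omneon's.

```python
# Self-healing sketch (illustrative names): the directory checks that
# each slice has the required number of live copies and re-replicates
# onto a healthy server when a drive drops out.

def heal(placement, live_servers, factor=2):
    """Re-create missing slice copies after a server failure."""
    repaired = []
    for index, holders in placement.items():
        holders[:] = [s for s in holders if s in live_servers]
        while len(holders) < factor:
            target = next(s for s in live_servers if s not in holders)
            holders.append(target)          # copy the slice onto `target`
            repaired.append((index, target))
    return repaired
```

Run continuously, a loop like this is what lets an operator simply pull a dead drive and push in a blank one, with no RAID rebuild in sight.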

As for adding new storage? “Simply add new ContentServers as required,” says Stedman. “The MediaGrid system will see them and use them. It’s as simple as that.”

SPEED OR STORAGE CAPACITY: YOUR CHOICE
If you read George Orwell’s novel Animal Farm in school, you’ll remember the phrase, “all animals are equal, but some animals are more equal than others.” Such is true in the world of video archiving as well: all video files have value, but some are more valuable than others.

In recognition of this fact, MediaGrid can be configured with different ContentServers to put either file access bandwidth or storage capacity first. If access is what matters, then use the High Bandwidth ContentServer to store 2 TB of data and serve it up at 8 Gbps. But if the content is older and in less demand, then it makes sense to sacrifice speed for storage space by using the High Capacity ContentServer to hold 12 TB of data, and serve it up at 2 Gbps.
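A quick back-of-envelope comparison puts those two profiles in perspective. The capacities and rates are the article's figures; the arithmetic (treating 1 TB as 10^12 bytes) is ours.

```python
# Rough comparison of the two ContentServer profiles named above.
# Capacities and line rates come from the article; 1 TB is taken as
# 10**12 bytes for simplicity.

def drain_time_hours(capacity_tb, rate_gbps):
    """Hours needed to stream a server's entire contents at its rated speed."""
    bits = capacity_tb * 1e12 * 8
    return bits / (rate_gbps * 1e9) / 3600

high_bandwidth = drain_time_hours(2, 8)    # 2 TB at 8 Gbps: roughly half an hour
high_capacity  = drain_time_hours(12, 2)   # 12 TB at 2 Gbps: roughly half a day
```

In other words, the High Bandwidth box can turn over its whole store about 24 times faster per terabyte, which is exactly the trade-off the article describes for hot versus archival content.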

THE TURNER INSTALLATION
As mentioned earlier, Turner’s UK operation has been installing an Omneon MediaGrid in advance of its introduction at NAB2006. “We just built a new transmission center to let us move away from a tape and server combination to a completely server-based environment,” says Steve Fish, Turner UK’s vice president of engineering. “In the course of doing so, we decided that we needed to include archiving, since we’re moving about 300 hours of 16 Mbps video through our facility on a daily basis. The Omneon guys told us that they were developing the MediaGrid solution, which sounded right for our needs. So we agreed to work with them in developing it. Currently, we’re in the beta-test phase of deployment.”

When all the bugs are ironed out, Omneon’s MediaGrid will be providing video archiving and access for 15 Turner channels, Fish says. “It will allow us to provide comprehensive video storage without spending huge amounts of money, because we can afford to configure the system for maximum storage, rather than file access. As well, MediaGrid’s design allows us to get away from RAID storage systems, which have phenomenal rebuild time requirements and leave you quite vulnerable to HDD failures.”

James Careless covers the television industry. He can be reached at jamesc@tjtdesign.com.