Not for nothing do we use the word “store” as a synonym for “business.”
At least since the time when a pharaoh dreamt of seven years of fortune followed by seven years of famine and thus commanded the invention of the warehouse, figuring out new ways to store stuff has enabled wealth and power. This is probably truer of the TV and video business than of any other. Think of the first videotape machines, then camcorders, automated cartridge systems, nonlinear editing systems, video servers: they all enabled new business models, new winners, some losers. Without storage, our industry simply wouldn’t exist.
As storage grows ubiquitous, however, our industry also will not exist...at least not in its current form.
Storage capacity is often said to grow at roughly the rate of Moore’s Law (which holds that the number of transistors on a chip doubles roughly every 18 to 24 months). But that comparison understates the case so badly that it can lead to serious underestimates of the implications. From 1971 to 2004, according to Intel, the number of transistors on a single microprocessor grew from 2,300 to 592 million, an increase of about 250,000 times. You can’t compare this precisely to storage growth metrics. Tape capacities have hardly changed, by comparison, and in storage there is the related advance of compression technology to consider. But, to speak just of online storage media, the typical business hard drive now holds about 250GB, up from 5MB in 1971: growth of only 50,000 times.
More germane, however, is price. That 250GB drive today costs about $250; in the 1970s you’d have paid maybe $10,000 for a 5MB drive (most users rented them), for a cost per megabyte decrease of two million times. Over the same 34 years, the cost of an Intel CPU—mightier though it is now—has barely changed.
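The ratios in the last two paragraphs are easy to verify; a quick check, using only the figures cited above:

```python
# Sanity-check the growth ratios cited above (1971 vs. ~2005).
transistors_1971 = 2_300          # Intel 4004
transistors_2004 = 592_000_000    # figure cited by Intel
cpu_growth = transistors_2004 / transistors_1971
print(f"CPU transistor growth: ~{cpu_growth:,.0f}x")      # "about 250,000 times"

drive_1971_mb = 5                 # 5MB drive, ~$10,000
drive_now_mb = 250_000            # 250GB drive, ~$250
capacity_growth = drive_now_mb / drive_1971_mb
print(f"Drive capacity growth: {capacity_growth:,.0f}x")  # 50,000x

cost_per_mb_1971 = 10_000 / drive_1971_mb                 # $2,000 per MB
cost_per_mb_now = 250 / drive_now_mb                      # a tenth of a cent per MB
price_drop = cost_per_mb_1971 / cost_per_mb_now
print(f"Cost-per-MB decrease: {price_drop:,.0f}x")        # two million times
```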
There can be no doubt that this trend will continue, and that it is augmented by similar improvements in optical storage and flash memory. There also can be no doubt that it will continue to matter more for video than for most other data, because video is very big data, and also because video—especially with audio attached—requires extremely rapid and dependable access. Also undoubtedly, storage will matter more for video than it has in the past. Growth in hard drive capacity, and declining costs for same, didn’t matter at all to video until the media crossed a threshold of usability for our purposes.
Morrow Designs brought out a 26MB drive for only $5,000 in 1980; the five-fold increase in storage for half the price did practically nothing for video: it was still too little for virtually any use. But now think about the inevitable quintupling, in a year or so, of the capacity of your iPod Nano to 20GB of flash, probably for about the same $250 the entire 4GB device now sells for. Using the same quality compression as TiVo, that would hold 20 hours of video. Or maybe you prefer four or five hours of HD. Samsung, Hynix, Toshiba and the new Intel/Micron co-venture are rushing to build those chips ASAP.
That’s to see the implications from the point of view of the consumer device. Almost all consumer devices can, however, also be seen as edge nodes on a network. Many of their processors, harnessed to act in concert, can be set up as a particularly groovy implementation of massively parallel processing, such as SETI@home (the distributed “search for extra-terrestrial intelligence”). And parallel processing has long been the solution when Moore’s Law alone isn’t fast enough.
Seymour Cray was able to begin building supercomputers in the 1970s because his genius was uniquely well adapted to the challenge of continually building faster processors than anybody else. By his untimely death in a car crash in 1996, his original approach had been bested by networks of inexpensive processors, all harnessed to work together on the same task. In other words, by massively parallel processing. In other words, as Sun Microsystems put its motto in the 1980s, “the network is the computer.” Set up a network that can scale—ideally one that can scale infinitely—and share work, even somewhat inefficiently, among an essentially unlimited number of small, cheap processors, and you can beat the computational power of any individual processor.
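The principle is simple enough to sketch in a few lines: when tasks are independent, fanning them out across many cheap processors scales throughput with the number of workers. A minimal illustration using Python's standard process pool (the task here is a stand-in I invented, not anything from the article):

```python
from concurrent.futures import ProcessPoolExecutor

def render_frame(n: int) -> int:
    """Stand-in for an expensive, independent task (e.g., rendering one frame)."""
    return sum(i * i for i in range(10_000)) + n

if __name__ == "__main__":
    frames = range(100)
    # Fan the independent tasks out across all available cores; add more
    # cores (or machines) and aggregate throughput grows with them.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(render_frame, frames))
    print(len(results))  # 100
```

No single worker needs to be fast; the coordination layer, like Peters's "free-market economy" below, just has to keep them all busy.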
Set up the same sort of system to cooperatively serve and swap files, and you can beat the performance and cost of any individual storage device. The coincidence that such a public distributed storage architecture also enables music and video bootlegging, as well as voice over IP, assures that it grows faster than kudzu and will be even tougher to eradicate. But we’re getting ahead of the history here.
As massively parallel processing came on the scene to solve computational problems in weather modeling, high-energy physics, seismic data analysis, aerospace, image analysis and, sometimes, computer animation and compositing, users began to understand that they now needed far more rapid-access, online storage than ever before, to deal with all the data generated in these tasks. And they also began to realize that they might use the same massively parallel kind of structure to attain all the online storage they needed.
Back in 1994, just after his first start-up had really taken off, Avid Technology co-founder Eric Peters started thinking seriously about building a massively parallel server. He began designing what was introduced 11 years later—October 25th to be precise—as Avid Unity ISIS (for “infinitely scalable intelligent storage”). He took inspiration from flocks of birds flying together in formation, and the project was code named “Flock of Servers.” He also took inspiration from economics. “To do a massively parallel system right, it’s like a free-market economy,” he says. “If you try to control it, you’ll ruin it.”
Peters aimed to create a system that would use the very cheapest commodity hard drives, controllers and networking, linked by software that would let them scale infinitely, deliver very rapid access for unlimited streams of real-time audio and video playout, and mask the problems inherent in inexpensive components to achieve bullet-proof performance. He and Avid hold a few patents on the extremely elegant concept, and they believe they’ve met those goals.
Avid has also “made it much more bulletproof than our original approach,” says Peters. “The approach they’ve taken is to put it into premium hardware. It’s tough as nails and it probably needs to be to convince the big broadcasters to buy the first units. But it should work with really cheap disks and other components and probably eventually that’s how it may be sold for some other applications.”
Until that time, however, by going “tough as nails” with pricing to match, Avid has left the lower-priced video storage-network business still to the next-to-latest generations of innovation, from the likes of Omneon, Thomson, et al.
By now, however, not only is Avid’s not the only approach to massively parallel storage, it is a decidedly minority, nonstandard approach. There are working groups within the Storage Networking Industry Association and the ANSI T10 (SCSI) Technical Committee developing standards for “object-based storage,” and at least a half-dozen companies developing product. (None of these is aimed at the video business, though there are rumors that Omneon has a project in development. Also, the industry’s gorilla, EMC, is collaborating on standards and developing its own next-gen systems, and EMC storage networks are at the heart of a number of video servers sold under other vendors’ brand names.)
“At the core of this architecture are storage ‘objects,’ fundamental containers that house both application data and an extensible set of storage attributes,” says Brent Welch of Panasas Inc., which makes Linux storage clusters. The idea is thus to create a distributed storage architecture in which allocation activity is offloaded from the file system layer, allowing extremely fast access to data; Panasas claims read/write speeds of well over 150 MB/second for eight or more simultaneous channels.
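The "storage object" idea can be sketched in miniature: each object bundles its data with extensible attributes, and clients talk to the store directly rather than routing every block allocation through a central file-system layer. All class and method names below are my own illustration, not any vendor's or standard's API:

```python
# Toy sketch of object-based storage: each object carries its data plus
# an extensible set of attributes, and placement decisions belong to the
# store rather than to a central file-system allocator.
class StorageObject:
    def __init__(self, oid: str, data: bytes, **attributes):
        self.oid = oid
        self.data = data
        self.attributes = dict(attributes)  # extensible metadata

class ObjectStore:
    def __init__(self):
        self._objects = {}

    def put(self, obj: StorageObject) -> None:
        # The store, not the client, decides physical placement.
        self._objects[obj.oid] = obj

    def get(self, oid: str) -> StorageObject:
        return self._objects[oid]

store = ObjectStore()
store.put(StorageObject("clip-001", b"\x00" * 16,
                        codec="DV25", duration_s=30, qos="realtime"))
clip = store.get("clip-001")
print(clip.attributes["codec"])  # DV25
```

Because attributes like `qos` travel with the object, a video-aware store could, in principle, prioritize real-time playout streams without any help from the file-system layer above it.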
Whose approach to massively parallel storage is the most efficient and will win out? Avid’s has the wild conceptual beauty of storage over the Internet, caged just enough to make it fully reliable and legal. The object-based storage guys can be seen as part of an inevitable wave towards object-based computing, fitting conceptually within the seven-layer Open System Interconnection model for network protocols. Both are likely to be at least partial winners in some form.
Meanwhile, data tape cartridge systems are getting cheaper, faster and more customized for video: Quantum has just introduced a 300GB cartridge unit that sells in stand-alone configuration for $10,000 (and they’re soon coming out with a similarly priced 6x800GB cartridge system) with a transfer rate of 60MBps and built-in MXF file structure. And holographic storage may finally be coming to fruition after 20 years of development: lead innovator InPhase has its first 300GB device in beta testing at Turner Entertainment. And you might just be able to build server farms of consumer Blu-ray or HD DVD recorders in a couple of years for extremely low costs per gigabyte.
So what does this all mean for content owners, creators and distributors?
For one, VoD servers will soon scale to meet volume demand at reasonable cost. The movie studios that are now considering working with Internet file-sharing companies may wish to do so to infiltrate them and turn their users into customers, but it will not be necessary to put their files on BitTorrent in order to meet volume demand. They can keep more control on their own servers at reasonable cost.
Combine server affordability and ubiquitous cheap storage in consumer devices, and a telco that wants to build out video services, but worries about its skinny pipes or potentially inadequate wireless bandwidth, can count on substituting memory for network capacity both at the edge and in multiple caching servers. It could implement file sharing between cell phones over its controlled network without worrying about piracy. Online storage capacity grows cheaper faster than network capacity, so it will need to find such a solution if video services are to succeed. Even for many “live events,” a few seconds delay while video is stored/forwarded is acceptable in an environment of short clips (perhaps mixed with much longer obviously “non-live” downloads). The wireless or wired telco threat to cable and traditional broadcasters is real.
Not only can broadcasters also benefit from more ubiquitous storage, they are seen as its most likely early market, at least by Avid, Omneon and others. The early buyers of Avid Unity ISIS are planning to do some pretty innovative things with it. A Dutch post services company uses it to offer NLE as a remote application service. CBS News plans to use it to maintain its entire news archive in online storage with instant access; they’re thinking internal use only at first, and eventually perhaps a consumer service similar to one being tested now by the BBC.
Perhaps the biggest challenges of the coming era of big storage will be trying to avoid having one’s content lost in the big store, and trying to compete against other big stores. Not only do individual product suppliers have little pull with Wal-Mart, but even Wal-Mart itself isn’t making very good profits lately: everything gets commoditized in the end-stage of Big Storage. Still, Costco and Lowe’s and Target are doing pretty well lately. So can you.
Neal Weinstock is editor-in-chief of Weinstock Media Analysis and can be reached through www.weinstockmedia.com.