Unstructured data, including media files, is growing at a phenomenal rate. Organizations that have addressed this growth by implementing NAS (network attached storage) are now having to rethink their strategies.
Between 2008 and 2012, expect to see a 61.7 percent CAGR (compound annual growth rate) for unstructured data (per IDC market research), compared with transactional data, projected to grow by only 21.8 percent. To address this growth, high-performance NAS systems are employing data reduction and hardware-accelerator technologies to reduce the amount of data that must be stored. New NAS-centric products now include cloud data storage and scale-out NAS, each continuing to evolve as scalability and performance demands increase.
With the advancements in Gigabit Ethernet, especially 10 GigE, coupled with the recently ratified 40 and 100 GigE standards (IEEE 802.3ba), converged networks running bandwidth-intensive applications, such as video-on-demand and social networking, may come to depend more on NAS than on Fibre Channel-based SAN to meet storage requirements. This paradigm shift will force improvements in NAS management, file virtualization and even storage in the cloud.
ENGINEERED STORAGE SOLUTIONS
Traditionally, NAS systems were bound to a capacity limit based upon the file system space they could address. Additional constraints include the controller cache size and the number and type of processors employed in the NAS head. Addressing these issues, especially when huge volumes of data with enormous file sizes are accessed by multiple applications, requires powerful, specifically engineered storage solutions. Clustered NAS is one such solution, now offered by vendors that had heretofore focused on IT-centric systems but are looking to the evolving editing and animation-rendering requirements driven by motion picture digital intermediate (DI), 3D-stereoscopic production, and releases in upwards of 20-30 formats per segment for every form of media created.
Smaller enterprises have been adopting NAS solutions for years, driven by the simplicity and relatively low cost of implementation and growth. Not surprisingly, the wrong NAS selection can, and will, result in unexplained performance degradation. For one, a bandwidth-starved network infrastructure will limit the ability to take advantage of the less costly NAS solution, virtually crippling the system even though the proper NAS storage components were utilized.
For primary storage (i.e., storage actively accessed by applications, as in NLE or online video servers), a NAS solution may be insufficient if the bandwidth and/or IOPS (input/output operations per second) cannot be properly moderated. For applications where content is not accessed for editing, such as finished content, content awaiting transcode, or in-process renders, NAS may be the right solution depending upon the file volume sizes and the accessibility requirements of the content delivery platform.
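A back-of-the-envelope sketch can show whether a network link can feed a given media workload before the NAS itself even matters. All numbers below (stream count, bitrates, the 70 percent usable-link efficiency figure) are illustrative assumptions, not measurements:

```python
# Rough headroom check for a NAS link serving video streams.
# The 0.7 efficiency factor is an assumed allowance for protocol overhead.
def link_headroom(streams: int, mbps_per_stream: float, link_gbps: float,
                  efficiency: float = 0.7) -> float:
    """Fraction of usable link capacity left after serving the streams."""
    usable_mbps = link_gbps * 1000 * efficiency
    return 1 - (streams * mbps_per_stream) / usable_mbps

# Ten hypothetical 50 Mb/s MPEG streams over 1 GigE vs. 10 GigE:
print(f"1 GigE headroom:  {link_headroom(10, 50, 1):.0%}")
print(f"10 GigE headroom: {link_headroom(10, 50, 10):.0%}")
```

The same arithmetic explains the "bandwidth-starved network" problem above: on 1 GigE the ten streams consume most of the usable capacity, while 10 GigE leaves ample headroom for burst traffic.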
For data that is not compressed (noting that most media files are already compressed as MPEG or JPEG), one technique for reducing the volume of active primary storage is an inline compression appliance. Primary storage holds little duplicated data, unlike archive or backup data sets, so data deduplication technologies offer limited or no value here. Of course, the most effective data reduction technique for overburdened primary storage is to purge inactive data through the archive process.
Tiered storage is another technique for managing the primary storage overflow problem. Here, a NAS solution may lend itself well to keeping relatively inactive data on a second, "near-line" storage tier. Moving stagnant data off primary to secondary storage not only returns capacity to the primary tier, it also improves system performance because the system no longer has to continually manage that data.
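A tiering policy of this kind can be sketched as a simple access-time sweep. The directory layout, the 90-day threshold, and the function name below are all assumptions for illustration; production tiering is normally done by the NAS or an HSM layer, not a script:

```python
import shutil
import time
from pathlib import Path

def tier_down(primary: Path, nearline: Path, max_idle_days: int = 90) -> list[Path]:
    """Move files not accessed within max_idle_days from primary to near-line,
    preserving their relative paths. Returns the new near-line locations."""
    cutoff = time.time() - max_idle_days * 86400
    moved = []
    nearline.mkdir(parents=True, exist_ok=True)
    # Materialize the listing first so moves don't disturb the walk.
    for f in list(primary.rglob("*")):
        if f.is_file() and f.stat().st_atime < cutoff:
            dest = nearline / f.relative_to(primary)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(f), str(dest))
            moved.append(dest)
    return moved
```

Because relative paths are preserved, a stub or symlink scheme could later redirect reads to the near-line copy without clients noticing.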
THE ROOTS OF THE PROBLEM
As the number of NAS devices on the network increases, managing the overall storage structure becomes quite complicated. This problem, known as "NAS sprawl," can be handled by employing scale-out NAS, file virtualization, more sophisticated management tools, or cloud storage services.
The role of the storage administrator is burdened by the growing demands of increased storage requirements. Not unlike the approaches systems administrators used to control "PC sprawl" (which ranged from ignoring the problem to military-scale lockdown), managing the problem begins with NAS consolidation and branches out to NAS virtualization. It may further include data protection and backup policies aimed at protecting and preserving the NAS investment.
Clustered and parallel-access systems are yet another approach to addressing NAS sprawl. A further technique is NAS virtualization, otherwise known as NAS aggregation or global namespaces. This technology is analogous to block storage virtualization, in which the physical storage infrastructure, along with data migration and management, is transparent to the operators.
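The global namespace idea reduces to a prefix map: clients see one logical tree while a virtualization layer routes each branch to a physical export. The share names and paths below are hypothetical, and a real implementation lives in the filer or a dedicated appliance, but the mapping logic looks like this:

```python
# Hypothetical mapping from one logical namespace to physical NAS exports.
# Data can migrate between filers by editing this map; client paths never change.
NAMESPACE_MAP = {
    "/media/ingest":  "//nas-01/ingest",
    "/media/edit":    "//nas-02/projects",
    "/media/archive": "//nas-03/nearline",
}

def resolve(logical_path: str) -> str:
    """Translate a logical path into its current physical NAS location."""
    # Check longer prefixes first so nested mappings win over parents.
    for prefix in sorted(NAMESPACE_MAP, key=len, reverse=True):
        if logical_path.startswith(prefix):
            return NAMESPACE_MAP[prefix] + logical_path[len(prefix):]
    raise FileNotFoundError(logical_path)

print(resolve("/media/edit/show42/scene01.mxf"))
# -> //nas-02/projects/show42/scene01.mxf
```

Repointing "/media/archive" at a new filer after a migration is a one-line change to the map, which is exactly the transparency the article attributes to NAS aggregation.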
In short, the issues facing NAS implementation must be addressed as enterprise storage continues to grow. Don't get trapped in an unmanageable situation that is often misunderstood by the money-provisioning side of the organization.
Karl Paulsen is a technologist and consultant to the digital media and entertainment industry. Karl recently joined Diversified Systems as a Senior Engineer. He is a SMPTE Fellow, member of IEEE, SBE Life Member and Certified Professional Broadcast Engineer. Contact him at firstname.lastname@example.org.