Avoiding a money pit

Tom Hanks and Shelley Long starred in the 1986 movie “The Money Pit.” For those of you under 40, it was a movie about a young couple who purchase what at first seems to be the perfect home. Only later do they discover the many hidden problems that must be fixed. The movie leads our couple through a long list of home repair disasters and their accompanying costs. Hence the name, “The Money Pit.” If you’re a homeowner, the movie is certainly worth a rental, as you’ll quickly identify with Tom and Shelley as they attempt to manage an unending stream of contractors, repairmen and bills.

So, what does this movie have to do with content creation and broadcast facilities? I’m willing to bet you have your own money pit; you just may not realize it. Your money pit could be your facility’s storage and business computing systems. If your station or production house is using a business computer or video server that is more than five years old, you may be staring at a business disaster just waiting to happen.

Incompatible solutions

Broadcast and content facilities typically operate a divergent mix of business and production computing systems. Each is optimized for a primary operating function (business or production). Communication between the two platforms is limited. Historically, it wasn’t necessary that these systems communicate with each other except perhaps to exchange traffic and as-run logs. The business side didn’t care what was in any archive or on the ingest server. And the newsroom automation system didn’t know that a client wanted to purchase a news clip for use in a commercial.

Times have changed. Increasingly, these incompatible systems need to be able to effectively communicate, exchange information and enable business decisions. Unfortunately, one computing system may operate on a RISC platform and speak UNIX. The other might speak Linux or Microsoft Windows. One stores business data, the other production video. The bottom line is that these systems seldom communicate well — if at all. Even worse, these underlying platforms may be as reliable as the San Francisco-Oakland Bay Bridge. You know, the one that unexpectedly failed, causing days of traffic snarls.

If your computing systems are based on Sun SPARC technology, lightning has already struck; you just haven’t yet heard and felt the thunder. Anyone using a SPARC server that is three years old or older should seriously evaluate its operation. According to David Reine, director of enterprise systems for The Clipper Group, those customers have three basic choices: “they can do nothing, retaining the performance they have today … or they can invest in an upgrade to current SPARC offerings … A third alternative is to migrate to an open system today, even sooner than might have been considered a few months ago.”

Reine suggests that if your computing systems are already reaching the limits of their capacity in terms of storage, processing power, electrical consumption, cooling or floor space, it may be time to cut and run to another solution.

The money pit

Why change? After all, the “if it ain’t broke, why fix it?” mentality is ingrained within the engineering mindset. Let’s start with total cost of operation (TCO).

Older servers require much more power than today’s servers do. (For a detailed examination of server power considerations, see the four-part series “Reducing facility power costs by turning green.”)

Fortunately for most chief engineers, the power and cooling bills are paid by a different department. This means the chief doesn’t care how much electricity and air conditioning the servers require. It’s a case of NMP: not my problem.

However, when IT merges with video, managers are often challenged to take a larger viewpoint on TCO. Now electricity and cooling costs, along with maintenance, must be considered. As TCO goes up, the pressure to make changes to a currently operating storage/computing/business platform increases.
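As a rough back-of-the-envelope sketch of the electricity portion of TCO, consider the comparison below. The wattages, cooling overhead and utility rate are assumptions chosen purely for illustration, not measured figures.

# Back-of-the-envelope annual power cost; every figure here is an assumption.
HOURS_PER_YEAR = 24 * 365
RATE_PER_KWH = 0.12        # assumed utility rate in dollars per kWh
COOLING_OVERHEAD = 0.5     # assume 0.5W of cooling load per 1W of server load

def annual_power_cost(server_watts):
    """Estimate the yearly electricity cost of a server running 24/7."""
    total_watts = server_watts * (1 + COOLING_OVERHEAD)
    kwh_per_year = total_watts * HOURS_PER_YEAR / 1000
    return kwh_per_year * RATE_PER_KWH

print(annual_power_cost(400))  # assumed draw of an aging RISC server: about $630/yr
print(annual_power_cost(90))   # Xeon 5500-class figure cited later: about $142/yr

Even with conservative assumptions, the spread between an aging server and a current low-power part adds up quickly once it is multiplied across a rack.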

The proprietary deep hole

Broadcasters are especially clever, and they often build their own solutions. Groups and networks sometimes develop their own applications with custom storage architectures. Armed with in-house IT expertise, custom code is mashed against proprietary hardware and interfaces, and a working graphics or monitoring solution is created. In fairness, those solutions may still be working well. However, as the edges of those proprietary solutions begin to fray, finding compatible replacement hardware may be difficult.

It is at this point that IT managers begin to look for alternative solutions.

Some points to consider in the redeployment of in-house solutions:

• Do you have sufficient software/hardware experts on staff?

• Do you have the original documentation?

• Can the current application be easily ported to a new OS and hardware? (One common porting pitfall is sketched after this list.)

• Do you have time to build your own solution?

• What performance gains are being lost because the current system can’t be upgraded?

• The longer you wait, the more difficult and costly a solution will be.
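
On the porting question, one concrete pitfall in a SPARC-to-x86 move is byte order: SPARC is big-endian and x86 is little-endian, so any binary index, log or metadata file written with native byte order will be misread on the new hardware. The Python sketch below is a hypothetical illustration; the record layout is invented for the example.

import struct

# Hypothetical 12-byte index record written by an old big-endian (SPARC)
# application: a 4-byte frame count followed by an 8-byte timestamp.
record = bytes.fromhex("0000012c0000000000bc614e")

# Reading with native byte order ("=") returns scrambled values on
# little-endian x86 hardware.
frames_wrong, ts_wrong = struct.unpack("=IQ", record)

# Spelling out big-endian (">") recovers the correct values on any host.
frames, timestamp = struct.unpack(">IQ", record)

print(frames, timestamp)   # 300 12345678

Issues like this are exactly why “easily ported” deserves a hard look before committing to reuse in-house code.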

Benefits of open architecture

Now look at the upside of using an open-architecture solution. On a per-core basis, newer Intel Xeon processor-based servers have more than 3.3 times the performance of SPARC-based servers. But there’s more benefit than just operational speed. These new chips also require less power and cooling.

Open systems also provide increased reliability. Examples include ECC (error-checking and correcting) memory, DDR3 (double-data-rate three synchronous dynamic RAM) memory, hot-swap disk drives, redundant power supplies and fans, dual embedded NICs with automatic failover and load balancing, and RAID support with battery-backed cache. Do your current systems provide similar features?

Now, consider the operating systems. There are currently two major enterprise Linux distributions: Red Hat Enterprise Linux from Red Hat and SUSE Linux Enterprise Server from Novell. There are more than 16,000 supported server and storage configurations for Red Hat Enterprise Linux alone. A Microsoft OS is no longer the only solution.

Choose one

Some facilities think it’s best to retain both options. They may decide to move to a new open-system architecture for new applications but retain the old data architecture for current applications. Consider the factors for each path:

Option one (stay on the old architecture):

• The performance of the old architecture is already at maximum.

• Depreciation may be complete, and the hardware may be at end of life.

• The old hardware may not support new features. Imagine hearing, “I can’t access the old files from the new system because, blah, blah …”

• TCO for the old hardware will increase. And, will the vendor continue to support the platform in one year? How about five years?

Option two (move to an open x86 architecture):

• An x86 architecture affords increased speed, lower cost, longevity, higher efficiency and upgradeability.

• The low-power version of the Intel Xeon 5500 processor series consumes only 60W; its high-performance cousin consumes only 90W.

• Cooling and administration costs are lower.

• Widespread open architecture affords more options and solutions.

The bottom line is: Don’t let an “if it ain’t broke, don’t fix it” attitude delay the inevitable. A tight economy may in fact be the perfect time to upgrade to new computing platforms.

For an excellent in-depth discussion of moving from proprietary to open-architecture solutions, see “Migrate from a Proprietary Server Architecture? Open Systems Provide Way to Exit Money Pit,” by David Reine, analyst, The Clipper Group.

Other resources: “Going off the grid: A look at green backup power technology,” Part 1 and Part 2.