Video Servers Go Above and Beyond

Users of video server products continue to depend on them for far more than spot or interstitial playback of media. Video servers, like their data counterpart the network server, show great promise as the principal medium for capture, storage, playback and other functionality, including video editorial production.

Today, the broadcast facility's server implementation tends to break into two broad categories. The most basic is the standalone island concept where an individual server is used for a dedicated function. The second category includes servers that employ shared storage, are highly integrated and may be made up of a dispersed network of various flavors of video and nonvideo servers.

Both concepts present advantages and disadvantages, depending upon the size and scale of the facility and the organization's ability to manage the many tasks associated with a distributed network of complex subsystems.

MULTIPLE SERVERS

The trend in our industry is moving toward multiple video servers for a variety of services. Not unlike the computer data model, we expect to find that multiple "media-based" servers will make their way into all corners of the facility. These will include island systems (such as the newsroom computer and editing system) and tightly integrated on-air programming systems (spots, program and delay) - plus the multitude of off-line/on-line video editing systems. All have potential for sharing information and media between them.

This installment will look at the integration and service of multiple computer platforms within the facility. We will focus our discussion by examining the second concept - where multiple servers are dispersed throughout the facility for a host of varying tasks.

Today we find dedicated hardware platforms and video file servers acting as video editor controllers, stillstores, graphics or video manipulation devices, spot/interstitial playback devices, VTR replacements for time-shifting, browser systems for archival purposes - and the list goes on. We also find a similar complement of "personal computer" platforms performing myriad services: controlling the EAS system, the teleprompter, the news edit scripting system and the transmitter controller interface, and even acting as a simple protocol translator between one computer's data system and another media server system.

Interfacing all these devices to one another is a complex and never-ending task to say the least. Keeping software versions straight, changing hardware to keep up with software changes, and scaling the capabilities of the host device to keep in step with the interface or protocol converter all tax the resources of both the manufacturer and the station operating personnel.

Even a simple connection between one device and another over RS-232 can be a difficult and perplexing process - especially if the host device needs or expects to receive data that the source or server cannot or will not supply.
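As a small illustration of what such a translator or interface has to contend with, the sketch below bridges two serial ports: it forwards a status query from an automation host to a media server and substitutes a safe default reply when the server stays silent. The port names, baud rates and message strings are hypothetical placeholders; real broadcast serial protocols are considerably more involved.

```python
# Minimal sketch of an RS-232 "protocol translator" (hypothetical ports and
# message strings; real device protocols are far more detailed).
import serial  # pyserial

# One port faces the automation host, the other faces the media server.
host = serial.Serial("/dev/ttyS0", 9600, timeout=1.0)     # assumed port/baud
server = serial.Serial("/dev/ttyS1", 38400, timeout=1.0)  # assumed port/baud

DEFAULT_REPLY = b"STATUS UNKNOWN\r\n"  # sent when the server will not answer

while True:
    query = host.readline()        # blocks for up to the 1-second timeout
    if not query:
        continue                   # nothing from the host this pass
    server.write(query)            # pass the query through unchanged
    reply = server.readline()      # may come back empty on timeout
    # The host expects a reply; supply a default if the server is silent.
    host.write(reply if reply else DEFAULT_REPLY)
```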

INTERFACE ISSUE

Manufacturers certainly recognize that the compatibility or interface issue needs more attention, but addressing it takes considerable time and resources to develop a simple, uniform solution that can be applied broadly. Add in the abundance of hardware solutions available and you quickly realize it might make more sense to select a single-source vendor for all the systems that will be integrated into the facility.

Manufacturers are also beginning to rely heavily upon third-party VARs and integrators to make their systems perform and to act as the first line of contact in the event of system failures.

Video servers are already being brought into this frustrating loop of compatibility and interface. To appreciate this, just look into the complexities of interfacing a server to a master control system under control of station automation.

It can be fairly straightforward to put a single-channel video server into a system under automation control. Adding a protection or mirror server isn't terribly difficult as a secondary step. Yet once you add a library or archive system, a browse network, a store-and-forward delay server and a couple of on-line ingest stations - the system-level interface complexity begins to soar.

When a system grows to this level, it is expected to perform at this level, continuously and reliably. Managing a system as described can become a full-time job for one or more people - with a skill set that is only learned by on-the-job training and experience.

ELEMENT OF COMPLEXITY

Video servers are designed to be expanded and upgraded. This fact throws another element of complexity at the broadcast facility operator. When a facility relies on a single server and it fails, is taken down briefly for maintenance, or needs expansion or an upgrade, the situation is not terribly difficult to manage.

Upgrades can be accomplished fairly straightforwardly by reverting to older concepts of building a dub reel of all the day's commercials and using videotape while the server is being worked on. This works fine for scheduled maintenance, but if the only server fails without notice, it might not be possible to simply shift back to the older ways of doing things.

Once a second mirrored or protection server is added, reliability increases. This holds for systems that are self-contained, i.e., those that depend upon a tightly integrated master-slave relationship to replicate data across both servers.

Even when a third-party software system manages the two servers and their inventory database, reliability stays fairly high. The real issue for the operator, however, is keeping the servers' software and any third-party control and management software in synchronization with each other.

SERVER UPGRADES

We have heard from past experience that upgrades to servers should not be performed at any level (software or hardware) until that version or release has been qualified against all other third-party software in the system.

This can be a frustrating and confusing experience - especially if the software upgrade from the server manufacturer was a small patch designed to repair a bug in its current release. Chances are the bug has already been reported to the third-party software vendor (either by the server manufacturer or by the end users) and that work has already begun to qualify the release or patch.
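One way to make that qualification discipline concrete is a simple lookup before any upgrade is scheduled: check the proposed release against a table of versions each third-party vendor has qualified. The sketch below is illustrative only; the vendor names and version strings are invented.

```python
# Illustrative qualification check: has every third-party vendor qualified
# the proposed server release? (Vendor names and versions are made up.)
qualified = {
    "automation_vendor": {"2.1.0", "2.1.1", "2.2.0"},
    "asset_mgmt_vendor": {"2.1.0", "2.1.1"},
    "archive_vendor":    {"2.1.1"},
}

def ok_to_upgrade(proposed: str) -> bool:
    """Return True only if every vendor lists the proposed release."""
    holdouts = [v for v, releases in qualified.items() if proposed not in releases]
    for vendor in holdouts:
        print(f"{vendor} has not yet qualified release {proposed}")
    return not holdouts

ok_to_upgrade("2.2.0")  # False - two vendors still need to qualify it
```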

However, the more complex problem comes when there are servers on the system that are not under control of the third-party vendor's software. In this scenario, fixing one problem on server three can and usually does open up problems on one or more of the other servers.

Going down this path is risky. Installing and then fixing any problems can be extremely time-consuming for the facility operator or engineer, and it can lead to lengthy downtime and lost revenue during that period.

As more servers are placed into service, as more chassis are hung on the Fibre Channel backbone and as more third-party implementation or application software is desired - end users should be keenly aware of the potential for hidden or undiscovered problems just waiting to be sprung.

The problem could be a subtle one that doesn't adversely affect operations, or it could be a significant one that slows total system operation to a crawl or, worse yet, renders it useless.

One method of protecting the facility from potential disaster is to implement a systematic approach to tracking software across all broadcast-related equipment and systems. Much as you may have done for Y2K-compliance testing, a well-documented and carefully constructed program should be planned and implemented from the beginning.

Be sure that all equipment is covered, including computers and broadcast hardware. Anything that has a software release should be recorded. Even firmware and microcode versions should be logged, especially if you are a company that likes to add third-party peripherals such as hard drives or board upgrades.

When the time comes for an upgrade, do an audit of all equipment related to the upgrade. Confirm that the operating systems and applications currently installed match the versions in your log.
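The log does not have to be elaborate; even a flat file compared against what each device actually reports will catch drift before an upgrade. The sketch below assumes a simple CSV log with device and version columns; the device names, file name and version numbers are hypothetical.

```python
# Audit sketch: compare logged software versions against what each device
# currently reports. Device names, versions and the CSV file are hypothetical.
import csv

def load_log(path: str) -> dict:
    """Read a CSV log with 'device' and 'version' columns into a dict."""
    with open(path, newline="") as fh:
        return {row["device"]: row["version"] for row in csv.DictReader(fh)}

def audit(logged: dict, installed: dict) -> list:
    """Return (device, logged, installed) tuples for every mismatch."""
    mismatches = []
    for device, version in logged.items():
        current = installed.get(device, "MISSING")
        if current != version:
            mismatches.append((device, version, current))
    return mismatches

# Currently reported versions would be gathered from each device or console.
installed = {"playout_server": "2.1.1", "mirror_server": "2.1.0",
             "automation": "5.4", "archive_manager": "3.0.2"}

for device, logged, current in audit(load_log("version_log.csv"), installed):
    print(f"{device}: log says {logged}, device reports {current}")
```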

Call the major vendors' tech support people before starting the upgrade to learn of any known bugs or cross-system incompatibilities. Understand what will be affected by any upgrade, be it software, replacement hardware or system expansion.

SYSTEM BACKUP

Back up as much of the current systems as possible. Be sure you have all the correct installation software, both for the present configuration and for the new configuration. If Web downloads are available, get them and have them ready. If you can stage the upgrade in sections without taking the entire system off-line, do so. Test each subsystem upgrade thoroughly before moving on to the next install.
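One way to keep that staging honest is to script it, so each subsystem is upgraded and verified before the next one is touched. The skeleton below is only a sketch; the subsystem names and the upgrade/test hooks are placeholders to be replaced with your own documented procedures.

```python
# Skeleton of a staged upgrade: stop at the first subsystem that fails its
# post-upgrade test. Subsystem names and both hooks are placeholders.
SUBSYSTEMS = ["ingest_station_1", "ingest_station_2", "browse_network",
              "archive_manager", "playout_server", "mirror_server"]

def upgrade(name: str) -> None:
    """Placeholder for the vendor's documented upgrade procedure."""
    print(f"upgrading {name} ...")

def passes_test(name: str) -> bool:
    """Placeholder for a thorough post-upgrade check of the subsystem."""
    print(f"testing {name} ...")
    return True  # replace with real verification

for name in SUBSYSTEMS:
    upgrade(name)
    if not passes_test(name):
        print(f"{name} failed verification - halting; restore from backup")
        break
else:
    print("all subsystems upgraded and verified")
```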

All this takes time and careful planning. The more organized the facility is in keeping documentation and software tracked, the better off you will be. And as more servers are added to the facility, the groundwork done in the beginning will certainly pay off in the long run.

This is the new form of preventative maintenance for the information age. It is truly the replacement for head cleaning and meter reading. Those who aren't comfortable with the task of software implementation probably shouldn't undertake it.

Karl Paulsen

Karl Paulsen is the CTO for Diversified, the global leader in media-related technologies, innovations and systems integration. Karl provides subject matter expertise and innovative visionary futures related to advanced networking and IP technologies, workflow design and assessment, media asset management, and storage technologies. Karl is a SMPTE Life Fellow, a SBE Life Member & Certified Professional Broadcast Engineer, and the author of hundreds of articles focused on industry advances in cloud, storage, workflow, and media technologies. For over 25 years he has continually featured topics in TV Tech magazine, penning the magazine's Storage and Media Technologies and its Cloudspotter's Journal columns.