From direct attached storage (DAS) and network attached storage (NAS) to storage area networks (SANs) and beyond, broadcasters have a variety of choices for IT plant infrastructure. Each approach has its advantages and disadvantages, but every solution must handle large files with low latency, no excuses. Broadcasters cannot tolerate the latency routinely encountered on the Internet. Whether it is a live shot from across town or footage for an upcoming package, a station's IT infrastructure has to deliver every frame, every time.
The move to an IT-based infrastructure for broadcast facilities coincides with the continuing trend toward file-based content acquisition. Tape is still a dominant medium for acquisition, but its days are numbered. File-based workflows that begin with acquisition and continue through distribution are becoming the norm.
When it comes to designing a plant, it's about creating a workflow first and then developing an IT system that will accommodate that workflow. Before you start building a digital pipeline, you need to know what you're going to do with it. What will the file format be? What compression will you use? Will collaborative storage for active projects be separate from play-to-air servers? How will you archive? Who will need to access the archive? Once you've established an operational plan for how you will use and manage the data, you can assess the necessary bandwidth.
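As a rough illustration, the bandwidth assessment described above reduces to simple arithmetic once the workflow is pinned down. The stream counts, bit rates and headroom factor below are hypothetical examples, not recommendations:

```python
# Rough aggregate-bandwidth estimate for a planned workflow.
# All figures here are hypothetical placeholders.

def required_bandwidth_mbps(streams, bitrate_mbps, headroom=1.3):
    """Peak bandwidth for simultaneous streams plus safety headroom."""
    return streams * bitrate_mbps * headroom

# Example: six editors pulling 100 Mb/s material at once,
# with 30% headroom for bursts and protocol overhead.
edit_bw = required_bandwidth_mbps(streams=6, bitrate_mbps=100)
print(f"Editing traffic needs roughly {edit_bw:.0f} Mb/s")
```

Run the same calculation for each workflow segment (ingest, editing, play-to-air, archive) and the sum sets the floor for the network design.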
Developing a broadcast IT infrastructure in a facility requires familiarity with specific layers of the Open Systems Interconnection (OSI) model. While a complete discussion of OSI is beyond the scope of this article, understanding how to elevate the priority of the broadcast packets within your network is necessary to create a working broadcast system. Three elements of the network are particularly important: virtual local area networks (VLANs), Internet Group Management Protocol (IGMP) and quality of service (QoS).
VLANs are needed to isolate critical broadcast traffic from traffic that is not time critical. VLANs segment the physical LAN into multiple virtual networks, making it possible to isolate business systems into their own groups. However, devices on separate VLANs cannot communicate directly with each other; cross-VLAN traffic must transit the core IP router or a bridging device.
Next is IGMP, which confines multicast packets to their own domains. IGMP group membership lets the network forward talkative multicast protocols only to the devices that have subscribed to them. Think of this as being able to separate high-traffic broadcast multicast packets from everyday point-to-point office unicast packets.
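On the receiving host, IGMP membership is triggered by a single socket option. A minimal sketch, assuming a POSIX-style socket API; the group address is a hypothetical example from the administratively scoped 239.0.0.0/8 range:

```python
import socket
import struct

def membership_request(group: str, interface: str = "0.0.0.0") -> bytes:
    """Pack an ip_mreq struct: multicast group + local interface."""
    return struct.pack("4s4s", socket.inet_aton(group),
                       socket.inet_aton(interface))

def join_group(sock: socket.socket, group: str) -> None:
    # This setsockopt makes the OS emit an IGMP membership report,
    # telling switches and routers to deliver the group's packets here.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                    membership_request(group))

# Hypothetical multicast group for a video feed (not joined here).
mreq = membership_request("239.1.1.10")
```

Switches running IGMP snooping listen for exactly these membership reports and stop flooding the multicast stream to every port.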
Network success or failure is totally dependent on the proper load management and programming of the core IP router. QoS can limit the bandwidth consumed by particular traffic and prioritize known IP addresses, such as the connection between the traffic server and automation client. It also allows prioritization by application type, such as FTP or RTSP. Generally, QoS is set in the core IP router, though some advanced managed switches have the ability to prioritize traffic to and from the core router.
Based on the total available bandwidth of the network, QoS is used to limit the amount of bandwidth available to noncritical systems so that critical systems always have instantaneous bandwidth available for their needs. Improperly configured QoS can starve video receiving devices, causing macroblocking, dropped frames or even total loss of video.
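On the host side, one concrete piece of this is DSCP marking, which the router's QoS policies then act on. A minimal sketch, assuming a POSIX-style socket API; DSCP 46 (Expedited Forwarding) is a common choice for real-time media, but the right values depend on the site's QoS design:

```python
import socket

def dscp_to_tos(dscp: int) -> int:
    """DSCP occupies the top six bits of the IP TOS byte."""
    return dscp << 2

def mark_expedited(sock: socket.socket, dscp: int = 46) -> None:
    # DSCP 46 (EF) is commonly used for real-time video/audio; the
    # core router can then queue these packets ahead of bulk traffic.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp_to_tos(dscp))

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
mark_expedited(sock)
```

Marking alone does nothing unless the router's queuing policy is configured to honor it, which is why the core router programming matters so much.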
To save money, some broadcasters have built IT infrastructures around commercial off-the-shelf (COTS) equipment. Generally, these systems don't perform as well as IT systems specifically engineered for the bandwidth-heavy requirements of broadcast applications. However, the initial difference in costs can be dramatic, attracting some broadcasters to accept performance from a COTS system that is “good enough.”
Are the upfront savings worth the potential problems and slower performance? Think of a plant's IT network as a sort of interstate highway system for data (specifically video and audio footage, as well as graphics, animations and additional materials). A COTS solution can get data from one location to another, but data will be stuck in the slow lane dotted with all kinds of potholes, some of which are large enough to delay or even stop traffic.
Generally speaking, IT managers are accustomed to handling typical business IT traffic, but they don't necessarily understand the need for prioritizing video assets. For example, as a cost-cutting measure, some IT departments will share one network between the business and video production departments of a broadcast facility. We've even seen IT departments try to configure the video department's needs as a subnet on a house intranet. The results aren't pretty. Video files require significant bandwidth and, depending on your workflow, very high bit rates. Sharing an IT system between business and video production users can slow down file transfers, which means moving content to the play-to-air servers can be delayed significantly. (See Figure 1.)
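The slowdown from a shared network is easy to quantify with a back-of-the-envelope calculation; the file size, link speed and utilization figures below are hypothetical:

```python
def transfer_seconds(file_gb: float, link_mbps: float,
                     utilization: float = 1.0) -> float:
    """Time to move a file, given the share of the link (0-1)
    actually available to the transfer."""
    megabits = file_gb * 8 * 1000     # GB -> megabits, decimal units
    return megabits / (link_mbps * utilization)

# A 20 GB package over a dedicated 1 Gb/s production network:
dedicated = transfer_seconds(20, 1000)                   # 160 s
# The same file when business traffic leaves only 25% of the link:
shared = transfer_seconds(20, 1000, utilization=0.25)    # 640 s
```

A transfer that should take under three minutes stretches past ten, and that is before contention causes retransmissions.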
For example, let's say a broadcast newsroom operation has made a commitment to a particular NLE solution, such as Apple Final Cut Pro, and has also invested in a compatible centralized storage server, such as Apple Xsan. Problems can develop when attempting to transfer files to a commercial-grade play-to-air server. COTS hardware is not necessarily tested and certified to handle the network loading required for HD video. Large files can stall in transit and create bottlenecks for other traffic.
Although traditional IT systems were not designed for the demands of video transport, some enterprise-class systems are getting closer to providing solutions that offer legitimate competition to traditional broadcast equipment manufacturers. Eventually, the limitations of video transport over traditional NAS and SAN systems will be eliminated, and the market will consolidate. At this time, though, systems designed specifically for broadcast applications are the better choice.
Some on-air servers can even be partitioned to serve as a collaborative server, which eliminates transport between servers in the facility. This type of solution is more expensive initially, but broadcasters will be able to provide finished packages on the air faster with no worries about bottlenecks in the system.
Reliability is another important reason to avoid COTS solutions. With daily newscasts and a constant flow of video for editing, broadcasters don't have the luxury of downtime. Video servers specifically designed for the broadcast industry are more reliable and certified for longer mean times between failures (MTBF).
Systems in the broadcast world also need to manage failures gracefully and continue to operate. The main goal, of course, is to keep the station on the air at all times. To adopt a file-based workflow, you need a backup plan. There are several ways to achieve redundancy within a broadcast facility, from a fully redundant backup system that completely mirrors each area of the system to less expensive options. The budget will more than likely determine a plant's disaster recovery system.
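The value of mirroring can be put in rough numbers using standard availability math. The MTBF and repair-time figures below are hypothetical:

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability of a single system."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def mirrored_availability(a: float) -> float:
    """Two independent mirrors fail only if both are down at once."""
    return 1 - (1 - a) ** 2

# Hypothetical server: 10,000-hour MTBF, 10-hour mean repair time.
a = availability(10_000, 10)          # roughly 0.999
print(f"single: {a:.4f}  mirrored: {mirrored_availability(a):.8f}")
```

Roughly speaking, a single server at "three nines" becomes close to "six nines" when fully mirrored, assuming the mirrors fail independently, which is why full redundancy is worth its price for the on-air chain even if less critical areas get cheaper protection.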
While business and production IT systems should be separate for the sake of efficiency, they should not be completely independent. A gateway will allow certain business systems to communicate with video systems. A traffic system on the business side, for example, will need to interact with automation software on the video production side. Separate but accessible systems are the ideal configuration for a broadcast facility. (See Figure 2.) That's why it's important to develop a video production IT system as a collaborative effort between the IT department and video engineering department.
Internal bandwidth does not need to be a significant issue for facilities; once a workflow has been determined, build a system that fits the workflow's needs. But external bandwidth can be problematic. Based on the size and speed of the connection that is feeding video and audio data, it can be an expensive proposition for broadcasters. There are a number of bandwidth business models that broadcasters can explore, from pay-per-use to pay-as-you-go, but the expense of external bandwidth can directly affect news coverage.
In addition, beyond the walls of the station is where QoS becomes a huge issue. News managers may be willing to accept inferior video quality to get the story to air, but there is a significant difference between airing low-res amateur video for a particular story and broadcasting a consistently poor video signal from across town. These days, it is still important to be able to get the story live, not hours later. Expanding bandwidth and looking at nontraditional content acquisition methods will allow news operations to go live from almost anywhere. What we have to do is balance the cost of bandwidth versus the utility we gain.
The ability to transmit high-quality audio and video data over IP has blossomed over the past few years. In fact, it has become a ubiquitous transport option in the broadcast industry, offering a less expensive alternative to microwave and satellite, the two traditional transport mechanisms of the last 25 years. While video over IP has not replaced more traditional content delivery and transport options, it is being used in conjunction with older technologies.
When it comes to external bandwidth, the two big questions are how much is available from local providers, and how much can you buy? For example, can the local telco provide a 100Mb/s data stream that will be sufficient for live video, or will you be limited to a 1.5Mb/s connection at your location? How much will these services cost? More content requires more bandwidth, which costs more, and broadcasters don't have an unlimited operations budget. Unlike the internal IT network, the availability of external bandwidth can directly affect the workflow.
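The sizing question above comes down to whether the circuit can carry the feed plus protocol overhead. A quick sketch, with a hypothetical contribution bit rate and overhead factor:

```python
def link_supports_feed(link_mbps: float, video_mbps: float,
                       overhead: float = 0.2) -> bool:
    """True if the link can carry the feed plus protocol overhead."""
    return link_mbps >= video_mbps * (1 + overhead)

# A hypothetical 50 Mb/s HD contribution feed:
print(link_supports_feed(100, 50))   # True:  100 Mb/s telco circuit
print(link_supports_feed(1.5, 50))   # False: 1.5 Mb/s T1-class line
```

When the available circuit fails this test, the workflow has to change: lower the contribution bit rate, transfer as files rather than live, or buy more bandwidth.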
Mark Warner is vice president of sales at Advanced Broadcast Solutions