When moving professional video over networks, consider these defining characteristics: large amounts of data; sensitivity to errors and loss; sensitivity to delay; network efficiency; multicast distribution; and the maintenance personnel mind-set.
Large amounts of data
One of the defining issues concerning the use of networking technology for professional video is that it involves moving huge amounts of data. How much data? (See Table 1.)
Of course, these are uncompressed rates; modern compression techniques can dramatically reduce the required bandwidth. But even at 100:1 compression, a two-hour movie at 525/29 represents a file size of about 2.4GB. If you transfer this file over a single 100Base-T network that has a policy to use only 70 percent of network capacity, it will take more than four minutes to transfer one file. And this one transfer takes up virtually all of the available capacity on the network for that entire time.
There are ways to reduce the impact of video on the network, but they all amount to the same thing: an increase in network capacity. Gigabit Ethernet (GigE) is becoming ubiquitous, so the transfer that took more than four minutes on your old 100Base-T network will take less than 30 seconds on a GigE network; and 10GigE is on the horizon. It is also possible to bond multiple Ethernet connections into a single virtual connection. Bonding several GigE links is not practical for workstation connections, but it is a reasonable option between backbone switches or between a switch and a large server.
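Checking the arithmetic is straightforward. A minimal sketch (the 2.4GB file size and the 70 percent utilization policy come from the text; the link rates are nominal):

```python
def transfer_seconds(file_bytes, link_bps, utilization=0.7):
    """Time to move a file over a link capped at a fraction of its capacity."""
    return file_bytes * 8 / (link_bps * utilization)

movie = 2.4e9  # ~2.4GB two-hour movie at 100:1 compression

print(transfer_seconds(movie, 100e6))  # 100Base-T at 70%: about 274 seconds
print(transfer_seconds(movie, 1e9))    # GigE at 70%: about 27 seconds
```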
Sensitivity to errors and loss
Professional video users are sensitive to errors and loss during file transfer. If you use conventional FTP to move video files and the transfer fails somewhere in the middle, you will have to start over. Some FTP clients can resume a file transfer at the point where the transfer failed. But if you are moving files larger than 2GB, or if you move large files on a regular basis, investigate special software packages and protocols that will accelerate these transfers far beyond what conventional FTP can deliver.
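For illustration, resuming with the standard-library FTP client comes down to telling the server where the local partial copy left off. A minimal sketch (the file name and FTP session are hypothetical):

```python
import os

def resume_offset(local_path):
    """Bytes already on disk: the point to resume a failed download from."""
    return os.path.getsize(local_path) if os.path.exists(local_path) else 0

# Hypothetical usage with Python's standard-library FTP client: ftplib's
# retrbinary() accepts a REST offset, so the partial file can be appended to:
#
#   offset = resume_offset("movie.mxf")
#   with open("movie.mxf", "ab") as f:
#       ftp.retrbinary("RETR movie.mxf", f.write, rest=offset)
```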
Broadcasters are sensitive to problems during the file transfer, but go ballistic when errors are incurred during live transmission. There are several reasons for this.
First, assuming that the transmission is going out on the air, errors are visible to end viewers, and there is no opportunity to fix the problem in post. Second, depending on where the error hits and on the technology in the encoder/decoder chain, loss of a single packet can produce a series of errors that could last for more than one second. Things do not look any better when considering that IP networks were designed to lose packets when the going gets rough.
You can do things to help with live transmission of professional video over IP. Generally, these fall into two categories. First, try to prevent errors before they occur. Second, protect against errors after the fact.
To do the first, ensure that the internal plant is configured properly so networks that move video prioritize that traffic above other services. Or build separate networks dedicated to only moving video traffic. When working with wide area networks (WANs), make sure quality of service (QoS) agreements are in place so video arrives intact.
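Prioritization has to be configured in the switches, but the sending application can at least mark its traffic. A hedged sketch (Python, assuming a Linux-style socket API; DSCP 46, Expedited Forwarding, is the class commonly used for real-time media and is an assumption here, not a detail from the text):

```python
import socket

# Mark outgoing video packets Expedited Forwarding (DSCP 46). Whether any
# switch or router actually honors the marking depends on its QoS config.
EF_TOS = 46 << 2  # DSCP occupies the top six bits of the TOS byte (0xB8)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)
print(hex(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)))
```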
To protect against errors after the fact, add forward error correction (FEC), which allows receivers (up to a limit) to reconstruct missing information using extra bits sent as part of the transmission. Of course, nothing is free, and the FEC bits consume part of the total bandwidth available for video transmission. Furthermore, typically, the more FEC introduced in a circuit, the longer the latency — the time between when video enters one end of the link and when it exits the transmission system at the far end. In live interview situations, large amounts of latency are unacceptable.
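To make that trade-off concrete, here is a minimal sketch of one-dimensional XOR parity, the building block of the row/column FEC widely used for video over IP (the SMPTE 2022-1 style; named as background, not taken from the text). One parity packet per group of D data packets recovers any single loss, costing 1/D extra bandwidth and a D-packet buffering delay:

```python
from functools import reduce

def xor_parity(packets):
    """Parity packet: byte-wise XOR of a group of equal-length packets."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

def recover(received, parity):
    """Rebuild the single missing packet (the None entry) from the parity."""
    present = [p for p in received if p is not None]
    missing = xor_parity(present + [parity])
    return [p if p is not None else missing for p in received]

group = [b"pkt1", b"pkt2", b"pkt3", b"pkt4"]   # four data packets
parity = xor_parity(group)                     # costs 25% extra bandwidth
damaged = [b"pkt1", None, b"pkt3", b"pkt4"]    # one packet lost in transit
assert recover(damaged, parity) == group       # the loss is invisible
```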
Sensitivity to delay
In some cases, such as a live interview, delay can be a bad thing. Fortunately, most on-air talent and home viewers are used to dealing with satellite delay. As long as broadcasters stay within the limits of what someone would normally encounter in this environment, it's okay.
But long-distance IP networks have another interesting characteristic that could prove extremely disturbing. Unless a large IP network has been engineered to control this problem, the delay can change from one moment to the next depending on which route packets take from a sender to a receiver. If the route is constantly changing over the network, a problem known as route flap, the delay experienced over the network will constantly change. Proper engineering of the network will help avoid this situation, but note that human beings hate nonconstant delay when trying to communicate.
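That delay variation can be measured. A hedged sketch of the interarrival-jitter estimator defined for RTP in RFC 3550, a smoothed average of the change in packet transit time (the transit values below are invented for illustration):

```python
def update_jitter(jitter, transit_prev, transit_now):
    """RFC 3550 interarrival jitter: smoothed |change in transit time|."""
    d = abs(transit_now - transit_prev)
    return jitter + (d - jitter) / 16.0

# Transit times (ms) for successive packets over a flapping route:
transits = [40, 41, 40, 95, 96, 95, 40]  # two sudden 55ms route changes
j = 0.0
for prev, now in zip(transits, transits[1:]):
    j = update_jitter(j, prev, now)
print(round(j, 2))  # the route changes push the jitter estimate up sharply
```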
As you may know, a standard Ethernet frame occupies a little more than 1500 bytes on the wire (1538 bytes, counting the preamble and interframe gap). The Ethernet payload is 1500 bytes, with the rest taken up by Ethernet framing overhead. Typically, video over Ethernet uses UDP over IP, and the UDP/IP headers consume 28 bytes of the 1500-byte Ethernet payload. So typical video transport, ignoring collisions, other network traffic and a host of other factors, is around 96 percent efficient. (See Figure 1.)
While this may appear to be a small amount of overhead, when you are sending hundreds of thousands or millions of packets, decreasing the overhead seems like a good idea. And of course, engineers cannot resist making things better. Many years ago, the idea of Ethernet jumbo frames was introduced. The idea was simple — to allow Ethernet payload sizes to be increased for large payload types, which would make networks more efficient.
Bill Fink, the author of nuttcp — a networking test tool — has calculated that the throughput of GigE with jumbo frames set to 9000 bytes instead of 1500 bytes is about 99 percent efficient. On the surface, using jumbo frames seems like a great idea, especially for video applications moving a huge amount of data.
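Both efficiency figures are easy to reproduce. A sketch (assuming 38 bytes of per-frame overhead on the wire, counting Ethernet header, FCS, preamble and interframe gap, plus the 28-byte UDP/IP headers from the text):

```python
ETH_OVERHEAD = 38    # header + FCS (18) plus preamble/SFD and interframe gap (20)
UDP_IP_HEADERS = 28  # IPv4 (20) + UDP (8)

def efficiency(payload):
    """Fraction of wire time carrying video for a given Ethernet payload size."""
    return (payload - UDP_IP_HEADERS) / (payload + ETH_OVERHEAD)

print(round(efficiency(1500), 3))  # standard frames: 0.957, roughly 96%
print(round(efficiency(9000), 3))  # 9000-byte jumbo frames: 0.993
```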
There is no denying the math. Jumbo frame networks are more efficient. The problem is that they may not be supported by the switches and routers in your network. And while most equipment supports jumbo frames, it only takes one switch somewhere in the network to disrupt the jumbo frame transmission. If you decide to use jumbo frames, test the network before relying on it.
Other issues to consider
There are several other issues related to moving video over networks that bear special attention. One issue is that when streaming video, everyone watching the video requires a separate connection back to the originating server. To tackle this, people have built content delivery networks (CDNs), which deploy many servers throughout the world that can replicate streams within the network itself. This reduces the overall load on the originating server. CDNs coupled with multicast technology allow the delivery of a large number of streams in a role similar to over-the-air broadcasting.
Another issue that relates to the transmission of video over networks is the mind-set of the people who maintain the networks. For people who deal with packetized networks, minor service interruptions are the norm as they go about their maintenance tasks. But video users are extremely sensitive to outages, so it takes a partnership between the maintenance people and those using the networks to keep interruptions to a minimum.
Moving video over IP networks is done successfully every day. But having an understanding of the issues and potential solutions concerning networked video will help you do a better job as a broadcast engineer.
Brad Gilmer is president of Gilmer & Associates, executive director of the Video Services Forum and executive director of the Advanced Media Workflow Association.
Table 1. Data rates and file sizes for typical TV standards

TV standard   MB per frame   MB/s         Size of 30-second file
525/29        1.126MB        33.75MB/s    1.0GB
720p60        3.093MB        185.58MB/s   5.6GB
1080p60       6.187MB        371.22MB/s   11.1GB