Last year I discussed some of the issues surrounding bandwidth management inside television facilities. This month we will look at the issue of bandwidth management between television facilities.
Let's say one day the boss walks in and says to you, "We need to be able to move video the same way we make a phone call. I want to be able to call our affiliate in New York, send a feed and, when I am done, hang up." No problem, you think. Then he says, "I also want to be able to call Chicago... and Atlanta... and Seattle..." Your head begins to spin a little bit - this won't be a simple router feed after all. Just for good measure, he adds, "in real time, and inexpensively" as he heads out the door.
As you begin to evaluate your options, you may consider telecom service providers, dedicated satellite, dark fiber (if available), an existing private network (if you are extremely lucky) and so on. Each of these has its advantages and disadvantages.
One question you may need to answer is whether you need point-to-point connectivity, a broadcast path to multiple recipients at the same time, or true dial-up capability. Point-to-point and broadcast technologies have been around since the AT&T Long Lines days. Frankly, the most intriguing case is the last one, where you would like to establish connectivity on demand among a number of locations.
The concept is simple - and appealing. Imagine that a local television station receives a call from a production house in town. The news promo graphic elements are finished. A tape operator in the production house dials a VTR at the television station. She initiates recording at the station and then plays back the graphic elements from a digital disk recorder. In a few minutes the transfer is complete. She hangs up the connection and moves on to her next task. You can imagine how this simple capability could be used at station groups and large television networks as well.
You might be thinking that this is all a dream, but in fact, dial-up connectivity has already been demonstrated at 270Mb/s across 2500 km of fiber using various manufacturers' switches and network components, and with machine control. Tests were conducted using the National Transparent Optical Network (NTON) established by the NTON Consortium. (You can find out more about NTON at www.ntonc.org.) These tests proved that the example above could work.
There were three key components to the NTON test - SONET, ATM and MPEG. Synchronous optical network (SONET) is the core technology for most of the telecommunications infrastructure in the United States. Asynchronous transfer mode (ATM) is the predominant method for moving switched data across high-speed interfacility networks. MPEG stands for Moving Picture Experts Group, whose standards are among the predominant compression formats for transmitting video. Figure 1 illustrates how each component - SONET, ATM and MPEG - contributed to the total protocol stack. Greatly simplified: SONET provided raw bandwidth, ATM provided transport and switching, and MPEG provided compression.
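To make the layering concrete, here is a small sketch of the arithmetic behind it, using the widely used mapping of MPEG-2 transport stream packets into ATM cells (two 188-byte TS packets per AAL5 frame). The numbers are protocol constants, not measurements from the NTON test.

```python
# How MPEG rides on ATM: TS packets are grouped into an AAL5 frame,
# which is then sliced into fixed-size ATM cells for the SONET pipe.

TS_PACKET = 188      # bytes in one MPEG-2 transport stream packet
AAL5_TRAILER = 8     # bytes of AAL5 length/CRC trailer
CELL_PAYLOAD = 48    # payload bytes per 53-byte ATM cell

def cells_for_ts_packets(n_ts: int) -> int:
    """ATM cells needed to carry n_ts TS packets in one AAL5 frame."""
    pdu = n_ts * TS_PACKET + AAL5_TRAILER
    return -(-pdu // CELL_PAYLOAD)   # ceiling division

# Two TS packets (376 B) plus the trailer (8 B) total 384 B - exactly
# eight cells, which is why the two-packets-per-frame mapping is so tidy.
print(cells_for_ts_packets(2))  # → 8
```

A single TS packet per frame would waste cell payload (188 + 8 = 196 bytes needs five cells, leaving 44 bytes of padding), which is the practical reason for pairing packets.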
In deciding on a technology for interfacility links, one of the choices you must make early on is whether you will use ATM. ATM has many advantages for interfacility connections. First, it is a known quantity to most service providers. Second, equipment is now readily available to provide video connectivity across ATM. Third, some service providers have adjusted their ATM pricing to be more competitive. You may remember that this had been a big issue in previous articles.
There could be a bonus here - if your interfacility network is designed correctly, it may be possible to transport MPEG over ATM one minute and IP over ATM the next. In fact, most of the world's Internet (IP) traffic moves over ATM/SONET/SDH transport. You may have heard the buzzword "IPOA" (IP over ATM). Given the right architecture, it is possible to get ATM transport, then configure that bandwidth on the fly to support either real-time streaming or FTP of content. It is possible to design a system in which voice, data, Internet access and content transport (real time and FTP) all move over the same backbone. In such an architecture, content flows from Gig-E NICs in the machines that create it, through OC3/OC12 access facilities, across the ATM network, out OC3/OC12 egress facilities and into Gig-E NICs in the receiving machines.
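A quick back-of-the-envelope check of that path is worthwhile, because the slowest hop sets the ceiling for any stream. The sketch below uses nominal SONET and Ethernet line rates and ignores cell and framing overhead.

```python
# Path from the text: Gig-E NIC -> OC3/OC12 access -> ATM backbone ->
# OC3/OC12 egress -> Gig-E NIC. Which hop is the bottleneck?

LINE_RATE_MBPS = {
    "gig-e": 1000.0,   # Gigabit Ethernet
    "oc-3": 155.52,    # SONET OC-3
    "oc-12": 622.08,   # SONET OC-12
}

def bottleneck(path):
    """Return (link, rate) of the slowest hop along the path."""
    link = min(path, key=LINE_RATE_MBPS.get)
    return link, LINE_RATE_MBPS[link]

path = ["gig-e", "oc-3", "oc-12", "gig-e"]
link, rate = bottleneck(path)
print(link, rate)    # → oc-3 155.52
print(rate >= 50)    # a 50Mb/s CBR feed fits → True
print(rate >= 270)   # the 270Mb/s NTON demo would need OC-12 access → False
```

In other words, OC-3 access comfortably handles a 50Mb/s compressed feed, but a 270Mb/s uncompressed-class signal like the NTON demonstration needs OC-12 end to end.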
The perfect world
Extending the NTON example, here is how a connection between two "MPEG compatible" devices might work. (See Figure 2.) An editor in New York needs to get a dub of a tape that is currently in Los Angeles. He talks to a tape room operator in Los Angeles, asking her to dial up an ATM connection between Los Angeles and New York.
The Los Angeles tape operator enters the ATM "phone number" into the ATM network adapter. The adapter then begins a dialog with the ATM switches between the two locations, attempting to find a path with enough bandwidth (50Mb/s) and the right kind of connection (CBR). When it gets a commitment from one switch to provide the connection - in this example, let's say Chicago - it moves on to the next switch. Once the switches along the way commit to the bandwidth (and other QoS parameters), the circuit is completed and the transfer can begin. When the transfer is complete, the tape operator in Los Angeles hangs up the connection, and the bandwidth in the switches is released.
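The hop-by-hop commitment described above can be sketched as a toy admission-control loop. The switch names come from the example; the capacities and the rollback behavior are illustrative assumptions, not details of any real ATM signaling implementation.

```python
# Toy model of SVC call setup: every switch on the route must commit
# the requested bandwidth; if any hop refuses, reservations already
# made at earlier hops are released and the call fails.

class Switch:
    def __init__(self, name, capacity_mbps):
        self.name = name
        self.free = capacity_mbps   # uncommitted bandwidth, Mb/s

    def admit(self, mbps):
        if self.free >= mbps:
            self.free -= mbps
            return True
        return False

    def release(self, mbps):
        self.free += mbps

def setup_svc(route, mbps):
    """Reserve bandwidth hop by hop; roll back on the first refusal."""
    committed = []
    for sw in route:
        if sw.admit(mbps):
            committed.append(sw)
        else:
            for done in committed:   # tear down the partial reservation
                done.release(mbps)
            return False
    return True

# Hypothetical capacities: OC-12-class ports in LA and NY, OC-3 in Chicago.
la, chi, ny = Switch("LA", 622), Switch("Chicago", 155), Switch("NY", 622)
print(setup_svc([la, chi, ny], 50))   # → True: every hop commits 50Mb/s
print(setup_svc([la, chi, ny], 120))  # → False: Chicago has only 105 left
print(chi.free)                       # → 105: the failed call left no residue
```

The rollback step matters: without it, a refused call would strand bandwidth in upstream switches that no circuit is using.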
In ATM language, the connection between Los Angeles and New York is a switched virtual circuit (SVC). Setting up an SVC is like laying a 3/4-inch pipe from one location to another: the pipe carries a fixed capacity between the two points. To extend the analogy a little further, you can think of the SVC as a pipe inside a larger pipe, perhaps a SONET OC-12. (See Figure 3.) There may be other traffic flowing through the OC-12, but the data flowing through the SVC "pipe within a pipe" is completely unaware of it.
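Putting numbers on the pipe-within-a-pipe picture: a rough estimate of how many 50Mb/s CBR circuits fit inside one OC-12, accounting only for the 5-byte header on every 53-byte ATM cell (SONET framing overhead is ignored here).

```python
# How many "pipes" fit in the big pipe? Each ATM cell carries 48 payload
# bytes out of 53 on the wire, so roughly 9.4 percent of the SONET
# capacity is cell-header tax before any video bits flow.

OC12_MBPS = 622.08
CELL_TAX = 48 / 53   # usable payload fraction per ATM cell

def cbr_circuits(pipe_mbps, circuit_mbps):
    """Whole CBR circuits of circuit_mbps that fit after cell overhead."""
    return int(pipe_mbps * CELL_TAX // circuit_mbps)

print(cbr_circuits(OC12_MBPS, 50))   # → 11 fifty-megabit circuits per OC-12
```

So the naive figure of twelve 50Mb/s circuits per OC-12 shrinks to eleven once the cell tax is counted - the kind of margin a network planner has to keep in mind.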
Also, like a pipe, once the path is established, all packets sent from Los Angeles to New York will take the same path. This means that the latency between Los Angeles and New York will not change. This is very different from the way the Internet works. One IP packet may go from Los Angeles to New York via Chicago; the next packet may get routed via Alaska. With conventional IP networks, there is no way to know what path any given packet will take. ATM networks do not suffer from this problem. Once an ATM circuit is established, its configuration remains stable until the circuit is torn down. Of course, if you are using FTP to move completed video clips from one place to another, then traditional IP works just fine. Streaming video is another matter.
There have been some developments in IP technology that are changing this. Using multiprotocol label switching (MPLS) and IPv6, a customer can get functionality that is roughly equivalent to constant bit rate (CBR) in ATM transport. To do this you will need a network interface device that can accept ASI input from an MPEG or other encoder and map the ASI to IP. If the IP network is then configured properly with MPLS or IPv6, transport over IP at speeds in the range of 50Mb/s to 80Mb/s is possible.
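Mapping ASI to IP comes down to packetizing the transport stream. One common scheme carries seven 188-byte TS packets per UDP datagram (1316 bytes of payload, which fits within a standard 1500-byte Ethernet MTU). The sketch below works out the datagram rate such a network interface device would generate; the header sizes are standard IPv4/UDP figures, and the seven-packet grouping is a common convention rather than anything specific to the devices the text describes.

```python
# Packetization arithmetic for MPEG transport streams over IP.

TS_PACKET = 188            # bytes per MPEG-2 TS packet
TS_PER_DATAGRAM = 7        # common grouping: 7 * 188 = 1316 B payload
UDP_IP_HEADERS = 8 + 20    # UDP + IPv4 headers, bytes

def datagrams_per_second(stream_mbps):
    """Datagram rate needed to carry a TS of the given bit rate."""
    payload_bits = TS_PACKET * TS_PER_DATAGRAM * 8
    return stream_mbps * 1_000_000 / payload_bits

def header_overhead_mbps(stream_mbps):
    """Extra wire bits from UDP/IP headers alone, in Mb/s."""
    return datagrams_per_second(stream_mbps) * UDP_IP_HEADERS * 8 / 1_000_000

print(round(datagrams_per_second(50)))      # → 4749 datagrams/s for 50Mb/s
print(round(header_overhead_mbps(50), 2))   # → 1.06 Mb/s of header tax
```

At these rates the header overhead is only about two percent; the harder part, as the paragraph notes, is getting CBR-like delivery guarantees out of the IP network itself.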
The reality
Frankly, while the technical feasibility of this example has been demonstrated, there is some way to go. One of the challenges we face is the provisioning of circuits by the service providers. A service provider typically builds a circuit based upon an order. Once the user is finished with the circuit, it is dismantled. Most service providers are not organized to allow users to configure their own circuits. What this means from a practical standpoint is that, in most cases, it is not currently possible for an end user to implement the example described above. This is unfortunate - broadcasters can intuitively feel that the power to make high-bandwidth connections on demand would be very compelling, if the price were right.
That said, there are some service providers who do allow the customer to have bandwidth on an ATM network "on demand." The customer is provided with a software application for making the call, a network interface device, a local access channel (E3/DS3/STM1/OC3), an ATM switch port and ATM bandwidth on demand. With this configuration, a customer can dial up other similarly equipped facilities and exchange files, IP traffic and even streaming video. Any of the ATM adaptation layers (AAL1-AAL5) and service classes - constant bit rate, variable bit rate and available bit rate - can be selected, along with the required bandwidth.