The Omneon SPECTRUM system in place at KLCS allows the Los Angeles Unified School District to expand I/O ports, bandwidth, capacity, and redundancy, either simultaneously or independently, without interrupting system operation.
A recent survey indicated that more than 50 percent of U.S. TV facilities were still primarily analog-centric, and most others still use analog in some key portions of their operations. With this inaugural issue of the Transition to Digital newsletter, we begin a series of tutorials on moving analog-based television facilities into the digital era. This twice-monthly newsletter can serve as your guide to improving your facility’s workflow and efficiency through digital technology. If yours is one of the few totally digital facilities, you will still find the newsletter valuable: we’ll cover emerging technologies and products that can improve your workflow and operations. Our first series of tutorials will focus on basic network terminology and sub-systems for both internal and external networks.
Local area networks: LANs
Networking can generally be broken down into two forms: internal (Local Area Network, LAN) and external (Wide Area Network, WAN). Inside a facility, Fibre Channel (FC) and Ethernet (typically Gigabit Ethernet) are the primary networking technologies. Many video applications use Fibre Channel to interconnect storage devices so that remote storage looks as if it were physically connected to the local computer via a SCSI interface. Fibre Channel provides high data rates (up to 1Gb/s) and a highly reliable network topology. However, it is often not the best choice for long-distance networking, even within a facility. We will cover different forms of storage technology (NAS, SAN) in future newsletters.
Gigabit Ethernet (Gig-E) has a practical throughput of about 700Mb/s. 10Gig-E and 100Gig-E provide even higher bandwidth, but are currently less common. Ethernet switches are now commodity products, allowing network designers to provide redundant links and to expand network capacity easily. Furthermore, 100Base-T has become a commodity with an attractive price point due to the huge volumes sold worldwide. You can expect Gig-E prices to continue falling rapidly, so network bandwidth will become even less expensive.
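To put that 700Mb/s figure in perspective, here is a back-of-the-envelope capacity calculation. The headroom factor and the example stream bit rates are illustrative assumptions, not figures from this article:

```python
# Rough capacity planning for a Gig-E link.
# USABLE_GIGE_MBPS reflects the ~700Mb/s practical throughput cited above;
# the headroom factor and stream bit rates are illustrative assumptions.

USABLE_GIGE_MBPS = 700

def streams_per_link(stream_mbps, usable_mbps=USABLE_GIGE_MBPS, headroom=0.8):
    """How many constant-bit-rate streams fit, leaving 20% headroom."""
    return int((usable_mbps * headroom) // stream_mbps)

# Hypothetical ballpark bit rates:
print(streams_per_link(8))     # ~8 Mb/s MPEG-2 SD feed -> 70 streams
print(streams_per_link(19.4))  # 19.4 Mb/s ATSC transport stream -> 28 streams
```

Even with conservative headroom, a single Gig-E link comfortably carries dozens of compressed video streams, which is why Ethernet has become so attractive for facility networking.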
Wide area networks: WANs
When moving material between two facilities, the predominant technologies are Asynchronous Transfer Mode (ATM) and Internet Protocol (IP), both running over synchronous optical networks (SONET). Through the use of permanent virtual circuits, providers can create virtual point-to-point circuits that are capable of delivering 100 percent of their rated bandwidth 100 percent of the time.
Think of ATM as big data pipes. These circuits are often much larger than required for video streams, so bandwidth in the ATM environment is generally not a problem. You might think that ordering excess capacity from a local service provider is a waste of money. After all, why pay for such a big pipe when occasional video traffic is only going to take a portion of this bandwidth?
There are several reasons you might want more guaranteed bandwidth. First, realize that even if you have a big pipe, the service provider is only going to transmit the active payload. For this reason, you may only have to pay for the portion of the pipe you use. Second, the extra bandwidth is almost immediately available if you need more capacity. Finally, the circuit can be used for other services besides video, such as voice and data.
It is also possible to order wide-bandwidth IP circuits from a service provider. While these circuits are capable of handling video, there can be severe limits on the ability of IP-based networks to carry real-time video. Also, for some facilities, the "last mile" problem still exists. That is, the service provider's central facility in town has plenty of bandwidth, but getting a circuit from the central facility to where the broadcaster “lives” can still be an issue.
Trouble’s leaving here okay
I recall my days working with the local phone company. As I connected my stations into the sports network, it wasn’t unusual to hear the phone man at the central office, in response to my complaint, say, “Trouble’s leaving here okay.” This meant that yes, he knew there was a problem, but it wasn’t his problem.
Today’s TV engineers often operate on the assumption of "implied quality of service." So even though Quality of Service (QoS) is not a term used to specify analog television circuits, TV broadcast engineers expect a certain level of performance, delay, jitter and noise for their digital links. They order the transmission service that is most appropriate to the current need and expect the appropriate quality of service.
As these new digital circuits are implemented, broadcast engineers typically bring with them this innate sense of QoS. When television engineers or computer network designers try to map this implied analog QoS onto digital networks, especially IP networks, there is a strong possibility of misunderstanding between the parties. This can result in unproductive finger-pointing.
One example where this can occur is the issue of signal-to-noise ratio. In an analog circuit, S/N is a paramount parameter. Yet on a digital link, S/N typically isn’t a specified parameter at all. This means TV engineers need to change their terminology and understanding when discussing digital links.
Understanding IP networks
Now, let’s consider the challenge of carrying real-time video over IP. Because IP is a connectionless network-layer protocol built around an any-to-any environment, it does not require provisioning of individual circuits to connect each network site. In other words, there is no conditioning (equalization) of the lines.
IP networks - in particular IP network backbones - use two classes of routing protocols. These distribute routes to all routers connected to the network and make them reachable from other connected networks. Interior Gateway Protocols (IGPs), such as Open Shortest Path First (OSPF), distribute routes within a particular backbone IP network; the Border Gateway Protocol (BGP) distributes routes externally, to peer backbone IP networks. Together, they control how traffic flows from end to end. We won’t cover these design parameters, as they are not key to understanding how to use IP links.
If a link fails, traffic may be forced to switch quickly from one path to another, causing packet loss or a change in packet delay. For data, this may not be important. For synchronous video and audio streams, it is critical.
Because IP networks handle traffic on a per-packet basis, not a fixed-cell-size basis, latency through any one port can vary dramatically depending on the size of the packet, the speed of the link and other traffic transiting the link. In other words, the transmission delay over a public IP network can change dramatically from one packet to the next. This is a key reason your media player buffers the incoming content prior to playing it out, even though this means a delay in starting the program. The player knows that once playback starts, the decoded output must not stop, and the buffer provides insurance against short dropouts, delays or other stoppages affecting the output program.
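The buffering idea above can be illustrated with a small simulation. All the numbers here (packet interval, jitter range, buffer depth) are hypothetical values chosen for illustration, not measurements from any real network or player:

```python
import random

# Sketch of why players pre-buffer: packets arrive with variable network
# delay, but playout must be perfectly paced. All figures are illustrative.

random.seed(1)

PACKET_INTERVAL_MS = 20   # sender paces one packet every 20 ms
MAX_JITTER_MS = 150       # assumed worst-case extra network delay
BUFFER_MS = 200           # content pre-buffered before playout starts

def count_underruns(buffer_ms, n_packets=500):
    """Count packets that arrive after their playout deadline."""
    underruns = 0
    for seq in range(n_packets):
        # Each packet suffers a random network delay between 5 and 150 ms.
        arrival = seq * PACKET_INTERVAL_MS + random.uniform(5, MAX_JITTER_MS)
        # Playout of packet seq is due buffer_ms after its nominal send time.
        deadline = buffer_ms + seq * PACKET_INTERVAL_MS
        if arrival > deadline:
            underruns += 1
    return underruns

print(count_underruns(0))          # no buffer: every packet misses -> 500
print(count_underruns(BUFFER_MS))  # 200 ms buffer absorbs all jitter -> 0
```

With no buffer, every packet arrives after its deadline; a buffer deeper than the worst-case jitter absorbs the delay variation entirely. This is the trade-off every player makes: a longer startup delay buys immunity to delay swings once playback begins.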
Next time we will continue our examination of networking by looking at how storage ties into networks.