Network core concepts structure overall system performance

Understanding how networks function at a fundamental level is vital to understanding how overall systems perform. In 2013, we are going to take time in this column to focus on some basic networking concepts that I hope will provide you with the critical technical foundation you need to be a successful engineer in the professional broadcast media industry.

Key assumptions

As we start looking at networking, it is important to understand the history of packetized networks and some of the assumptions behind their development. Today, just about everything in computer networking flows from a few basic assumptions. As we go through this discussion, you might object that certain problems, such as “best effort” delivery, have since been addressed. That is true; networking has evolved. But in almost every case, the fixes are adaptations made to work around some of the initial assumptions. So, for this article, we will focus on the basics, keeping in mind that later techniques and technologies can modify some of these initial network behaviors.

Nuclear war

It may seem strange to start our discussion on networking with a discussion on nuclear war, but if you really want to understand how networks are designed, this is a good place to start.

The period after World War II was a difficult time. The U.S. fought the Korean War, a proxy war between the U.S. and China, and at the same time entered the Cold War with the Soviet Union. A nuclear arms race ensued, and some of us practiced “duck and cover” drills at school. Many in this country took the threat of a nuclear attack extremely seriously. It was in this environment that modern computer networking was born.

The country needed a military command and control technology that could survive a “smoking hole” scenario, in which one or even several cities were reduced to smoking holes in the ground. The technology could not rely on centralized switching centers or a central control system. Initially, designers considered traditional systems with backup switching and control centers in several locations, but the threat of multiple successful “smoking holes” during an attack rendered these traditional designs unacceptable. It fell to ARPA (the Advanced Research Projects Agency, part of the U.S. Department of Defense and later renamed DARPA) to find a solution to this problem. This is probably the most significant key assumption; most of the others fall directly out of it.

Packets

Now, everyone takes it for granted that if an application wants to send information over a network, that information is broken down into small parts, loaded into the payload section of a packet and launched over a network. But remember, at the time this technology was being developed, paper punch tape and teletypes were the order of the day. These systems operated over wire-line or radio networks and required a continuous carrier in order to work. Breaking the information to be sent into smaller packets was a fundamental concept, and it is a critical assumption behind modern network design.
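
To make the idea concrete, here is a minimal sketch in Python of what breaking a message into packets might look like. The 1,400-byte payload size and the four-byte sequence-number header are illustrative assumptions, not any particular standard:

import struct

PAYLOAD_SIZE = 1400  # bytes per packet payload (assumed for illustration)

def packetize(data):
    """Split a message into packets, each with a small header carrying
    a sequence number so the receiver can reassemble or detect loss."""
    packets = []
    for seq, offset in enumerate(range(0, len(data), PAYLOAD_SIZE)):
        header = struct.pack("!I", seq)  # 4-byte, big-endian sequence number
        packets.append(header + data[offset:offset + PAYLOAD_SIZE])
    return packets

print(len(packetize(b"x" * 5000)))  # 4 packets: three full, one partial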

Best effort

Networks are “best effort,” meaning packets may get to their destination, or they may not. No assumption is made that the network absolutely will deliver any particular packet. This may seem like a crazy assumption. After all, the whole point of the system was to get absolutely critical information transferred from one place to another, possibly during a nuclear attack. But freeing designers from the constraint of guaranteeing that the network itself would deliver every message actually allowed a number of creative solutions to the problem, many of which are employed with professional video today.

If a packet is lost, there are many options: The receiver could request retransmission, the receiver could mask the error without actually having the original data, or the receiver could reconstruct the missing information from additional error correction data sent separately. All of these are ways of dealing with the fact that a packet did not arrive. The key, remember, is that they work without having to somehow ensure that the network remains viable 100 percent of the time.
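
To illustrate the third option, here is a minimal Python sketch of one classic approach: a single XOR parity packet per group, which lets a receiver rebuild any one lost packet in that group. The group size and packet contents are invented, and real schemes, such as the row/column FEC used with some professional video-over-IP systems, are considerably more elaborate:

def xor_parity(packets):
    """Byte-wise XOR of equal-length packets."""
    result = bytearray(len(packets[0]))
    for pkt in packets:
        for i, byte in enumerate(pkt):
            result[i] ^= byte
    return bytes(result)

group = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_parity(group)  # sent alongside the three data packets

# Suppose the second packet is lost in transit. XOR the survivors
# with the parity packet and the missing data falls out:
print(xor_parity([group[0], group[2], parity]))  # b'BBBB'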

Autonomous and decentralized

Another key assumption is the network does not have any centralized control system or centralized routing function. Designers wanted to ensure that, even in the case of a successful nuclear attack, the remaining portions of the network could continue to operate. Packets make their way from source to destination without a “router control system,” a different approach from what we are familiar with in the video router environment.

Not only are network operations autonomous, but they are decentralized as well. For example, the Domain Name System (DNS) is a distributed database that helps computers find each other. Without it, we would not be able to use domain names such as Google.com. Instead, we would have to rely on IP addresses such as 98.223.42.21. Remember also that a single central database would violate the “smoking hole” assumption. Instead, DNS works by having tens of thousands, perhaps millions, of DNS servers available. An entry is created in one authoritative database, and the answer is then cached across the Internet as different users look up the same name.
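
For a sense of what DNS does in practice, here is a one-call sketch using Python's standard library. The resolver your machine is configured to use answers from its cache or queries other servers on your behalf:

import socket

address = socket.gethostbyname("google.com")
print(address)  # the returned IP address varies by location and time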

Self-routing

Each packet contains information necessary to get the packet from the source to the destination. Network routers read this information and react accordingly based on internal tables or on queries made to other routers and servers. This entire process is a critical part of what the Internet is and how it works, and I will be talking much more about routing, route discovery and Domain Name Resolution as we move into the future.
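
Here is a toy Python sketch of the forwarding decision a router makes for each packet. The table entries and next-hop names are invented; real routers build their tables from routing protocols such as OSPF or BGP:

import ipaddress

routing_table = [
    (ipaddress.ip_network("10.1.0.0/16"), "port-1"),
    (ipaddress.ip_network("10.1.5.0/24"), "port-2"),    # more specific
    (ipaddress.ip_network("0.0.0.0/0"), "default-gw"),  # catch-all
]

def next_hop(destination):
    """Longest-prefix match: the most specific matching route wins."""
    addr = ipaddress.ip_address(destination)
    matches = [(net, hop) for net, hop in routing_table if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("10.1.5.7"))      # port-2 (the /24 beats the /16)
print(next_hop("198.51.100.9"))  # default-gw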

Layers and abstraction

Networking technology is separated into layers, with each layer focused on performing a particular task. This fundamental assumption allows different parts of networking technology to evolve separately, and it allows manufacturers to adapt quickly to new technologies without having to rewrite applications. It also allows network engineers to organize computers into logical network groups such as news, post production and traffic, while still allowing each computer to maintain a unique network address. There are many other benefits of layering beyond those described here, and we will explore this topic in much greater detail in a future article.
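
A minimal sketch of the idea, using invented placeholder headers: each layer wraps the data handed down from the layer above and never looks inside it.

application_data = b"hello"

transport_segment = b"[transport hdr]" + application_data
network_packet = b"[network hdr]" + transport_segment
ethernet_frame = b"[ethernet hdr]" + network_packet

print(ethernet_frame)
# b'[ethernet hdr][network hdr][transport hdr]hello'
# Swap the bottom layer (say, Wi-Fi for Ethernet) and nothing above
# it has to change; that is the point of layering.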

Transmission lines and RF engineering

We might think of an Ethernet cable as, well, a cable. But actually, it is a transmission line. In fact, the original thick Ethernet (10BASE5) ran on 50-ohm coaxial cable almost the diameter of your thumb, and the later thin Ethernet (10BASE2) ran on RG-58 coax. That Cat 6 Ethernet cable connecting your desktop to a wall jack is actually a twisted-pair RF transmission line. If you do not believe me, try using a flat telephone-type jumper cable in place of the Cat 6. It will not work, because of crosstalk and attenuation caused by the lack of twist in the pairs. When you start running into hardware-related reliability issues with network connections, remember that a great deal of current Ethernet technology operates over RF transmission lines. Pay attention to cable quality, workmanship and the use of proper terminations.

Shared network

Computers communicate across a shared network, competing for the same bandwidth. There is no “nailed up,” full-time connection from a sender to a receiver. There should be enough bandwidth for the network to function well, but that does not mean bandwidth will always be available the instant it is needed. When two computers try to talk at the same time (a collision), they both back off for a random amount of time before making another attempt.
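
The backoff rule from classic shared Ethernet (CSMA/CD) is simple enough to sketch. This toy Python version uses truncated binary exponential backoff and the 51.2-microsecond slot time of 10Mb/s Ethernet; real implementations also give up after 16 attempts:

import random

def backoff_delay(attempt, slot_time_us=51.2):
    """After the Nth collision, wait a random number of slot times
    drawn from 0 .. 2^min(N, 10) - 1."""
    slots = random.randrange(2 ** min(attempt, 10))
    return slots * slot_time_us

for attempt in (1, 2, 3):
    print(f"collision {attempt}: wait {backoff_delay(attempt):.1f} us")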

Well-behaved citizens

A central assumption behind Ethernet networking is that applications will be well-behaved. By this, I mean applications will observe the rules of the road and will not hog all of the available bandwidth.

When Ethernet was created, the assumption was that most of the data transferred across the network would be small. (Think of file transfers of small documents, short network control messages and so on.) When you put heavy, continuous loads on Ethernet networks, they start to collapse. This is because network designers assumed that there would always be some gaps in transmission, and that everyone could find a time to talk on the network even if things were pretty busy. But if you load a network with professional video traffic, for example, a single transmitter can quickly suck all of the air out of the room, leaving no time for others to get a word in edgewise. Similarly, with User Datagram Protocol (UDP), a poorly behaved client can dominate a network, destroying communications for everyone. This is important because most professional video-over-IP applications use UDP.
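
To see why UDP invites bad behavior, consider this minimal Python sketch. The destination address is a placeholder from the TEST-NET documentation range, and the point is simply that nothing in UDP itself paces the sender:

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
payload = b"x" * 1400  # roughly one datagram of video-sized payload

for _ in range(10_000):
    sock.sendto(payload, ("192.0.2.10", 5004))  # as fast as the host allows

# TCP, by contrast, backs off when the network is congested. A
# well-behaved UDP application must implement its own pacing, because
# the protocol will not do it.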

We will explore many of these assumptions in more detail over the coming months.

Brad Gilmer is executive director of the Video Service Forum, executive director of the Advanced Media Workflow Association and president of Gilmer & Associates.