Networking hardware

Various claims can be found on the Internet regarding the earliest network implementation, but one thing is for sure: It was not long after computers were invented that people began hooking systems together. Early networking relied on serial ports and dial-up connections. But it was not until Ethernet and TCP/IP became dominant that networking really took off.

Hardware fundamentals

Broadcasters may be surprised to learn that the precursors to modern Ethernet networks ran over RG-8 coaxial cable. This early system, called thicknet, was standardized under the nomenclature 10BASE-5.

Figure 1 shows a typical vampire tap in a thicknet network. The tap gets its name because it “bites” through the RG-8 cable to make contact with the outer shield. The installer uses a special drill to cut away the shield and dielectric, and then threads a probe into the tap to make a connection to the inner conductor of the coax.

The tap is bolted to a medium attachment unit (MAU). Signals from the MAU travel over a 15-conductor attachment unit interface (AUI) cable. Figure 1 shows the DB-15 AUI connector on the side of the tap. In PC applications, the AUI cable is plugged into an AUI card installed in an expansion slot on the motherboard. Software drivers enable the computer to send and receive signals on the network.

10BASE-5 was not around long; the cable and the taps were heavy, hard to work with in an office environment and expensive. Yet, 10BASE-5 showed that computer networking was viable, even in electrically noisy factory environments. 10BASE-5 saw limited deployment in broadcast facilities. I know of only a few systems that were built using this technology.

10BASE-2, also known as thinnet, was the next hardware advance. Thinnet uses fundamentally the same technology as 10BASE-5 but sends signals over RG-58 coaxial cable instead of RG-8. It was not long before manufacturers combined the MAU and AUI into a single device, known as the network interface controller (NIC). This was arguably the first mass-produced Ethernet card. If you have old Ethernet cards lying around and have wondered why they have a BNC connector, now you know the answer. Thinnet was the first networking technology widely deployed in the broadcast environment. While some early automation systems employed other networking technologies, by the mid-1990s most of those had been replaced by thinnet.

While RG-58 was much easier to handle, and advances in technology allowed manufacturers to create inexpensive interface cards, the cable was still awkward in an office environment. It required a BNC T and a 50Ω terminator on the back of every card, and you had to shut down the network to add or remove a computer. Implementers, especially in the broadcast environment, needed a more flexible, less complex and more reliable solution that could be maintained on the fly.

10BASE-T and modern networking

10BASE-T is the grandfather of all modern wired Ethernet technology, and it established several important innovations that remain influential today. It was the first Ethernet standard to use unshielded twisted-pair (UTP) cable. You probably have surmised that networks operate over transmission lines (thus the early use of RG-8 and RG-58 coax), but coax was always expensive and difficult to work with. It has long been known that a transmission line can be created by twisting two wires together with a uniform twist per foot, so designers knew they could use twisted-pair wire rather than coax.

The Ethernet standard, however, requires two cable pairs, one for send and another for receive. Ordinarily, this would require shielding between the two pairs to avoid crosstalk, increasing the complexity and cost of the cabling. Engineers discovered that they could meet the signal-isolation requirements of the standard by putting two twisted pairs in the same jacket as long as the two pairs were twisted in opposite directions, one pair with a left-hand twist and one pair with a right-hand twist. This eliminated the need for shielding and significantly reduced the cost of the cable.
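For readers who want the math behind the transmission-line claim, the characteristic impedance of a parallel two-wire line, which a twisted pair approximates, is a standard textbook result (the symbols below are the usual textbook quantities, not values taken from the 10BASE-T specification):

    Z_0 = \frac{120}{\sqrt{\varepsilon_r}}\,\cosh^{-1}\!\left(\frac{s}{d}\right)
        \approx \frac{120}{\sqrt{\varepsilon_r}}\,\ln\!\left(\frac{2s}{d}\right),
        \qquad s \gg d

where s is the center-to-center conductor spacing, d is the conductor diameter and ε_r is the effective relative permittivity of the insulation. Holding the spacing and twist uniform along the cable is what keeps the impedance of UTP near its nominal 100Ω.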

The next major innovation in the 10BASE-T standard, and one that continues today, is the use of the RJ-45 modular connector. This connector is properly designated the 8P8C connector, but consumers were already used to the telephone RJ-11 and RJ-45 jacks, and when the 8P8C connector appeared with physical dimensions similar to the RJ-45, the name stuck. Although these connectors appear to be identical, broadcasters should pay special attention to their ratings, especially at high network speeds; connectors made for old 10BASE-T installations may not perform adequately.

10BASE-T and 10BASE-2 co-existed for quite some time, because broadcasters and many other users already had a large installed base of 10BASE-2. As the photo above shows, manufacturers addressed this situation by making NICs with both 10BASE-T and 10BASE-2 connectors.

The last major innovation from the 10BASE-T era that is still with us today is the advent of the network hub. Neither 10BASE-5 nor 10BASE-2 required central network equipment; all you had to do was string cable between a group of networked computers, connect them, and you were ready to go. 10BASE-T, by contrast, requires central equipment: Every computer attached to the network needs a “home run” back to a centralized hub. (Hubs themselves are no longer in use; switches predominate now.) At first, this was a significant obstacle to the adoption of 10BASE-T because hubs were expensive. But as demand increased and technology improved, the cost of hubs dropped to acceptable levels.

100BASE-T (Fast Ethernet) and 1000BASE-T (Gigabit Ethernet, or GigE) are evolutions that build upon the original 10BASE-T. The newer technology is faster, smaller and less expensive than the original, and many important technical innovations have allowed broadcasters to access their networks at ever-increasing speeds. But 10BASE-T over UTP is the fuel that fired the networking revolution.

The advent of hubs brought to light a problem that had actually existed since the invention of Ethernet: The hub, like the shared RG-58 and RG-8 cable before it, is a shared medium. In other words, every conversation on the network is transmitted to every computer on the network segment; any message going into a hub is relayed to all of the hub's other ports. (Note that switches do not behave this way.) This raises an obvious question: What happens when more than one device wants to talk at the same time?
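To make the distinction concrete, here is a toy Python sketch contrasting the two behaviors. The classes are invented for illustration; they model only the forwarding decision, not real Ethernet framing or any actual networking API.

    # Toy model contrasting a hub (shared medium) with a learning switch.

    class Hub:
        def __init__(self, num_ports):
            self.num_ports = num_ports

        def forward(self, in_port, frame):
            # A hub repeats every frame to every other port, so all
            # stations see all traffic: the shared-medium problem.
            return [p for p in range(self.num_ports) if p != in_port]

    class Switch:
        def __init__(self, num_ports):
            self.num_ports = num_ports
            self.mac_table = {}  # source address -> port last seen on

        def forward(self, in_port, frame):
            src, dst = frame["src"], frame["dst"]
            self.mac_table[src] = in_port      # learn where the sender lives
            if dst in self.mac_table:
                # Known destination: deliver on exactly one port.
                return [self.mac_table[dst]]
            # Unknown destination: flood like a hub until it is learned.
            return [p for p in range(self.num_ports) if p != in_port]

    hub, sw = Hub(4), Switch(4)
    frame = {"src": "aa:aa", "dst": "bb:bb"}
    print(hub.forward(0, frame))   # [1, 2, 3]: everyone hears it
    print(sw.forward(0, frame))    # [1, 2, 3]: flood, destination unknown
    print(sw.forward(1, {"src": "bb:bb", "dst": "aa:aa"}))  # [0]: learned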

The inventors of Ethernet anticipated this problem and built a solution into the original design: Carrier Sense Multiple Access with Collision Detection (CSMA/CD). CSMA does just what it says; it allows multiple computers to access a network. NICs implement carrier sense by listening before they send: A carrier on the wire indicates that the network is in use and that a collision would occur if the computer transmitted at that moment, so the NIC waits until the network goes quiet. Collision detection covers the case where two stations start transmitting at nearly the same instant anyway; each detects the collision, stops sending, backs off for a random amount of time and then tries again.
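For readers who think in code, here is a minimal sketch of that transmit logic in Python. The Medium class is a stand-in invented for illustration; real NICs do all of this in silicon against an actual shared wire, and details such as the 16-attempt limit and the doubling backoff range come from the 802.3 standard.

    import random

    MAX_ATTEMPTS = 16     # 802.3 gives up after 16 tries
    BACKOFF_LIMIT = 10    # backoff range stops doubling after 10 collisions

    class Medium:
        """Toy shared medium: randomly busy, randomly collides."""
        def carrier_sensed(self):
            return random.random() < 0.3   # 30% chance the wire is busy
        def collision_detected(self):
            return random.random() < 0.2   # 20% chance of a collision
        def wait_slots(self, n):
            pass                           # a real NIC waits n slot times

    def csma_cd_send(medium, frame):
        # (The frame contents themselves are not modeled here.)
        for attempt in range(1, MAX_ATTEMPTS + 1):
            while medium.carrier_sensed():      # listen before talking
                medium.wait_slots(1)            # defer while busy
            # ...start transmitting, listening while we talk...
            if not medium.collision_detected():
                return True                     # frame got through
            # Collision: back off a random number of slot times, the
            # range doubling after each collision (truncated binary
            # exponential backoff, as in 802.3).
            slots = random.randint(0, 2 ** min(attempt, BACKOFF_LIMIT) - 1)
            medium.wait_slots(slots)
        return False                            # excessive collisions

    print(csma_cd_send(Medium(), b"example frame"))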

Network collisions

CSMA/CD works amazingly well as long as the network is not too heavily loaded. But as users began to demand more from their networks, especially broadcast users moving very large files, CSMA/CD began to show its limitations. If the network ran much above 75 percent of capacity, collisions rose rapidly and throughput ground to a halt.
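The effect is easy to demonstrate with a crude slotted-contention model, a toy calculation rather than a faithful Ethernet simulation: if each of N stations tries to transmit in a given time slot with probability p, a slot carries useful traffic only when exactly one station transmits.

    # Toy slotted-contention model (not a faithful Ethernet simulation).
    # A slot is useful only when exactly one of n stations transmits;
    # two or more transmitters in the same slot means a collision.

    def slot_outcomes(n, p):
        idle = (1 - p) ** n
        success = n * p * (1 - p) ** (n - 1)   # exactly one transmitter
        collision = 1 - idle - success
        return success, collision

    N = 20
    for load in (0.25, 0.50, 0.75, 1.00, 1.50):
        p = load / N        # spread the offered load across N stations
        ok, bad = slot_outcomes(N, p)
        print(f"offered load {load:.2f}: useful {ok:.2f}, collisions {bad:.2f}")

Running this shows useful throughput topping out while the collision fraction keeps climbing; past that point, pushing harder buys mostly collisions rather than data delivered.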

One way to visualize this is to think of a dinner party. With a few guests who have important things to say, it is no problem for one person to wait for another to finish. But at a large dinner party playing by the same rules, it would not be long before many people were waiting for a turn to talk. And if guests tried to talk, had to wait, tried again and still found others talking, they would soon get frustrated and leave. The same thing happens on heavily loaded computer networks.

This problem continues to plague network users to this day. It can be managed through sound network design, proper segmentation, deployment and correct configuration of modern networking hardware, and vigilant monitoring of the network by trained engineers. Unfortunately, moving high-resolution video is one of the most demanding types of network traffic around, and broadcasters need to pay attention to all of these factors if they want reliable networks in their facilities.

Brad Gilmer is president of Gilmer & Associates and executive director of the Advanced Media Workflow Association.

Send questions and comments to: brad.gilmer@penton.com