Computers & Networks: Networking for post production

Most post-production facilities have been moving pictures over networks for years. Why did post get an early start? Because in post it was possible to use conventional networks to move single images or graphics backgrounds from one place to another. As the speed of networks and computers has increased, network designs in post production have expanded to take advantage of these capabilities.

This month's column explores Ethernet network switching components. Understanding how these components work together will help you build the high-performance networks you need to support your post-production department.

In the earliest networking days, simple Ethernet hubs were used to connect a small number of devices together on a network. Hubs provided the physical connections and voltages necessary to allow devices to communicate, but they were completely dumb, meaning that they had no awareness of the packets being exchanged between nodes. They worked well for very small workgroups in office environments. Hubs could be daisy-chained together to increase the total number of computers on a network but, as many of us found out the hard way, adding computers to the network increased congestion and quickly caused the network to fail.

Bridges were developed to connect disparate local-area networks (LANs), and were the first devices used to connect a local network to the Internet. They also allowed network engineers to control access to a part of a network on a case-by-case basis. In the early days, bridges had limited processing power, but they have since evolved into far more sophisticated devices.

Routers are similar to bridges, but they allow computers on one network to talk to computers on another network; for this reason, routers are sometimes referred to as gateways. The router takes packets from one network and forwards them to another. Routers are intelligent devices: they learn where a computer is located and, from then on, send packets along the appropriate route without having to rediscover where the receiving computer is. A cable modem or DSL box typically acts as a router, connecting your computer, running on a local network, to the Internet.
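
If it helps to see the idea in code, here is a minimal sketch, in Python, of what a router does with each packet: it checks the destination address against a table of networks it knows about and forwards accordingly. The networks, port labels and addresses below are hypothetical, and a real router uses longest-prefix matching rather than the simple first-match lookup shown here.

import ipaddress

# Hypothetical routing table: known networks and where to send their traffic.
routing_table = {
    ipaddress.ip_network("192.168.10.0/24"): "port 1 (edit suites)",
    ipaddress.ip_network("192.168.20.0/24"): "port 2 (server farm)",
}
DEFAULT_ROUTE = "uplink (Internet)"

def route(destination):
    """Return where a packet for this destination should be forwarded."""
    dst = ipaddress.ip_address(destination)
    for network, next_hop in routing_table.items():
        if dst in network:          # destination is on a known network
            return next_hop
    return DEFAULT_ROUTE            # otherwise, hand it to the gateway

print(route("192.168.20.5"))   # port 2 (server farm)
print(route("203.0.113.9"))    # uplink (Internet)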

Switches are similar to hubs in that they are used to connect a number of computers together. But in a switch, when a computer on Port A wishes to talk to a computer on Port C, Port A is effectively connected to Port C for the duration of the packet transfer. And if, at the same time, a computer on Port B wants to communicate with a computer on Port D, it may do so without having to wait for the transfer between Port A and Port C to finish. Put another way, the big difference between a switch and a hub is that a switch provides multiple, simultaneous, bi-directional connections at full wire speed, just as if you had hooked the computers back-to-back. Before switches, all connected devices shared the available bandwidth in the hub. Installing Ethernet switches can increase speed by a factor of 10 to 20 compared to conventional hubs.
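
A rough back-of-the-envelope comparison makes the point. The figures below (10 devices and 100 Mbit/s ports) are illustrative assumptions, not measurements from any particular product.

# Shared (hub) vs. switched bandwidth, using assumed figures.
WIRE_SPEED_MBPS = 100   # 100Base-T port speed
DEVICES = 10            # devices on the hub or switch

# On a hub, every device shares one pipe and one collision domain.
hub_per_device = WIRE_SPEED_MBPS / DEVICES

# On a switch, each port gets the full wire speed, in each direction.
switch_per_device = WIRE_SPEED_MBPS

print(f"Hub:    ~{hub_per_device:.0f} Mbit/s per device (shared)")
print(f"Switch: ~{switch_per_device:.0f} Mbit/s per device (per direction)")
print(f"Improvement: roughly {switch_per_device / hub_per_device:.0f}x")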

Switches can be much more than hub replacements. Routing switches combine the functions of both a router and a switch, connecting different networks together at very high speed and low latency.

Switching, routing and layers

When a router receives a packet, it looks at the header to determine where the packet should go. In the early days of networking, the media access control (MAC) address was key. A MAC address such as 00:03:53:d3:bc:56 is assigned by the manufacturer from a range of addresses it has obtained from the IEEE. This address is hard-coded into the network-interface card and normally cannot be changed. A bridge looking at this address checks its tables to determine the disposition of the packet. If it does not know where the destination device is, it forwards the packet out of every port in an attempt to find it. Bridges and switches that use MAC addresses are called Layer-2 devices because they do their switching at the data-link layer (Layer 2). Layer-2 devices are very fast, can be built with low latency and are relatively inexpensive.
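
For the curious, here is a highly simplified sketch, in Python, of the table lookup a Layer-2 device performs. The MAC addresses and port numbers are made up for illustration; real switches do this work in hardware, which is part of why they are so fast.

mac_table = {}  # learned MAC address -> port number

def handle_frame(src_mac, dst_mac, in_port, num_ports=8):
    """Learn the sender's port, then forward or flood the frame."""
    mac_table[src_mac] = in_port              # learn where the sender lives
    if dst_mac in mac_table:
        return [mac_table[dst_mac]]           # known: forward out one port
    # Unknown destination: flood out every port except the one it came in on.
    return [p for p in range(1, num_ports + 1) if p != in_port]

print(handle_frame("00:03:53:d3:bc:56", "00:03:53:00:00:01", in_port=2))
print(handle_frame("00:03:53:00:00:01", "00:03:53:d3:bc:56", in_port=5))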

Layer-3 routing operates at the network layer. (See Figure 1.) In this layer, machines are identified by a network address, which can be set by the user. TCP/IP is the most commonly used networking protocol today. IP addresses are written in “dot” notation, with four numbers between 0 and 255 separated by periods (example: 192.168.23.41). An engineer can assign a group of computers to a logical network, sometimes called a subnet, by giving them addresses within the subnet range (example: 192.168.23.1 to 192.168.23.254 with a subnet mask of 255.255.255.0). Layer-3 routing allows the user to route separate logical networks over the same physical wire.

Why would this be important, especially to the post community? The answer lies in how Ethernet handles collisions (see sidebar). When two nodes talk at the same time on the same network, a collision occurs. If you build a facility with a routing switch, connecting each node to its own port on the switch will avoid having two devices collide at the physical level. However, if those devices are all part of the same logical network, collisions will still occur. By using a Layer-3 routing switch, you can limit the number of devices talking on each logical network, and thereby limit the number of collisions. The devices can still all talk to each other because the router portion of the routing switch forwards packets between the various networks. This solution is particularly important to the post community, where users are moving large files in and out of server farms.
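
Python's standard ipaddress module is a handy way to see how this logical grouping works. This is a minimal sketch; the addresses below follow the example above and are only illustrative.

import ipaddress

subnet_a = ipaddress.ip_network("192.168.23.0/24")   # mask 255.255.255.0
subnet_b = ipaddress.ip_network("192.168.24.0/24")   # a second logical network

host1 = ipaddress.ip_address("192.168.23.41")
host2 = ipaddress.ip_address("192.168.24.17")

# Hosts on the same logical network can talk directly; traffic between
# different logical networks must pass through a router (or the routing
# half of a routing switch), even if they share the same physical wire.
print(host1 in subnet_a)            # True  - same subnet
print(host2 in subnet_a)            # False - must be routed
print(subnet_a.netmask)             # 255.255.255.0
print(subnet_a.num_addresses - 2)   # 254 usable host addresses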

Layer-3 routing switches are more expensive than Layer-2 switches. A Layer-2 device is sufficient for most applications, but a Layer-3 device gives you the flexibility of assigning devices connected to the same physical network to different logical networks.

Let's look at an example to see how a simple hardware upgrade can improve network performance. Suppose that a post facility has a server farm with 10 servers, all on the same network, each capable of pumping 8 Mbit/s. You decide to hook these all together using a 10-port, 100Base-T Ethernet switch. So far, so good (you think). You have a maximum of 80 Mbit/s (10 servers at 8 Mbit/s each) from the servers running through a 100Base-T switch.

There are at least two problems with this scenario. The first problem is throughput. If each of the servers actually delivers 8 Mbit/s, then theoretically the network will be 80 percent saturated. However, Ethernet has a fairly high overhead, somewhere in the neighborhood of 20 percent to 25 percent, so if you load this network beyond about 75 Mbit/s, it will fall over. Most network design engineers like to see networks running at 70 percent capacity or below. The second problem is collisions. Since all of the servers are aggregated on one logical network, when one server starts talking, all the others have to wait until it is finished. With a large number of users, this could lock up the network.
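
Putting rough numbers on the first problem shows how little headroom is left. The overhead and design-target figures below are assumptions in line with the rules of thumb above.

servers = 10
per_server_mbps = 8
link_speed_mbps = 100
overhead = 0.25            # assume roughly 25 percent Ethernet overhead
design_target = 0.70       # keep utilization at or below 70 percent

offered_load = servers * per_server_mbps              # 80 Mbit/s
usable_capacity = link_speed_mbps * (1 - overhead)    # about 75 Mbit/s

print(f"Offered load:    {offered_load} Mbit/s")
print(f"Usable capacity: {usable_capacity:.0f} Mbit/s")
print(f"Design target:   {link_speed_mbps * design_target:.0f} Mbit/s")
print("Overloaded" if offered_load > usable_capacity else "Within capacity")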

Fortunately, with the advent of routing switches, these problems are easily resolved. One possible solution is to replace the 100Base-T switch with a 100Base-T routing switch. Then split the servers into two separate logical networks and configure the router to connect the networks together, as shown in Figure 2. Finally, add two more connections from the routing switch to the client network. What do you get from this reconfiguration? First, by replacing the switch with a routing switch, you provide a 100 Mbit/s pipe from each server to the backbone. With two full-duplex, full-bandwidth connections to the client network, you will never run short of bandwidth between the servers and the clients. Second, by grouping the servers into two separate logical networks, the collision domain is cut in half. In other words, the number of computers competing for a chance to talk is cut in half. This reduces the likelihood of a collision and increases network efficiency.
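
As a sketch of the addressing side of Figure 2, the snippet below divides the ten servers across two hypothetical subnets. The address ranges are illustrative; the point is simply that each logical network now has five servers contending instead of ten.

import ipaddress

net_a = ipaddress.ip_network("192.168.10.0/24")   # logical network A
net_b = ipaddress.ip_network("192.168.11.0/24")   # logical network B

servers_a = [str(ip) for ip in list(net_a.hosts())[:5]]   # servers 1-5
servers_b = [str(ip) for ip in list(net_b.hosts())[:5]]   # servers 6-10

# Each logical network now carries half the talkers, so the collision
# domain (and the odds of any two servers colliding) is cut in half.
print("Network A:", servers_a)
print("Network B:", servers_b)
print("Servers per collision domain:", len(servers_a))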

Understanding how network devices work can help you to design better, more efficient networks.

As a closing comment, consider the example shown in Figure 2. If you expand your server farm, you can add two more links from the routing switch to the in-house network. But is that the best way to go? Consider the alternative of purchasing a 100/1000 routing switch. This would allow you to run a single Gig-E connection from the routing switch to the in-house network instead of four 100 Mbit/s connections (assuming a Gig-E backbone). The single Gig-E link is less expensive than the four 100 Mbit/s connections, and it offers greater usable throughput as well, because each of the four 100 Mbit/s links carries its own overhead.

Brad Gilmer is president of Gilmer & Associates, executive director of the AAF Association, and executive director of the Video Services Forum.

Send questions and comments to: brad_gilmer@primediabusiness.com

Collision avoidance

Carrier sense multiple access with collision detection (CSMA/CD) is the protocol that governs access to the wire on Ethernet networks. On Ethernet, any device can try to send a frame at any time. Each device senses whether or not the line is idle and available. If it is, the device begins to transmit its frame. If another device tries to send at the same time, a collision occurs and the frames are discarded. Each device then waits a random amount of time before attempting to send the frame again.

While an Ethernet segment can support a large number of computers, most networks become heavily loaded well before that limit is reached. As loading on the network increases, collisions become more frequent. Once the network nears saturation, adding just a few more nodes can cause the network to fail completely. Most Ethernet cards have a collision LED to indicate that a collision has occurred; if this light is constantly illuminated, it may indicate that the network is overloaded. It is important to note that collisions are inevitable even on very lightly loaded networks. Collisions are not a problem per se, but when large numbers of collisions are happening all the time, the network ceases to function.
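
To get a feel for why collisions snowball as stations are added, here is a toy, slot-based simulation of CSMA/CD-style random backoff in Python. It is a deliberately simplified illustration, not a faithful model of 802.3 timing; the station counts and frame counts are arbitrary.

import random

random.seed(42)

def simulate(stations, frames_per_station=5):
    """Count collisions while every station tries to send its frames."""
    # Per station: [frames left to send, collision count, next slot it may try]
    state = [[frames_per_station, 0, 0] for _ in range(stations)]
    slot, collisions = 0, 0
    while any(s[0] > 0 for s in state):
        ready = [s for s in state if s[0] > 0 and s[2] <= slot]
        if len(ready) == 1:
            ready[0][0] -= 1                  # lone sender: frame gets through
            ready[0][1] = 0
        elif len(ready) > 1:
            collisions += 1                   # simultaneous senders collide
            for s in ready:                   # binary exponential backoff
                s[1] = min(s[1] + 1, 10)
                s[2] = slot + 1 + random.randint(0, 2 ** s[1] - 1)
        slot += 1
    return slot, collisions

for n in (2, 10, 30):
    slots, cols = simulate(n)
    print(f"{n:2d} stations: {cols} collisions over {slots} slots")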

