Many broadcasters find that they are expected to maintain specialized broadcast equipment and be well-qualified in a wide variety of IT technologies, including computer architectures, operating systems, networking, routing, system design and security. This is a pretty tall order, and many broadcasters need to update their knowledge on a continuous basis. This month's column begins a series of tutorial articles that will focus on these areas.
Computer architecture is the field of internal computer design. Understanding this field is important for the broadcaster because video applications, especially real-time HD applications, push computer technology to the limit.
Personal computers of the mid-1980s and early 1990s were relatively simple. They are a good starting point for understanding where we are today. Actually, many of the design decisions and naming conventions today (serial ports, for example) were derived from older mainframe designs that evolved in the 1960s and 1970s.
Figure 1 shows a simplified drawing of an early personal computer. At the center of the computer is the central processing unit (CPU). At its most basic level, the CPU can perform only simple binary operations, such as addition. Binary means that the values the CPU deals with are either zero or one. From this foundation, computer designers have been able to construct computers that perform extremely intricate calculations.
The CPU contains registers, which are places to hold numbers used in operations. Operations are functions the CPU knows how to perform; one operation might be to compare two numbers to determine if they are equal. The idea of processor operations is at the heart of a common performance measurement, floating-point operations per second (FLOPS), which gauges the computational power of a processor. At the moment, the fastest supercomputers are rated in the petaFLOPS range. One petaFLOPS is 1 quadrillion calculations per second, or a one followed by 15 zeros.
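The petaFLOPS arithmetic above is easy to verify. The short sketch below simply works it out; the one-minute figure is an illustrative extension, not from the original text.

```python
# Illustrative arithmetic only: what a petaFLOPS rating means in raw
# operation counts. One petaFLOPS is 10^15 floating-point operations
# per second -- a one followed by 15 zeros.
PETA = 10 ** 15

ops_per_second = 1 * PETA
print(ops_per_second)        # 1000000000000000 (1 quadrillion)

# At that rate, the operations performed in a single minute:
ops_per_minute = ops_per_second * 60
print(ops_per_minute)
```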
The relationship between the number of operations a CPU can perform and the clock speed of the CPU is not straightforward. Generally speaking, for the same CPU, if the clock frequency is higher, the CPU can perform more FLOPS. But if you compare two different CPUs, then the comparison needs to take into account many other factors.
Once a CPU has completed an operation, it needs to store that result somewhere. Registers will not suffice because they may be used immediately by the next operation. The CPU stores the results of operations in memory. Data is moved from the CPU to memory over a bus. As Figure 2 on page 24 shows, there are actually two buses. The first bus is called the address bus, and the second bus is the data bus. (One bus may serve both purposes in some CPU designs.) When the CPU wants to read a value from memory, it sets the memory address on the address bus and then reads the value in memory on the data bus. This is a parallel operation, because address and data buses are typically 32 or 64 bits wide. When the CPU wants to write a value into memory, it sets the write address on the address bus, sets the data on the data bus, and then pulses the write strobe, which causes the data to be written into memory.
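The address-bus/data-bus handshake described above can be modeled in a few lines of code. This is a toy software analogy, not a real hardware interface; the class and method names are purely illustrative.

```python
# Toy model of the CPU/memory exchange: the "address bus" selects a
# location, the "data bus" carries the value, and the write strobe
# commits the data. All names here are illustrative.

class Memory:
    def __init__(self, size):
        self.cells = [0] * size      # each cell holds one word

    def write(self, address, data):
        # CPU sets the address bus, sets the data bus, pulses the
        # write strobe; the value lands in the addressed cell.
        self.cells[address] = data

    def read(self, address):
        # CPU sets the address bus, then samples the data bus.
        return self.cells[address]

mem = Memory(1024)
mem.write(0x10, 42)       # store an operation's result at address 0x10
print(mem.read(0x10))     # 42
```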
Bus width is a key specification when purchasing a computer. You may have heard that a system is a 32-bit machine, or that you need a 64-bit processor to run a particular high-end graphics program. This number refers to the number of parallel lines on the bus inside the machine. There are several buses inside a modern computer. The address/data bus between the CPU and the memory is the one referred to in sales literature. A wider bus allows data to be moved more rapidly inside the machine.
Almost all modern computers have user interfaces — typically a mouse, a keyboard and a display. The keyboard and mouse connect to the CPU via a port. In early computer systems, the computer was connected to a terminal. In the really early days, this terminal was a teletype machine. The teletype machine connected to the first port was the main console of the computer. This convention carries through today on Linux machines, where /dev/tty0 can still be configured as the primary user interface. TTY0 stands for the first teletype machine connected to the system. The computer communicated with the teletype over a serial line. The data was serialized into eight-bit words.
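Serializing data into eight-bit words, as the teletype link did, just means sending the byte one bit at a time and reassembling it at the far end. A minimal sketch (bit order chosen as least-significant-bit first, which is how common asynchronous serial lines transmit; the function names are illustrative):

```python
# Sketch of serializing one eight-bit word onto a serial line,
# least significant bit first, and reassembling it at the receiver.

def serialize_byte(value):
    """Return the eight bits of `value`, LSB first."""
    return [(value >> i) & 1 for i in range(8)]

def deserialize_bits(bits):
    """Reassemble eight LSB-first bits back into a byte."""
    value = 0
    for i, bit in enumerate(bits):
        value |= bit << i
    return value

bits = serialize_byte(ord('A'))    # 'A' is 65, binary 01000001
print(bits)                        # [1, 0, 0, 0, 0, 0, 1, 0]
print(deserialize_bits(bits))      # 65
```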
The next advance in technology was to connect a video terminal to the computer. Data to be displayed was written to specific areas of computer memory. The display card then read the data out of memory and displayed it sequentially on the video terminal. The process was simple: Write information into a location in memory, and it will appear on the video display.
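That write-to-memory, appears-on-screen process can be sketched as a small simulation. The 40-column layout and the function names below are assumptions for illustration, not the dimensions of any particular terminal.

```python
# Toy memory-mapped text display: writing a character into "video
# memory" makes it appear at the corresponding screen position when
# the display hardware scans the buffer sequentially.

COLS, ROWS = 40, 4
video_memory = [' '] * (COLS * ROWS)     # one character per cell

def write_char(row, col, ch):
    video_memory[row * COLS + col] = ch  # CPU writes into memory...

def refresh():
    # ...and the display card reads it out, row by row.
    return '\n'.join(
        ''.join(video_memory[r * COLS:(r + 1) * COLS]) for r in range(ROWS)
    )

for i, ch in enumerate("HELLO"):
    write_char(0, i, ch)
print(refresh().splitlines()[0].rstrip())    # HELLO
```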
This display method was limited to displaying text. More complex systems were created, whereby graphics were made using line drawings and vector mathematics. As these techniques became more computationally complex, the CPU was soon spending so much time on the graphics display that it did not have enough horsepower left over for core computational tasks. To address this problem, engineers created graphics processors so that graphics calculations could be offloaded from the main CPU onto a separate processor. This is now standard in new computers. Even if you do not have a separate graphics card, your motherboard includes a separate graphics processor (GPU), which is tasked with processing graphics and sending them to the video screen.
The keyboard and mouse operation remains pretty much unchanged from the original design. When a key on a keyboard is pressed, this creates an interrupt signal that is sent to the CPU on a specific port. You may have seen the terminology IRQ, which is short for interrupt request. This is a request for the CPU to stop what it is doing and pay attention to a specific I/O port. The CPU reads the data on the keyboard I/O port and then goes back to what it was doing before the interrupt was received. Because the modern processor is so fast, it can execute thousands of instructions in between each keystroke, allowing the computer to continue with other tasks while you type.
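The interrupt sequence above can be loosely sketched in software. This is an analogy only; real IRQ hardware is programmed through controller registers and handler tables, and every name below is illustrative.

```python
# Loose software analogy of interrupt-driven keyboard input: the main
# loop does its work, an "IRQ" arrives when a key is pressed, the
# handler reads the port, and the main work resumes.

from collections import deque

keyboard_port = deque()   # stands in for the keyboard I/O port
typed = []

def irq_handler():
    # CPU stops its current task, services the I/O port, then returns.
    typed.append(keyboard_port.popleft())

def main_loop(keystrokes):
    work_done = 0
    for key in keystrokes:
        keyboard_port.append(key)   # key press raises an interrupt
        irq_handler()               # CPU briefly services it...
        work_done += 1000           # ...then runs thousands more
    return work_done                #    instructions between keystrokes

main_loop("hi")
print(''.join(typed))   # hi
```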
Disk I/O is handled by a hard disk controller. In one configuration, the disk controller connects to the main bus of the computer. When the CPU wants to read from or write to disk, it hands the controller a request for specific disk blocks (translating file names into block addresses is the job of the operating system's file system) and then waits for the controller to retrieve the information from the disk and present it at the disk I/O port; writes work the same way in reverse. Computer designers quickly realized that there were performance problems with this approach, because the disk was, and still is, substantially slower than the CPU. When reading from disk, in some applications the CPU might request the same data several times within a few seconds or minutes. This results in several identical read operations, each taking a substantial amount of time. When writing, the CPU could likewise end up waiting a substantial amount of time for each operation to complete. For this reason, disk buffering became commonplace.
On current systems, disk controllers now contain a significant amount of memory. The CPU quickly writes the data to be stored on disk to the disk controller's memory and then continues on with its tasks. The job of getting the information on disk is left to the controller.
Similarly, the controller reads information from disk and puts it in a memory cache. From there, the CPU reads the data it has requested. If the CPU requests the same information several times in a row, there is a good chance that the information is still in the disk cache and that another relatively slow disk read operation will not be required.
There are a few closing thoughts that I would like to leave with you:
- Typical media applications put a lot of stress on computer systems.
- Architecture choices matter, especially when working with video.
- It pays to take time to learn about what is going on inside computers; decisions designers make regarding bus width, disk I/O and network I/O can all have a huge impact on how well a given piece of hardware will work for video applications.
- Understanding basic computer architecture will help you make sound decisions for your facility.
Brad Gilmer is president of Gilmer & Associates and executive director of the Advanced Media Workflow Association.
Send questions and comments to: firstname.lastname@example.org