
Multicore Processor Functionality

The implementation of servers that utilize multicore processors continues to grow as the requirements for more computing power evolve. The world of file-based manipulation and conversion is generating an expanding set of tools that can now live on commercial off-the-shelf servers. As we've all seen, the number of supplemental server elements in the media chain is growing inside the broadcast central equipment room.

Caching or edge servers that accumulate files from content providers, and the distribution platforms that transform those received files into common house file formats, depend upon the performance of the servers on which the transcode software resides. The additional processing power necessary to satisfy the volume of transcodes, unwrap and rewrap processes, and the migration of those files between edge servers, main broadcast servers and archives is governed by how much throughput the servers can muster.

Dual-core and quad-core processors have helped reduce the physical size and power requirements for servers. Other benefits are also realized, such as permitting software to work in a single environment, as opposed to spanning the operations across multiple servers as one might find in parallel processing applications.

This month we'll take a quick peek into the history, and some of the fundamental code level functionality of the computer system. In a later installment, we'll dig into the structure and uses of today's multicore processing environment. So let's start with the traditional set of definitions.


A multicore microprocessor implements multiprocessing functions in a single physical package. Instead of having two or more processors spread across two or more chips, a multicore microprocessor is generally contained on a single piece of silicon, reducing duplicated elements and allowing time-sharing of other elements that may be idle some or much of the time. Multiprocessing, at its root definition, is the use of two or more central processing units (CPUs) within a single computer system.
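As a quick aside, you can see how many processing units a multicore package presents to the operating system with a few lines of Python. This is a minimal sketch; the variable names are our own, and note that the OS count includes hardware threads as well as physical cores:

```python
import multiprocessing
import os

# Ask the operating system how many logical processors it can see.
# On a multicore server this counts every core (and hardware thread)
# in the single physical package, not separate chips.
logical_cores = os.cpu_count()
print(f"Logical processors visible to the OS: {logical_cores}")

# The multiprocessing module reports the same figure; it is the count
# a parallel workload can realistically schedule against.
assert multiprocessing.cpu_count() == logical_cores
```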

The word "Core" (when capitalized) actually refers to an Intel brand name. However, in the more generic sense, any portion of a processing system or for that matter, any system architecture (e.g., a broadcast plant's central equipment room) may indeed either be referred to as the "core" or may actually have a "core" section within it. In the case addressed in this article, we will refer to "core" as it applies to the processor within the server only.

The Intel Core micro-architecture, previously called the Intel Next-Generation Micro-Architecture, is a multicore processor micro-architecture introduced in early 2006. The reference to micro-architecture, which may be abbreviated as "µarch" or "uarch," describes either the electrical circuitry of a computer, the CPU itself, or even a digital signal processor. The concept dates back to the early 1950s when the term microcode was part of the description of the microprogramming portion of the processor.

Microprogramming, which differs from a macroinstruction, relates to the root level control of the logic elements in a computer's processor. In the early days of computer science, microprogramming was taught so students could appreciate the complexities as well as basic incremental steps necessary to control the logic of the processor. Since microprogramming lives only at the silicon level and is the fundamental intellectual property behind the CPU, this level of coding is seldom studied in most circles and is thought of as the "black box code" within the CPU.


At a somewhat higher level is the macro-instruction set, which refers to a level of coding that is still far more generic than any of the early human-readable programming languages. Macro-instructions are short register-level commands such as MOV, ADD and STORE. They guide the movement of bits between registers and arithmetic processors inside the CPU, whose principal components are a data path and a control unit.

Control signals identify the micro-operations required for each and every operation (i.e., a register transfer, shift or load) and are supplied by the control unit, which is responsible for generating the control sequences. A complete macroinstruction is executed by generating an appropriately timed sequence of groups of control signals (i.e., micro-operations).
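To make this move/add/store pattern concrete, Python's standard `dis` module can disassemble a function into the interpreter's own short instructions. This is only an analogy, since Python's virtual machine is a stack machine rather than a register machine, and the exact opcode names vary between Python versions, but the load-operate-store shape mirrors the macro-instructions described above:

```python
import dis

def add_and_store(a, b):
    total = a + b
    return total

# Disassemble to the interpreter's "macro-instruction" level.
# Each line is one short instruction (LOAD_FAST, BINARY_ADD or
# BINARY_OP depending on the Python version, STORE_FAST,
# RETURN_VALUE) -- the same move/add/store pattern a CPU's
# macro-instructions follow.
dis.dis(add_and_store)
```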

Sometimes called "machine code," sets of instructions are formed into groups of routines that are in turn organized into the more intelligent operations, which get their orders from high-level languages such as Basic or C++. Modern operating systems and software products, such as word processors, contain literally millions of lines of code that, if broken down to the macroinstruction level, would be orders of magnitude greater than the higher-level languages used to develop our most familiar software applications.

If you extrapolate the capillary-like functions of micro-operations up through microprogramming or even macro-instructions, you would end up with too many components on a single chip. Intel and others had to fundamentally change their approach to processor technologies to achieve the needed improvements in computing power, and they used this core concept to meet that objective. This discussion can go much deeper; however, at its base level, we hope to have provided a "chapter one" understanding of how a multicore processor functions and whether its application is suitable to particular tasks or needs.

Multicore processors may not be necessary, or may not be the right fit for the operations desired. Adding more cores may not add appreciable processing power, as any improvement depends heavily on a software application's ability to utilize more than one core at a time.
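This diminishing return can be quantified with Amdahl's law, which the article doesn't name but which captures exactly this point: the serial portion of a workload caps the benefit of extra cores. A short sketch, with an illustrative 50 percent parallel fraction chosen by us:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: the overall speedup when only part of a
    workload can be spread across `cores` processors."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# A transcode job that is only 50% parallelizable gains little from
# extra cores: the serial half caps the speedup below 2x no matter
# how many cores are added.
for cores in (1, 2, 4, 8):
    print(f"{cores} cores -> {amdahl_speedup(0.5, cores):.2f}x")
```

Running this shows 1.33x on two cores and only 1.78x on eight, which is why simply buying a quad-core server does not quadruple throughput.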


Still, multicore processing systems offer obvious advantages in terms of processing power. While you may be able to double the amount of compute cycles for some applications, not every application is suited to a multicore environment. Some applications are written to take advantage of multithreaded processing; that is, they have multiple execution paths (threads) and processes that can run simultaneously by using more than one core. This is where some of the file-based processor applications for transcoding and encoding are headed, but that capability may not be available for all the features of a given application.
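The pattern described above can be sketched with Python's standard `concurrent.futures` module. The `transcode_segment` function here is a hypothetical stand-in for CPU-heavy work such as transcoding one segment of a media file; a process pool lets the operating system schedule each worker on its own core so the segments run simultaneously:

```python
import concurrent.futures
import os

def transcode_segment(segment_id):
    """Hypothetical stand-in for a CPU-heavy task, such as
    transcoding one segment of a media file."""
    total = 0
    for i in range(100_000):
        total += (segment_id * i) % 7
    return segment_id, total

if __name__ == "__main__":
    segments = range(8)
    # One worker process per logical core; the OS can place each on
    # its own core, so the segments are processed in parallel rather
    # than one after another.
    with concurrent.futures.ProcessPoolExecutor(
            max_workers=os.cpu_count()) as pool:
        for seg, checksum in pool.map(transcode_segment, segments):
            print(f"segment {seg} done")
```

An application only benefits from this when its work genuinely divides into independent pieces; a feature that must process frames strictly in sequence stays pinned to a single core, which is exactly the per-feature limitation noted above.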

Another issue not well understood by end users is that software vendors may charge customers a license fee for each server or processor on which an application is installed. Buyers should be aware of this, particularly if they've spent the money for a quad-core server only to find that the application they are using is licensed for just one core on a single machine.

Industries are adapting and will reach a point where most applications can take advantage of these technologies. We've already noticed that while quad-core servers have been on the market for more than a year, adoption to date isn't as widespread as expected, at least compared with dual-core processor servers. And according to recent long-range product road maps, there are plans to pack as many as 80 cores into a single chip, effectively emulating a massively parallel processor farm on a single CPU chip.

Karl Paulsen is the CTO for Diversified, the global leader in media-related technologies, innovations and systems integration. Karl provides subject matter expertise and innovative visionary futures related to advanced networking and IP technologies, workflow design and assessment, media asset management, and storage technologies. Karl is a SMPTE Life Fellow, an SBE Life Member and Certified Professional Broadcast Engineer, and the author of hundreds of articles focused on industry advances in cloud, storage, workflow, and media technologies. For over 25 years he has continually featured topics in TV Tech magazine—penning the magazine's Storage and Media Technologies and its Cloudspotter's Journal columns.