Multiplexing is a technique for carrying multiple channels of information within a common signal. Although usually thought of as a digital process, the method was actually pioneered many years ago as a way of carrying multiple analog signals simultaneously.
In its simplest form, an analog multiplexer (or mux) is a switch that alternates between input signals at a high rate. (See Figure 1.) The multiplexing system used throughout the world to carry FM stereo signals is based on this idea. There are several requirements for faithful separation of the original signals.
First, the switch must operate at a high rate. Even in a pure analog system, the switch creates a sampled signal, so our old friend Nyquist applies. The switching must happen at a rate at least twice that of the highest frequency component present in either of the signals.
At the receiving (or demultiplexer) end, a complementary switch is used to separate out the original signals. (In the case where analog signals are sampled, a low-pass filter is also needed to remove the switching components, i.e., the repeat spectra.)
The second requirement is that the bandwidth of the channel carrying the multiplexed signal must be sufficient to carry the multiplex. In principle, the number of carried signals is limited only by these requirements. Of course, the synchronizing signal must also be carried or must be resynthesized at the receiving end.
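The commutating-switch idea is easy to demonstrate in a few lines of Python. This is only an illustrative sketch, not broadcast code, and the function names (`tdm_mux`, `tdm_demux`) are invented for the example: the mux visits each input in turn, and the complementary switch at the far end routes every Nth sample back to its original channel.

```python
def tdm_mux(signals):
    """Interleave samples from several equal-length channels
    into one stream by switching between them in turn."""
    return [s[i] for i in range(len(signals[0])) for s in signals]

def tdm_demux(stream, n_channels):
    """Complementary switch at the receiver: route every
    n-th sample back to its original channel."""
    return [stream[c::n_channels] for c in range(n_channels)]

a = [1, 2, 3]      # samples of channel A
b = [10, 20, 30]   # samples of channel B
muxed = tdm_mux([a, b])
print(muxed)                 # [1, 10, 2, 20, 3, 30]
print(tdm_demux(muxed, 2))   # [[1, 2, 3], [10, 20, 30]]
```

Note that the two switches must stay in step, which is exactly why the synchronizing signal mentioned above must be carried or regenerated at the receiver.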
Because the input signals are switched in time, the above scheme is known as time-division multiplexing (TDM). However, analog signals can also be modulated onto carriers, with the carriers placed at different frequencies. Such a scheme is called frequency-division multiplexing (FDM).
One well-known example of FDM is the method used to encode color information (chroma) in the NTSC and PAL video formats. In fact, the entire RF spectrum can be thought of as one huge FDM system. A different case of FDM is the diplexer, where multiple RF signals (or DC and RF signals) are combined in a single cable, often transmitting in opposite directions.
One inherent problem with analog multiplexing is the crosstalk between signals that develops from the use of practical systems or channels. For this reason, as well as that of efficiency and flexibility, most of the multiplexing systems we use today are digital.
First conceived in the 1930s, pulse code modulation (PCM) samples an analog signal and creates a digital representation of the signal. The first practical widespread use of this was in the United States' public switched telephone network in the 1960s. Designed for the 3.3kHz bandwidth of voice circuits, an 8kHz sample rate with 8-bit quantization gives a data rate of 64kb/s (known as the DS0 signaling rate). Twenty-four DS0s multiplexed together constitute a DS1 signal, also known as T1 when carried over copper wire. This multiplexing, however, is done after the individual signals are all digital. Hence, there is zero crosstalk in a properly functioning system.
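The DS0 and DS1 rates follow directly from that arithmetic, as this short sketch shows. The one detail beyond the figures above is the T1 framing bit: each frame carries one byte from each of the 24 channels plus a single framing bit, sent 8,000 times per second, which is why the familiar T1 rate is 1.544Mb/s rather than a bare 1.536Mb/s.

```python
SAMPLE_RATE_HZ = 8_000   # samples per second per voice channel
BITS_PER_SAMPLE = 8      # 8-bit quantization

ds0_rate = SAMPLE_RATE_HZ * BITS_PER_SAMPLE   # 64,000 b/s per channel
payload = 24 * ds0_rate                       # 1,536,000 b/s of voice data

# A T1 frame: 24 channel bytes plus 1 framing bit, 8,000 frames/s
t1_rate = (24 * 8 + 1) * SAMPLE_RATE_HZ       # 1,544,000 b/s

print(ds0_rate, payload, t1_rate)
```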
The SDI and HD-SDI digital video interfaces are also capable of multiplexing embedded audio, closed captions, time code and other data. The signal can carry up to eight 24-bit embedded stereo audio pairs, at 48kHz sampling, which is directly compatible with the AES3 digital audio interface.
Modern multiplex systems carry much more than the pure signals (data essence) themselves. By grouping data into packets, multiplexers can achieve a high level of sophistication by adding information such as headers, sync fields, timing and metadata (which is data about the data).
The MPEG transport stream used for DTV transmission is an example of such a multiplex, where video, audio and ancillary data are all combined into one transmission channel. (See Figure 2.) With the advent of mobile DTV broadcasting, additional services can be multiplexed at the physical layer so that the mobile service has its own unique RF reception characteristics.
Simple multiplexing can accommodate the situation where the input sources have fixed data rates. However, when the data rates vary, a more sophisticated scheme must be used to ensure efficient use of the communications channel. Statistical multiplexing provides this efficiency by continuously varying the individual input data rates so that a total target rate is achieved.
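One simple way to picture statistical multiplexing is a controller that divides a fixed channel rate among the encoders in proportion to how much each one currently needs. The sketch below is a deliberately minimal model (real statmux controllers also enforce per-channel minimums, maximums and quality targets), and the function name is invented for the example.

```python
def statmux_allocate(demands_kbps, total_kbps):
    """Divide a fixed channel rate among variable-rate inputs in
    proportion to each encoder's current demand (complexity)."""
    total_demand = sum(demands_kbps)
    return [total_kbps * d / total_demand for d in demands_kbps]

# Three video encoders reporting instantaneous complexity;
# the channel can carry a total of 12Mb/s:
print(statmux_allocate([4000, 8000, 4000], 12000))
# [3000.0, 6000.0, 3000.0]
```

The encoder working on the most complex picture gets the largest share, and the allocation changes continuously as scene complexity changes.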
Transport streams vs. program streams
MPEG-2 can carry programs in one of two container formats: transport streams and program streams. The difference is essentially that of error resilience and program multiplicity. Transport streams can carry more than one program, each with its own time base, and are designed to allow for recovery from channel errors. Program streams can carry a single program and are designed for lossless transmission channels. The former is therefore well suited to RF transmission, and the latter is usually used in fixed media, such as DVDs.
A packet is the basic unit of data in a transport stream and consists of sync, a packet ID (PID), various flags and related data, an optional adaptation field that carries additional stream information, and the payload. PIDs allow decoders to select from various programs and provide the means for transmitting major and minor channels in a DTV multiprogram service.
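The fixed layout of the 4-byte packet header makes PID extraction straightforward. The sketch below pulls the sync byte (0x47), the 13-bit PID and the continuity counter out of a transport packet; the function name and the fabricated header bytes are for illustration only.

```python
def parse_ts_header(packet: bytes):
    """Pull the PID, payload_unit_start flag and continuity counter
    out of the 4-byte header of a 188-byte MPEG transport packet."""
    if len(packet) < 4 or packet[0] != 0x47:
        raise ValueError("not a transport-stream packet")
    pid = ((packet[1] & 0x1F) << 8) | packet[2]   # 13-bit packet ID
    payload_unit_start = bool(packet[1] & 0x40)
    continuity_counter = packet[3] & 0x0F
    return pid, payload_unit_start, continuity_counter

# A minimal fabricated header: sync, PUSI set, PID 0x0100, CC = 7
header = bytes([0x47, 0x41, 0x00, 0x17])
print(parse_ts_header(header))   # (256, True, 7)
```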
Each program also has a program map table (PMT) that lists all of the PIDs associated with the program. This allows decoders to quickly parse the stream and decode only the elements needed to deliver one (or more) particular program at a time.
In ATSC and DVB, the packets also carry Reed-Solomon error correction, applied in addition to the inner coding of the channel modulation (trellis coding in ATSC, convolutional coding in DVB).
Another important TS element is the program clock reference (PCR), which is used to synchronize the decoder and the display to that of the original encoder. The PCR can be thought of as a snapshot of the master clock used to generate the original stream. Through the use of PCRs and presentation time stamps (PTS), it is possible to ensure correct playback, even if the encoding and decoding are done at different points in time. A properly designed decoder can also use these to assure correct audio/video synchronization.
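The PCR itself is split across two fields: a 33-bit base counted in 90kHz units and a 9-bit extension counted in 27MHz units, which together give a snapshot of the 27MHz system clock. A quick sketch of the arithmetic (the function name is invented for the example):

```python
PCR_CLOCK_HZ = 27_000_000   # MPEG-2 system clock

def pcr_to_seconds(base_90khz, ext_27mhz):
    """Reconstruct the 27MHz clock snapshot from the 33-bit base
    (90kHz units) and 9-bit extension (27MHz units) fields."""
    return (base_90khz * 300 + ext_27mhz) / PCR_CLOCK_HZ

# One second into the stream: base = 90,000 ticks, extension = 0
print(pcr_to_seconds(90_000, 0))   # 1.0
```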
Other forms of multiplexing
Orthogonal frequency-division multiplexing (OFDM) is a multicarrier modulation scheme used in many applications, including broadcasting, DSL and Wi-Fi communications. With OFDM, a large number of closely spaced orthogonal subcarriers transport data from multiple parallel data streams or channels. Here, there are multiple levels of multiplexing. The modulation system itself uses multiplexing even if there is only one program.
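At its heart, an OFDM modulator is an inverse DFT: each data symbol is placed on its own subcarrier, and the subcarriers are summed into one time-domain symbol; the receiver runs a forward DFT to separate them again. The toy sketch below (naive DFTs in pure Python, with invented function names; real systems use an FFT plus a cyclic prefix) shows that the subcarriers come apart cleanly because they are orthogonal.

```python
import cmath

def idft(symbols):
    """Inverse DFT: place one data symbol on each orthogonal
    subcarrier and sum them into one time-domain OFDM symbol."""
    n = len(symbols)
    return [sum(s * cmath.exp(2j * cmath.pi * k * t / n)
                for k, s in enumerate(symbols)) / n
            for t in range(n)]

def dft(samples):
    """Forward DFT: the receiver separates the subcarriers again."""
    n = len(samples)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                for t, x in enumerate(samples))
            for k in range(n)]

tx = [1+0j, -1+0j, 1+0j, 1+0j]     # one data symbol per subcarrier
rx = dft(idft(tx))                  # modulate, then demodulate
print([round(s.real) for s in rx])  # [1, -1, 1, 1]
```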
Code division multiple access (CDMA), a spread-spectrum technique used in some cell phone services and in GPS, takes a different approach: rather than separating signals in time or frequency, each transmitter multiplies its data by a unique pseudorandom spreading code, and the receiver recovers the signal by correlating against that same code. (A related spread-spectrum method, frequency hopping, instead jumps among numerous narrowband RF channels in a code-determined sequence.) With an enormous number of possible spreading codes, CDMA communications are both secure and robust.
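Direct-sequence spreading can be demonstrated with two rows of a Walsh-Hadamard matrix as orthogonal codes. In this toy sketch (function and constant names invented for the example), two users' chip streams are simply added together on the channel, yet each receiver recovers only its own bits because the other code correlates to zero.

```python
# Two orthogonal spreading codes (rows of a 4x4 Walsh-Hadamard matrix)
CODE_A = [1, 1, 1, 1]
CODE_B = [1, -1, 1, -1]

def spread(bits, code):
    """Multiply each data bit (+/-1) by the chip sequence."""
    return [b * c for b in bits for c in code]

def despread(signal, code):
    """Correlate the composite signal against one code to recover
    that user's bits; the other user's signal cancels out."""
    n = len(code)
    return [1 if sum(signal[i + j] * code[j] for j in range(n)) > 0 else -1
            for i in range(0, len(signal), n)]

# Two users share the channel by simple addition of their chips:
composite = [a + b for a, b in zip(spread([1, -1], CODE_A),
                                   spread([-1, -1], CODE_B))]
print(despread(composite, CODE_A))   # [1, -1]  (user A's data)
print(despread(composite, CODE_B))   # [-1, -1] (user B's data)
```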
Not to be overlooked is the network router, a special form of multiplexer/demultiplexer that steers IP packets on a LAN, usually over Ethernet. This kind of multiplexer, however, usually has little interaction with any of the information within the packets, other than possibly reassigning IP addresses.
Thankfully, most of the complexity of muxing is handled by equipment designers. In practice, most installations allow a set-and-forget approach — that is, until systems are upgraded. Then, it's back to the user manual to find the elusive setting that will get one more program into the mux. For the greatest flexibility, have a good stream analyzer that will show you the constituent elements of your mux!
Aldo Cugnini is a consultant in the digital television industry.
Send questions and comments to: firstname.lastname@example.org