Transition to Digital: Understanding multiplexing

By Michael Robin

Since the beginning of telecommunications in the early 20th century, telecom carriers and broadcasters have striven to transmit as much information as possible while conserving the limited bandwidth of the media they use — that is, to maximize throughput. They have succeeded in this effort by developing various techniques that can be grouped under the term “multiplexing.”

The analog world: frequency-division multiplexing

Analog multiplexing has been with us since the 1930s, when the rapid increase in telephone traffic required the development of techniques allowing for the simultaneous transmission of multiple channels on a single telecommunications medium.


Figure 1. Details of NTSC FDM spectrum around the chrominance subcarrier

The method used was frequency-division multiplexing (FDM). One of the most successful applications of FDM is the frequency interleaving of the chrominance and luminance information that results in the composite NTSC video signal. This process allows the simultaneous transmission of luminance and chrominance values in a shared 4.2 MHz bandwidth. The system takes advantage of the fact that the luminance energy is concentrated in discrete spectrum clusters spaced at Fh (the horizontal scanning frequency), and it inserts equally discrete, but Fh/2-displaced, spectrum clusters of a suppressed-carrier, quadrature-modulated signal conveying the B-Y and R-Y color-difference information. The chrominance spectrum displacement of Fh/2 is achieved by using a chrominance subcarrier whose frequency is an odd multiple of Fh/2. The chosen subcarrier frequency is Fsc = 455 × Fh/2 ≈ 3.58 MHz. The result is an interleaved spectrum with chrominance and luminance clusters spaced Fh/2 apart, as shown in Figure 1.
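
As a quick numeric check, the sketch below (Python) derives Fsc from Fh and shows that the chrominance clusters fall exactly halfway between neighboring luminance clusters. The cluster index 227, chosen to pick luminance clusters near the subcarrier, is illustrative and not from the text.

```python
# Illustrative sketch of NTSC frequency interleaving (assumed values).
FH = 4_500_000 / 286          # NTSC horizontal scanning frequency, ~15,734.27 Hz
FSC = 455 * FH / 2            # chrominance subcarrier, ~3.579545 MHz

print(f"Fh  = {FH:,.2f} Hz")
print(f"Fsc = {FSC:,.2f} Hz")

# Luminance energy clusters sit at multiples of Fh; chrominance sidebands
# cluster at Fsc +/- n*Fh. Because Fsc is an odd multiple of Fh/2, each
# chrominance cluster falls exactly halfway between two luminance clusters.
for n in range(3):
    luma = (227 + n) * FH     # luminance clusters near the subcarrier (index 227 is illustrative)
    chroma = FSC + n * FH     # neighboring chrominance clusters
    print(f"luma {luma:,.0f} Hz   chroma {chroma:,.0f} Hz   offset {chroma - luma:,.0f} Hz (= Fh/2)")
```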

While frequency-division multiplexing is relatively easy to implement, demultiplexing is relatively difficult. A perfect decoder requires complex filtering to separate the luminance and chrominance spectral components, and unavoidable design compromises result in chrominance-to-luminance and luminance-to-chrominance crosstalk. In addition, less-than-ideal transmission channel characteristics introduce high-frequency delays, which appear as chrominance-versus-luminance delay, and nonlinear distortions, which appear as differential phase and differential gain, all of which affect the accuracy of the color rendition.

The digital world: time-division multiplexing

Digital multiplexing uses the concept of time-division multiplexing (TDM). Here, several signals (related or unrelated) are sampled at a rate high enough to ensure that no information is lost. The samples are time-compressed (shortened in duration) as required and are time-division multiplexed for sequential transmission through a common medium. The digital multiplexer interleaves a number of lower-speed signals to form a higher-speed signal.
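
As a minimal illustrative sketch (Python, assuming equal-rate channels and ignoring time compression), a time-division multiplexer is essentially a round-robin interleaver:

```python
# Minimal round-robin time-division multiplexer (illustrative only).
from itertools import chain

def tdm_mux(*channels):
    """Interleave equal-length sample sequences word by word."""
    return list(chain.from_iterable(zip(*channels)))

# Three low-speed channels become one higher-speed stream:
a = ["A0", "A1", "A2"]
b = ["B0", "B1", "B2"]
c = ["C0", "C1", "C2"]
print(tdm_mux(a, b, c))   # ['A0', 'B0', 'C0', 'A1', 'B1', 'C1', 'A2', 'B2', 'C2']
```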

The advent of digital signal processing in professional video and audio equipment has led to time-division multiplexing of various data in the studio environment.


Figure 2. Time division multiplexing of digital 4:2:2 data

Figure 2 shows how the TDM technique is applied in CCIR 601 4:2:2 digital video. The first step in this process is the sampling and quantizing of gamma-corrected analog luminance (E'Y) and scaled color-difference (E'CB and E'CR) signals. (The latter are sometimes referred to as PB and PR.)

  • The E'Y analog luminance signal is low-pass filtered at 5.75 MHz. Then it is sampled at 13.5 MHz, with a precision of 10 bits per sample. This results in a bit-parallel digital luminance signal (Y) with a data rate of 13.5 MWords/s. The words have a duration of 1/13.5 MHz = 74 ns. There are 858 Y samples per total scanning line, numbered Y0 to Y857.
  • The E'CB analog blue color-difference signal is low-pass filtered at 2.75 MHz. Then it is sampled at 6.75 MHz, with a precision of 10 bits per sample. This results in a bit-parallel digital blue color-difference signal (CB), with a data rate of 6.75 MWords/s. The words have a duration of 1/6.75 MHz, or 148 ns. There are 429 CB samples per total scanning line, numbered CB0 to CB428. The CB samples are co-sited with even-numbered Y samples (Y0, Y2, Y4 …).
  • The E'CR analog red color-difference signal is low-pass filtered at 2.75 MHz. Then it is sampled at 6.75 MHz, with a precision of 10 bits per sample. This results in a bit-parallel digital red color-difference signal (CR), with a data rate of 6.75 MWords/s. The words have a duration of 1/6.75 MHz, or 148 ns. There are 429 CR samples per total scanning line, numbered CR0 to CR428. The CR samples are co-sited with even-numbered Y samples (Y0, Y2, Y4 …).

Now, here's where the multiplexing takes place. The three 10-bit, bit-parallel data streams are sequentially clocked out word by word, starting with CB0. The sequence is CB0, Y0, CR0, Y1, CB1, Y2, CR1, etc. The last sample of the line is Y857. The result of this time-division multiplexing is that the outgoing data rate is the sum of the incoming data rates, and only one multi-pair cable is needed for signal distribution.
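
As a rough sketch (Python, using the sample counts from the list above), the per-line word ordering and the resulting word count can be reproduced as follows:

```python
# Sketch of the 4:2:2 multiplex word order for one total line
# (858 Y, 429 CB and 429 CR samples, as listed above).
def mux_422_line(y, cb, cr):
    """Interleave one line as CB0, Y0, CR0, Y1, CB1, Y2, CR1, Y3, ..."""
    words = []
    for i in range(len(cb)):              # one CB/CR pair for every two Y words
        words += [cb[i], y[2 * i], cr[i], y[2 * i + 1]]
    return words

y  = [f"Y{n}"  for n in range(858)]
cb = [f"CB{n}" for n in range(429)]
cr = [f"CR{n}" for n in range(429)]

line = mux_422_line(y, cb, cr)
print(line[:8])                # ['CB0', 'Y0', 'CR0', 'Y1', 'CB1', 'Y2', 'CR1', 'Y3']
print(line[-1], len(line))     # Y857 1716 -> 1716 words/line x ~15,734 lines/s = 27 MWords/s
```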


Figure 3. Time division multiplexing of two AES/EBU audio data streams mapped as a sequence of three words — X, X+1 and X+2 — into the horizontal ancillary data space of a 4:2:2 data stream

The time-division multiplexed bit-parallel data rate is 27 MWords/s, and the duration of each sample is 1/27 MHz, or 37 ns. But distributing 4:2:2 multiplexed digital data over a multi-pair cable is costly and cumbersome, especially when using a routing switcher. To facilitate signal distribution, the bits can be read out sequentially and fed to a single coaxial cable, resulting in a bit-serial digital signal with a data rate of 270 Mbits/s. This simplifies signal distribution at the expense of increased bandwidth.

There are 1716 samples per total line (858 Y, 429 CB and 429 CR) and 1440 samples per active line (720 Y, 360 CB and 360 CR). The horizontal blanking duration is equal to 1716 - 1440, or 276 samples. The horizontal sync is not sampled. Instead, two 4-word timing-reference signals (TRSs) are sent: one identifying the end of active video (EAV) and the other identifying the start of active video (SAV). This leaves an overhead of 268 horizontal-blanking-interval samples available for transporting other types of information, called horizontal ancillary data (HANC).
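
The line budget and serial rate quoted in the last two paragraphs can be verified with a few lines of arithmetic (a sketch using the figures from the text):

```python
# Arithmetic behind the 4:2:2 line budget (525/59.94 system, figures from the text).
TOTAL_SAMPLES  = 858 + 429 + 429     # Y + CB + CR words per total line  = 1716
ACTIVE_SAMPLES = 720 + 360 + 360     # words per active line             = 1440
BLANKING       = TOTAL_SAMPLES - ACTIVE_SAMPLES   # 276 words
TRS            = 2 * 4                             # EAV + SAV, 4 words each
HANC_WORDS     = BLANKING - TRS                    # 268 words per line

WORD_RATE_HZ   = 27_000_000                        # multiplexed word rate
SERIAL_RATE    = WORD_RATE_HZ * 10                 # 10 bits/word -> 270 Mbit/s

print(BLANKING, HANC_WORDS, SERIAL_RATE)           # 276 268 270000000
```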

HANC

HANC data are formatted in packets consisting of a header, followed by the ancillary data and ending with a checksum (CS). In the absence of a header, it is assumed that no ancillary data are carried. The header consists of six words. The first three — 000, 3FF, 3FF — are values that cannot be assumed by other data, and they signal the presence of ancillary data. The last three header words are data identification (DID), data block number (DBN) and data count (DC). After the header, a maximum of 255 ancillary data words are permitted. Figure 3 shows details of the digital 4:2:2 horizontal blanking interval and the manner in which two AES/EBU digital audio data streams (four individual audio channels) can be formatted to fit into one ancillary data packet. SMPTE Standard 272M defines ways to multiplex (embed) up to eight AES/EBU data streams (16 individual audio channels) in the HANC data space.
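
The packet layout can be sketched as follows (illustrative Python; the DID, DBN and user-data values are placeholders, and the checksum shown is a simplified 9-bit sum rather than the exact SMPTE formulation):

```python
# Sketch of an ancillary data packet as described above (placeholder values).
def make_anc_packet(did, dbn, user_data):
    assert len(user_data) <= 255, "at most 255 ancillary data words per packet"
    header = [0x000, 0x3FF, 0x3FF,        # ancillary data flag: 000, 3FF, 3FF
              did, dbn, len(user_data)]   # DID, data block number, data count
    checksum = sum(header[3:] + user_data) & 0x1FF   # simplified 9-bit checksum
    return header + user_data + [checksum]

packet = make_anc_packet(did=0x2FF, dbn=0x200, user_data=[0x100, 0x180, 0x1FF])
print([hex(w) for w in packet])
```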

SMPTE 272M achieves this by grouping the eight AES/EBU data streams into four audio groups. The HANC capacity of the 4:2:2 digital format is on the order of 42 Mbits/s. This figure is obtained as follows:

268 Words/line × 525 lines/frame × 29.97 frames/s × 10 bits/word ≈ 42.17 Mbits/s.

Certain exclusions, such as lines 10 and 11, reduce this value by 10 percent to 20 percent. Given an AES/EBU data rate of 3.072 Mbits/s (before biphase-mark channel coding), eight AES/EBU data streams would require 8 × 3.072 Mbits/s = 24.576 Mbits/s, so there is ample HANC space left over for other ancillary data.
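
The same arithmetic can be checked directly (a sketch using the figures above; the AES/EBU rate follows from 48 kHz sampling, 32-bit subframes and two channels per stream):

```python
# Capacity check for embedded audio in the HANC space (figures from the text).
HANC_WORDS_PER_LINE = 268
LINES_PER_FRAME     = 525
FRAME_RATE          = 30 / 1.001        # ~29.97 frames/s
BITS_PER_WORD       = 10

hanc_capacity  = HANC_WORDS_PER_LINE * LINES_PER_FRAME * FRAME_RATE * BITS_PER_WORD
aes_per_stream = 48_000 * 32 * 2        # 48 kHz x 32-bit subframes x 2 channels = 3.072 Mbit/s
audio_demand   = 8 * aes_per_stream     # eight AES/EBU streams

print(f"HANC capacity  ~ {hanc_capacity / 1e6:.2f} Mbit/s")    # ~42.17 Mbit/s
print(f"8 AES/EBU streams = {audio_demand / 1e6:.3f} Mbit/s")  # 24.576 Mbit/s
```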

Ancillary data can also be embedded in the vertical blanking interval as vertical ancillary data (VANC). Among the VANC data are error detection and handling (EDH) data, as well as vertical-interval time code (VITC). Audio is usually embedded only in the HANC space.

Michael Robin, former engineer with the Canadian Broadcasting Corp.'s engineering headquarters, is an independent broadcast consultant located in Montreal, Canada. He is co-author of Digital Television Fundamentals, published by McGraw-Hill.

Send questions and comments to: michael_robin@primediabusiness.com
