The Care and Feeding of 3D Signals

May 3, 2011

ORANGE, CONN.—If transporting live stereoscopic video streams for 3D television were as simple as using two standard video circuits, the decision to broadcast live 3D would be purely a matter of economics. Producers could simply book two contribution circuits in place of one and deliver their content to the production studio.

(L to R) Fig. 1A: Dual Stream Method, Fig. 1B: Frame-compatible Method
The reality, unfortunately, is much more difficult. 3D video signals require a great deal of care throughout the transport network to ensure that they arrive in a usable form at their destination.

To create the illusion of 3D from stereoscopic video streams, the human eye/brain combination must be persuaded that the two different images shown to the left and right eyes come from the same original scene, with just a slight horizontal offset. Maintaining this illusion requires that essentially every other aspect of the video signal be identical: the exposure, the color balance, the resolution, and of course the synchronization between the two image streams. Transmission systems must therefore be designed to avoid any processing that alters the appearance of one stream relative to the other.


One of the most important issues in transporting 3D video is maintaining frame-accurate synchronization between the two video streams. If the left eye image gets out of timing alignment with the right eye image by even one frame, the 3D illusion can be lost. In transmission, loss of synchronization can occur for a number of reasons, including:

Different Routes: If the two video streams take different routes through the network, they may arrive at the destination at different times. This can occur if one route is longer than the other, or if one stream traverses more devices.

Data Loss: If one stream suffers bit errors or other data loss in transit, error-correction techniques such as ARQ (Automatic Repeat reQuest) can delay that stream while lost packets are retransmitted.

Queuing Delay: IP routers commonly use packet queues to manage the loads on telecom channels. Delays can occur if one stream passes through a congested router or is preempted by higher-priority traffic.
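The timing hazards above all show up the same way at the receiver: one eye's frames arrive offset from the other's. A minimal sketch of how such skew could be detected from per-frame arrival timestamps follows; the function names, the 30 fps frame rate, and the one-frame tolerance are illustrative assumptions, not any broadcaster's actual monitoring tooling.

```python
# Hypothetical sketch: flagging left/right frame misalignment from
# per-frame arrival timestamps (all names and values are assumptions).

FRAME_PERIOD_MS = 1000 / 30  # assume a 30 fps contribution feed

def frame_offset(left_ts_ms, right_ts_ms):
    """Return the right-minus-left skew, rounded to whole frames."""
    return round((right_ts_ms - left_ts_ms) / FRAME_PERIOD_MS)

def check_alignment(left_arrivals, right_arrivals):
    """List (frame index, offset) for pairs skewed by a full frame or more."""
    errors = []
    for i, (l, r) in enumerate(zip(left_arrivals, right_arrivals)):
        off = frame_offset(l, r)
        if off != 0:
            errors.append((i, off))
    return errors

# Right stream delayed by one extra frame from frame 2 onward,
# e.g. after taking a longer route through the network:
left = [i * FRAME_PERIOD_MS for i in range(5)]
right = [t + (FRAME_PERIOD_MS if i >= 2 else 0)
         for i, t in enumerate(left)]
print(check_alignment(left, right))  # [(2, 1), (3, 1), (4, 1)]
```

Once the skew reaches a whole frame, as in frames 2 through 4 here, the 3D illusion is at risk and the receiver must delay the early stream to restore alignment.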

Compression can also introduce impairments in 3D video streams. For example, the two streams could be compressed by MPEG encoders that are not synchronized with respect to their GOP (Group of Pictures) structure. One frame of the left eye stream might then be encoded as an I-frame while the corresponding frame of the right eye stream is compressed as a B-frame. The resulting quality mismatch could cause the viewer to notice, consciously or subconsciously, a difference between the two image streams, impairing the 3D effect.
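To make the GOP-mismatch problem concrete, here is a small sketch, under assumed values: a common 12-frame IBBP pattern and a two-frame start offset between the two encoders. It simply counts how often the left and right frames would receive different picture types.

```python
# Illustrative sketch of unsynchronized GOP structures (the pattern
# and the two-frame encoder offset are assumptions for demonstration).

GOP = "IBBPBBPBBPBB"  # one common 12-frame GOP structure

def frame_type(frame_index, gop_offset=0):
    """Picture type an encoder assigns, given its GOP phase offset."""
    return GOP[(frame_index + gop_offset) % len(GOP)]

# Left encoder starts its GOP two frames before the right encoder:
mismatches = [i for i in range(24)
              if frame_type(i) != frame_type(i, gop_offset=2)]
print(f"{len(mismatches)} of 24 frames get different picture types")
```

With this offset, most frame pairs end up with different picture types, so one eye routinely gets a more heavily compressed frame than the other, exactly the asymmetry the article warns about.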


Two primary alternatives are available today to broadcasters for contribution video: dual stream transport and frame compatible transport. As shown in Fig. 1A, the dual stream approach creates two distinct video streams, one for the left eye image and one for the right eye image. Fig. 1B shows the same 3D sequence in a side-by-side frame compatible mode, where the left eye and right eye images are combined into a single video frame prior to transport.
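The side-by-side packing of Fig. 1B can be sketched in a few lines: each eye is decimated to half horizontal resolution and the two half-width pictures share one standard frame. This is a simplified illustration only; a real packer would low-pass filter before subsampling rather than drop alternate columns, and the tiny frames below stand in for full HD pictures.

```python
# Rough sketch of side-by-side frame-compatible packing (simplified:
# real equipment filters before subsampling; this drops columns).

def half_width(frame):
    """Keep every other column, halving horizontal resolution."""
    return [row[::2] for row in frame]

def pack_side_by_side(left, right):
    """Combine left/right eyes into one frame-compatible frame."""
    return [l + r for l, r in zip(half_width(left), half_width(right))]

# Two tiny 2x4 'frames' stand in for full-resolution pictures:
left = [["L"] * 4 for _ in range(2)]
right = [["R"] * 4 for _ in range(2)]
packed = pack_side_by_side(left, right)
print(packed[0])  # ['L', 'L', 'R', 'R'] -- half left, half right
```

Because both eyes now travel inside one frame, they cannot lose sync or take different network routes, but each eye permanently gives up half its horizontal resolution, which is the trade-off discussed below.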

For contribution, ESPN has chosen to use the dual-stream approach, combined with some safeguards, according to Emory Strilkauskas, Principal Engineer, Transport Technologies at ESPN.

"For contribution, we elected to deliver full resolution left/right feeds from the venue to our production facility," he said. "This preserves the highest picture quality for use in our 3D workflow. The challenge with this choice is maintaining frame alignment between the left and right picture. The easiest way to accomplish that with what is available is to combine both signals into one MPTS [Multi Program Transport Stream] which ensures that both signals always travel the same path.

"Several manufacturers have also worked on the encode/decode process to ensure that that is also consistent between the left and right signal," he added. "For us, contribution requires us to double our bandwidth. We use the newest compression solutions and modulation schemes to offset some of that cost."

Rick Ackermans, Engineering Technology Fellow at Turner Broadcasting, on the other hand, advocates the frame-compatible transport approach. "Given the current state of the available technology, I have had good success in transporting contribution video using frame-compatible stereoscopic streams," he said. "I feel that this approach ensures that video framing and compression GOP are always synchronized between left and right eye signals. While the loss of resolution is an admitted trade-off, the current economics of 3D make it a worthwhile method for production and transmission at this time."

Clearly, the approaches being used for live 3D content contribution are evolving. There are trade-offs in using either dual stream or frame-compatible transport in areas such as image quality, cost of equipment and bandwidth consumption. As these technologies mature, look for improvements in encoder and transport technologies that will help bring down the costs of 3D to make it feasible for more live events.
