The Care and Feeding of 3D Signals
Wes Simpson / 05.03.2011 10:10 AM

ORANGE, CONN.—If transporting live stereoscopic video streams for 3D television were as simple as using two standard video circuits, then the decision to broadcast live 3D would be purely a matter of economics. Producers could simply book two contribution circuits in place of one and deliver their content to the production studio.

(L to R) Fig. 1A: Dual Stream Method, Fig. 1B: Frame-compatible Method
The reality, unfortunately, is much more difficult. 3D video signals require a great deal of care throughout the transport network to ensure that they arrive in a usable form at their destination.

To create the illusion of 3D using stereoscopic video streams, the human eye/brain combination must be persuaded to see the two different images shown to each eye as being from the same original scene, with just a slight horizontal offset. Maintaining this illusion requires that essentially all other aspects of the video signal be identical—the exposure, the color balance, the resolution, and of course the synchronization between the two sets of image streams. Transmission systems must be designed to avoid any processing that alters the appearance of one stream relative to the other.


One of the most important issues in transporting 3D video is maintaining frame-accurate synchronization between the two video streams. If the left eye image gets out of timing alignment with the right eye image by even one frame, the 3D illusion can be lost. In transmission, loss of synchronization can occur for a number of reasons, including:

Different Routes: If the two video streams take different routes through the network, they may arrive at the destination at different times. This can occur if one route is longer than the other, or if one stream traverses more devices.

Data Loss: If one stream is affected by bit errors or other data loss during transmission, its delivery can be delayed by error-recovery techniques such as ARQ (Automatic Repeat reQuest), which retransmit the lost data.

Queuing Delay: IP routers commonly use packet queues to manage the loads on telecom channels. Delays can occur if one stream passes through a congested router or is preempted by higher-priority traffic.
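To see why even small path differences matter, consider a sketch of the realignment a receiver must perform. This is a simplification, not any vendor's actual implementation: it assumes each frame arrives tagged with its original capture timestamp (real systems would use RTP timestamps or SMPTE timecode), and it simply pairs frames whose timestamps match, discarding any frame whose mate never arrives in time.

```python
# Sketch: re-pairing left/right eye frames after transport, assuming each
# frame carries its capture timestamp. Frames whose mate is missing (lost,
# or still in flight on a slower path) cannot be displayed in 3D.

def align_streams(left, right):
    """Pair frames with matching capture timestamps.

    left, right: lists of (timestamp, frame_data) tuples, possibly offset
    because the two streams took different routes through the network.
    Returns a list of (timestamp, left_frame, right_frame) tuples.
    """
    right_by_ts = {ts: frame for ts, frame in right}
    pairs = []
    for ts, l_frame in left:
        r_frame = right_by_ts.get(ts)
        if r_frame is not None:
            pairs.append((ts, l_frame, r_frame))
    return pairs

# One stream delayed by one frame relative to the other:
left = [(0, "L0"), (1, "L1"), (2, "L2")]
right = [(1, "R1"), (2, "R2"), (3, "R3")]
print(align_streams(left, right))  # only timestamps 1 and 2 pair up
```

The point of the sketch is that any unmatched frame is wasted bandwidth at best and a broken 3D illusion at worst, which is why broadcasters go to such lengths to keep the two streams on a common path.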

Compression can also cause impairments in 3D video streams. For example, the two streams could be compressed using MPEG encoders that are not synchronized with respect to their GOP (Group of Pictures) structure. This would allow, say, one frame of the left eye stream to be encoded as an I-frame while the corresponding frame of the right eye stream is compressed as a B-frame. Such a mismatch could cause the viewer to notice, consciously or subconsciously, a difference between the two image streams, impairing the 3D effect.
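A simple way to think about this check is frame-by-frame comparison of picture types. The sketch below is illustrative only: it assumes the frame types of each stream have already been parsed from the elementary stream headers in display order (the parsing itself is omitted), and it merely verifies that frame N has the same I/P/B type in both streams.

```python
# Sketch: verifying that two encoders used matching GOP structures, given
# each stream's picture types in display order (assumed already parsed).

def gops_aligned(left_types, right_types):
    """True only if every frame position has the same I/P/B type in both streams."""
    return len(left_types) == len(right_types) and all(
        l == r for l, r in zip(left_types, right_types)
    )

reference = ["I", "B", "B", "P", "B", "B", "P"]
shifted = ["B", "B", "P", "B", "B", "P", "I"]  # same GOP pattern, offset by one frame

print(gops_aligned(reference, reference))  # True: I-frames meet I-frames
print(gops_aligned(reference, shifted))    # False: a left I-frame meets a right B-frame
```

Note that the shifted stream has the identical GOP pattern; only the offset differs, which is exactly the failure mode described above.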


Two primary alternatives are available today to broadcasters for contribution video: dual stream transport and frame compatible transport. As shown in Fig. 1A, the dual stream approach creates two distinct video streams, one for the left eye image and one for the right eye image. Fig. 1B shows the same 3D sequence in a side-by-side frame compatible mode, where the left eye and right eye images are combined into a single video frame prior to transport.
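The side-by-side packing of Fig. 1B can be sketched in a few lines. This is a conceptual illustration, not a production implementation: frames are represented as 2-D lists of pixel values, and each eye is decimated to half horizontal resolution by simple column dropping (a real encoder would low-pass filter before subsampling).

```python
# Sketch: building a side-by-side frame-compatible image from full-resolution
# left/right frames, each represented as a 2-D list of pixel values.

def side_by_side(left, right):
    """Pack half-width left and right images into one full-width frame."""
    packed = []
    for l_row, r_row in zip(left, right):
        # Keep every other column of each eye, then place them side by side.
        packed.append(l_row[::2] + r_row[::2])
    return packed

left = [[1, 1, 1, 1], [2, 2, 2, 2]]
right = [[9, 9, 9, 9], [8, 8, 8, 8]]
print(side_by_side(left, right))  # [[1, 1, 9, 9], [2, 2, 8, 8]]
```

Each output row is full width again, but each eye retains only half of its original columns; that halving of horizontal resolution is precisely the trade-off the frame-compatible approach accepts in exchange for guaranteed left/right synchronization.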

For contribution, ESPN has chosen to use the dual-stream approach, combined with some safeguards, according to Emory Strilkauskas, Principal Engineer, Transport Technologies at ESPN.

"For contribution, we elected to deliver full resolution left/right feeds from the venue to our production facility," he said. "This preserves the highest picture quality for use in our 3D workflow. The challenge with this choice is maintaining frame alignment between the left and right picture. The easiest way to accomplish that with what is available is to combine both signals into one MPTS [Multi Program Transport Stream] which ensures that both signals always travel the same path.

"Several manufacturers have also worked on the encode/decode process to ensure that that is also consistent between the left and right signal," he adds. "For us, contribution requires us to double our bandwidth. We use the newest compression solutions and modulation schemes to offset some of that cost."

Rick Ackermans, Engineering Technology Fellow at Turner Broadcasting, on the other hand, advocates the frame-compatible transport approach. "Given the current state of the available technology, I have had good success in transporting contribution video using frame-compatible stereoscopic streams," he said. "I feel that this approach ensures that video framing and compression GOP are always synchronized between left and right eye signals. While the loss of resolution is an admitted trade-off, the current economics of 3D make it a worthwhile method for production and transmission at this time."

Clearly, the approaches being used for live 3D content contribution are evolving. There are trade-offs in using either dual stream or frame-compatible transport in areas such as image quality, cost of equipment and bandwidth consumption. As these technologies mature, look for improvements in encoder and transport technologies that will help bring down the costs of 3D to make it feasible for more live events.


Posted by: Brian Smith
Tue, 05-03-2011 02:59 PM
I am amazed at the awkwardness of these schemes. The simplest way to address this would be pseudo-interlaced signals, where the data is treated as an interlaced payload until presentation, at which point it is reconstituted as left/right.
