While many frame-compatible formats are used to send two 3-D images to consumer TV sets, the data rates at which these images are transmitted have to be carefully managed to ensure that the images stay in sync. This is usually done inside an on-site production truck or within a remote headend facility using an MPEG-4 encoder, chosen for its higher efficiency compared with the more established MPEG-2, and a complementary multiplexer. Add another 2-D HD feed to the backhaul, however, and things can really get interesting.
Digital Transport Agnostic Gateway Solutions (DTAGS), a production services company in Tulsa, OK, worked with ESPN to televise last year’s Summer X Games and the MLB’s Home Run Derby live from Anaheim, CA, in 3-D. Both events were broadcast on ESPN's 3-D channel using two separate MPEG-4 signals; the 2-D side used MPEG-2 encoding. DTAGS had to send a total of three feeds, one 2-D and two 3-D, back to ESPN’s headquarters in Bristol, CT, via fiber, but only the 2-D feed had a satellite backup, due to cost.
The company used one Ericsson EN8090 MPEG-4 encoder per signal. This was necessary not only to limit the video file size, but also to make room for the increased number of audio elements now associated with most 3-D productions. Company President Mike Burk said they used to send stereo pairs (two to four channels) of audio. Today they send 16 channels of audio to accommodate 5.1 surround, stereo and international feeds.
Many would agree that two MPEG-2 signals at 40 Mb/s can look nearly identical in picture quality to dual MPEG-4 streams at roughly 10 Mb/s. The idea is to make room for these bandwidth-hungry 1080p/60 (or 50) signals.
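The arithmetic behind that trade-off is simple. The sketch below uses the illustrative per-signal rates cited above (assumed figures, not any broadcaster's actual encoder settings) to compare the total backhaul load for the three-feed 2-D/3-D scenario under each codec:

```python
# Back-of-the-envelope backhaul bandwidth for one 2-D feed plus
# left- and right-eye 3-D feeds. Rates are illustrative assumptions.

MPEG2_RATE_MBPS = 40   # assumed per HD signal, MPEG-2
MPEG4_RATE_MBPS = 10   # assumed per HD signal, MPEG-4/AVC

feeds = 3  # one 2-D feed plus left- and right-eye 3-D feeds

total_mpeg2 = feeds * MPEG2_RATE_MBPS
total_mpeg4 = feeds * MPEG4_RATE_MBPS

print(f"All MPEG-2: {total_mpeg2} Mb/s")   # 120 Mb/s
print(f"All MPEG-4: {total_mpeg4} Mb/s")   # 30 Mb/s
```

At these assumed rates, the all-MPEG-4 backhaul needs a quarter of the capacity, which is what frees up room for the extra HD feed.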
The MSG Network and YES Network both completed 3-D telecasts in 2010 using MPEG-2 NetVX encoders from Harris. For a Yankees game televised last July, the YES crew took the left- and right-eye camera feeds and fed them into a RealD encoder to assemble the side-by-side (frame-compatible) 3-D format. The result was then passed through a conventional MPEG-2 encoder as a single HD-SDI signal and sent on to DirecTV and other pay-TV operators.
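To make the frame-compatible step concrete, here is a minimal sketch of side-by-side packing: each eye's image is horizontally subsampled to half width, and the two halves share one normal-sized frame. Frames are modeled as toy lists of pixel rows, and the function name and subsampling choice are illustrative assumptions, not RealD's actual processing.

```python
# Minimal sketch of side-by-side frame-compatible packing:
# halve each eye horizontally, then place them in one frame.

def pack_side_by_side(left_frame, right_frame):
    packed = []
    for left_row, right_row in zip(left_frame, right_frame):
        half_left = left_row[::2]    # keep every other column
        half_right = right_row[::2]
        packed.append(half_left + half_right)
    return packed

# 2x4 toy frames: the packed frame keeps the original 2x4 dimensions,
# so it can ride through a standard HD-SDI/MPEG-2 chain unchanged.
left = [["L0", "L1", "L2", "L3"], ["L4", "L5", "L6", "L7"]]
right = [["R0", "R1", "R2", "R3"], ["R4", "R5", "R6", "R7"]]
print(pack_side_by_side(left, right))
# [['L0', 'L2', 'R0', 'R2'], ['L4', 'L6', 'R4', 'R6']]
```

The appeal of this "panelized" approach is visible in the output: once packed, the 3-D picture is just an ordinary single HD frame as far as the downstream encoder is concerned.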
Alternately, there has been some testing of a mixed approach: MPEG-2 carries the right-eye part of the stereo image, which can then double as the feed for the millions of 2-D HD sets in consumers’ homes, while MPEG-4 is used for the second feed, usually the left eye. However, many in the industry advise against this method because the latency difference between the two compression standards amounts to a couple of frames. H.264/MPEG-4 AVC is a block-oriented, motion-compensation-based codec standard, while MPEG-2 combines lossy video compression with lossy audio data compression. By most accounts, MPEG-4 is twice as efficient as MPEG-2 at reducing moving-picture file sizes.
Changing the data rate of one side with an encoder might work, but a typical set-top box could have trouble recognizing the synchronized 3-D signal.
“We’ve done testing with both the right and left eye compressed in MPEG-2, but typically MPEG-4 is used for both signals,” Burk said. “Using a mix of compression formats is not really an advantage because the latency is going to be different between the two (signals).”
Burk added that perhaps a frame-delay device could be used to slow one side or the other down to match, but he isn't sure the stereo image would remain consistent over the course of a three-hour telecast.
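A frame-delay device of the kind Burk mentions can be sketched as a fixed-length FIFO buffer on the lower-latency path; the two-frame offset below is an assumption for illustration, not a measured MPEG-2 vs. MPEG-4 difference, and the class is hypothetical rather than any vendor's product.

```python
from collections import deque

# Sketch of a frame-delay buffer: hold the faster-arriving eye so
# both eyes leave the device in lockstep. The two-frame offset is
# an illustrative assumption.

class FrameDelay:
    def __init__(self, delay_frames):
        # Pre-fill with blanks so output starts delay_frames late.
        self.buffer = deque([None] * delay_frames)

    def push(self, frame):
        self.buffer.append(frame)
        return self.buffer.popleft()   # frame from delay_frames ago

delay = FrameDelay(delay_frames=2)
for n in range(5):
    fast_out = delay.push(n)                 # low-latency path, re-delayed
    slow_out = n - 2 if n >= 2 else None     # encoder path 2 frames behind
    print(fast_out, slow_out)                # pairs match once buffer fills
```

Matching the average delay this way is the easy part; Burk's concern is whether the pair would stay locked over the course of a three-hour telecast.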
“There are several different methods for transmitting a 3-D broadcast,” Burk said. “You could use a ‘panelized’ method (side by side or top/bottom), which hits the encoder as a single HD-SDI signal, or you can send separate left-eye and right-eye signals, which is what ESPN does.”
ESPN likes to maintain unique production values for 2-D and 3-D viewers, complete with different announcers and camera angles for each, partly due to the fact that 3-D camera positions are limited. The separate left- and right-eye signals are sent to Bristol where they are multiplexed and sent out in a frame-compatible form.
Transmitting MPEG-2 also limits you in terms of bandwidth. A 2-D/3-D simulcast requires three separate signals, which can be problematic to send over satellite; fiber lines are often used to backhaul the signals instead.
“The real challenge with 3-D broadcasts is that you have to maintain the eye synch,” Burk said. “It’s imperative that those two feeds are in unison with no latency or the 3-D simply won’t perform properly.”