3ality Digital tackles live 3-D HD stereoscopic interview for IBC2008

IBC2008 in Amsterdam next month will feature a unique satellite interview of DreamWorks Animation CEO Jeffrey Katzenberg conducted in Los Angeles.

While the content of the interview promises to be interesting, the truly remarkable thing about the exchange is that it will be shot and presented in 3-D using an HD stereoscopic setup in Los Angeles and satellite transmission services provided by Arqiva.

3ality Digital will use its 3-D HD stereoscopic camera rigs to shoot the interview. This week, “HD Technology Update” discusses the project with Steve Schklair, founder and CEO of 3ality Digital.

HD Technology Update: 3ality will shoot a 3-D stereoscopic interview of Jeffrey Katzenberg in Los Angeles that will be viewed live in Amsterdam at the RAI during IBC2008. What technical challenges are presented by a live 3-D shoot?

Steve Schklair: Leaving the broadcast side out of this for the moment, there are technical challenges presented by 3-D shooting of live action in general, because in the real world, lenses don’t match each other, cameras don’t match each other, image blocks don’t match each other, and for 3-D to really work, everything has to match perfectly.

You are using two cameras, and color has to match, shading has to match and geometry has to match absolutely perfectly in order for it to work. Normally, in 3-D live action, we solve those problems on set and then go through post production to fix any geometric aberrations that appear during shooting, especially when shooting with zoom lenses. No matter whose zoom lenses they are, they don't track on center.

When you're shooting live, in real time for broadcast, you don't have the luxury of post production. So a lot of technology had to be invented to let you shoot live-action 3-D, with everything corrected by the camera system or electronics so that it comes out geometrically aligned, pixel for pixel, right out of the back of the system.

So for the first time ever, we are shooting dead-on pixel accurate in a real-world environment. That’s a challenge.

HD Technology Update: Is that technology 3ality Digital developed?

Steve Schklair: Yes, 3ality developed a lot of tracking technology, a lot of color correction technology at the camera level, and a lot of geometric correction technology to take out any of the issues that used to be associated with 3-D, even the mistracking of zoom lenses. If you are zooming in on a subject with two lenses, they never match, because they're not built to. We had to build technology that tracks individual lenses by serial number and makes adjustments based on look-up tables of what each lens does at every point along the zoom range.
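To make the look-up-table idea concrete, here is a minimal sketch of how per-lens zoom correction might work. The serial numbers, drift values and correction model are illustrative assumptions for this sketch, not 3ality's actual data or method.

```python
import numpy as np

# Hypothetical per-lens tables keyed by serial number. Each maps zoom
# position (0.0 = wide, 1.0 = tele) to the measured optical-center
# drift (dx, dy) in pixels. All values here are made up.
LENS_LUT = {
    "SN-4417": {"zoom": [0.0, 0.25, 0.5, 0.75, 1.0],
                "dx":   [0.0, 1.2, 2.8, 4.1, 5.5],
                "dy":   [0.0, -0.4, -0.9, -1.6, -2.2]},
    "SN-4502": {"zoom": [0.0, 0.25, 0.5, 0.75, 1.0],
                "dx":   [0.0, -0.8, -1.9, -3.0, -4.2],
                "dy":   [0.0, 0.3, 0.7, 1.1, 1.5]},
}

def center_offset(serial, zoom_pos):
    """Interpolate one lens's optical-center drift at a zoom position."""
    lut = LENS_LUT[serial]
    dx = np.interp(zoom_pos, lut["zoom"], lut["dx"])
    dy = np.interp(zoom_pos, lut["zoom"], lut["dy"])
    return dx, dy

def correction_shift(left_serial, right_serial, zoom_pos):
    """Pixel shift to apply to the right eye so both centers coincide."""
    lx, ly = center_offset(left_serial, zoom_pos)
    rx, ry = center_offset(right_serial, zoom_pos)
    return lx - rx, ly - ry

# At 60 percent of the zoom range, realign the right eye to the left:
print(correction_shift("SN-4417", "SN-4502", 0.6))
```

In a live system a shift like this would be applied to every frame, with the zoom position read off the lens encoder rather than supplied by hand.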

HD Technology Update: And then correct for those on the fly?

Steve Schklair: Yes, we make those corrections in real time. We are only one frame behind the action, and that’s mostly because some of the rigs use mirrors, and we need the whole frame in order to flip it. We can’t flip the image to correct for the use of the mirror until that last scan line has drawn itself.
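The mirror correction is essentially an image flip, which shows why a full frame of latency is unavoidable: the corrected first scan line is the source's last scan line. A toy illustration, assuming the mirror inverts the image top to bottom (the flip axis depends on how the rig is built):

```python
import numpy as np

def unflip_mirrored_eye(frame):
    """Undo a beam-splitter mirror reflection by reversing scan lines.

    The output's first line is the input's last line, so nothing can
    be emitted until the entire source frame has arrived. That is the
    one frame of delay described above.
    """
    return frame[::-1, :]  # reverse scan-line order (vertical flip)

frame = np.arange(12).reshape(4, 3)   # stand-in for a 4-line image
corrected = unflip_mirrored_eye(frame)
```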

HD Technology Update: Does the distance covered by the satellite transmission introduce any challenges?

Steve Schklair: The way we do it, there are no other problems introduced. What we do is multiplex the signals into a single 2-D picture, and at least as far as the hardware is concerned, it is only looking at a 2-D picture.

So we will multiplex the images, run them through standard MPEG-2 or MPEG-4 satellite encoders, and send the signal up to the satellite and back down, all on a 2-D path. When it gets to the projector or monitor at the back end, we decode it and demultiplex it back into the 3-D images.
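The interview doesn't spell out the packing scheme, but side-by-side packing, with each eye squeezed to half width, is one common way to make a stereo pair ride a standard 2-D encoder and satellite path. A minimal sketch under that assumption:

```python
import numpy as np

def mux_side_by_side(left, right):
    """Pack two eyes into one 2-D frame by halving horizontal resolution."""
    return np.hstack([left[:, ::2], right[:, ::2]])  # keep alternate columns

def demux_side_by_side(muxed):
    """Split the packed frame back into left and right eyes at the far end."""
    w = muxed.shape[1] // 2
    return muxed[:, :w], muxed[:, w:]

left = np.zeros((1080, 1920, 3), dtype=np.uint8)
right = np.full((1080, 1920, 3), 255, dtype=np.uint8)
packed = mux_side_by_side(left, right)       # a single 1080x1920 2-D frame
l_eye, r_eye = demux_side_by_side(packed)    # two 1080x960 half-width eyes
```

Because both eyes travel inside one frame, they can never drift apart in transit.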

There have been experiments with sending two synchronized signals, one for the left eye and the other for the right, but that introduces all kinds of issues, especially if you are going over fiber and one signal is suddenly routed down some other path on the way to the back end. It throws things out of sync. We like to be in sync down to the scan line. Close to a frame isn't enough.

HD Technology Update: This is a two-camera shoot — two HD stereoscopic camera rigs, each with two cameras. Why introduce the second camera rig and what complexity does that create for switching between the sources? Is there such a thing as an HD stereoscopic video production switcher?

Steve Schklair: Again, because we are multiplexing into a 2-D signal, we are just using a standard 2-D switcher and path. We do have a switcher that will switch dual signal paths, but it isn't needed here.

The reason for using two cameras is that, visually, if we only had one camera, you'd be looking at zoom in, zoom out, recompose. That's not very interesting.

HD Technology Update: Yes, I understand the production aesthetic, but I thought perhaps you chose the setup to demonstrate that it’s possible to switch between two 3-D camera rigs.

Steve Schklair: We’re always interested in showing that it’s possible, but actually we’ve done this enough times now that we don’t question whether it’s possible. We just wanted the two cameras because we have two camera rigs available that aren’t tied up on other shows.

Because this is a two-person interview, I thought about just bringing three cameras — a wide shot and two isos. But two cameras should be enough.

HD Technology Update: I understand for this project you will multiplex the signals from each camera in the two-camera rig and switch that with a conventional switcher. But I was wondering if there is such a thing as a 3-D production switcher?

Steve Schklair: We have one; that's how we did it a couple of years ago. But now that we're dealing with a multiplexed signal, it's not necessary. We just did a large sports project and brought in a 2-D broadcast truck, and we used everything on that truck: the monitors, the switcher, the CCUs and all of the audio gear. We brought in some of our own gear and wired it in, but there was nothing 2-D on the truck that we weren't able to use on the show.

HD Technology Update: What sporting event did you produce in 3-D?

Steve Schklair: I don’t think I can talk about that yet, but it was a prize fight, so boxing.

HD Technology Update: I understand it can be visually jarring to viewers of 3-D programming during transitions, especially if there are any switcher effects used. Is that the case, and how do you avoid that issue?

Steve Schklair: If the depth of the two shots is completely uncontrolled, yes, the transitions can be jarring. A straight cut in a broadcast happens over a 30th of a second, which means you are asking people's eyes to readjust to a new depth in a 30th of a second. Then there is another cut, and they have to readjust again. Giving viewers only a 30th of a second to make each adjustment tends to be jarring and tiring after a while.

We have ways of leveling that depth so that, on a cut-for-cut basis, there is very little shift in depth, if any. If we want a shot to be deeper than the one we are coming out of, we'll transition to the new depth immediately after the cut. It's a process of leading the audience through the depth changes as opposed to cutting the audience through them.
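3ality's exact method isn't described here, but a standard way to shift perceived depth is a horizontal image translation (HIT), which changes screen parallax, eased over several frames after the cut. A hypothetical sketch:

```python
def depth_ramp(start_px, end_px, frames):
    """Yield one horizontal-shift value (in pixels) per frame.

    A hard cut asks the eyes to re-converge in 1/30 s; easing the
    parallax over several frames after the cut leads viewers to the
    new depth gradually instead.
    """
    for i in range(frames + 1):
        t = i / frames
        eased = t * t * (3 - 2 * t)  # smoothstep easing curve
        yield start_px + (end_px - start_px) * eased

# Move from 0 px to 12 px of parallax over 15 frames (half a second
# at 30 fps), applying each value as a shift of the right-eye image:
for shift in depth_ramp(0.0, 12.0, 15):
    pass  # apply `shift` to the right eye for this frame
```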

HD Technology Update: Tell me about the 3ality HD stereoscopic rig.

Steve Schklair: We have a couple of flavors of stereoscopic rigs. There are two kinds of stereo rigs that everybody in the world uses. One is a beam splitter, which puts two cameras at a 90-degree angle shooting through a half-silvered mirror. That approach lets you place the optical centers over each other, so you can shoot with a 1-inch spacing between the two cameras. If you physically put cameras side by side, the closest you can usually get them is 3 inches, because the front diameters of the lenses bump into each other, and that's as close as they are going to get.

But there are a lot of shots we want to get where we need the optical centers much closer than that. So we'll use a beam-splitter array.

For other types of shooting, certainly when things are farther away from the camera, we'll use a side-by-side rig, where you do mount the two cameras next to each other. With those, we're never looking to bring the cameras closer together than 2.5 inches to 3 inches.
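A back-of-the-envelope example, using the standard similar-triangles stereo model (the focal length and distances are assumptions, not figures from the interview), shows why close subjects demand the beam splitter's small spacing:

```python
def sensor_disparity_mm(interaxial_mm, focal_mm, subject_mm, converge_mm):
    """On-sensor parallax for parallel cameras converged by image shift.

    Similar triangles give a disparity of f * i / Z for a subject at
    distance Z; converging at distance C subtracts the shift f * i / C.
    """
    return focal_mm * interaxial_mm * (1.0 / subject_mm - 1.0 / converge_mm)

# Subject 1 m away, convergence set at 3 m, 25 mm lens:
beam_splitter = sensor_disparity_mm(25.4, 25, 1000, 3000)  # 1 in spacing
side_by_side = sensor_disparity_mm(76.2, 25, 1000, 3000)   # 3 in spacing
print(beam_splitter, side_by_side)  # ~0.42 mm vs. ~1.27 mm of parallax
```

Tripling the interaxial distance triples the parallax on a close subject, which is exactly the kind of shot where the beam splitter's 1-inch spacing keeps the depth comfortable.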

The 3ality rigs are different from others in that they are highly computerized. They have both mechanical and electrical compensation for things like alignment. In the bad old days of 3-D, meaning last year, it used to take three days to align two cameras so that they worked through even part of the zoom range. Then you would fix the rest in post.

Now we have automatic alignment tools and look-up tables, so when we take a camera off a truck, it's ready to shoot literally five to 10 minutes later. In the space of a year, we've gone from three days of lens tweaking to 10 minutes to be ready to go.

HD Technology Update: Is 3-D going to be a fad that fizzles, or will it have staying power?

Steve Schklair: You know, I haven’t been asked that question for a while now. It seems to me it’s well past the fad stage. During the fad stage, there weren’t 20 to 30 feature films in production at the studios, and at the fad stage, there weren’t a couple thousand theaters on the books, so I think we’ve moved past the fad stage. It’s now just about business growth and business cases.

HD Technology Update: Do you see a future for 3-D stereoscopic TV presentations? If so, how?

Steve Schklair: It really depends on the business case, but I believe the next three to five years will prove that case.

Tell us what you think! HDTU invites response from our readers. Please submit your comments to editor@broadcastengineering.com. We'll follow up with your comments in an upcoming issue.