3-D editing

With so many predicting that 3-D productions will soon move from fad to mainstream, editors will need to prepare for the demands of telling stories in Z-space. Yet with fewer than 40 films produced during the modern era of 3-D, until now only a small cadre of editors has experienced the challenge of cutting in 3-D. But that is all about to change.

Somewhat buried under the hoopla surrounding 3-D theatrical features such as “Avatar,” the delivery of 3-D content to the home, spurred by the arrival of 3-D TVs, is starting to attract growing interest and, more importantly, financial investment. Last December, the Blu-ray Disc Association announced the finalization and release of its Blu-ray 3-D specification.

Then at January 2010's CES Show, several entities, including ESPN and Discovery Communications, unveiled plans to start generating and delivering 3-D to the home over satellite and cable later this year. In addition, Next3-D announced it would start a Web-based 1080p VOD service this month. While 3-D in the cinema has been fed mostly by CG-dominated material, new programming for home viewing will be based on live production, which means editors ought to start sharpening their chops to get ready for the onslaught.

A new way of thinking

Having spoken with editors who worked on most of the recent 3-D film productions, I can tell you that cutting for the third dimension requires new thinking in both technology and aesthetics. At first blush, the technology seems fairly familiar. Ever since “The Power of Love” premiered on Sept. 27, 1922, at the Ambassador Hotel Theater in Los Angeles using the dual-projector Fairall-Elder stereoscopic 3-D process, 3-D filmmakers have realized that the trick of creating depth perception relies on getting two different images to the viewer's left and right eyes and letting the brain assimilate them into the perception of depth.

But even though today's 3-D productions are usually shot with two HD or higher-resolution digicams mounted in a beam-splitter rig instead of the old side-by-side film cameras, almost all editors on 3-D productions cut only the “left eye” images on 2-D NLEs. That's because in a beam-splitter rig, the “right eye” image has to be flipped either vertically or horizontally in the lab (depending on camera configuration) before it is properly oriented. Although some very high-end systems such as Quantel's Pablo can accomplish real-time stereoscopic 3-D editing and mastering in a single system, their operational costs have relegated them mostly to 3-D finishing.
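For readers curious what that orientation fix amounts to, here is a minimal sketch, assuming the right-eye frame arrives as a NumPy image array; the function name and the choice of mirror axis are illustrative, not part of any particular lab's conform pipeline.

```python
import numpy as np

def reorient_right_eye(frame: np.ndarray, mirror_axis: str = "vertical") -> np.ndarray:
    """Undo the mirror flip a beam-splitter rig imposes on the right-eye camera.

    `frame` is an (H, W, 3) image array. Which axis is mirrored depends on how the
    rig is built, which is why the camera configuration must be known before conforming.
    """
    if mirror_axis == "vertical":      # rig mirrors the image top-to-bottom
        return np.flipud(frame)
    elif mirror_axis == "horizontal":  # rig mirrors the image left-to-right
        return np.fliplr(frame)
    raise ValueError(f"unknown mirror_axis: {mirror_axis}")
```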

For the rest of us involved with creative cutting, what we used to call “offline,” only Avid systems running version 3.5 software or later can edit true stereo timelines and play them out directly to a full-color 3-D monitor. In practice, editors see the images without Z-space depth on their conventional picture monitor while making cutting decisions, and then must either turn to an ancillary 3-D monitor once the sequence has been played out to evaluate the depth effect, or wait until a post facility can composite the two eyes together for projection in a 3-D theater.

However, one factor that has proved crucial to improving today's 3-D presentations is controlling the convergence of the two recording cameras' lenses, thanks in large part to the innovative research of DP Vince Pace, a key collaborator with James Cameron on his underwater documentaries and, of course, “Avatar.” Pace is credited with developing the convergence-adjusting FUSION 3-D camera system used on almost all live-action 3-D film, TV and sporting-event productions. As a result, 3-D editors will have to learn to control the “toe in” or “toe out” of the two eyes to match the desired position of an object in Z-space, yet even the Avid systems cannot manipulate the convergence of the left- and right-eye images.
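To see why convergence placement matters, consider the underlying geometry: the horizontal parallax between the two eyes' images on the screen determines where the brain places an object in Z-space. The sketch below is purely illustrative, using an assumed 65 mm eye separation and 2.5 m viewing distance; it describes the basic similar-triangles relationship, not Pace's system or any NLE's internal math.

```python
def perceived_depth(parallax_m: float, eye_sep_m: float = 0.065, view_dist_m: float = 2.5) -> float:
    """Distance from the viewer at which an object appears, by similar triangles.

    parallax_m > 0 (uncrossed) pushes the object behind the screen plane;
    parallax_m < 0 (crossed) pulls it out toward the audience;
    parallax_m equal to the eye separation sends it to infinity.
    """
    if parallax_m >= eye_sep_m:
        return float("inf")  # beyond eye separation the sight lines no longer converge
    return view_dist_m * eye_sep_m / (eye_sep_m - parallax_m)

# Zero parallax sits on the screen plane; +30 mm sits well behind it; -30 mm floats in front.
print(perceived_depth(0.0))    # 2.5 m (screen plane)
print(perceived_depth(0.03))   # about 4.6 m behind the viewer's position
print(perceived_depth(-0.03))  # about 1.7 m, out in front of the screen
```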

Nor can any of the other laptop edit systems currently available make those convergence adjustments by themselves. To the rescue comes third-party software called Neo3-D from CineForm, a company known for its intermediate file codecs. In its latest incarnation, Neo3-D provides full-resolution left- and right-eye images within a single AVI/MOV wrapper on Apple's Final Cut Pro, Adobe's Premiere Pro and Sony's Vegas Pro 9 NLEs. Because Neo3-D can output any of a number of anaglyph formats, such as red/cyan, green/magenta or amber/blue, the editor can wear inexpensive colored glasses in front of an edit room's conventional 2-D monitor to evaluate and adjust the left/right convergence. This can save significant post-production lab costs during editing, and once the picture is locked, it can subsequently be output to full-color stereo presentation formats such as RealD or Dolby 3-D.
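The basic red/cyan anaglyph idea is simple enough to sketch: take the red channel from the left eye and the green and blue channels from the right. The code below is a bare-bones illustration of that principle only; it is not a description of CineForm's actual processing, which involves considerably more color management.

```python
import numpy as np

def red_cyan_anaglyph(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Combine two (H, W, 3) RGB frames into a single red/cyan anaglyph frame.

    Through the glasses, the left eye sees only the red channel and the right eye
    sees only green and blue, so each eye receives its own view from one composite image.
    """
    anaglyph = np.empty_like(left)
    anaglyph[..., 0] = left[..., 0]     # red channel from the left-eye image
    anaglyph[..., 1:] = right[..., 1:]  # green and blue channels from the right-eye image
    return anaglyph
```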

A viable alternative

But this brings up another current in today's 3-D that is bubbling beneath the surface. While major proponents of 3-D insist that some form of full-color stereoscopic 3-D is the only way to go, great advances have been made in single-image-stream 3-D that can be viewed on any normal TV or computer screen. What began with anaglyph approaches such as those mentioned above has been significantly improved by companies like ColorCode 3-D, which has developed an encoding process matched to inexpensive glasses: the color information is conveyed through a specially designed amber filter, while the depth component is perceived through a unique blue filter.

Insisting loudly that this is not traditional anaglyph, ColorCode 3-D provided the supporting technology behind “3-D Week” on Channel 4 in the UK last November, and its system was used for the 3-D ads during the 2009 Super Bowl. Editors should be aware of this kind of technology because it doesn't require a special high-cost presentation system or relatively expensive polarized or active-shutter glasses, and its images can still be watched comfortably by people in the room who aren't wearing the special lenses.

With Panasonic having presented a prototype of the first cost-effective 3-D camera for mainstream use at CES 2010, and the new content delivery organizations hungry for 3-D content that doesn't yet exist, there may come a call for home 3-D systems that don't require pricey screens and lenses and whose images can be viewed in a pinch by people in the room who aren't wearing those funny glasses. These are, after all, still early days, and there is also going to be a lot of editing needed for nonbroadcast productions. Single-image stream 3-D may become a viable alternative.

A great challenge

But however it is presented, the aesthetics defining the new grammar of 3-D will pose the greatest challenge to editors trying to tell visual stories in this new medium, and by now enough editors have wrestled with Z-space to harvest some of their lessons learned.

The most important lesson is that without proper convergence control, 3-D images rapidly become tiring to the eye. The problem is exacerbated by rapidly cut sequences if the editor requires the audience's eyes to shift their convergence from shot to shot. So 3-D editors will have to learn not to confuse viewers by scatter-gunning the convergence points all over the screen.
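One rough way to think about that discipline is to track where each shot places its point of interest in Z-space and flag any cut that forces a large jump. The shot values and comfort threshold below are hypothetical, meant only to illustrate the kind of sanity check an editor or an assistant's script might run; real limits depend on screen size and seating distance.

```python
# Hypothetical per-shot convergence points, expressed as screen parallax in pixels
# (positive = behind the screen, negative = in front of it).
shot_parallax = [0, 4, -18, 2, 30, -25]

MAX_JUMP_PX = 20  # assumed comfort threshold, not an industry standard

for cut, (prev, nxt) in enumerate(zip(shot_parallax, shot_parallax[1:]), start=1):
    jump = abs(nxt - prev)
    if jump > MAX_JUMP_PX:
        print(f"Cut {cut}: convergence jumps {jump} px, consider easing the transition")
```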

Another rule of 3-D aesthetics that has emerged is that foreground objects can be much more distracting when floating past the screen than they were in 2-D. This was immediately apparent during the experimental 3-D broadcast of the BCS College Football Championship on Jan. 8, 2009, during which the ubiquitous sideline shots did not work at all. Even the overheads of the field, a mainstay of football broadcasting, appeared disturbingly flat.

Finally, editors will have to learn to cope with the fact that they are dealing with a proscenium stage in 3-D. Deciding whether objects should come out of its volume toward the audience or recede into its depths away from the viewers will be a fundamental consideration, and just one of the new creative tools editors will have at their command when cutting 3-D.

L.T. Martin is a freelance writer and post-production consultant.