Reducing the cost of live 3-D sports

At the end of a year that saw the launch of the first commercial 3-D sports broadcasts, producers can expect to budget a premium of 25 percent to 30 percent over comparable 2D HD shoots, with costs on occasion soaring to twice those of conventional coverage.

Manufacturers argue that this cost profile is to be expected and will mirror that of the introduction of HD. Over time, as demand for content rises and more equipment becomes available, prices will naturally fall and the rental market will become more competitive.

As the initial wave of promotion and underwriting of events fades away, the financial reality of 3-D TV is coming under scrutiny. Broadcasters, particularly those with significant commitments to full channels rather than one-off events, are under pressure to bring costs in line with 2D.

It will take some time for 3-D TV subscriptions to pay back the current investment in 3-D programming. BSkyB, however, which operates Sky 3D in the UK, rationalizes its service as a retention benefit, a market differentiator and a premium content proposition, and it doesn't anticipate a standalone revenue stream from 3-D in the short term.

Others, such as ESPN, need to find a way to produce at least 100 stereo events annually at an absolute minimum of additional cost. If ESPN can't do so, its channel risks being pulled off air.

There are several areas where costs can be reduced. These include combining 3-D with 2D camera positions; greater use of 2D-to-3-D conversion; the development of more maneuverable and versatile rigs; and the use of automation platforms such as Sony's MPE200 linked with lens metadata. All of these bring a knock-on reduction in the number of specialist crew required.

Rig development

With the business case still uncertain, a key consideration is that any kit purchased to support 3-D can be usefully turned to 2D HD or other applications. That may be fine where matrices and switchers are concerned; after all, a 1080p50-compatible router will serve the gradual increase in data throughput in a facility as well as the bandwidth demands of stereo signals.

At the front end, though, there's no getting around the need to work with rigs. The cost (rental or purchase) of 3-D rigs, with twin cameras and twin sets of recording media, along with the staffing of a convergence operator and usually a technician per rig, typically represents the largest chunk of 3-D budgets. The labor component is often much larger than the equipment cost.

There has been considerable activity among manufacturers over the past year to bring costs down by moving from equipment built for particular productions (each with its own assembly and operating instructions, tool kit and specialist personnel) to a more commoditized model.

The stated aim of several rig developers is to enable a standard crew to operate rigs while maintaining the same basic OB schedule.

One of the most satisfying gains for Sony and host broadcaster HBS during production of the World Cup last summer was the ability to transport 3-D rigs between venues and then set up and calibrate them within about four hours, bringing 3-D OB operation within the timescale of a standard shoot.

In a first test at the Ryder Cup, Sony used fiber combiners to enable a pair of 3-D cameras to work down a single cable. This, it says, will significantly reduce the amount of fiber required at venues and speed up the rigging process.

Another strategy is to ensure rig versatility. If rigs are designed for use with all cameras, lens packs, lens control systems (LCS), Steadicams, cranes, tripods and dollies, they can be built and rented in volume, and rig costs come down; prices have dropped by 50 percent in two years.

That said, simply adding a 3-D rig or a processor will not produce great 3-D. The crew still needs a solid knowledge of how lenses, rigs, cameras and convergence work together. Cameramen and engineers need to keep the two cameras at the same exposure levels, and EVS operators need to recognize when a pair of images is out of sync so as not to offer it up for replay.
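To make that last check concrete, here is a minimal sketch of an automated sanity check that flags a stereo pair whose eyes have drifted apart in exposure or vertical alignment. The thresholds and the row-profile comparison are illustrative assumptions, not any broadcaster's actual QC tooling.

```python
import numpy as np

def stereo_pair_ok(left, right, max_exposure_ratio=1.05, max_vertical_px=2):
    """left/right: 2-D numpy arrays of luma values for one frame each."""
    # Exposure check: mean luma of the two eyes should track closely.
    ratio = left.mean() / right.mean()
    if not (1 / max_exposure_ratio < ratio < max_exposure_ratio):
        return False, f"exposure mismatch (luma ratio {ratio:.3f})"

    # Vertical sync check: cross-correlate row-average profiles to estimate
    # the vertical offset between eyes; rig misalignment shows up here first.
    l_prof = left.mean(axis=1) - left.mean()
    r_prof = right.mean(axis=1) - right.mean()
    corr = np.correlate(l_prof, r_prof, mode="full")
    offset = abs(int(corr.argmax()) - (len(l_prof) - 1))
    if offset > max_vertical_px:
        return False, f"vertical misalignment ({offset} px)"

    return True, "pair usable for replay"
```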

Most rig R&D is concentrated on bringing weight and bulk down. This is especially pertinent for Steadicam operators, who have so far had to shoulder the burden of 36kg of kit.

Automated convergence

Panasonic's integrated 3-D camcorder, the AG-3DA1, is now in the field, although the company can only point to live trials at Roland Garros during the 2010 French Open as an example of its use in sports. UK OB company Arena has gyro-mounted the camera for aerial test shots, while NEP Visions says it will trial the camera for 3-D greyhound and horse racing coverage.

In theory, the AG-3DA1 requires no rig, special recorder, technician or convergence operator. But, with a fixed interaxial distance, its working range is limited to between 2m and 30m (fine for pitch-edge work). It is likely to be used to augment 3-D OBs, perhaps where space is at a premium (such as inside racing cars) or for B-roll footage, as used during production of the 2010 Indian Premier League Twenty20 cricket.
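A back-of-envelope calculation shows why a fixed interaxial pins down the working range: screen parallax scales with focal length times interaxial times the difference between the reciprocals of convergence distance and subject distance, and nothing on a fixed-baseline camera can compensate. Every number below is an illustrative assumption rather than a Panasonic specification, but the shape of the result is general: near subjects blow through the parallax budget quickly.

```python
# Back-of-envelope sketch of why a fixed interaxial limits working range.
SENSOR_W_MM = 6.0      # assumed sensor width
FOCAL_MM = 5.0         # assumed wide-angle focal length
BASELINE_MM = 60.0     # fixed interaxial, roughly adult eye spacing
CONVERGE_M = 8.0       # assumed convergence distance
NEAR_BUDGET = 0.02     # rule-of-thumb comfort limits, as fractions of
FAR_BUDGET = 0.01      # screen width (crossed / uncrossed parallax)

def parallax_fraction(z_m):
    """Screen parallax, as a fraction of screen width, for a point at z_m."""
    d_mm = FOCAL_MM * BASELINE_MM * (1.0 / (CONVERGE_M * 1000.0)
                                     - 1.0 / (z_m * 1000.0))
    return d_mm / SENSOR_W_MM

for z in (1.0, 2.0, 8.0, 30.0, float("inf")):
    frac = parallax_fraction(z)
    verdict = "ok" if -NEAR_BUDGET <= frac <= FAR_BUDGET else "outside budget"
    print(f"{z:>6} m: {frac:+.2%} of screen width -> {verdict}")
```

With these assumed figures, a subject at 1m overshoots the comfort budget while subjects from about 2m outward stay inside it, which is consistent with the quoted working range.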

There are two main methods of 3-D production: parallel rigs, with cameras side by side, and mirror rigs, in which one camera is horizontal and the other vertical, shooting via a beam-splitting mirror. In either case, the most critical factor is that the lenses be almost 100 percent synchronized.

Not only must focus and iris movement be synchronized (even during zooming), but the end stops for zoom, focus and iris must also be set identically so that the two lenses track simultaneously.
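As a sketch of what such a tracking check might look like in software, the snippet below assumes each lens reports normalized zoom, focus and iris positions as per-frame metadata; the field names and tolerances are invented for illustration, not a real lens protocol.

```python
# Illustrative per-axis tolerances, in normalized (0.0-1.0) lens positions.
TOLERANCE = {"zoom": 0.002, "focus": 0.004, "iris": 0.01}

def check_tracking(left_samples, right_samples):
    """Each sample: dict with 'zoom', 'focus', 'iris' values in [0, 1]."""
    errors = []
    for frame, (l, r) in enumerate(zip(left_samples, right_samples)):
        for axis, tol in TOLERANCE.items():
            delta = abs(l[axis] - r[axis])
            if delta > tol:
                errors.append((frame, axis, delta))
    return errors  # an empty list means the pair tracked within tolerance
```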

Even if two lenses are exactly aligned in the factory, there are tolerances to consider. The image sensor position in each camera has a tolerance; the standard camera lens mount has a tolerance; and the standard pin (which regulates rotation) has a tolerance. These small errors stack, so optical axis control needs to be solved jointly with the camera manufacturers.

Sony has been doing just this with Canon and Fujinon, eventually working with 38 Canon HJ22eX7.8 lenses for the 3-D World Cup. Currently, most 3-D productions use a combination of Canon lenses and Sony cameras with the MPE200 processor.

There is an ongoing program with Canon and Fujinon to ensure lens metadata is inserted at the camera head and passes with the video signal back to the truck and the 3-D processor; the combination of lens data and rig position data is thus used to dynamically correct optical shift and zoom alignment errors.
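One plausible shape for such a correction is sketched below: a small calibration table captured at line-up is interpolated against live zoom metadata to produce a shift-and-scale fix for one eye. The table values and the correction model are assumptions for illustration, not the MPE200's internal method.

```python
import numpy as np

# Calibration samples captured at a few zoom positions during line-up.
ZOOM_POS   = np.array([0.0, 0.25, 0.5, 0.75, 1.0])       # normalized zoom
SHIFT_X_PX = np.array([0.0, 1.2, 2.8, 4.1, 5.5])          # measured axis drift
SHIFT_Y_PX = np.array([0.0, 0.4, 0.9, 1.3, 1.6])
SCALE_ERR  = np.array([1.0, 1.001, 1.003, 1.004, 1.006])  # zoom mismatch

def correction_for(zoom):
    """Given live zoom metadata, return (dx, dy, scale) to apply to one eye."""
    dx = np.interp(zoom, ZOOM_POS, SHIFT_X_PX)
    dy = np.interp(zoom, ZOOM_POS, SHIFT_Y_PX)
    s = 1.0 / np.interp(zoom, ZOOM_POS, SCALE_ERR)  # undo the size mismatch
    return dx, dy, s

print(correction_for(0.6))  # mid-zoom correction, recomputed every frame
```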

Prior to the tournament, the behavior of the lens pairs was aligned manually by marking maximum zoom-in and zoom-out points, which the MPE200 would compare.

Electronic adjustment by linking metadata to an image processor should mean that, as long as the lenses in a pair are of similar age with similar zoom control, they don't have to be a specific match, permitting use of a wider array of lenses that may already be in an OB company's inventory.

Mapping 3-D to 2D

As with HD, a consistent cost premium of around 15 percent to 20 percent might be the tipping point at which live 3-D production goes mainstream, but there's a crucial difference. The SD-to-HD transition was made by downconverting the HD signal, covering SD and HD reception from a single truck; originating 3-D requires another set of cameras, another vehicle and crew, another production workflow and another uplink, which can in some cases more than double costs.

The design of 3-D trucks needs to account for positions for stereographers and multiple convergence pullers while also being switchable for conventional HD operation.

In Sony's designs for Telegenic's trucks, the layout has been created with flexibility in mind. Because fewer cameras are necessary for 3-D, and therefore fewer EVS machines, this space is traded for seats for convergence operators and additional recording decks.

In time, the aim is to reduce the number of convergence operators by giving each control over two or more rig positions. This has already been trialed at the Ryder Cup, where one convergence operator controlled a main 3-D rig and supervised a less critical ISO 3-D position.

Moves are being made to standardize interfaces and protocols so that two rigs can be adjusted by one stereographer, but the current way of working is unlikely to change for several years.
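Purely as a thought experiment, the sketch below shows how a common control interface might let one operator fan a single adjustment out to several rig positions. The RigSetpoint fields and the interface itself are hypothetical, since, as noted, no such standard exists yet.

```python
from dataclasses import dataclass

@dataclass
class RigSetpoint:
    interaxial_mm: float   # spacing between the two lenses
    convergence_m: float   # distance at which the optical axes cross

class MultiRigController:
    """Fans one operator's adjustment out to every attached rig."""
    def __init__(self):
        self.rigs = {}     # position name -> send function

    def attach(self, name, send_fn):
        self.rigs[name] = send_fn

    def apply(self, setpoint):
        for name, send in self.rigs.items():
            send(setpoint)  # the same setpoint is pushed to each position

ctl = MultiRigController()
ctl.attach("main", lambda sp: print("main rig ->", sp))
ctl.attach("iso", lambda sp: print("ISO rig  ->", sp))
ctl.apply(RigSetpoint(interaxial_mm=55.0, convergence_m=12.0))
```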

Simulcasting a single cut as both a 2D and a 3-D broadcast is undesirable, given the need for less frequent cuts and wider shots in 3-D editorial. Nonetheless, there are synergies. For example, Steadicam operators on a soccer match are unlikely to do anything different for either format, so there's no reason not to take a 2D feed from a 3-D Steadicam.

FIFA's host broadcaster HBS is looking at further synergies in camera position and flyaway packs ahead of the 2014 World Cup. It needs to, as 3-D rigs take up premium space at World Cup stadia.

Real-time 2D-to-3-D conversion is one handy way around the issue of doubling up on equipment and positions, but sportscasters, including ESPN and Sky, are reluctant to employ it for more than a handful of workaround shots: currently Steadicam, aerial and remote ISOs.

There are numerous systems on the market that perform conversion with slightly different algorithms. Manufacturers won't divulge customer names, and broadcasters are reluctant to admit that they are simulating 3-D. That resembles the early days of HD, when much content was upconverted but unacknowledged by broadcasters, even though the picture difference was negligible to most viewers.
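The underlying principle of most converters is the same: estimate a depth map, then synthesize a second eye by shifting pixels horizontally in proportion to depth. The toy sketch below shows only that core warping step; real products use far more sophisticated depth estimation and fill the occlusions the shift exposes.

```python
import numpy as np

def synthesize_right_eye(frame, depth, max_disparity_px=8):
    """frame: (H, W) luma array; depth: (H, W) in [0, 1], 1.0 = nearest."""
    h, w = frame.shape
    right = np.empty_like(frame)
    cols = np.arange(w)
    for y in range(h):
        # Near pixels shift more than far ones, creating parallax.
        shift = (depth[y] * max_disparity_px).astype(int)
        src = np.clip(cols + shift, 0, w - 1)
        right[y] = frame[y, src]  # backward warp; occlusions are ignored
    return right

# Tiny usage example on synthetic data.
frame = np.tile(np.linspace(0, 255, 64), (48, 1))
depth = np.tile(np.linspace(1.0, 0.0, 64), (48, 1))  # left edge nearest
print(synthesize_right_eye(frame, depth).shape)       # (48, 64)
```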


Adrian Pennington

Adrian Pennington is a journalist specialising in film and TV production. His work has appeared in The Guardian, RTS Television, Variety, British Cinematographer, Premiere and The Hollywood Reporter. Adrian has edited several publications, co-written a book on stereoscopic 3-D and writes marketing copy for the industry. Follow him @pennington1