Camera lens guidelines

These days we are often focused on technically esoteric topics like file-based workflow, video-over-IP standards and stereoscopic imaging. It may be hard to believe, but modern optics are arguably more difficult to design than most of the electronic infrastructure we deal with every day.

The complexity of today's optics

Think for a moment about the level of precision that is necessary in both the mechanics and the optical surfaces. With lenses extending out to 101X, it is clear that mechanical tolerances must improve to maintain image stability as the optical elements move through the zoom range. At the long end of the range, such a lens (900mm) has a field of view of less than half a degree, less than the width of the full moon. The same lens at the wide end has a field of view of a whopping 52 degrees (horizontally).
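Those angle-of-view figures follow from simple trigonometry. The sketch below is a rough check only, assuming a rectilinear lens model, a nominal 9.6mm-wide 16:9 active area on a 2/3in imager and focal lengths in the range implied by a 101X ratio; the exact results depend on the actual scan area and on whether a range extender is in the path, so they will not match any particular lens's datasheet.

```python
import math

def horizontal_fov_deg(focal_length_mm, image_width_mm=9.6):
    """Horizontal angle of view for a simple rectilinear lens model."""
    return 2 * math.degrees(math.atan(image_width_mm / (2 * focal_length_mm)))

# Illustrative focal lengths for a 101X field lens (assumed values)
for f in (8.9, 100.0, 900.0):
    print(f"{f:6.1f}mm -> {horizontal_fov_deg(f):6.2f} degrees")
# The wide end gives a field in the mid-50s of degrees; 900mm gives roughly
# 0.6 degrees, on the order of the width of the full moon.
```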

When one looks at a cutaway of a lens, it is clear that a lot is happening! To control focal length and focus, and to minimize aberrations, the moving elements must be positioned with high precision as they travel in both directions. Also, the considerable weight of a field lens places a bending moment on the barrel, or frame, of the optical system. Gravity is the law. To avoid image deflection, the system must hold the optical center precisely in alignment with the optical axis of the camera. Since modern field lenses can weigh upwards of 55lbs, the designer must also use modern mechanical engineering tools to predict and control those forces without jeopardizing the precise placement and movement of the optical elements.

With SD cameras, the demands on the optical system were already considerable, but with HD imagers, those demands become much more critical. For instance, with tube cameras, one could adjust size and centering for all three image tubes at the time the camera was set up for each production, and optical back focus for each could be adjusted as well. Thus, if longitudinal chromatic aberration was present (i.e., the three images did not converge in the same plane), it could be compensated for during setup. (See Figure 1.) With modern fixed imagers, often permanently aligned on the prism assembly at the time of manufacture, the lens must of necessity provide precise focus across the entire zoom range for all colors.

With the number of pixels in a 1080-line image being six times that of a 480-line image, it is clear the image must converge to far tighter tolerances than those that worked for SD television.
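That pixel ratio is quick to verify. It is also worth noting that in linear terms, which is what a lens resolution spec actually measures, the increase per axis is 1080/480 = 2.25, roughly the factor by which the lens figures discussed later in this article scale. A minimal check:

```python
hd = 1920 * 1080   # square-pixel 1080-line HD frame
sd = 720 * 480     # 480-line SD frame (ITU-R BT.601 sampling)
print(hd, sd, round(hd / sd, 2))   # 2073600 345600 6.0
print(round(1080 / 480, 2))        # 2.25 -- the per-axis (linear) factor
```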

All optical aberrations degrade the sharpness, color accuracy, contrast and geometry of the image. There is no perfect lens, so lens design is a matter of controlling the cumulative effect of all these defects to produce superior images. Some aberrations are easier to spot than others.

For instance, geometric distortion is relatively easy to spot. (See Figure 2.) Think of the wide-angle lens on your SLR and the distortion it creates on objects with vertical or horizontal edges. Generally, the farther out toward the corners of the image, the larger the distortion.

One might, correctly, surmise that lenses without extreme wide angles should not exhibit excessive geometric distortion. But keep in mind that lenses are sometimes designed in a series with more than one delivery target. For instance, a series of lenses may share common design elements for both 1/2in and 2/3in imagers. The net effect is that the same lens would have a considerably different angle of view on the two different imagers, which is compensated for by elements added to the optical path. It might be that a lens with moderate geometric distortion would produce a fine image on a smaller sensor. This effect is also seen on SLRs, where APS-C-size sensors produce a longer effective focal length but less distortion with lenses designed for full-frame 35mm imagers, because the corners of the image circle are not used in the smaller format.
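To make the effective-focal-length point concrete, here is a short sketch assuming nominal 16:9 active widths of about 9.6mm for a 2/3in imager and 7.0mm for a 1/2in imager (assumed values; real sensor dimensions vary slightly by manufacturer). The smaller imager uses only the central portion of the image circle, so the view narrows and the most distorted corners are cropped away.

```python
# Assumed nominal 16:9 active widths in mm (illustrative, not datasheet values)
width_two_thirds = 9.6   # 2/3in imager
width_half = 7.0         # 1/2in imager

crop = width_two_thirds / width_half
print(f"Crop factor, 2/3in -> 1/2in: {crop:.2f}")                          # ~1.37
print(f"A 25mm lens on 1/2in frames like a {25 * crop:.0f}mm lens on 2/3in")  # ~34mm
```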

Cost is related to performance

There is a direct correlation between the cost of optics and their performance. Buying a less expensive lens for HD applications will likely result in lower performance, particularly in sharpness, ramping (the increase in f-stop at longer focal lengths) and maximum aperture, which affects low-light results. It follows that less expensive consumer crossover cameras, some costing as little as $1000, will have lower-performance optics and produce marginal results when compared with cameras that have three larger imagers and more sophisticated optics.

Any optical system's performance declines as the size of the imager shrinks. This is not only a cost factor: even assuming a 1/3in sensor is built to superior specs, the optics have to deliver much higher resolution to achieve the same result at the camera output. A 2/3in sensor is actually only 11mm diagonally, and a 1/3in sensor is about 6mm diagonally. But in both cases, the output resolution would ideally be the same, with the modulation transfer function (MTF) providing about 50 percent contrast at 872 TV lines per picture height (TVL/ph). To get that much resolution from a smaller sensor, the lens must improve from about 80 line pairs per millimeter (LP/mm) at the sensor to about 150 LP/mm. By contrast, on an SD camera, the lens would only need to produce about 32 LP/mm to get full performance from the camera (in 16:9).
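The lens-resolution figures above come from dividing the required line pairs per picture height by the physical image height. A rough check follows, assuming nominal 16:9 active image heights of about 5.4mm for a 2/3in sensor and 2.9mm for a 1/3in sensor, and back-calculating an SD requirement of roughly 350 TVL/ph; these are illustrative assumptions, not manufacturer figures.

```python
def lens_lp_per_mm(tvl_per_ph, image_height_mm):
    """Convert TV lines per picture height to line pairs per mm at the sensor."""
    return (tvl_per_ph / 2) / image_height_mm

# Assumed nominal 16:9 image heights in mm; actual sensors differ slightly
print(round(lens_lp_per_mm(872, 5.4)))   # 2/3in HD: ~81 LP/mm
print(round(lens_lp_per_mm(872, 2.9)))   # 1/3in HD: ~150 LP/mm
print(round(lens_lp_per_mm(350, 5.4)))   # 2/3in widescreen SD: ~32 LP/mm
```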

Thus, there are clearly huge benefits to using optics chosen specifically for the application. It is tempting to use existing SD lenses on new cameras; the capital saved is enormous, but so is the reduction in performance. Similarly, choosing low-cost, low-performance cameras and lenses and then intercutting the resulting images with the output from high-end studio cameras and appropriate lenses will produce a mismatch that is immediately apparent. It's sad to say, but as in all things, there is no free lunch.

John Luff is a broadcast technology consultant.

Send questions and comments to: john.luff@penton.com