The arrival of compact and relatively low-cost HD camcorders has opened the opportunity to employ them as b-roll cameras. Naturally, attention must be paid during camera setup and/or post to achieve an optimal visual match between these camcorders and more expensive digital cinema cameras.
When shooting 24fps or 30fps video, another visual characteristic must be considered: all low-temporal-rate media exhibit judder when viewed. Beyond this baseline, there is concern that low-cost HD camcorders exhibit more judder than film and digital cinema cameras do.
In two white papers written for the BBC, Alan Roberts makes a convincing case that video camera technology differences, not simply camera operator inexperience, are responsible for excessive judder from low-cost HD camcorders. (The white papers can be found at http://tinyurl.com/9n4wb9 and http://tinyurl.com/88rv95.)
Temporal sampling judder
Film and video cameras sample motion at regular intervals. Frame rates of 24p and 30p have lower temporal sampling rates than 60p and 60i. Therefore, 24p and 30p media will often be represented by too few samples to accurately capture complex motion.
For this reason, motion captured at low frame rates will be less fluid than motion captured at high field and frame rates. This lack of fluidity is called temporal sampling judder. It is a signature of the film look, and it is a desirable type of judder.
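The relationship between frame rate and motion fluidity can be illustrated numerically. The sketch below uses an arbitrary pan speed (the figures are hypothetical, not from the white papers); the point is simply that a lower sampling rate means larger jumps between successive positions:

```python
# Illustrative sketch: sample a horizontally moving object at several frame
# rates and compare the per-frame jump in its position. Larger jumps between
# successive samples read as less fluid motion (temporal sampling judder).

def per_frame_displacement(speed_px_per_sec: float, fps: float) -> float:
    """Distance the object moves between successive samples."""
    return speed_px_per_sec / fps

speed = 1200.0  # hypothetical pan speed in pixels per second
for fps in (24.0, 30.0, 60.0):
    print(f"{fps:>4.0f} fps: {per_frame_displacement(speed, fps):6.1f} px per frame")
# At 24p the object jumps 50 px between samples; at 60p only 20 px,
# so the same motion rendered at 60p appears smoother.
```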
The standard frame rate for motion pictures is 24fps, a rate considered ideal for narrative work. Some feel 30fps is also acceptable.
If film were presented at only 24fps, image flicker would be intolerable. To increase the presentation rate to 48Hz, film projectors use dual-blade shutters to show each frame twice. (For a relatively dim picture, a 48Hz rate only slightly exceeds the critical flicker frequency. Brighter pictures demand a higher presentation rate, hence triple-blade shutters.) Flicker from 30p video is eliminated the same way: each frame is presented twice (60i).
Doubling the presentation rate inherently creates eye tracking artifacts. Figure 1 illustrates a horizontally moving square. When film is projected with a double-bladed shutter, a new picture is flashed 24 times per second, and each picture is flashed twice. Between each presentation, the screen goes dark. The dark period tells our eyes a presentation is complete and clears the image from the retina.
As we watch the projected image, our brain uses the series of new images to determine the square's motion vector. In a series of short movements called saccades, our eyes track the moving square. When the projector shutter opens a second time on the same frame, our gaze — following this vector — has advanced halfway to the anticipated position of the square at the next new frame's presentation.
The square, therefore, is imaged onto our retina a second time, at a position displaced along the motion vector. These repeat images, which are not where they should be based on the motion vector, degrade the perception of motion. This degradation is called motion judder. A certain amount of motion judder is accepted as part of the look of film projected in a theater.
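The size of this displacement follows directly from the flash schedule. A minimal sketch, again with hypothetical numbers: the tracking gaze advances along the motion vector for the 1/48sec between flashes, while the repeated image stays put.

```python
# Sketch of the eye-tracking error from a repeated shutter flash
# (hypothetical numbers). The eye tracks the square's motion vector, so at
# the second flash of the same frame the gaze has advanced half a frame's
# travel while the projected square has not moved.

def repeat_flash_error(speed_px_per_sec: float, fps: float,
                       flashes: int = 2) -> float:
    """Displacement between the gaze position and the repeated image:
    the gaze has moved for 1/(fps * flashes) sec along the motion vector."""
    return speed_px_per_sec / (fps * flashes)

speed, fps = 1200.0, 24.0
print(repeat_flash_error(speed, fps))  # 25.0 px: half the 50 px per-frame travel
```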
Excessive motion judder can be prevented, for example, by panning with a moving object. Follow panning, however, creates another eye-tracking artifact called background strobing. Forcing a shallow depth of field reduces background detail, thereby minimizing background strobing.
When film is telecined or when 24p video is broadcast, 2:3 pulldown is added to enable the media to be carried within 60i video. (See the blue and yellow cells in Figure 2.) While temporal sampling judder remains, motion judder is replaced by pulldown judder.
The dots in the second row of Figure 2 represent an object moving from left to right. Each successive field within frames A, B, C and D should carry an image captured 1/60sec after the previous one. Fields 3 through 5 show the process of adding 2:3 pulldown.
Field 6, the odd field within video frame 3, carries motion captured 1/30sec earlier. And the even field within video frame 4 carries motion captured 1/30sec later. The nonuniformity of motion in video frame 3 (dark green cells) and video frame 4 (light green cells), mixed with the uniform motion in video frames 1, 2 and 5, creates a visual 2:3 cadence. The 2:3 cadence creates judder. For this reason, video frames 3 and 4 are called judder frames.
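The field mapping that produces these judder frames can be sketched in a few lines. The code below spreads four film frames across ten fields using the 2:3 cadence and flags the video frames that pair fields from two different film frames:

```python
# Sketch of 2:3 pulldown: four 24p film frames are spread across ten 60i
# fields (five video frames). Video frames that mix fields from two
# different film frames are the "judder frames".

FILM_FRAMES = ["A", "B", "C", "D"]
CADENCE = [2, 3, 2, 3]  # fields contributed by each film frame

def pulldown_fields(frames, cadence):
    """Expand each film frame into its run of fields."""
    fields = []
    for frame, count in zip(frames, cadence):
        fields.extend([frame] * count)
    return fields

fields = pulldown_fields(FILM_FRAMES, CADENCE)
# Pair consecutive fields into interlaced video frames.
video_frames = [(fields[i], fields[i + 1]) for i in range(0, len(fields), 2)]
print(video_frames)
# [('A','A'), ('B','B'), ('B','C'), ('C','D'), ('D','D')]

# Frames whose two fields come from different film frames are judder frames.
judder = [i + 1 for i, (f1, f2) in enumerate(video_frames) if f1 != f2]
print(judder)  # [3, 4]
```

This matches the article's observation that video frames 3 and 4 mix motion from two film frames while frames 1, 2 and 5 are uniform.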
According to the BBC white papers, the perception of temporal sampling judder, as well as motion (30p) judder or pulldown (24p) judder, is determined by the edges of moving objects. Hard edges create distinct moving objects. These increase our perception of judder. Therefore, any aspect of a camera's optical or electronic components that increases edge sharpness inherently increases the perception of all types of judder.
Edges occupy relatively low spatial frequencies compared with fine detail. The perception of judder is increased by an unfavorable balance between the band of mid spatial frequencies that defines edges and the upper band of high spatial frequencies carrying fine detail.
A modulation transfer function (MTF) describes the relation between image contrast and spatial resolution. (See Figure 3.) An MTF curve's shoulder starts high and rolls off to a long foot. The higher the frequency at which the roll-off begins, the more fine detail passes through a lens.
Expensive cinema lenses have an extended MTF that transmits images with loads of fine detail. The lens on a less expensive camera has a lower frequency roll-off that significantly attenuates fine detail.
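This roll-off behavior can be modeled crudely. The sketch below is a toy model, not measured MTF data; the Gaussian shape and the roll-off constants are assumptions chosen only to show how a lower roll-off frequency preserves edge contrast yet starves the fine-detail band:

```python
# Toy model (not measured data): approximate a lens's MTF with a Gaussian
# roll-off and compare contrast in an edge band vs a fine-detail band.
import math

def mtf(freq_lp_mm: float, rolloff_lp_mm: float) -> float:
    """Contrast ratio (0..1) at a given spatial frequency (line pairs/mm)."""
    return math.exp(-(freq_lp_mm / rolloff_lp_mm) ** 2)

edge_band, detail_band = 20.0, 80.0      # lp/mm, hypothetical bands
cinema_rolloff, budget_rolloff = 100.0, 40.0  # hypothetical roll-off constants

for name, rolloff in (("cinema", cinema_rolloff), ("budget", budget_rolloff)):
    print(name,
          round(mtf(edge_band, rolloff), 2),
          round(mtf(detail_band, rolloff), 2))
# Both lenses keep most of their edge contrast, but the budget lens's
# response in the fine-detail band collapses to nearly zero.
```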
A camera's sensor size determines how shallow a depth of field (DOF) it can achieve. Film and digital cinema cameras, with their large frame size, offer a shallow DOF. Next come video cameras with 2/3in chips. At the bottom of the heap are cameras with 1/3in or 1/4in chips, which cannot suppress background strobing because they have an inherently deep DOF.
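A rough way to see why small chips give deep DOF: for the same framing, depth of field scales approximately with the f-number multiplied by the format's crop factor. The crop factors below are approximate values relative to a full-frame 35mm still frame, included only for illustration:

```python
# Rough sketch of the "equivalent aperture" idea: for the same framing,
# a small sensor at f/2.8 renders roughly the DOF a full-frame camera
# would show at f/2.8 times the crop factor. Crop factors are approximate.

CROP_VS_FULL_FRAME = {
    "full frame": 1.0,
    "Super 35":   1.4,
    "2/3in":      3.9,
    "1/3in":      7.2,
    "1/4in":      9.6,
}

def equivalent_f_number(f_number: float, crop: float) -> float:
    """DOF-equivalent aperture on the full-frame reference format."""
    return f_number * crop

for fmt, crop in CROP_VS_FULL_FRAME.items():
    print(f"{fmt:>10}: f/2.8 behaves like f/{equivalent_f_number(2.8, crop):.1f}")
# A 1/3in chip at f/2.8 behaves roughly like f/20 on full frame,
# which is why its background can never be thrown far out of focus.
```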
The perception of judder depends on image contrast, which is a function of a camera's gamma. Moderate-cost video cameras allow the selection of several gamma curves. (See Figure 4.) Panasonic has equipped its DVCPRO HD and P2 camcorders with a sophisticated Tele Gamma mode for use where the content will be viewed on televisions. (These camcorders also feature a different Cine Gamma mode for use where the content will be transferred to 35mm film.)
It seems obvious that an HD camera's sensor(s) should have a resolution equal to or greater than the recording resolution. At low frame rates, however, cameras that use horizontal and vertical green shift to quadruple the effective number of pixels may yield less judder because their softer video attenuates edge sharpness. (This softness may not be desirable at high field or frame rates.)
All video cameras incorporate a low-pass anti-aliasing filter to prevent aliasing when a sensor's signal is digitized. The more sophisticated the filter, the steeper its slope; an inexpensive filter rolls off more slowly. The former allows a high cutoff frequency and thus loses less detail. The latter forces the turnover point to sit farther below the Nyquist frequency, causing a significant loss of fine detail.
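The cost of a gentle roll-off can be put in numbers. In the sketch below, the filter is assumed to need a certain fraction of the Nyquist frequency to roll from passband to full attenuation; the two transition widths are hypothetical:

```python
# Sketch of why a slow anti-aliasing roll-off costs fine detail: the filter
# must be essentially fully attenuated by the Nyquist frequency, so a wide
# transition band pushes the cutoff (turnover) point lower. The
# transition-width fractions below are hypothetical.

def cutoff_frequency(nyquist: float, transition_fraction: float) -> float:
    """Highest frequency passed, given the fraction of Nyquist the filter
    needs to roll from passband to full attenuation."""
    return nyquist * (1.0 - transition_fraction)

samples_per_line = 1440
nyquist = samples_per_line / 2            # 720 cycles per picture width
steep = cutoff_frequency(nyquist, 0.10)   # sophisticated, steep filter
gentle = cutoff_frequency(nyquist, 0.40)  # inexpensive, slow roll-off
print(steep, gentle)  # roughly 648 vs 432 cycles per picture width
```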
Both vertical and horizontal components are further filtered to match the recording format used. The red curve in Figure 5 illustrates the horizontal spatial resolution from a typical low-cost 1440 × 1080 camcorder.
As shown by the red curve in Figure 5, signal strength is already very low by the midpoint of the recording bandwidth. Video cameras have a sharpness (detail enhancement) control that adjusts the amount of boost applied to the signal. In Figure 5, the green and blue curves represent, respectively, normal (midpoint) and maximum sharpness. The boost expands the area under the curve — thereby increasing overall image sharpness — and lifts the higher frequencies, thereby preventing loss of fine detail.
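One way to picture the sharpness control is as a frequency-dependent gain added back into the signal. The model below is purely hypothetical (the base response, band position and gain values are assumptions, not camera data); it only shows that the lift grows with spatial frequency:

```python
# Hypothetical model of a sharpness (detail-enhancement) control: the camera
# adds back a scaled high-frequency component, so the lift in the response
# curve grows with spatial frequency. All numbers are illustrative only.

def enhanced_response(base_response: float, freq_fraction: float,
                      sharpness_gain: float) -> float:
    """Boosted response at a frequency given as a fraction of Nyquist (0..1)."""
    return base_response * (1.0 + sharpness_gain * freq_fraction)

# A base response of 0.3 at 60% of Nyquist, roughly the red curve's midband.
for gain in (0.0, 0.5, 1.0):   # off, normal, maximum sharpness
    print(round(enhanced_response(0.3, 0.6, gain), 2))
# The response climbs from 0.3 toward 0.48 as sharpness is raised,
# and the same gain applied at higher frequencies lifts them even more.
```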
Unfortunately, even at a normal setting, the horizontal frequency response curve has a moderately large peak within the frequency range that creates judder. (See the gray zone in Figure 5.)
More sophisticated camcorders have separate controls for detail enhancement and aperture correction. While the detail control alters edge sharpness, the aperture control alters the amount of fine detail. These controls enable a camera operator to balance edge detail and fine detail to minimize judder.
Figure 6 illustrates the judder band (orange) plus three representative response curves: film (purple), a digital cinema camera (black) and the response of this camera with negative detail enhancement (blue). Negative detail correction, as offered by Sony HDCAM and CineAlta camcorders, reduces the perception of judder.
Until low-cost HD camcorders can dial in negative detail without also reducing the amount of fine detail, a camera operator can try to eliminate excessive judder by setting sharpness midway between minimum and normal. Figure 5 shows this curve as a series of purple dots. (Setting sharpness at minimum, as is often done in an effort to create a film look, simply strips the video of fine detail, as shown by the red curves in Figures 5 and 6.)
Another judder-reducing solution is to use appropriate optical filters and/or a slightly slower shutter speed to increase motion blur. Likewise, a camera operator can control camera motion while the director controls the movement of objects within the frame.
Steve Mullen is owner of Digital Video Consulting, which provides consulting services and publishes a series of books on digital video technology.