Recently I got a second chance to see the Sony test movie “Arrival,” which was first shown at NAB. Shot and projected in 4K, the pictures are stunning. As I watched, thinking this looks almost real, the illusion was shattered by motion: the familiar artifacts of 24fps destroyed it. With film projection, the low capture rate is somewhat masked by the combined effects of gate weave and the resolution lost in the contact-printing stages (interpositives and internegatives) between the original camera negative and the release print.
If you combine 4K resolution with digital projection, you get a rock-steady, high-resolution image. However, capture at 24fps puts the temporal resolution at odds with the spatial resolution. Cinematographers call this the “film look,” as if they had crafted it. But they had no choice: that is how motion is portrayed if you shoot at 24fps. The look has been defined by a technology platform developed in the 1920s.
In developing the spatial parameters of television systems, engineers matched the horizontal resolution to the vertical resolution, a philosophy that follows the work of early RCA engineers, including Raymond Kell. There is no reason why the same principle should not extend into the third dimension, time, by increasing the capture rate. This is what some broadcasters have done with 720p/60 transmissions.
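As a back-of-envelope illustration of that matching philosophy: with square pixels, the horizontal sample count follows directly from the line count and the picture aspect ratio. The function below is a hypothetical sketch for 16:9 rasters, not taken from any standard.

```python
# Sketch: with square pixels, matching horizontal to vertical resolution
# means the horizontal sample count is the line count scaled by the
# picture aspect ratio (16:9 assumed here).
def matched_horizontal_samples(lines: int, aspect_w: int = 16, aspect_h: int = 9) -> int:
    return lines * aspect_w // aspect_h

for lines in (720, 1080):
    print(f"{lines} lines -> {matched_horizontal_samples(lines)} samples/line")
```

Running it reproduces the familiar HD rasters: 720 lines give 1280 samples per line, and 1080 lines give 1920.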
As storage costs fall, there is no technical reason why the film industry could not move digital production to 48fps; film itself is stuck at 24fps because of mechanical limitations. I have seen 60fps film, but I recall that the prints didn't last long.
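The storage argument is simple arithmetic: doubling the frame rate doubles the uncompressed data rate, nothing more. A rough sketch, assuming DCI 4K dimensions and an illustrative 36 bits per pixel (12-bit 4:4:4):

```python
def raw_rate_gbps(width: int, height: int, fps: int, bits_per_pixel: int = 36) -> float:
    """Uncompressed video data rate in gigabits per second."""
    return width * height * fps * bits_per_pixel / 1e9

r24 = raw_rate_gbps(4096, 2160, 24)  # ~7.6 Gb/s
r48 = raw_rate_gbps(4096, 2160, 48)  # exactly double
print(f"24fps: {r24:.2f} Gb/s, 48fps: {r48:.2f} Gb/s")
```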
Most broadcasters have stayed with interlace as they migrate to HD, for marketing reasons more than for the pursuit of video quality. The public, the viewers, see 1080i as better than 720p simply because 1080 is the larger number. Of course, economics come into this; interlace saves spectrum over 1080p/50 or /60. Another factor is that interlaced HD is compatible with legacy SD systems.
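The spectrum saving is easy to see in raw pixel rates. The numbers below are illustrative, for 60Hz systems: interlace sends half the lines in each field, so 1080i/60 carries only 30 full frames' worth of pixels per second.

```python
# Raw pixel rates for common 60Hz HD formats (no compression modeled).
formats = {
    "720p/60":  1280 * 720 * 60,
    "1080i/60": 1920 * 1080 * 30,  # two interlaced fields = one full frame
    "1080p/60": 1920 * 1080 * 60,
}
for name, px_per_s in formats.items():
    print(f"{name}: {px_per_s / 1e6:.1f} Mpixel/s")
```

1080i/60 moves only modestly more pixels than 720p/60, while 1080p/60 roughly doubles both.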
One question I raise is: Should the frame rate be fixed, or could it adapt to scene motion? Long-GOP coding already adapts bit rate to scene motion through motion prediction, but within a constant-frame-rate system. Is long-GOP coding just a simple way of achieving the same data rate savings as variable frame rate coding?
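A toy illustration of the point, with made-up one-dimensional “frames”: in a long-GOP coder, a frame identical to its predecessor produces a zero residual and so costs almost nothing to send, which is close in effect to an adaptive-frame-rate system simply not sending that frame at all.

```python
# Toy model: each "frame" is a list of pixel values; the inter-frame cost
# is the sum of absolute residuals against the previous frame (no real
# motion estimation or entropy coding -- purely illustrative).
frames = [
    [10, 10, 10],
    [10, 10, 10],  # static scene: residual is zero, costs nothing
    [10, 12, 14],  # motion: nonzero residual, costs bits
]
costs = [
    sum(abs(b - a) for a, b in zip(prev, cur))
    for prev, cur in zip(frames, frames[1:])
]
print(costs)  # [0, 6]
```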
To the human visual system, what matters is that the region of interest in the scene is delivered at the highest resolution the channel can carry. This implies that other regions could be coded at lower resolution. In filming, a wide shutter angle blurs the moving background in tracking shots, effectively achieving this aim, but in reality it is done to mask the strobing seen with narrow shutter angles at film's low frame rate.
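Shutter angle translates to exposure time as a fraction of the frame period; the standard 180-degree shutter at 24fps exposes each frame for 1/48s, and the blur smear on a moving object is simply its speed times that exposure. A minimal sketch (the 960 pixels/s object speed is an arbitrary example):

```python
def exposure_time_s(shutter_angle_deg: float, fps: float) -> float:
    """Exposure time: the fraction of the frame period the shutter is open."""
    return (shutter_angle_deg / 360.0) / fps

def blur_px(speed_px_per_s: float, shutter_angle_deg: float, fps: float) -> float:
    """Motion-blur smear length for an object moving across the frame."""
    return speed_px_per_s * exposure_time_s(shutter_angle_deg, fps)

print(exposure_time_s(180, 24))            # 1/48 s
print(blur_px(960, 180, 24))               # 20.0 px of smear
print(blur_px(960, 45, 24))                # 5.0 px: less blur, more strobing
```

The narrow 45-degree shutter quarters the smear, which is exactly why it reveals the strobing that the wide shutter hides.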
Frame-rate adaptation and region-of-interest coding already fit the special requirements of video surveillance, but in that application the purpose is face recognition or license-plate capture, very different from the realistic motion portrayal that film and television need. Future video codecs will no doubt use these techniques to improve upon MPEG-2 and AVC, as well as Motion JPEG 2000.
So we have film and television systems that deliver great slide shows, but are not very good at portraying moving objects. Some in Hollywood are moving to 48fps, especially for 3-D. With sports a great revenue source for television, it will be sportscasting that drives broadcasters toward more accurate rendition of fast action.
Send comments to: email@example.com