Universal standards remain a dream
On a regular basis, I get into discussions about converting video frame rates. For historic reasons, TV frame rates are linked to power frequency. Operating the receiver at a different frame rate from the power frequency no longer leads to hum bars, but camera lighting still causes problems: any lamp running off AC will show a frame-to-frame exposure variation if it is not synchronous with the scanning. Incandescent lamps fare better than discharge lamps, because their thermal lag smooths out the variation.
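As an aside, the visibility of the problem can be estimated with a little arithmetic. The light output of an AC lamp modulates at twice the mains frequency, and the exposure drifts at the beat between that modulation and the nearest harmonic of the frame rate. A minimal sketch (the function name and examples are mine, not from the column):

```python
# Illustrative sketch: an AC lamp's light output modulates at twice the
# mains frequency. If that modulation is an exact multiple of the frame
# rate, every frame integrates the same slice of the flicker cycle and
# exposure is constant; otherwise it drifts at the beat frequency.

def flicker_beat_hz(mains_hz: float, frame_rate_hz: float) -> float:
    """Beat between lamp modulation (2 x mains) and the nearest frame-rate harmonic."""
    modulation = 2.0 * mains_hz                      # e.g. 50 Hz mains -> 100 Hz flicker
    nearest_harmonic = round(modulation / frame_rate_hz) * frame_rate_hz
    return abs(modulation - nearest_harmonic)

print(flicker_beat_hz(50.0, 25.0))   # synchronous: 0.0, no visible beat
print(flicker_beat_hz(50.0, 30.0))   # 100 Hz vs 90 Hz harmonic: 10.0 Hz beat
print(flicker_beat_hz(60.0, 25.0))   # 120 Hz vs 125 Hz harmonic: 5.0 Hz beat
```

This is why shooting 30fps under 50 Hz lighting (or 25fps under 60 Hz) produces a visible pulsing, while matching the frame rate family to the local mains does not.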
The film guys can use a generator running at whatever frequency they want, but most television studio work runs off the power grid. One way around this is flicker-free lighting, which drives the lamps at a high frequency so there is no frame-to-frame variation. But much acquisition takes place indoors under practicals such as overhead fluorescents, so shooting at the local power frequency avoids problems.
So for legacy and practical reasons, we are stuck with two frame rate families: 25/50 and 30/60. And cinematography is similarly not going to deviate from 24fps in the foreseeable future.
The problem lies with program interchange: converting between 24fps, 25fps, 30fps and 50/60. There are ways and means to convert, but all introduce artifacts to a greater or lesser extent. A major artifact is the roughly 4% speed change when 24fps material is run at 25fps or vice versa; another is 3:2 pulldown, which disturbs the cadence of moving objects. 25/30 interchange uses electronic techniques based on estimating the optical flow of objects; these can be very good, but not perfect for all scenes.
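The cadence disturbance from 3:2 pulldown is easy to see on paper. Four film frames are spread over ten interlaced fields by alternately holding each frame for three fields, then two, so the on-screen hold times are uneven. A minimal sketch (function name mine, for illustration):

```python
# Illustrative sketch: 3:2 pulldown maps 24fps film to ~60 fields/s video
# by holding film frames for 3 fields and 2 fields alternately. Four film
# frames thus fill exactly ten fields, but moving objects no longer
# advance at even intervals, which disturbs the motion cadence.

def pulldown_32(frames):
    """Map a sequence of film frames to interlaced fields using 3:2 cadence."""
    fields = []
    for i, frame in enumerate(frames):
        hold = 3 if i % 2 == 0 else 2    # alternate 3-field and 2-field holds
        fields.extend([frame] * hold)
    return fields

# Four 24fps film frames become ten video fields
print(pulldown_32(["A", "B", "C", "D"]))
# -> ['A', 'A', 'A', 'B', 'B', 'C', 'C', 'C', 'D', 'D']
```

A moving object repeats for three fields, then two, then three again, which is the judder viewers associate with film transferred to 60 Hz television.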
I got to thinking that the basic problem stems from the choice of regular temporal sampling back when cinematography was developed. Our eye/brain system doesn't sample this way, but for a mechanical system like film it delivers a good approximation to continuous motion, thanks to a process in the visual cortex termed short-range apparent motion.
Video compression already makes heavy use of motion vectors, so why not break away from frame-based capture altogether and use object encoding? Such coding is mentioned in the MPEG-4 standard. The big problem lies in the computational requirements of the camera: objects must be detected and segmented, then motion-estimated and vectors generated. My guess is that the main difficulty would be the fidelity of reproduction at the edges of objects. Until the artifact level of object-based capture falls below that of current standards-conversion techniques, such a system is just a dream. A halfway house would be variable frame rate coding, but would that be any more efficient than long-GOP constant frame rate? I doubt it.
The industry is on course to move away from interlaced scanning, with its inherent artifacts. Progressive scan delivers much better picture quality and is easier to compress, as numerous EBU trials have demonstrated. However, the artifacts of frame rate conversion remain. Many of them can be avoided with a carefully planned workflow, but if a production is for worldwide distribution, at least one of the transmission masters must be a conversion.
For more than a century, constant rate temporal sampling has proven to be a good engineering compromise. As countries are unlikely to change their power frequencies to a single global standard, I guess we are stuck with standards conversion for a long time.