
‘Fake’ vs. ‘True’ Progressive Scanned Video

Video frame rates would seem to be a static subject, frozen in the various International Telecommunication Union (ITU) and Society of Motion Picture and Television Engineers (SMPTE) standards.

However, equipment manufacturers seem to endlessly devise new wrinkles, fueling debates over the plethora of frame rate designations and exposing the uncertainty many video professionals have about them.

The user community can get testy when discussing the topic, but it needs to understand why: the insistence on “filmic” looking video and on 24 frames per second (fps) recording, playback and editing, within a medium designed in the United States around 30 interlaced frames per second, has driven manufacturers to add “workarounds” that jam ‘nonstandard’ frame rates into video equipment that meets those standards and supports legacy content.


Figure 1. ‘Standard’ 2:3 pulldown for converting 24p frames (A, B, C, D) to 30i frames (1, 2, 3, 4, 5), each with two fields, a and b.

One of the first points to establish when discussing frame rates is whether you are talking about acquisition or storage. How an imager scans to capture a picture and how that image is organized in the data stream for recording or playback are two different issues. Progressive scan imagers can have their images stored in “interlaced” containers and vice versa.

Leaving overseas formats such as Phase Alternating Line (PAL) and Sequential Color with Memory (SECAM) aside for the moment, any video recorded for standard definition viewing equipment needs to be output in “30i” (29.97 interlaced fps). Video captured at a filmic 24 fps therefore has to be spread across 30 interlaced frames containing 60 separate interlaced fields. That is where “pulldown” comes into play.

In 2:3 pulldown (aka 3:2 pulldown), the first frame of 24 fps imagery is stored in two fields of 30 fps video (as shown in Figure 1), and the next frame of 24 fps imagery is stored in three fields. That “cadence” of storing successive 24 fps images in two fields, then three fields, repeats; if both the 24 fps and 30 fps imagery have the same format (i.e., standard definition video), there is no degradation of image quality inherent in the pulldown process.
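The field cadence described above can be sketched in a few lines of Python. This is a simplified model, not production code; the frame labels follow Figure 1 and the helper name is ours.

```python
# Sketch of standard 2:3 pulldown: map four 24p frames (A, B, C, D)
# onto ten fields, i.e. five 30i frames of two fields each.

def pulldown_23(frames):
    """Repeat each source frame in a 2, 3, 2, 3, ... field cadence."""
    cadence = [2, 3]
    fields = []
    for i, frame in enumerate(frames):
        fields.extend([frame] * cadence[i % 2])
    return fields

fields = pulldown_23(["A", "B", "C", "D"])
# Pair consecutive fields into 30i frames (field a, field b)
frames_30i = [tuple(fields[i:i + 2]) for i in range(0, len(fields), 2)]
print(frames_30i)
# [('A', 'A'), ('B', 'B'), ('B', 'C'), ('C', 'D'), ('D', 'D')]
```

Note that the third and fourth 30i frames each mix fields from two different 24p source frames, which is exactly the post-production wrinkle discussed next.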

However, it does present post-production issues if the editor works only at 30 fps. Pulldown creates a “judder frame” whenever a frame’s first interlaced field contains an image from one 24 fps frame but its second field contains an image from a different 24 fps frame. In a five-frame run of 30 fps video, frames 3 and 4 are judder frames. Clips on the timeline should not start or end with a judder frame, to prevent a visually annoying “flash frame” artifact between clips.
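The judder frames can be identified mechanically: a 30i frame judders when its two fields carry different source-frame labels. A minimal check, using the per-Figure-1 field sequence:

```python
# With the standard 2:3 cadence, the ten fields in a five-frame run
# are (per Figure 1): A A B B B C C D D D.
fields = ["A", "A", "B", "B", "B", "C", "C", "D", "D", "D"]
frames_30i = [tuple(fields[i:i + 2]) for i in range(0, len(fields), 2)]

# A frame is a judder frame when its two fields differ in origin.
judder = [n for n, (a, b) in enumerate(frames_30i, start=1) if a != b]
print(judder)  # [3, 4]
```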

Panasonic introduced “2:3 advanced pulldown,” or 2:3 pA, with its DVX-100 camcorder, the first to support “filmic” looks and 24 fps shooting. Instead of the standard 2:3:2:3… cadence, Panasonic used a 2:3:3:2 cadence (see Figure 2). This produced only one judder frame (frame 3) per five 30 fps video frames, reducing the chance that an unaware editor would place a judder frame at the start or end of a clip.
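Swapping in the 2:3:3:2 cadence makes the advantage visible. Again a simplified sketch with our own helper name:

```python
# Panasonic's 'advanced' cadence stores frames A, B, C, D
# in 2, 3, 3, 2 fields respectively.

def pulldown_2332(frames):
    cadence = [2, 3, 3, 2]
    fields = []
    for i, frame in enumerate(frames):
        fields.extend([frame] * cadence[i % 4])
    return fields

fields = pulldown_2332(["A", "B", "C", "D"])
frames_30i = [tuple(fields[i:i + 2]) for i in range(0, len(fields), 2)]
judder = [n for n, (a, b) in enumerate(frames_30i, start=1) if a != b]
print(frames_30i)
# [('A', 'A'), ('B', 'B'), ('B', 'C'), ('C', 'C'), ('D', 'D')]
print(judder)  # [3] -- only one mixed-source frame per run
```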

Figure 2. Panasonic’s ‘advanced’ 2:3 pulldown.

The intent of these “24 fps” shooting modes was to allow digital video camcorders to cheaply produce content that would eventually be released on film. Once the project was edited in a system that only outputs 30i video, it was necessary to reverse the pulldown process to get the video fields back into film frames. In that case, five 30i frames are “pulled down” to four 24p frames, hence the name “5:4 pulldown,” which is also called “5:4 pullup.” That process is tricky because the processor needs to know which cadence was used and which video frame corresponds to the A frame of 24p imagery. Otherwise judder frames will appear in the progressive scan display of the 24p material, once again creating a noticeable artifact.
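In this simplified model, reversing the pulldown is just walking the field sequence with the known cadence and keeping one copy of each repeated frame; the function name and its assumption of perfect A-frame alignment are ours.

```python
def remove_pulldown(fields, cadence):
    """Recover 24p frames from a field sequence, given the cadence
    and correct alignment to the 'A' frame."""
    frames, pos = [], 0
    for count in cadence:
        frames.append(fields[pos])  # all `count` fields repeat one frame
        pos += count
    return frames

fields = ["A", "A", "B", "B", "B", "C", "C", "D", "D", "D"]
print(remove_pulldown(fields, [2, 3, 2, 3]))  # ['A', 'B', 'C', 'D']
```

If the cadence guess or the alignment is wrong, the recovered “frames” mix fields from different sources, which is exactly the judder artifact described above.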


Standard high definition video supports a plethora of interlaced and progressive frame rates. “True” progressive scan rates (24p, 30p and 60p) are often confused with “fsc” (color subcarrier frequency based, or interlaced) scan rates of 23.976 fps, 29.97 fps and 59.94 fps. Adding to the confusion, “59.94 fps” is also used to designate 30i video, where the “fps” stands for fields per second instead of frames per second. Frequently “24p” is used to designate what is really 23.976 fps.

In HD video, the “s” from “fsc” shows up in descriptions of the interlaced organization of progressive scanned imagery for recording or playback. For example, 1080-30ps or 1080psF means that progressively scanned frames are stored in the same manner as interlace scanned frames, odd lines in one field and even lines in another, but without the interline filtering used for interlacing. Again, this produces no loss in quality. It simply allows progressive scanned imagery to be recorded, played back and viewed by equipment designed to support interlacing. With 1080-60p, however, processing must either compress the video to fit 30 fps data rates, or equipment specifically designed for the higher data rate must be added (most 1080-60p recording equipment uses some form of long-GOP compression to fit the bandwidth of relatively slow transport rates for tape-based recording).
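The lossless nature of psF-style storage can be demonstrated with a toy model: split a progressive frame’s scan lines into an odd-line field and an even-line field, then interleave them back. The line labels are stand-ins of our own.

```python
# psF-style storage sketch: split a progressive frame into two
# fields with no filtering, then re-interleave. The round trip
# reproduces the original frame exactly.
frame = [f"line{n}" for n in range(8)]       # stand-in for 8 scan lines
field_a, field_b = frame[0::2], frame[1::2]  # odd/even line split

rebuilt = [None] * len(frame)
rebuilt[0::2], rebuilt[1::2] = field_a, field_b
print(rebuilt == frame)  # True -- lossless round trip
```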

A stubborn myth is that “true progressive” scanned imagery is superior in quality to progressive imagery scanned at an equivalent fsc rate. In reality the only difference is that the scan or transport rate for the fsc material is about 0.1 percent (a factor of 1000/1001) slower than the “true” progressive rate. Not only is that speed difference undetectable to human senses, it has virtually no impact on resolution, the primary determinant of “picture quality.”


Such a minuscule time difference only matters to those performing time-critical analysis of high-speed events where milliseconds count. Otherwise, 24p and 23.976p (aka 24ps) can be used interchangeably. Converting back and forth between a “-p” and an “-i” format requires attention to the pulldown cadence and alignment to avoid judder frame artifacts. Image degradation would only come from conversion between image formats (SD, HD) or compression codecs (MPEG-2, H.264), or from improper cadence/judder frame handling.

-- Government Video