NLE rendering: Multiple solutions

Back in the linear age, the words “render” and “real time” were never uttered as effects were incorporated into productions. That's because, while media was stored on linear devices and a microprocessor in the edit controller directed the editing process, special effects were performed by a digital video effects (DVE) unit and a character generator (CG) under the controller's command. The media stream never entered the computer. (See “Post-production evolution,” Broadcast Engineering, August 2005.)

As computers became faster, it became possible to route video through the computer itself, which allowed dissolves and wipes to be performed in software. For anything more complex, video was still passed to an external DVE.

Eventually, operations once performed by an outboard box could be performed on a computer board. From there, the circuitry shrank to a single IC mounted on the board that digitized video and audio. Simple wipes were performed by the computer's CPU; complex effects were performed by dedicated ICs.

In an effort to drive NLE prices lower, engineers realized there was a distinction between preview video and recorded video. This set the stage for another class of effects: real-time rendered effects that were not of final, recordable quality.

There are four fundamental techniques employed to produce previews, each with its own advantages and disadvantages. The techniques fall into two categories, render buffering and quality reduction, and each category exists in two forms: static and dynamic.

Static render buffer

As it became cost-effective to equip computers with large amounts of RAM, it became practical to render effects at less-than-real-time speed into a RAM buffer. Upon pressing “preview play,” the buffer is filled; the RAM buffer then plays the video out to both RGB and NTSC/HD monitors. Sony Vegas uses this approach. The disadvantage is obvious: the disturbing wait before playback begins.
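The two-phase behavior can be sketched in a few lines of Python. The helper names and timings below are invented for illustration; no shipping NLE works at this level of simplicity.

    import time

    FRAME_RATE = 29.97  # NTSC playback rate

    def render_frame(n):
        """Stand-in for slower-than-real-time effects rendering."""
        time.sleep(0.1)      # pretend each frame takes ~3x real time to render
        return f"frame {n}"  # a real NLE would produce pixel data

    def display(frame):
        """Stand-in for playout to the RGB and NTSC/HD monitors."""
        print(frame)

    def static_preview(start, end):
        # Phase 1: fill the RAM buffer completely; this is the disturbing wait.
        buffer = [render_frame(n) for n in range(start, end)]
        # Phase 2: only now does playback begin, at the full frame rate.
        for frame in buffer:
            display(frame)
            time.sleep(1.0 / FRAME_RATE)

    static_preview(0, 30)  # roughly 3 seconds of waiting, then 1 second of video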

A variation of this technology is background rendering. Apple's Final Cut Pro and iMovie use this technology. As soon as an effect has been defined (or after a set period of time), rendering begins with output to a double-buffered disk file. If you are the type of person who writes with spell check turned off so as not to interrupt the flow of words, this is a great technology. Conversely, if you like to fine-tune each effect before moving on, background rendering will be of little value to you.
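Background rendering amounts to a worker thread fed by a queue, as in this sketch. The queue-and-thread structure is the point here; render_to_disk and the half-second render time are assumptions, not Apple's implementation.

    import queue
    import threading
    import time

    render_queue = queue.Queue()

    def render_to_disk(effect):
        """Stand-in for rendering into a double-buffered disk file."""
        time.sleep(0.5)  # pretend the render takes a while
        print(f"{effect} rendered in the background")

    def background_renderer():
        while True:
            render_to_disk(render_queue.get())
            render_queue.task_done()

    threading.Thread(target=background_renderer, daemon=True).start()

    def on_effect_defined(effect):
        # The editor keeps working; rendering proceeds behind the scenes.
        render_queue.put(effect)

    on_effect_defined("cross-dissolve")
    render_queue.join()  # a real editor would never block like this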

Dynamic render buffer


Sandra Scagliotti uses the Canopus EDIUS Pro 3 for native editing and real-time processing to mix video content for Waterman Broadcasting’s NBC and ABC affiliates in Fort Myers, FL.

To eliminate the delay before a video preview begins, a more sophisticated design was developed by Canopus. Upon initiating a preview, the PC decompresses one or more video streams. Based on the structure of the timeline, effects are rendered in the correct order for each frame. Each rendered, uncompressed frame is then stored in a large buffer held in system memory. Using double buffering, frames are drawn from the buffer at the appropriate rate and sent to your monitor(s).

During periods when there are no FX, the buffer fills with rendered frames. When the effect workload is light, the buffer will not be emptied. At intermediate workloads, the number of frames in the buffer will slowly drop over time. If, however, a complex effect (or a combination of effects) is encountered, the buffer may be emptied quickly. While this FX technology can work beautifully, it is impossible to guarantee a real-time preview because performance depends on the nature of the timeline itself.
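A toy simulation makes the buffer dynamics concrete. The buffer size, frame rate and per-frame render costs below are assumed numbers, not Canopus figures:

    PLAYBACK_FPS = 30.0
    BUFFER_CAPACITY = 150  # uncompressed frames held in system RAM (assumed)

    def simulate(render_costs):
        """render_costs: seconds needed to render each frame of the timeline."""
        occupancy, stutters = 0.0, 0
        for cost in render_costs:
            # Frames rendered during the 1/30 second one frame takes to play out:
            produced = (1.0 / PLAYBACK_FPS) / cost
            occupancy = min(BUFFER_CAPACITY, occupancy + produced - 1)
            if occupancy < 0:  # the buffer ran dry, so the preview stutters
                occupancy, stutters = 0.0, stutters + 1
        return stutters

    # 20 seconds of light effects (rendered 3x faster than real time) followed
    # by 10 seconds of a complex composite (4x slower than real time):
    timeline = [1 / 90] * 600 + [4 / 30] * 300
    print(simulate(timeline), "stuttered frames")  # the buffer eventually empties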

Static quality reduction

Another scheme enables transitions to be previewed by working with only one field from each stream. Other variations on this approach use only every other horizontal pixel. And, of course, both strategies can be used together to reduce the load by a factor of four. Alternatively, frames are dropped as needed, which results in playback stuttering, even though the effect may be claimed to be real-time.
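In code, the field-and-pixel subsampling is a pair of stride-two slices. Treating a frame as a simple list of pixel rows (an illustration, not any product's internal format):

    def preview_subsample(frame):
        """Keep one field (every other row) and every other horizontal pixel,
        cutting the effects workload by a factor of four."""
        return [row[::2] for row in frame[::2]]

    # A 480-line, 720-pixel frame shrinks to 240x360 for preview rendering.
    frame = [[0] * 720 for _ in range(480)]
    reduced = preview_subsample(frame)
    assert len(reduced) == 240 and len(reduced[0]) == 360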

Dynamic quality reduction

Rather than choosing a fixed real-time strategy, it is possible for an NLE's rendering engine to analyze a timeline and generate a rendering approach that maximizes real-time performance. Final Cut Pro uses this approach.

Alternatively, you can select a strategy that maximizes video quality or optimizes preview frame rate. One downside is that Final Cut Pro may simply skip effect features (e.g., an edge blur) during a preview.
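One way to picture such an engine is a per-segment cost check that degrades the preview until it fits a real-time budget. This is a sketch of the general idea, not Apple's actual heuristics; the costs and the budget are invented:

    REALTIME_BUDGET = 1.0 / 30  # seconds available per preview frame

    def choose_strategy(effects):
        """effects: (name, cost_in_seconds, optional_feature) for one segment."""
        kept = list(effects)
        cost = sum(c for _, c, _ in kept)
        # First skip optional features (e.g., an edge blur) to save time.
        for fx in sorted(effects, key=lambda f: -f[1]):
            if cost <= REALTIME_BUDGET:
                break
            if fx[2]:
                kept.remove(fx)
                cost -= fx[1]
        # If the segment still cannot keep up, reduce the preview resolution.
        scale = 1.0 if cost <= REALTIME_BUDGET else REALTIME_BUDGET / cost
        return kept, scale

    segment = [("dissolve", 0.01, False), ("edge blur", 0.05, True)]
    print(choose_strategy(segment))  # the blur is skipped; the dissolve plays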

Avid offers a similar type of rendering engine. However, no effect features are eliminated to increase performance. These types of subtle distinctions make it impossible to compare NLEs based on the number of real-time effects or the number of real-time streams.

Hardware remains


Pinnacle’s Liquid Edition PRO is an example of a system that uses a graphics processing unit to support real-time preview of many video streams. The screen capture above shows the editing interface, with A-B windows, the open timeline and the source files.

Pinnacle's Liquid Edition PRO supports real-time preview of many video streams by using a graphics processing unit (GPU). Data streams are transferred from disk via the PCI bus to buffers in system RAM. The PC's CPU grabs the data directly from system RAM and decompresses it.

For complex 3-D effects, uncompressed video from system RAM is transferred via the AGP bus to graphics RAM. The GPU on the AGP card then generates the complex effects. By using a GPU to render effects, not only are complex effects rendered rapidly, but the total data-transfer load is also balanced between two separate PC buses: PCI and AGP.

After the GPU renders effects, the resulting uncompressed frame is held in graphics RAM. From here, the frame can be output via the graphics card's DVI/VGA port to provide an editor with a real-time preview. It can also be output as real-time analog video.
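Taken together, the preview path can be pictured as a staged pipeline, as in the sketch below. The stage names are invented for illustration; the point is that the PCI bus carries compressed data while the AGP bus carries uncompressed frames:

    def read_from_disk(timeline):
        """Compressed data moves from disk into system RAM over the PCI bus."""
        for chunk in timeline:
            yield chunk

    def cpu_decompress(chunk):
        """The CPU pulls the compressed data straight from system RAM."""
        return f"frames({chunk})"

    def gpu_render_effects(frames):
        """Uncompressed frames cross the AGP bus; the GPU renders the effects."""
        return f"rendered({frames})"

    def preview_pipeline(timeline):
        for chunk in read_from_disk(timeline):      # PCI: compressed data
            frames = cpu_decompress(chunk)
            rendered = gpu_render_effects(frames)   # AGP: uncompressed frames
            print(rendered)  # stand-in for the DVI/VGA or analog video output

    preview_pipeline(["clip A", "clip B"])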

Hardware returns

While it is possible to compress DV using a computer's CPU, real-time HD MPEG-2 encoding is far from being a reality. This can mean many hours of encoding before a timeline can be recorded to tape or a DVD.

A potential solution is to incorporate a hardware MPEG-2 encoder. Although this adds cost and may rule out laptop HDV editing, it enables direct HD recording without an encoding delay. It also supports the output of an MPEG-2 transport stream (TS) via FireWire to an HDV camcorder. In turn, the camcorder converts it to an HD analog component output for display on an HD monitor.
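The time savings are simple arithmetic. In the sketch below, the 0.15x software encoding speed is an assumed figure chosen for illustration, not a benchmark of any product:

    def encode_hours(timeline_minutes, speed_factor):
        """speed_factor: encoding speed relative to real time (1.0 = real time)."""
        return timeline_minutes / speed_factor / 60

    # A one-hour HD timeline, software-encoded at an assumed 0.15x real time,
    # versus a hardware MPEG-2 encoder running at real time:
    print(f"software: {encode_hours(60, 0.15):.1f} hours")  # 6.7 hours
    print(f"hardware: {encode_hours(60, 1.0):.1f} hours")   # 1.0 hours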

Using the camcorder as a converter is a very inexpensive way to obtain HD monitoring. An alternative, provided by the Canopus EDIUS NX, is a PCI board that outputs SD/HD analog component video.

When hardware DV and MPEG codecs are incorporated in an NLE, these systems will function much like linear editors of the last century, when every type of output was real-time.

Steve Mullen is owner of Digital Video Consulting, which provides consulting and conducts seminars on digital video technology.