HOLLYWOOD, CALIF.—As the industry anticipates high dynamic range content, it must address a number of issues related to compression, compatibility among different displays and backward compatibility with standard dynamic range content.
Standards for HDR must be more flexible than BT.709 is for SDR: SDR works with a fixed definition of 100 nits (candelas per square meter) at its brightest, while HDR will be mapped in one of many possible ways to displays whose brightest output could be 400, 800 or, theoretically, 10,000 nits. HDR can offer a more lifelike image because of its greater contrast range, but that same characteristic could also introduce new kinds of artifacts. HDR displays will also likely be expected to present SDR material.
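The fixed SDR definition can be sketched in a few lines. Under the BT.1886 reference EOTF (assuming a zero black level, a simplification), a normalized code value always maps to the same light on a reference display, which is exactly the rigidity HDR standards have to relax:

```python
# Sketch: SDR's fixed mapping from normalized code value to light,
# per the BT.1886 reference EOTF simplified to a zero black level.
SDR_PEAK_NITS = 100.0  # SDR reference white

def sdr_eotf(v: float, peak: float = SDR_PEAK_NITS) -> float:
    """Map a normalized [0,1] SDR code value to display luminance (cd/m^2)."""
    return peak * max(v, 0.0) ** 2.4

# Code value 1.0 always lands at 100 nits on a reference SDR display.
# An HDR display showing the same signal could choose any peak
# (400, 800, ... nits), which is why HDR standards must carry
# luminance semantics rather than assume them.
print(sdr_eotf(1.0))  # 100.0
```

The function names and the zero-black simplification here are illustrative, not drawn from any of the presentations.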
These were some of the issues addressed on Wednesday by Pierre Larbier, chief technology officer of French ATEME; Olie Baumann of Ericsson Television, Southampton, U.K.; and Avid Inc. Chief Architect Shailendra Mathur.
Larbier’s presentation, “High Dynamic Range: Compression Challenges,” addressed issues that arise when the SMPTE ST 2084 (PQ) electro-optical transfer function is used in conjunction with codecs such as HEVC (High Efficiency Video Coding) to encode a higher-contrast signal than BT.709 can support.
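For reference, the ST 2084 (PQ) curve that maps absolute luminance into code values can be sketched as follows; the constants are those published in the standard, while the helper names and 10-bit full-range quantization are illustrative choices:

```python
# Sketch of the SMPTE ST 2084 (PQ) inverse EOTF: absolute
# luminance (0..10000 cd/m^2) -> normalized code value.
# Constants are those published in ST 2084.
M1 = 2610 / 16384          # 0.1593017578125
M2 = 2523 / 4096 * 128     # 78.84375
C1 = 3424 / 4096           # 0.8359375
C2 = 2413 / 4096 * 32      # 18.8515625
C3 = 2392 / 4096 * 32      # 18.6875

def pq_inverse_eotf(nits: float) -> float:
    """Encode absolute luminance in cd/m^2 as a normalized PQ value."""
    y = (max(nits, 0.0) / 10000.0) ** M1
    return ((C1 + C2 * y) / (1.0 + C3 * y)) ** M2

def to_10bit(v: float) -> int:
    """Quantize a normalized value to a full-range 10-bit code (illustrative)."""
    return round(v * 1023)

# 10000 nits reaches the top of the range, while 100-nit SDR white
# lands only about half way up the 10-bit PQ code scale.
print(to_10bit(pq_inverse_eotf(10000.0)))  # 1023
print(to_10bit(pq_inverse_eotf(100.0)))
```

Because the curve is tuned to perceptual sensitivity rather than to any codec, an encoder sees statistics quite unlike gamma-coded BT.709 video, which is the root of the new artifacts Larbier describes.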
“Extensive research conducted using HDR-graded content shows that HEVC video encoders that were not specifically designed for this new format will produce new types of coding artifacts,” Larbier said.
“Mapping between luminance and code values is not supposed to have an impact on coding efficiency,” he said. “HEVC was selected as the codec to compress 10-bit UHD HDR signals. But the tests on UHD HDR content showed unexpected new video defects that might become noticeable to viewers if the video compression system wasn’t specifically adapted.”
His conclusion: “HDR promises an immersive experience, but the video encoder has to be specifically optimized” to avoid artifacts such as noise and banding.
Baumann’s paper, “The Interaction Between Transfer Function and Compression in High Dynamic Range Video,” concerned the fact that consumer displays are offering imagery that can be both brighter and darker than previously possible. Even current SDR displays, he noted, offer peak brightness of up to 300 nits. The result, he showed in a slide presentation, is artifacts, specifically banding, at particular luminance levels.
Baumann suggested a possible solution that involved adding more bits and increasing the baseband data rate, but pointed out the inefficiencies of such an approach. It would be preferable, he said, to change the transfer function in order to exploit the relative viewer sensitivity to contrast differences at low luminance. But, he asked, which transfer function should one choose? SMPTE 2084 or BBC/NHK Hybrid Log Gamma or something else? What are the trade-offs of each?
Among the potential solutions for delivering this additional information is the use of a nonlinear opto-electrical transfer function (OETF) prior to quantization, which exploits the relative sensitivity of the human visual system to contrast differences in light and dark areas. His presentation investigated the interaction between these transfer functions and HEVC video compression in terms of the perceptible artifacts in HDR video. His demonstration primarily dealt with his methods of identifying and isolating particular areas (cells) of potential banding, rather than offering a solution to the banding problem itself.
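The benefit of a nonlinear transfer function before quantization can be demonstrated with a toy experiment. This sketch (the 0.01–1 cd/m² shadow band and step count are illustrative choices, the PQ constants are from ST 2084) counts how many distinct 10-bit codes cover that dark band when luminance is quantized linearly versus through the PQ curve:

```python
# Sketch: why a nonlinear transfer function before quantization
# matters. Count distinct 10-bit codes covering the dark
# 0.01-1 cd/m^2 band when luminance is quantized (a) linearly over
# 0-10000 nits and (b) through the ST 2084 PQ curve.
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq(nits):
    y = (nits / 10000.0) ** M1
    return ((C1 + C2 * y) / (1.0 + C3 * y)) ** M2

def codes(encode, lo, hi, steps=10000, bits=10):
    """Sample [lo, hi] nits and collect the quantized codes produced."""
    scale = (1 << bits) - 1
    return {round(encode(lo + (hi - lo) * i / steps) * scale)
            for i in range(steps + 1)}

linear = codes(lambda n: n / 10000.0, 0.01, 1.0)
perceptual = codes(pq, 0.01, 1.0)
# Linear quantization collapses the whole shadow band onto almost a
# single code (severe banding); PQ spreads it over many codes.
print(len(linear), len(perceptual))
```

The same mechanism in reverse explains Baumann's point: a transfer function that spends codes generously in one luminance region must spend them sparingly elsewhere.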
“Nonlinear transfer functions are fundamentally a compromise,” he said. “Banding/loss of detail is not only an issue in dark regions, but potentially throughout the image, and compression exacerbates the effect. Different transfer functions impact different luminance regions.”
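The BBC/NHK alternative Baumann weighed against PQ makes the compromise concrete. Hybrid Log-Gamma (ARIB STD-B67, later ITU-R BT.2100) keeps a gamma-like curve in the lower half of the signal and spends a logarithmic shoulder on highlights; constants below are from the standard, the function name is an illustrative label:

```python
import math

# Sketch of the Hybrid Log-Gamma (HLG) OETF, the BBC/NHK
# scene-referred alternative to PQ. Constants per ARIB STD-B67.
A = 0.17883277
B = 1 - 4 * A                   # 0.28466892
C = 0.5 - A * math.log(4 * A)   # 0.55991073

def hlg_oetf(e: float) -> float:
    """Normalized scene linear light [0,1] -> HLG signal [0,1]."""
    if e <= 1.0 / 12.0:
        return math.sqrt(3.0 * e)           # square-root, gamma-like toe
    return A * math.log(12.0 * e - B) + C   # logarithmic shoulder

# The lower segment resembles a conventional gamma curve, which is
# the source of HLG's claimed SDR backward compatibility; the log
# shoulder compresses highlights into the upper half of the signal.
print(round(hlg_oetf(1.0 / 12.0), 3))  # 0.5
print(round(hlg_oetf(1.0), 3))         # 1.0
```

Seen side by side with PQ, the trade-off is plain: HLG is relative and display-agnostic but devotes fewer codes to deep shadows, while PQ is absolute and shadow-friendly but needs metadata to adapt to a given display.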
Mathur spoke about the SMPTE VC-3 video codec—the basis of Avid DNxHD—and the extension being added to the standard to allow for resolution-independent encoding of arbitrary raster resolutions, any frame rate, multiple bit depths and a variety of color spaces.
Stressing its success as a low-complexity editorial, intermediate and mezzanine codec, Mathur said it is very important to maintain its low computational complexity. This attribute is essential, he said, and worth a compromise in compression ratio to preserve.
“The idea with VC-3,” he said, “is that you can put it in an archive and decode it any time in the future.”
The new VC-3 standard, he said, was enhanced to break all resolution, color and frame-rate boundaries while remaining compatible with existing hardware and software implementations. It offers fast and scalable performance for real-time editing, specifies parameters for existing video standards, and allows the freedom to work with non-standard video.
“It supports a variety of workflows involving better, faster and more pixels,” he said.