ROBERT GREEN AND AARON BEHMAN /
04.01.2012
Originally featured on BroadcastEngineering.com
The benefits of FPGAs
The devices' inherent flexibility provides a broad toolkit to equipment designers.

We are on the cusp of the next exciting age of video compression, as High Efficiency Video Coding (HEVC) is expected to be standardized in early 2013. Due to the inherent flexibility of FPGA devices, equipment engineers and designers can start now on next-generation equipment, assured that the IP they develop can be adapted to any late-stage shifts in the standard.

The expected move to HEVC (H.265) follows a steady progression of video compression: the introduction of MPEG-1 in 1992 (which laid the foundation for the ensuing revolution in consumer digital video content), the release of MPEG-2 in 1994 (which offered compressed, broadcast-quality digital video) and H.264 in 2003 (which spurred worldwide HD digital video, Blu-ray, Internet streaming and mobile video).

Although the ISO/IEC Moving Picture Experts Group (MPEG) and the ITU have been working on a successor to MPEG-4 AVC since 2004, the latest push toward HEVC came in 2010, when the group reviewed 27 technology proposals examining how AVC could be advanced to the next level. During the review, it became clear that two paths were possible with the proposed standard: the same quality as AVC at half the bit rate, or twice the quality at essentially the same bit rate for transmission. Fortunately, HEVC can cover both scenarios and support a wide range of applications, from DTH transmission, where efficient use of low bandwidth (1Mb/s to 8Mb/s) is a priority, to acquisition, where the highest video quality is paramount but bandwidth (>50Mb/s) and, particularly, storage requirements are still a concern. (See Figure 1.)
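The practical payoff of the half-bit-rate scenario is easy to see with a back-of-the-envelope calculation. The figures below (an 8Mb/s AVC HD stream, a 38Mb/s transponder payload) are illustrative assumptions, not numbers from the standard:

```python
# Illustrative comparison of AVC vs. the HEVC "same quality at half the
# bit rate" target. All figures are hypothetical examples.

avc_bitrate_mbps = 8.0                     # assumed AVC HD DTH stream
hevc_bitrate_mbps = avc_bitrate_mbps / 2   # HEVC target: half the rate

# How many HD streams fit in an assumed 38 Mb/s transponder payload?
transponder_mbps = 38.0
avc_streams = int(transponder_mbps // avc_bitrate_mbps)    # 4 streams
hevc_streams = int(transponder_mbps // hevc_bitrate_mbps)  # 9 streams

print(f"AVC:  {avc_streams} streams at {avc_bitrate_mbps:.0f} Mb/s each")
print(f"HEVC: {hevc_streams} streams at {hevc_bitrate_mbps:.0f} Mb/s each")
```

With these assumptions, the same transponder more than doubles its HD channel count, which is the economic driver behind the DTH scenario.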

In parallel, the video-consuming public is expecting ever-higher definition in certain applications (HDTV, 4K2K and beyond), and sharing and watching increasing volumes of video-based content at various levels of quality. Supporting this trend, Cisco's Visual Networking Index predicts that the gigabyte equivalent of all movies ever made will cross global IP networks every five minutes by 2015. Cisco also estimates that Internet video will account for over 50 percent of consumer Internet traffic this year, rising from 40 percent of total traffic in 2010.

Against this backdrop, the move to a new compression standard that can handle higher-definition video more efficiently, or double the throughput at current quality levels, is a much-needed evolution. HEVC is particularly well suited to HDTV displays and content capture with progressive-scan frame rates and resolutions from 1080p up to Super Hi-Vision (16 × 1080p). And yet video capture and display technology is also moving very rapidly, making it tricky to predict exactly where all the standards will settle. This makes it nearly impossible for broadcast engineers and designers working to get ahead of the competition to rely on application-specific standard products (ASSPs) as the backbone of their hardware solutions. For this reason, FPGAs appear to offer the only viable platform for the next several years for companies hoping to exploit the advantages of HEVC.

Flexible devices

Broadcast engineers and designers are already starting the move to HEVC on flexible FPGAs because of several inherent features of the devices. These include massively parallel processing elements that support real-time video algorithms by examining multiple portions of an image concurrently. This is in stark contrast to the consecutive processing of image sections in current software/DSP implementations, which struggle to process real-time video at HD resolutions and beyond. FPGAs can also support the HEVC standard in software combined with hardware acceleration blocks for motion estimation and CABAC/CAVLC entropy coding, which enables tradeoffs between device resources and performance while promoting design productivity.
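The tile-level parallelism described above can be sketched in software. This is a toy model for illustration only: it splits a frame into tiles and runs a stand-in per-tile computation concurrently, whereas a real FPGA design would describe each tile pipeline as parallel hardware in an HDL, not in Python:

```python
# Toy model of the parallelism FPGAs exploit: split a frame into tiles
# and process them concurrently, rather than scanning the image
# sequentially as a software/DSP implementation would.
from concurrent.futures import ThreadPoolExecutor

def mean_luma(tile):
    """Stand-in per-tile computation (e.g. a motion-estimation cost)."""
    flat = [p for row in tile for p in row]
    return sum(flat) / len(flat)

def split_tiles(frame, tile_h, tile_w):
    """Cut a 2-D frame (list of rows) into tile_h x tile_w sub-blocks."""
    h, w = len(frame), len(frame[0])
    return [[row[x:x + tile_w] for row in frame[y:y + tile_h]]
            for y in range(0, h, tile_h)
            for x in range(0, w, tile_w)]

# An 8 x 8 "frame" of constant luma 100, split into four 4 x 4 tiles
frame = [[100] * 8 for _ in range(8)]
tiles = split_tiles(frame, 4, 4)

with ThreadPoolExecutor() as pool:     # tiles are processed concurrently
    results = list(pool.map(mean_luma, tiles))

print(results)  # one result per tile: [100.0, 100.0, 100.0, 100.0]
```

The key point is structural: each tile's work is independent, so an FPGA can instantiate one processing element per tile and finish all of them in the time a sequential processor spends on one.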

Changes can also be implemented on these flexible devices both during equipment production and after deployment in the field. This allows equipment makers to get ahead of the standards process and prevents early adopters from being penalized by late-stage shifts in standards.

In the HEVC realm, this is especially important because the Joint Collaborative Team on Video Coding (JCT-VC), formed to meld elements from MPEG and the ITU-T Video Coding Experts Group (VCEG), is still evaluating modifications to several current coding tools, including the adaptive loop filter (ALF), extended macroblock size (EMS), larger transform size (LTS), internal bit depth increase (IBDI) and adaptive quantization matrix selection (AQMS). New coding tools are also being considered for the standard, including modified intra prediction, a modified deblocking filter and decoder-side motion vector derivation (DMVD). The toolsets provided to implement HEVC encoders can be used and modified in various ways to improve bit rate, video quality or both.

Several new features to support HEVC are also being considered. Although not fully baked at this point, they include a 2-D non-separable adaptive interpolation filter (AIF), separable AIF, directional AIF, a “supermacroblock” structure offering blocks up to 64 × 64 with additional transforms, adaptive prediction error coding in the spatial and frequency domains, a competition-based scheme for motion vector selection and coding, mode-dependent KLT for intra coding, and IBDI. Table 1 highlights the HEVC tool set and compares it with H.264/AVC.

AVC High profile | HEVC High efficiency | HEVC Low complexity
16 × 16 macroblock | Coding unit quadtree structure (64 × 64 down to 8 × 8) | Coding unit quadtree structure (64 × 64 down to 8 × 8)
Partitions down to 4 × 4 | Prediction units (64 × 64 down to 4 × 4, square intra/inter + non-square inter) | Prediction units (64 × 64 down to 4 × 4, square intra/inter + non-square inter)
8 × 8 and 4 × 4 transforms | Transform units (32 × 32, 16 × 16, 8 × 8, 4 × 4 intra/inter + non-square inter) | Transform units (32 × 32, 16 × 16, 8 × 8, 4 × 4 intra/inter + non-square inter)
Intra prediction (9 directions) | Intra prediction (17 directions for 4 × 4, 3 directions for 16 × 16, 34 directions for rest) | Intra prediction (17 directions for 4 × 4, 3 directions for 16 × 16, 34 directions for rest)
Inter prediction luma 6-tap + 2-tap to 1/4 pel | Inter prediction luma 8-tap to 1/4 pel | Inter prediction luma 8-tap to 1/4 pel
Inter prediction chroma bi-linear interpolation | Inter prediction chroma 4-tap to 1/8 pel | Inter prediction chroma 4-tap to 1/8 pel
Motion vector prediction | Advanced motion vector prediction (spatial + temporal) | Advanced motion vector prediction (spatial + temporal)
CABAC or CAVLC | CABAC (Context Adaptive Binary Arithmetic Coding) | CAVLC (Context Adaptive Variable Length Coding)
8b/sample storage and output | 10b/sample storage and output | 8b/sample storage and output
Deblocking filter | Deblocking filter | Deblocking filter
— | Adaptive Loop Filter (ALF) and Sample Adaptive Offset (SAO) filter | Sample Adaptive Offset (SAO) filter

Table 1. The toolsets provided to implement HEVC encoders can be used and modified to improve bit rate, video quality or both. Information courtesy Matthew Goldman, Ericsson, from the paper “High Efficiency Video Coding (HEVC) - The Next Generation Compression Technology,” presented at the SMPTE 2011 Technical Conference and Exhibition, Oct. 25-27, 2011.
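The coding-unit quadtree in Table 1 can be illustrated with a short recursive sketch. The split criterion below (a pixel-variance threshold) is a simplification chosen for illustration; a real HEVC encoder decides splits by minimizing rate-distortion cost:

```python
# Toy illustration of the HEVC coding-unit quadtree: recursively split a
# 64 x 64 block down to 8 x 8. The split decision here is a simple
# variance threshold picked for illustration, not the rate-distortion
# optimization a real encoder performs.
def variance(block):
    flat = [p for row in block for p in row]
    mean = sum(flat) / len(flat)
    return sum((p - mean) ** 2 for p in flat) / len(flat)

def quadtree(block, x=0, y=0, min_size=8, threshold=100.0):
    """Return a list of (x, y, size) coding units covering the block."""
    size = len(block)
    if size <= min_size or variance(block) <= threshold:
        return [(x, y, size)]            # leaf: code this region as one CU
    half = size // 2
    units = []
    for dy in (0, half):                 # recurse into the four quadrants
        for dx in (0, half):
            sub = [row[dx:dx + half] for row in block[dy:dy + half]]
            units += quadtree(sub, x + dx, y + dy, min_size, threshold)
    return units

flat_block = [[128] * 64 for _ in range(64)]   # uniform area: no split
print(quadtree(flat_block))  # [(0, 0, 64)]
```

Flat regions are coded as a single large unit while detailed regions fall through to smaller ones, which is precisely why the quadtree outperforms AVC's fixed 16 × 16 macroblock on high-resolution content.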

At this point, only the decoder and the syntax of the MPEG stream are standardized, which means encoding can be done in any manner, using the toolsets in novel ways. From the board-level perspective, real-time implementation is really only possible in FPGAs, unless a company can afford the 18-month design and production cycle and the huge investment required for application-specific integrated circuits (ASICs). Even then, the relatively low-volume encoder market would never justify that device development.

For designers who see the potential of using ASSPs for the portions of the encoding functionality that are unchanged from MPEG-4, FPGAs can help them meet the HEVC standard when used as co-processors to the ASSPs: adding performance, differentiating the algorithms and tools being used, and integrating the video/audio interfaces. At the same time, FPGAs allow further pre- and post-processing, such as removing noise, de-interlacing input video and de-embedding audio from SDI for Dolby compression.

Depending on the application, tradeoffs among computational complexity, compression rate, robustness to errors and processing delay can only be evaluated in real time with an FPGA-based design.

Robert Green is senior manager, broadcast marketing at Xilinx. Aaron Behman is senior manager, broadcast & consumer segment marketing at Xilinx.


