Viewers and producers both seek a more realistic viewing experience from cinema and television systems. There are several ways to make television more immersive. Three paths being followed are increasing the field of view, adding depth perception and improving motion rendition.
Wider field of view
We view television as a small 2D window on the world, restricting us to the role of a voyeur rather than "being there." The cinema has toyed with Cinerama, IMAX and Omnimax to give a very wide field of view. In the case of Omnimax, the field of view matches our peripheral vision. Current HDTV was originally conceived to increase the field of view from the 10 degrees of SD to around 30 degrees. However, binocular human vision subtends over 120 degrees. The UHDTV or Super Hi-Vision (SHV) project aims to increase the field of view to 80 to 100 degrees by raising the resolution to 8K.
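The relationship between pixel count and viewing angle can be sketched with simple geometry. The pairings below are illustrative, using the approximate viewing angles quoted above rather than any standardized figures:

```python
import math

def field_of_view_deg(screen_width, viewing_distance):
    # Horizontal angle subtended by a screen at a given viewing distance
    return math.degrees(2 * math.atan(screen_width / (2 * viewing_distance)))

def pixels_per_degree(h_pixels, fov_deg):
    # Average horizontal pixel density across the field of view
    return h_pixels / fov_deg

# Illustrative pairings of horizontal pixel count and viewing angle
for name, h_pixels, fov in [("SD", 720, 10), ("HDTV", 1920, 30), ("8K SHV", 7680, 100)]:
    print(f"{name}: {pixels_per_degree(h_pixels, fov):.0f} px/deg over {fov} deg")
```

Note that the angular pixel density stays roughly constant across the three systems; what the higher resolutions buy is a wider window on the scene, not sharper pixels.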
Adding depth perception
One route to increased realism and immersion is to add depth information to video, with stereoscopic 3-D (S3D) being the first implementation. S3D is a long way from true 3-D, in that it creates a planar presentation at a fixed viewpoint and reproduces depth through binocular disparity. The depth budget of the production has to be managed so as to avoid the eyestrain that results from objects being placed away from the display plane.
To achieve true representation of depth in the scene, the television system would have to reproduce the light field. Conventional flat-panel displays only carry intensity and color information at each pixel. A light field display also carries information about the direction of light rays, which allows objects to be viewed in front of or behind the plane of the display. However, S3D is an affordable compromise, and consumer versions of light field systems are a long way off.
Improving motion rendition
One side effect of increasing the field of view is that our eyes are more sensitive to flicker at the periphery of vision. Scanning rates of 24fps, 25fps or 30fps date back to pre-World War II technologies in film and television cameras. Those were the minimum rates that would work, but they have always suffered from motion artifacts. With film, it is temporal aliasing (the car wheel rotating the wrong way), and with television the well-known artifacts of interlace. For high-resolution systems, 25fps is simply not adequate. We have already seen the way forward with 720p/60 broadcasts in the U.S. and high-frame-rate cinema, pioneered by Douglas Trumbull with the 60fps Showscan film system. It has now become much easier with digital cinema.
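The wagon-wheel illusion is plain sampling theory: any periodic motion faster than half the frame rate folds back as false, often reversed, motion. A minimal sketch (a hypothetical helper, not drawn from any broadcast standard):

```python
def apparent_rate(true_rate, frame_rate):
    # Aliased rate seen when periodic motion is sampled at frame_rate;
    # frequencies fold into the range (-frame_rate/2, +frame_rate/2]
    folded = true_rate % frame_rate
    if folded > frame_rate / 2:
        folded -= frame_rate
    return folded

# A wheel marker turning at 23 rev/s filmed at 24fps appears to rotate
# backwards at 1 rev/s; at 120fps the same motion is rendered correctly.
print(apparent_rate(23, 24))    # -> -1
print(apparent_rate(23, 120))   # -> 23
```

Raising the frame rate pushes the fold point up, which is one reason higher capture rates improve motion portrayal.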
The current recommended EBU HDTV systems include system 4, 1080p/50, although broadcasters have not yet adopted the system for transmission. Research by the Super Hi-Vision team at NHK indicates that a minimum frame rate of 120fps is needed to avoid flicker on the large screen of their system and to give motion portrayal worthy of the static resolution.
As viewers' expectations of picture quality increase, current 1080i/25 systems are showing the extent of their limitations. The biggest limitation is interlace, a 1920s technology that is stubborn in its refusal to lie down. We have the curious situation where receiver manufacturers market sets as 1080p, yet the decoders only support 1080i/25 or 720p/50. Sure, the panels are progressive, but only because that is how flat panels work.
We are where we are with the crude technology of interlace and all its attendant artifacts. Even if it could be argued that viewers don't notice the artifacts, one fact is inescapable: Interlaced video is more complex to encode to MPEG standards, with the result that compression efficiency is lower than for progressive-scan video. Interlaced systems must also reduce the vertical resolution of graphics (anti-aliasing) to avoid interline twitter, with the result that the potential resolution of the system is halved.
Comprehensive viewing tests by the EBU have demonstrated that 1080p/50 can be transmitted at the same bit rate as existing 1080i/25 services, but with a better picture quality on large displays.
Most television receivers do not have the necessary decoder performance to support the AVC Level 4.2 that is required for 1080p/50 signals, and this remains an obstacle to migration to all-progressive services. It will change as receivers become more sophisticated and add support for DVB-T2 and AVC Level 4.2.
Producers look to maintain the value of their investments into the future. We already see SD channels commissioning HD programming with an eye on the future. New formats like S3D are gaining a niche following among viewers, but Super Hi-Vision (4K and 8K) is going to set a new benchmark for video quality.
Many broadcasters have an infrastructure that is largely 1.5Gb/s, for 1080i/25 or 720p/50, or even 270Mb/s standard definition. New builds are now predominantly 3Gb/s, so the world is gradually moving to a position where 1080p/50 is supported by acquisition and post-production equipment. However, there remains a huge legacy of interlaced material in the program archives.
Mastering in 1080p/50 provides a file that can be transformed to 1080i/25 and 720p/50 without the artifacts inherent in cross-converting current interlaced or 720-line masters. Furthermore, much television content is consumed on inherently progressive devices like LCD TVs, PCs and tablets.
The 3Gb/s infrastructure also lends itself to the carriage of stereo signals, as a 3Gb/s link can operate as two SMPTE 292M 1.5Gb/s links, one each for left and right. These could be 720p/50, 1080p/25 or 1080i/25.
As the NHK-sponsored project to find new levels of realism progresses, every aspect of the production chain, from cameras to displays, is evolving to support this high-resolution standard. There are many obstacles yet to overcome, with the delivery of such high data rates to the viewer being perhaps the most challenging; 3Gb/s infrastructure is a step on the way, but it does not solve the problem of delivering the video stream to the home. The uncompressed SHV signal is around 48Gb/s, and using current compression techniques it would need a bandwidth of up to 400MHz, beyond the capacity of current satellite transponders, FTTH systems or optical discs.
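The 48Gb/s figure is consistent with simple arithmetic on plausible SHV parameters. The sampling structure assumed here (7680×4320 at 60fps, 12-bit 4:2:2) is an illustration, as NHK has experimented with several:

```python
def uncompressed_rate_gbps(width, height, fps, bits_per_sample, samples_per_pixel):
    # Raw video payload in Gb/s (ignores blanking, audio and ancillary data)
    return width * height * fps * bits_per_sample * samples_per_pixel / 1e9

# 4:2:2 sampling averages 2 samples per pixel: Y on every pixel,
# Cb and Cr on every other pixel
rate = uncompressed_rate_gbps(7680, 4320, 60, 12, 2)
print(f"{rate:.1f} Gb/s")   # -> 47.8 Gb/s
```

At roughly sixteen times the pixel count of 1080p and higher frame rates, the delivery challenge the text describes follows directly from the multiplication.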
The most basic form of 3-D television is fixed-viewpoint stereoscopic. It gives the illusion of depth, but every viewer gets the same view, irrespective of his or her position relative to the display. Stereo 3D effectively delivers a single view from a pair of cameras directly to the left and right eyes via separate, respective channels. Potential advances in technology could realize a free viewpoint, where the scene changes as the viewer moves from side to side.
Early coding schemes simply deliver the left and right streams of a stereoscopic system to the display by spatially multiplexing them into the existing television frame. The display demultiplexes and presents the two channels using temporal multiplexing and shuttered eyewear, or through passive techniques based on polarization.
Frame compatible S3D
Two general forms of spatial multiplexing are used: Side-by-Side (SbS) and Top-and-Bottom (TaB). This arrangement is called Frame Compatible Plano-Stereoscopic 3D-TV.
Frame-compatible S3D sacrifices horizontal resolution (SbS) or vertical resolution (TaB) in the process of spatially multiplexing the L and R image streams. The first-generation frame-compatible implementation is not compatible with a 2D service, so it requires a simulcast to serve both 3-D and 2D viewers. Signaling extensions, which already exist within the AVC standards, would allow future STBs to select, say, the left channel for the 2D viewer.
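The resolution sacrifice is easy to see in a toy packing routine. This hypothetical sketch (frames as nested lists, one value per pixel) discards alternate columns or lines before multiplexing, which is exactly the trade described above:

```python
def side_by_side(left, right):
    # Drop every other column of each view; place the halves side by side
    return [row_l[::2] + row_r[::2] for row_l, row_r in zip(left, right)]

def top_and_bottom(left, right):
    # Drop every other line of each view; stack the halves vertically
    return left[::2] + right[::2]

# Tiny 4x4 frames labelled by view, row and column
L = [[f"L{r}{c}" for c in range(4)] for r in range(4)]
R = [[f"R{r}{c}" for c in range(4)] for r in range(4)]
sbs = side_by_side(L, R)     # each row: L columns 0,2 then R columns 0,2
tab = top_and_bottom(L, R)   # rows: L0, L2, R0, R2
print(sbs[0])   # -> ['L00', 'L02', 'R00', 'R02']
```

The packed frame occupies the same raster as a 2D frame, which is why it passes through existing encoders and links unchanged, and why each eye sees only half the original resolution.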
Service compatible S3D
There are alternative methods of transmission that provide a service compatible with 2D viewers. One is scalable video coding (SVC), an extension of AVC that allows additional data to be carried that basic decoders can ignore. Depth-difference information could be carried as an additional channel alongside the base 2D channel, and suitable STBs could use it to reconstruct the left and right views. MPEG-C Part 3 (ISO/IEC 23002-3) specifies such a 2D+Depth coding scheme.
There is much redundancy between the left and right views, and this can be exploited in compression schemes much in the same way as the interframe compression in long-GOP MPEG.
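That redundancy can be made concrete with a toy predictor: shift the left view by an assumed global disparity and code only the error. This is a hypothetical sketch; real inter-view prediction works block by block with per-block disparity vectors:

```python
def predict_right_from_left(left_row, disparity):
    # Predict a right-view scanline as the left view shifted by
    # `disparity` pixels, repeating the edge sample as simple padding
    return left_row[disparity:] + [left_row[-1]] * disparity

def residual(actual, predicted):
    # What remains to be entropy-coded after inter-view prediction
    return [a - p for a, p in zip(actual, predicted)]

left  = [10, 20, 30, 40, 50, 60]
right = [30, 40, 50, 60, 60, 60]   # left shifted by 2, edges repeated
print(residual(right, predict_right_from_left(left, 2)))
# -> [0, 0, 0, 0, 0, 0]
```

A well-predicted view leaves a near-zero residual that compresses almost for free, which is why coding two views costs far less than twice the bit rate of one.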
The DVB has released a 3DTV specification (A154) detailing frame-compatible 3DTV service standards. Future compliant receivers could use signaling carried in the AVC Supplemental Enhancement Information (SEI) to automatically manage mixed 2D and 3-D broadcasts for both 3-D and 2D-only receivers. The specification is aligned with HDMI 1.4 and supports TaB and SbS multiplexing.
Multiview coding (MVC)
Another extension to AVC, the multiview coding profile (MVC), allows multiple viewpoints to be encoded into a single bit stream and then decoded to 2D, stereo or multiview to suit the display device. Typically, it is used with two viewpoints (stereo high profile); true multiple-viewpoint capture (multiview high profile) may follow in the future. MVC uses temporal and inter-view prediction to remove redundant information across views.
The Blu-ray Disc Association updated its specification to support 3-D using MVC encoding. The format allows existing players to decode a 2D signal from 3-D discs. Through the use of MVC, the BDA claims it can achieve the quality of separate L/R streams at 75 percent of the data rate.
As broadcasters move to a 3Gb/s infrastructure, and if and when a large proportion of receivers support H.264 level 4.2, the way is clear to move to 1080p/50.
The pace of change is accelerating, and for consumers each change (DVB-T to T2, MPEG-2 to AVC, 2D to 3-D) involves a new STB. The devices are rarely forward-compatible, and they are designed to a price where every cent counts. The days of a receiver lasting for a decade or more look set to be replaced with constant obsolescence. Issues remain as to how the data rates of an 8K system can be delivered to consumers, although many would say 4K would suffice for the foreseeable future.