Recently, I had a conversation with a friend about computer graphics and animation. Both of us are techies, so of course the discussion gravitated to hardware and software.
In 1986 B.W. (before Windows), a computer graphics company called Bosch had an initial run of success. The company's main product was the FGS-4000 graphics box, a full rack of expensive computing hardware with a price tag starting in the quarter-million dollar range.
It offered 3-D rendering of models such as lettering and other items that could be defined on-screen using what today would be considered a primitive user interface. A hot machine in that era, it sported analog NTSC output cards and typically rendered to either a disk recorder or a Sony BVH-2500 analog 1-inch video recorder that could record single frames.
I have tried to find data on the capabilities of the system, but suffice it to say, any desktop computer today offers much greater capabilities.
We can thank Gordon Moore, a founder of Intel, for describing the growth in computing power in our era. In simple terms, Moore stated that the number of transistors on integrated circuits would double every 24 months. The assumption is that as the number of transistors grows, the power of the chip grows in direct proportion. (See Figure 1.)
To a first approximation, his prediction has been remarkably accurate. It is an empirical observation, from which nothing can really be calculated. From a business standpoint, however, it has assumed the status of nearly a law of nature.
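The arithmetic behind that observation is simple compounding, which a few lines make concrete. The sketch below is illustrative only; the 1971 starting point (the Intel 4004 at roughly 2,300 transistors) is a widely cited figure, not something from this article.

```python
# Illustrative sketch: project transistor counts under Moore's
# observation of a doubling every 24 months.
def transistors(start_count, start_year, target_year, doubling_years=2):
    """Projected count assuming one doubling every `doubling_years` years."""
    doublings = (target_year - start_year) / doubling_years
    return start_count * 2 ** doublings

# Starting from ~2,300 transistors in 1971 (Intel 4004),
# 40 years of doubling every two years yields 2**20 doublings' growth:
print(f"{transistors(2300, 1971, 2011):,.0f}")  # on the order of billions
```

The point, as the column notes, is that nothing is really *calculated* here; the formula just restates the empirical doubling rule.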
Applying Moore's Law to TV graphics
Nowhere is this power more useful than in graphics applications, both scientific visualization and the creation of TV and motion picture special effects.
Many of the effects that the FGS-4000 couldn't render in real time a couple of decades ago are today rendered in hardware on graphics cards at higher resolution.
For example, take a look at the Microsoft Windows 3-D text screensaver to see the jump in real-time processing power that Moore's Law describes. That simple 3-D text would have taken many hours to render in the 1980s.
The power has moved from CPU-intensive operations to graphics primitives that are manipulated in high-speed, special-purpose graphics engines on graphics cards. Now, graphics cards are capable of outputting video to the tight specifications professional TV systems require. Today, essentially all cards on the market can be programmed to output raster formats for common and unusual TV standards alike, with the correct color spaces and interfaces necessary for broadcast use.
The tablet PC on which I am writing this article will output clean NTSC, and with the proper physical layer interface adapter, it can output SMPTE 259M SD or SMPTE 292M HD signal formats. Software today can create excellent text keys (lower-thirds and full-screen images) that can be rendered in real time on cards designed for consumer applications and suitable drivers. The growth in home computer gaming is directly benefiting broadcast applications.
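The bit rates behind those two interface standards fall out of simple sample-rate arithmetic. The figures below are the standard published rates; the calculation itself is my own sketch, not something spelled out in the article.

```python
# Serial digital interface rates follow from word rate x word length.
# SMPTE 259M (SD, Level C): 27 MHz multiplexed Y/Cb/Cr words x 10 bits.
sd_rate = 27_000_000 * 10            # 270 Mb/s

# SMPTE 292M (HD): 74.25 MHz each for Y and multiplexed Cb/Cr,
# interleaved to 148.5 Mwords/s x 10 bits.
hd_rate = 74_250_000 * 2 * 10        # 1.485 Gb/s

print(f"SD-SDI: {sd_rate / 1e6:.0f} Mb/s, HD-SDI: {hd_rate / 1e9:.3f} Gb/s")
```

That factor-of-5.5 jump from SD to HD is part of why broadcast-grade I/O lagged behind consumer graphics output for so long.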
Output is only half of the picture, of course. Capturing video on general-purpose computers used to create color space conversion and monitoring problems. Simply put, the computer industry was slow to understand that the details of broadcast standards needed to be taken into account in both hardware and software design if the product was to have any applicability to our marketplace.
In the 1980s, plenty of terrible video was created using immature tools. By the mid-90s, the growth of prepress and broadcast graphics had spawned tons of options for graphics professionals involved in creating content. Timing, jitter and color space issues largely disappeared from video capture. Black burst inputs on computer systems were not uncommon. More than anything else, the need for display hardware and software manufacturers to achieve predictable results for screen, print and professional monitors meant a new focus on the details was essential.
Open Adobe Photoshop and explore the myriad of color space options it now supports. It makes you want to open Charles Poynton's books, “A Technical Introduction to Digital Video” and “Digital Video and HDTV, Algorithms and Interfaces,” and bone up on the fundamentals of color reproduction.
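As a taste of the fundamentals Poynton covers, here is a minimal sketch of the luma/chroma conversion used for SD video. The Rec. 601 coefficients are standard; the function name and the normalized 0.0-1.0 value range are my own choices for illustration.

```python
# A minimal sketch of SD color-space math: nonlinear R'G'B' to
# Y'CbCr using the Rec. 601 luma coefficients (inputs in 0.0-1.0).
def rgb_to_ycbcr_601(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luma
    cb = (b - y) / 1.772                    # scaled B' - Y' difference
    cr = (r - y) / 1.402                    # scaled R' - Y' difference
    return y, cb, cr

# White should yield full luma and zero chroma:
print(rgb_to_ycbcr_601(1.0, 1.0, 1.0))
```

HD uses different coefficients (Rec. 709), which is exactly the kind of detail that tripped up early general-purpose capture tools.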
TV effects today and in the future
The refinement of effects for television production is closely tied to the evolution of computer graphics hardware and software. Early graphics pioneers found unique ways to leverage the power of mainframe computers to accomplish the calculations needed to create art.
Today, the number of programs for the effects industry is staggering. A credible artist may move a project through a dozen programs to take advantage of the specialties each program features. Some are plug-ins for programs, including the Adobe suite of products.
Companies that specialize in effects work for television are finding it increasingly difficult to keep a hold on the market. Manufacturers that build general-purpose hardware with television-specific I/O have been successful as long as the hardware remained the favorite among developers. Today the popular choice is any good-quality Intel-based hardware platform that offers blazing speed.
It's a tall order to create a proprietary system that fully meets customers' needs. Even harder is predicting customers' future desires and keeping the product in step with the pace of development in the industry. Some companies have met this challenge by focusing on a subset of the application market.
Using Moore's Law to predict too far into the future is the kind of folly that Mark Twain might have turned into a pithy and quite apropos statement.
John Luff is a broadcast technology consultant.
Send questions and comments to: firstname.lastname@example.org