Parallel processors poised for bigger role as viewers demand ubiquitous content

As the cornucopia of Internet-connected and TV-enabled appliances grows, it’s fair to say that choosing where to watch TV might soon be as big a decision as choosing what to watch. A viewing public that wants it all right now puts enormous pressures on television infrastructure, mandating massive increases in processing power to deliver anything, to any device, at any time.

Massively parallel computing — both on the operator side and on the viewer device side — is one answer, and Elemental Technologies is establishing itself solidly in that camp. The company's new parallel processing appliance for multi-screen video, the Elemental Server, uses graphics processors (GPUs) to deliver what the company calls faster-than-real-time transcoding.

The platform lowers encoding costs, according to Elemental, by delivering the same performance as 14 quad-core CPUs with only two Intel microprocessors and four NVIDIA CUDA GPUs. The server encodes to H.264 and VC-1, handling up to eight HD videos or up to 32 videos at 480 x 270 resolution simultaneously in real time.

As the leading maker of GPUs, NVIDIA is predictably enthusiastic about this trend. And not just because it makes them. "The very nature of graphics processing is a very parallel problem," explains Andrew Humber, NVIDIA's senior PR manager for handheld and GPU computing products. "Video [transcoding] processes thousands and thousands of pixels at a time.

"A CPU is a sequential processor: It does one thing at a time. If you gave a CPU a book and asked how many times a certain word appears, the CPU would read from start to finish," he continues. To understand parallel processing, think of the old saying, “Many hands make light work.”

"A GPU would tear that book into thousands of chunks and each one of those would get read at the same time," Humber says.

CPU technology has reached a point where it really can't get any faster, he says. "GPUs, on the other hand, have 240 cores. Each one can run thousands of threads. When you have these large applications, you can scale perfectly with the number of cores. You have the ability to do super-computing projects without a super-computer."
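
A rough sketch of that thread-per-element model, again hypothetical rather than production transcoder code: the CUDA kernel below assigns one thread to each pixel of a 1080p luma plane, so roughly two million lightweight threads are queued up, and the hardware runs as many of them at once as it has cores available. More cores means more pixels in flight; the kernel itself never changes.

```cuda
// Illustrative sketch: one GPU thread per pixel, the pattern that lets a
// transcoder touch thousands of pixels of a frame concurrently.
#include <cuda_runtime.h>

__global__ void adjustBrightness(unsigned char *frame, int numPixels, int delta)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numPixels) return;

    int v = frame[i] + delta;                      // each thread owns one pixel
    frame[i] = v < 0 ? 0 : (v > 255 ? 255 : v);    // clamp to the 8-bit range
}

int main()
{
    int numPixels = 1920 * 1080;                   // a single 1080p luma plane
    unsigned char *dFrame;
    cudaMalloc(&dFrame, numPixels);
    cudaMemset(dFrame, 100, numPixels);            // fill with mid-grey samples

    // ~2 million threads, grouped into 256-thread blocks.
    int threads = 256;
    int blocks = (numPixels + threads - 1) / threads;
    adjustBrightness<<<blocks, threads>>>(dFrame, numPixels, 20);
    cudaDeviceSynchronize();

    cudaFree(dFrame);
    return 0;
}
```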

GPUs also supply "greener" computing power. "Compared [with] what you would need in a data center built on CPUs, it's an exponential difference," Humber says. "You can build a 100-teraflop data center at 10 times less cost and 21 times less power."

Parallel processing isn't for everything, though, Humber adds. It hasn't yet rendered the mythical man-month obsolete.
