9/18/2012 9:06 AM
As the viewer’s expectation of interstitial graphics rises over time, how and when graphics are rendered is driving innovation in master control. Should they be pre-rendered, or created on the fly?
Just before IBC kicked off, I received two announcements about new channel-in-a-box systems being launched at the show — Pebble Beach with Stingray (for fans of Supermarionation) and Harris Broadcast’s Versio. It does seem that we are moving inexorably towards function collapse: from the old master control suite of discrete components to highly integrated solutions. Are the new platforms better? Well, I don’t run a playout center, so I can’t comment from experience. As an outsider, I understand that having everything in one box cuts out all the inter-device communications and synchronization, so the system should be simpler. That means less to go wrong.
On the other hand, rendering complex 3-D graphics for end-boards and stings uses considerable CPU/GPU resources, and running all the other tasks of playout can prove challenging for a commodity server.
Many manufacturers have taken the decision to offload video processing to a separate card, typically from Matrox. However, there are servers around with as many as 256 CPU cores; it’s when folks try to run playout on a four-core machine that resources become stretched.
Take out fancy 3-D graphics, and a simple thematic channel with 2-D graphics has modest demands on the hardware platform. Running such channels on a single box has been possible for several years, and there are thousands of channels out there standing testament to the fact.
Channel-in-a-box tends to be an emotive subject. But, when arguing the merits of one solution over another, the demands of the channel must be considered. What is the value of the channel? What are the demands for reliability? Does it take live inputs with unpredictable timing? How sophisticated are the graphics: how many layers, 2-D or 3-D?
There is no single answer to how many resources a channel needs; it depends on so many factors. Of course, it is very easy to benchmark a channel and run tests. Do the graphics falter, or do they run in realtime?
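As a sketch of what such a benchmark might look like, the Python fragment below times a per-frame render call against a realtime budget (40 ms per frame for 25 fps playout) and counts missed frames. Here `render_frame` is a hypothetical stand-in for whatever render call the platform under test actually exposes; the frame count and rate are assumptions for illustration.

```python
import time

FPS = 25
FRAME_BUDGET = 1.0 / FPS  # 40 ms per frame for 25 fps playout


def render_frame(n):
    # Placeholder: substitute the platform's actual graphics/render call.
    return n


def benchmark(frames=250):
    """Render `frames` frames; return (late frame count, worst frame time in s)."""
    late = 0
    worst = 0.0
    for n in range(frames):
        start = time.perf_counter()
        render_frame(n)
        elapsed = time.perf_counter() - start
        worst = max(worst, elapsed)
        if elapsed > FRAME_BUDGET:
            late += 1
    return late, worst


late_frames, worst_time = benchmark()
print(f"{late_frames} late frames, worst frame {worst_time * 1000:.2f} ms")
```

A zero late-frame count over a long soak run (hours, not seconds, and with the channel's heaviest graphics loaded) is the kind of evidence that matters here; a single short pass proves little.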
Another side to the channel-in-a-box was introduced by Vizrt, with realtime playout of video and graphics from the VizEngine. It suggests using a realtime compositor to layer video and graphics to air rather than prerendering graphics sequences. Although the examples it cites are news and sport, they are not that different from airing a channel. Similar concepts can be seen from other vendors, with realtime Adobe Flash rendering engines from Harmonic (ChannelPort) and Pebble’s Stingray.
Talking to vendors around the show, you get different stories: “we use the CPU,” “we use a Matrox card,” “we use GPU acceleration.” These are all valid approaches, and each has its merits. But, at the end of the day, will it run your channel in realtime, reliably, without dropping frames or otherwise impairing commercials?
My response is to test, test and test again. And, there may be several ways to deliver a solution. In many cases, the deciding factor will be the way the platform integrates with existing systems playing current channels to air, with which new channels may have to coexist.
One thing is on the side of the manufacturer: that old chestnut, Moore’s law. As CPUs and GPUs get more powerful, more can be rendered in a single chassis. The viewer will get their fancy graphics.