SMPTE: Conjuring the Virtual Media Facility

HOLLYWOOD—A fully virtualized live media workflow is on the horizon, according to presenters during a Wednesday morning session on virtualizing the media facility at the SMPTE Technical Conference.

“We know from our own observations of dealing with folks, there are many people going down that path, and we’ll see a lot more next year” using cloud platforms for live playout, said Jim Duval of Telestream.

Eric Openshaw of Pebble Beach said, “Software-defined video pipelines have come a long way in the last three years… The first challenge is, how do we connect components in an IP world?”

Encapsulation, essentially wrapping serial digital interface (SDI) signals in an IP-based package, has advanced, as have proprietary technologies supporting redundant paths, adaptive bitrates and so forth, Openshaw said.

“SMPTE 2110 is making good progress, but it’s early days,” he said of the developing suite of standards, which draws on technical recommendations TR-03 and TR-04 from the Video Services Forum. (Described by SMPTE session chair Thomas Edwards of Fox in this Q&A.)

Even with standards, questions about adequate bandwidth, network reliability, jitter and latency remain. Despite these unresolved issues, virtualization is being driven by business realities, Openshaw said. Building a new physical facility is not only time- and resource-intensive; it’s becoming a race with obsolescence.

“The prevailing cost model has been to capitalize the system up front on the assumption that the infrastructure that’s been put in will serve the purpose for four to five years. The problem we see today is that the revenue streams we need to support this over the next several years are harder to predict,” he said. “Fundamentally, it’s agility that justifies going into the cloud.”

With virtualization, one begins with the data center, which comprises servers, storage and connectivity, and pretty much constitutes a cloud, or at least a portion of a cloud. Then there is the question of building one’s own cloud on premises or renting space on someone else’s. Both are subject to performance and security concerns.

“The question remains,” Openshaw said, “can broadcasters build a private cloud better than what Amazon has done, and can they reach the same level of security as Amazon?”

Once the virtual “facility” is designed, there are workflow issues to address. Telestream’s Duval said one of the problems with a cloud-based media workflow is not being able to see under the hood. He said a typical production workflow provides for multiple content modification cycles if necessary.

This process becomes cumbersome for content cycled in and out of the cloud. One round trip for one hour of DNxHD 145 is 67 GB, he said, requiring four hours to make the transit at 100 Mbps once transcoding is included.
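Those figures roughly square with a back-of-the-envelope calculation if the 67 GB is moved once in each direction. The short Python sketch below walks through the arithmetic; the 145 Mb/s essence rate, decimal units and overhead-free 100 Mb/s link are assumptions for illustration, not figures from the session.

```python
# Back-of-the-envelope check of the DNxHD 145 round-trip figures quoted above.
# Assumptions (not from the session): ~145 Mb/s video essence rate, no protocol
# overhead, decimal units (1 GB = 1e9 bytes), and a symmetric 100 Mb/s link.

ESSENCE_RATE_MBPS = 145   # DNxHD 145 nominal video bitrate
LINK_RATE_MBPS = 100      # assumed WAN link speed
HOURS_OF_CONTENT = 1

# Size of one hour of essence, in gigabytes
size_gb = ESSENCE_RATE_MBPS * HOURS_OF_CONTENT * 3600 / 8 / 1000
print(f"One hour of DNxHD 145 video: ~{size_gb:.0f} GB "
      f"(audio and wrapping push this toward the quoted 67 GB)")

# Time to move 67 GB each way over the assumed 100 Mb/s link
clip_gb = 67
one_way_hours = clip_gb * 8 * 1000 / LINK_RATE_MBPS / 3600
print(f"Transfer time per direction at {LINK_RATE_MBPS} Mb/s: ~{one_way_hours:.1f} hours")
print(f"Out and back: ~{2 * one_way_hours:.1f} hours, before any transcoding time")
```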

There also can be bit-error problems and corrupted files in the cloud, stemming, for example, from satellite transmission bit errors or drive-mapping anomalies. Production inconsistencies also can result in file corruption, and files may be loaded into the wrong bucket, whether deliberately or by mistake.
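None of the speakers named a specific safeguard here, but the usual defense against silent corruption is to compare checksums before and after a transfer. A minimal sketch, assuming a local master and a copy retrieved from the cloud (the paths are placeholders):

```python
# Minimal integrity check: hash a file before upload and again after download,
# and flag a mismatch. This is a generic sketch, not a feature of any product
# mentioned in the session; the file paths are placeholders.

import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

before = sha256_of("master/clip.mxf")       # hypothetical local source
after = sha256_of("downloads/clip.mxf")     # hypothetical copy returned from the cloud

if before != after:
    print("Checksum mismatch: the file was corrupted somewhere in transit or storage")
else:
    print("Checksums match")
```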

Duval talked about particular inconsistencies that came up at a network using an automated cloud workflow for citizen-submitted video. When video arrived, there was initially no way to verify its place of origin or even its orientation.

“When that video arrives… do you know for sure that it was relevant to the event that took place? The other thing is, with cellphones, you don’t know how a person may orient the phone to shoot it,” Duval said.

The answer lay within the workflow, which could check the clip’s embedded latitude and longitude data against Google Maps, for example.

“What you get from the cellphone, it goes down to degrees, minutes and seconds. That was a very useful application. This application essentially pulls the information out of the cellphone video file to confirm both the location and orientation of the video,” he said.

As a result, modification itself was automated, and the only round-trip the clip would have to make was for a review.
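The session did not name the tooling behind that check. As one illustration of the idea, the location and rotation metadata in a cellphone clip can be read with ffprobe, assuming the file carries QuickTime-style tags as iPhone footage typically does; the file name and tag keys below are illustrative and vary by device and container.

```python
# Illustrative only: reads location and rotation metadata from a cellphone clip
# via ffprobe (requires ffmpeg/ffprobe installed). Tag names vary by device and
# container; the QuickTime keys below are what iPhone footage typically carries.

import json
import subprocess

def probe(path: str) -> dict:
    """Run ffprobe and return its JSON description of the file."""
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)

info = probe("submitted_clip.mov")  # hypothetical citizen-submitted file

# GPS position in ISO 6709 form (e.g. "+34.1016-118.3267+075.000/"), if present
location = info["format"].get("tags", {}).get("com.apple.quicktime.location.ISO6709")

# Rotation hint recorded by the phone (0/90/180/270), if present
rotation = None
for stream in info["streams"]:
    if stream.get("codec_type") == "video":
        rotation = stream.get("tags", {}).get("rotate")

print("Location:", location or "not recorded")
print("Rotation:", rotation or "not recorded")
```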

“It’s that type of process that can make this work in a reasonable way,” he said.

Both Duval and Openshaw participated in a Q&A following their presentations.

One question that came up was what today’s multi-vendor environment would look like in the cloud, or, “how do we end up with one throat to choke?”

Openshaw said he knew of no API that could operate all the different products from different vendors, but said an orchestration layer could be created to run them all.
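A toy sketch of that orchestration idea: each vendor product sits behind a thin adapter exposing the same operations, and the orchestration layer drives only that common interface. The adapter classes and method names here are hypothetical, not real vendor APIs.

```python
# Hypothetical orchestration-layer sketch: vendor-specific details live in
# adapters; the orchestrator programs against one common interface.

from abc import ABC, abstractmethod

class ChannelAdapter(ABC):
    """Common interface the orchestration layer programs against."""

    @abstractmethod
    def start_channel(self, channel_id: str) -> None: ...

    @abstractmethod
    def stop_channel(self, channel_id: str) -> None: ...

class VendorAPlayoutAdapter(ChannelAdapter):
    def start_channel(self, channel_id: str) -> None:
        print(f"[vendor A] starting playout channel {channel_id}")

    def stop_channel(self, channel_id: str) -> None:
        print(f"[vendor A] stopping playout channel {channel_id}")

class VendorBGraphicsAdapter(ChannelAdapter):
    def start_channel(self, channel_id: str) -> None:
        print(f"[vendor B] spinning up graphics for {channel_id}")

    def stop_channel(self, channel_id: str) -> None:
        print(f"[vendor B] tearing down graphics for {channel_id}")

# The orchestrator never touches a vendor API directly.
chain = [VendorAPlayoutAdapter(), VendorBGraphicsAdapter()]
for component in chain:
    component.start_channel("news-east-1")
```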

Another query went to security. Bruce Devlin said there were “some awesome security tools that have been spun up” for cloud platforms, but he questioned whether folks were aware of them.

“What’s your experience getting newbies into this type of ecosystem?” he asked.

Duval said that in his experience, “either they don’t know those keys are there or they don’t know how to fine-tune them.”

Openshaw noted that recent high-profile security breaches had less to do with cloud platforms themselves than with basics such as not turning on encryption.