SMPTE 2015: Deciphering OTT Deployment

Data, metadata and workflow modifications

HOLLYWOOD, CALIF.—Going over-the-top takes some deliberation, but it can yield insights advertisers crave and experiences that engage users. That was the upshot of a morning session on the technology behind and within OTT at the SMPTE 2015 Technical Conference on Wednesday featuring Amer Saleem, Yasser Syed and Arnav Mendiratta, and chaired by Steve Wong of Siemens.

Saleem addressed leveraging metadata on the front end; Mendiratta, on the back end; and Syed, a distinguished engineer at Comcast in Los Angeles, addressed getting there.


Amer Saleem
Creating a distinct workflow to extend the cable service to the Internet would be costly, Syed said, so Comcast sought to integrate content generation for OTT into its existing workflow. The system he described relies on a decoupled transcoder and packaging system, an approach that had to meet two major requirements.

“When we talk about services, we have an expected content experience, but it’s also about scalability of services and reliability of services,” he said.

Scaling, or growing with demand, required a certain amount of prognostication in terms of anticipated demand load, content inventory volume and various adaptive bit-rate implementations and digital rights management technologies. Bandwidth also had to be considered, as did storage and how to handle the ABR iterations.
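The storage side of that forecasting can be sketched with simple arithmetic. The bitrate ladder, retention window and format count below are illustrative assumptions, not figures from the session:

```python
# Back-of-envelope storage sizing for one linear channel's ABR ladder.
# All values are assumed for illustration.

ladder_kbps = [400, 800, 1600, 3000, 6000]   # assumed ABR rendition bitrates
dvr_window_hours = 24                        # assumed retention window
package_formats = 2                          # e.g. one per packaging/DRM variant

def channel_storage_gb(ladder_kbps, hours, formats):
    """Total storage if every rendition is kept in every packaged format."""
    total_kbits = sum(ladder_kbps) * hours * 3600
    return total_kbits * formats / 8 / 1e6   # kilobits -> gigabytes

print(round(channel_storage_gb(ladder_kbps, dvr_window_hours, package_formats), 1))  # -> 254.9
```

Storing a single mezzanine format and packaging on request removes the `formats` multiplier from that figure, which is one motivation for the decoupled design Syed described.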

“In QAM or OTT, you’re calling up content fragments that are assembled as elementary streams,” Syed said. “You have a compliant elementary stream, and video encoders and audio encoders, so there’s a possibility to be more efficient scaling up by using the same type of encoders across services.

“Coming out of encoders, we have the MPEG transport streams. We wanted to add in-band cues,” or SCTE-35 for ad insertion, and CEA 608/708 for closed captioning, for example. “And we can monitor video quality with existing equipment, and ingest with existing equipment. So now, instead of having frame-accurate ad splicing, you get fragment-accurate ad splicing.”
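Fragment-accurate splicing means an ad break can only begin at a fragment boundary rather than at an arbitrary frame. A minimal sketch of that snapping, with times and fragment duration assumed for illustration:

```python
import math

def fragment_accurate_splice(cue_time_s: float, fragment_duration_s: float) -> float:
    """Snap an SCTE-35 splice time to the next fragment boundary.

    Ads are swapped in at fragment edges, so the effective splice point
    is the boundary at or after the cued time.
    """
    return math.ceil(cue_time_s / fragment_duration_s) * fragment_duration_s

# A cue at 17.3 s with 2-second fragments splices at the 18 s boundary.
print(fragment_accurate_splice(17.3, 2.0))  # -> 18.0
```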

Syed said Comcast is using adaptive transport streaming, which can be adapted for OTT channels with a packager, “a fairly lightweight application that can exist near transport or anywhere in the network.” He described both a linear and a just-in-time packager.

“The function of a linear packager reads the transport stream and reads the segment marker for storage. At the same time, it’s parsing the stream and pulling out information that can create the manifest, so you can have SCTE 35 markers,” and the ability to swap out languages and other elements, he said.

DASH-type technology is used for the manifest to handle differing profile types.
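A hypothetical sketch of the linear packager's single pass, as Syed described it: store the segments, and in the same parse surface SCTE-35 cues into the manifest. The class and field names are invented for illustration:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Segment:
    number: int
    duration_s: float
    scte35_cue: Optional[str] = None   # in-band cue carried through, if any

@dataclass
class Manifest:
    segments: List[Segment] = field(default_factory=list)
    ad_markers: List[Tuple[int, str]] = field(default_factory=list)

def linear_package(stream: List[Segment]) -> Manifest:
    """Parse the segmented stream once: record segments for storage and
    pull SCTE-35 markers into the manifest so downstream systems can splice ads."""
    m = Manifest()
    for seg in stream:
        m.segments.append(seg)
        if seg.scte35_cue:
            m.ad_markers.append((seg.number, seg.scte35_cue))
    return m
```

In the same pass a real packager would also extract audio-language and caption information, enabling the element swapping Syed mentioned.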

The JIT packager uses one format. When it gets a request from storage or the content delivery network, it pulls the fragment and modifies the manifest.

“Since we’re using DASH, we can convert MPEG-TS to ISO BMFF,” he said. The packager also can handle ad insertion from the server side and the client side.
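The just-in-time step can be pictured as a request handler that fetches the stored fragment and converts its container only when asked. The store, names and byte payloads here are assumptions, and the remux is a stub, since a real MPEG-TS to ISO BMFF conversion requires a media library:

```python
# Assumed origin store of MPEG-TS fragments, keyed by (representation, number).
STORED_FRAGMENTS = {("v1", 1): b"ts-bytes-1", ("v1", 2): b"ts-bytes-2"}

def remux_ts_to_bmff(ts_bytes: bytes) -> bytes:
    """Stand-in for a real MPEG-TS -> ISO BMFF remux done by a media library."""
    return b"bmff:" + ts_bytes

def jit_package(rep_id: str, number: int, target: str = "dash") -> bytes:
    """Serve one fragment on demand, converting the container format
    only for targets that need ISO BMFF."""
    ts = STORED_FRAGMENTS[(rep_id, number)]
    return remux_ts_to_bmff(ts) if target == "dash" else ts
```

Because conversion happens per request, only one copy of each fragment sits in storage, which is the scaling benefit of the just-in-time approach.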

Syed said the work being done at Comcast is part of standards development at the Society of Cable Telecommunications Engineers for SCTE 214-1, -2 and -3 (MPEG DASH for IP-Based Cable Services).

Saleem, vice president of product and app security for Prime Focus Technologies, spoke about leveraging metadata for enhancing live TV events. PFT came up with a way to produce content from both live and archived files simultaneously, he said. Using algorithms that find archived content instantly allows it to be exploited to enhance user engagement.

“You can quickly build addictive stories from merged live and archived video,” Saleem said.

The platform enables remote creation to speed up the production process. It does immediate transcoding for multiple platforms, and automatically publishes to OTT and social media platforms.

“You’re able to publish faster and find content faster and remotely by using metadata,” he said. “You can catch the window of opportunity during live events” to draw in viewers. Users also can “jump to action and events” through metadata tags.
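The “jump to action” behavior can be pictured as a lookup over timecoded metadata tags. The tags and timecodes below are invented examples, not PFT's schema:

```python
# Timecoded metadata for a live event; tags and offsets are illustrative.
events = [
    {"t": 312.0, "tags": {"kickoff"}},
    {"t": 2710.5, "tags": {"goal", "replay-worthy"}},
    {"t": 4100.0, "tags": {"goal"}},
]

def jump_points(events, tag):
    """Return the timecodes a viewer can jump to for a given tag."""
    return [e["t"] for e in events if tag in e["tags"]]

print(jump_points(events, "goal"))  # -> [2710.5, 4100.0]
```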

Monetization comes from reaching more screens on more devices, and striking while the iron is hot, e.g., the winning goal from the FIFA World Cup. Saleem said there was “huge interest” in watching that moment over and over again.

Arnav Mendiratta, a graduate student at the USC Viterbi School of Engineering, said he relies solely on the Internet for video content. He noted that OTT subscriptions were expected to double between 2014 and 2019, to $9 billion. He displayed a PwC Ovum chart that predicted Internet advertising would exceed TV advertising somewhere around 2017.

“We all agree success of a service depends on content, but the content is very subjective,” he said. “When we talk about a business model, we need to talk about it objectively.”

Any OTT workflow needs three distinct elements: business management to handle licensing; the video engine, which manages the end-user gateway; and a way to accommodate various backend modules such as third-party APIs.

“The business management systems include the DRM system managing the licensing, and the electronic program guides,” he said. “When content is broadcast over live TV, it becomes available a couple of hours later on Hulu. The OTT provider already has the content, but the user can only get it after it’s aired, which is stored in the EPG.”

Big data, Mendiratta’s specialty, can be used to customize advertising and content recommendations, he said.

Siemens has an OTT platform called “Swipe” that has an ad recommendation engine for individuals and for general product and service categories, he said. It includes open-platform Hitachi media analytics processing and handles third-party API integration: data integration, data discovery and predictive analytics “to get a meaningful advertisement out to users.”