Second-screen systems and social media integration
4/12/2013

Interactivity across the broadcast spectrum is the new name of the game as broadcasters seek to expand — or perhaps more accurately, keep — viewers who spend increasingly more time on their connected devices. As laptops, smartphones and tablets become more ubiquitous in the market, viewers are using them not only to facilitate on-the-go lifestyles, but also to engage with social media, as well as other online forms of media and entertainment.

According to a 2011 Nielsen survey, 45 percent of U.S. tablet owners and 41 percent of smartphone owners use their devices while watching TV on a daily basis. In sports, the numbers are even higher, with an estimated 70 percent of tablet owners watching sports on TV while consulting their Web-connected device as a second screen to follow parallel stats or exchange tweets with friends and online viewers. A 2011 Ericsson report found that more than 40 percent of people use social media while watching TV on a weekly basis, and almost one in three chat online. In-depth interviews showed that families combined TV viewing with the use of Twitter, Facebook, texting, voice calls and forum discussions about what they watched — particularly when watching reality shows and sports. Viewing has evolved from a passive to an active pastime.

The silver lining is new technology that lets broadcasters harness social media engagement rather than simply risk its effects, giving viewers more reasons to engage with the content they're consuming. New second-screen systems give viewers access to original content not available anywhere else, drawing on footage that often sits unused on servers. Together, this premium content and the increased viewer engagement it drives open new revenue opportunities: charging subscribers for the service, selling sponsorship and advertising, or facilitating the purchase of goods and merchandise.

New, customized content

The technology that enables broadcasters and content owners to work alongside, rather than against, the forces threatening to pull viewers away is surprisingly simple. Designed as a suite of tools that can be added to a live multicamera production infrastructure, it enables broadcasters and rights owners to output original content (archives, highlights or third-party material) to multiple screens.

Figure 1. EVS C-Cast second-screen systems interface with multicamera production.

New infrastructure isn’t needed. The technology is integrated into a seamless file-based workflow either at a broadcast center or on-site production venue, providing all of the tools necessary to generate exclusive content on a personalized interface. (See Figure 1.) With this capability, broadcasters can provide more ways for viewers to engage with the content they’re viewing, including interacting through votes and evaluations, or receiving content customized to their preferences.

Sports — your way

Live sports provide a good example of how this technology is being used and the opportunities it can afford. For every sporting event covered, broadcasters are capturing hours of premium content that currently goes unused. For example, in a soccer game covered by up to 18 cameras, each recording the 90-minute match, every 90 minutes of aired content leaves roughly another 25.5 hours "on the cutting room floor." In other words, more than 90 percent of captured sports content is never aired.

Clips or highlights created during live productions and stored on servers can be made available instantly to Web app subscribers. The process begins with the processing and transfer of synchronized live multicamera media recorded on production servers. All metadata associated with event footage are used as keywords, enabling easy content retrieval. Third-party stats are integrated into the database and associated with video clips and highlights, which are made available to the user in near real time. A second-screen timeline of the events being produced is created, into which external elements such as ads, stats or surveys can be inserted. This enables a high degree of interactivity and facilitates cause-and-effect programming. Content providers can add value to media by creating context from live events through multi-angle replays at various speeds, on-the-fly edits, and the insertion of graphics and statistics.
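The keyword-based retrieval described above can be sketched as a simple in-memory index. The clip identifiers, camera-angle names and metadata fields below are illustrative, not the production schema:

```python
from dataclasses import dataclass, field

@dataclass
class Clip:
    clip_id: str
    angles: list                                # camera-angle identifiers
    keywords: set = field(default_factory=set)  # metadata-derived keywords

class ClipIndex:
    """In-memory index: each metadata keyword maps to the clips tagged with it."""
    def __init__(self):
        self._by_keyword = {}

    def add(self, clip):
        for kw in clip.keywords:
            self._by_keyword.setdefault(kw.lower(), set()).add(clip.clip_id)

    def search(self, *keywords):
        """Return clip IDs matching ALL given keywords (AND semantics)."""
        sets = [self._by_keyword.get(kw.lower(), set()) for kw in keywords]
        return set.intersection(*sets) if sets else set()

index = ClipIndex()
index.add(Clip("clip-001", ["cam1", "cam5"], {"goal", "Rooney", "first-half"}))
index.add(Clip("clip-002", ["cam3"], {"foul", "Rooney", "second-half"}))

print(index.search("rooney"))          # both clips
print(index.search("rooney", "goal"))  # only clip-001
```

A production system would back this with the central database and its third-party stats feed, but the lookup pattern is the same.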

The aim of the workflow is to enrich the viewer’s experience during live sports broadcasts. To this end, we need to guarantee the delivery of the events and the associated content in a timely manner. The whole process needs to happen in a short amount of time (e.g. less than 120 seconds), starting from the moment at which the action occurred at the venue.

For example, here are the main actions that need to be achieved in less than 2 minutes:

  • Clip the action at the venue;
  • Transfer the clip (including multiple angles and metadata) from the venue server to a central database;
  • Ingest third-party items (graphics, statistics) and Web application data via API;
  • Transcode the video on the fly into the required formats;
  • Distribute the timeline, content and metadata to connected devices.

The constraints stated above, minimizing extra resources at the venue and delivering the content on time, lead us to locate the transcoding service in the data center. The external transcoding service must provide multiple bit rates for each video so that the player client can dynamically switch streams as a function of the available bandwidth. Video formats vary according to the video capabilities of the smart devices.
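The client-side half of that adaptation can be sketched as follows. The bitrate ladder and the 80 percent safety margin are illustrative; real ladders depend on codec choices and device targets:

```python
# Illustrative bitrate ladder (height in pixels, bitrate in kbps), highest first.
LADDER = [(1080, 5000), (720, 2800), (480, 1400), (360, 800), (240, 400)]

def pick_rendition(available_kbps, headroom=0.8):
    """Choose the highest rendition whose bitrate fits within a safety
    margin of the measured bandwidth; fall back to the lowest otherwise."""
    budget = available_kbps * headroom
    for height, kbps in LADDER:
        if kbps <= budget:
            return height
    return LADDER[-1][0]

print(pick_rendition(7000))  # 1080
print(pick_rendition(2000))  # 480
print(pick_rendition(300))   # 240
```

A real HLS player re-measures throughput continuously and re-runs this selection between segments, so the stream quality rises and falls with the connection.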

A publishing layer communicates with one or more standard CDN services to deliver the content to viewers. This distribution policy isolates distribution scalability from the scaling of the central facility. HTTP Live Streaming (HLS), used to distribute the content, is fully compatible with the major CDN services.
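What the publishing layer hands to the CDN is, at its core, an HLS master playlist listing the available renditions. A minimal sketch of generating one (the URIs and bitrates are illustrative):

```python
def master_playlist(variants):
    """Build an HLS master playlist (per the HLS spec); URIs are illustrative."""
    lines = ["#EXTM3U"]
    for bandwidth, resolution, uri in variants:
        lines.append(f"#EXT-X-STREAM-INF:BANDWIDTH={bandwidth},RESOLUTION={resolution}")
        lines.append(uri)
    return "\n".join(lines) + "\n"

playlist = master_playlist([
    (5000000, "1920x1080", "1080p/index.m3u8"),
    (2800000, "1280x720",  "720p/index.m3u8"),
    (800000,  "640x360",   "360p/index.m3u8"),
])
print(playlist)
```

Because the playlist and its segments are plain HTTP objects, any standard CDN can cache and serve them without broadcast-specific infrastructure, which is exactly what makes HLS a good fit here.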

Protecting the rights of the content owner

Rights holders usually want to limit the offering to their subscribers. Aside from advertising, this is one of the major business models used to monetize the content. One service a CDN offers is geo-blocking: denying access to viewers located outside the regions where content rights have been cleared. This is a first step, but it is not sufficient to prevent unauthorized viewers from accessing the content.
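The geo-blocking check itself is straightforward. A minimal sketch, assuming the viewer's country has already been resolved from their IP address (content IDs and territory codes are illustrative):

```python
# Territories where rights have been cleared, per content item (illustrative data).
RIGHTS = {
    "ligue1-highlights": {"FR", "BE", "CH"},
}

def geo_allowed(content_id, viewer_country):
    """First-pass access check: allow only viewers in cleared territories."""
    cleared = RIGHTS.get(content_id, set())
    return viewer_country.upper() in cleared

print(geo_allowed("ligue1-highlights", "fr"))  # True
print(geo_allowed("ligue1-highlights", "US"))  # False
```

As the article notes, this only filters by location; it does nothing to stop an unauthorized viewer inside a cleared territory, which is why encryption is needed as well.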

Distributing the content via HLS enables the use of the caching services delivery networks provide. However, caching can undermine the limited-access requirement. The platform therefore needs to take control of the cached playlists and video segments, limiting access to authorized viewers only. To this end, we can rely on HLS's ability to encrypt the video content and to deliver the corresponding DRM keys through a separate service accessible only to the transcoding service and the player.
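A sketch of that separate key service, assuming a simple HMAC token scheme (the secret, token format and key URL are all illustrative, not how any particular platform implements it):

```python
import hashlib
import hmac

SECRET = b"demo-shared-secret"          # illustrative; real services manage secrets securely
CONTENT_KEYS = {"clip-001": b"\x00" * 16}  # 16-byte AES-128 content keys

def sign_token(subscriber_id, clip_id):
    """Token issued to an authenticated player (scheme is illustrative)."""
    msg = f"{subscriber_id}:{clip_id}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def key_endpoint(clip_id, subscriber_id, token):
    """Return the AES-128 key only when the token verifies; deny otherwise."""
    expected = sign_token(subscriber_id, clip_id)
    if not hmac.compare_digest(expected, token):
        return None  # a real HTTP service would respond 403
    return CONTENT_KEYS.get(clip_id)

# The media playlist then points players at the protected key service, e.g.:
# #EXT-X-KEY:METHOD=AES-128,URI="https://keys.example.com/clip-001?token=..."
good = key_endpoint("clip-001", "sub-42", sign_token("sub-42", "clip-001"))
bad  = key_endpoint("clip-001", "sub-42", "forged-token")
print(good is not None, bad)  # True None
```

The CDN still caches the encrypted segments freely; only the small key request has to reach the platform, keeping the access control centralized without sacrificing delivery scalability.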

The end-to-end workflow encompasses server technology, an integrated suite of video production management applications, viewer application software and a cloud-based content delivery network. The video production applications cover ingest control, metadata management, on-the-fly editing, playout and scheduling. The process is managed completely through a simple Web interface.

A new wave of applications

Live sports content owners are among the first to implement large-scale second-screen systems. The technology was recently used by Canal+ Sports, the sports channel of a France-based pay TV broadcaster, to launch its branded second-screen soccer app.

Canal+ Sports recognized that the way fans consume media is rapidly evolving, due both to new smart devices and the spread of fast broadband connectivity. To grow market share, it turned to a broadcast and media production systems provider for the technology behind an app offering up-to-the-minute statistics, multicamera video clips of every highlight and bonus material such as full-length super-slow-motion replays. The app also features filmed reactions from commentators and special guests on its live sports programming, as well as the ability to interact via social networks. Available on iOS for iPad and on Android for Samsung Galaxy tablets, the app was easy to create and deploy with the automated second-screen hardware and software system.

While the sports industry may be an early adopter, consider the use cases possible for reality shows and live variety programming, from interview formats to cooking shows, even dramatic series. Unused content can be made available immediately, both to meet customized tastes and to spur interaction (polls, chats and discussions) via social media. The result is more engagement, more viewers and ultimately more revenue.

Exploiting market forces for greater revenue

Social media has come of age, and it’s affecting the way viewers consume media of all types. This development, as well as recent technology advancements, is driving a shift from traditional to personalized media consumption. Rather than fight against this evolution, let’s enable viewers to have more choice and receive personally relevant content. It’s time to implement collaborative, integrated environments of real-time content ingest, editing and enrichment. Success will depend on using existing infrastructure, multiple cameras and other technology to smartly gather, produce and monetize content. Broadcasters and rights owners must venture down this path or risk being left behind. To thrive in the years to come, we must manage live content so that it can be output and viewed on multiple platforms, giving consumers more and better choices. There are challenges, of course, but also great opportunities to monetize original content, increase revenues and ensure future business growth.


Johann Schreurs is general manager, new media broadcast, EVS.
