Artificial Intelligence Makes Inroads in Broadcasting

ALEXANDRIA, VA.—You knew it was coming, right? When you walk around with more computing power in your pocket than it took to launch a Saturn V rocket to the moon, you get the hint that computers are increasingly doing work that we either don’t like doing or never could do before.

For example, take logging raw video and creating data files that let news organizations search for just the right clip when they need it. Need a shot of a burning building on East Main from November? Bingo… the AI system logged it and made it available on the server.
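The kind of clip logging described above boils down to attaching machine-generated labels to each clip at ingest and indexing them for search. The following is a minimal sketch of that idea; the `Clip` and `ClipIndex` names, the file paths and the hand-supplied labels are all hypothetical stand-ins for what a vision model would produce.

```python
from dataclasses import dataclass, field

@dataclass
class Clip:
    path: str
    shot_date: str                            # "YYYY-MM" for simple month matching
    labels: set = field(default_factory=set)  # tags an AI logger might emit

class ClipIndex:
    """Toy searchable index over AI-generated clip metadata."""
    def __init__(self):
        self.clips = []

    def ingest(self, clip: Clip):
        # In a real system the labels would come from a vision model
        # run at ingest time; here they are supplied by hand.
        self.clips.append(clip)

    def search(self, label: str, month: str = None):
        return [c for c in self.clips
                if label in c.labels
                and (month is None or c.shot_date == month)]

index = ClipIndex()
index.ingest(Clip("raw/fire_main_st.mxf", "2018-11", {"fire", "building", "exterior"}))
index.ingest(Clip("raw/council_mtg.mxf", "2018-11", {"meeting", "interior"}))

hits = index.search("fire", month="2018-11")
print([c.path for c in hits])  # → ['raw/fire_main_st.mxf']
```

A production system would of course persist the index and score fuzzy matches, but the ingest-tag-search loop is the same.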

AI is also the tool behind giving viewers a better experience when they visit a station’s web page.

[Read: AI And The Digital Transformation]

“As consumers become more driven to personalized experiences, news stations need to keep up with dynamic content,” said Drew Martin, technical product manager for Grass Valley. “AI can provide rich end-user experiences with minimal manpower. This generates more viewership and lowers operating costs.”


And AI is not just for the viewers.

“AI can bring huge potential benefits to broadcasters, with particular relevance in areas of the workflow that are labor- and time-intensive—like ingest,” Martin said. “By enabling broadcasters to track how operations are being used across their organization, AI-based solutions can create more efficient operations and bring costs down by identifying trends. As broadcasters of all sizes are under pressure to produce more with lower budgets, AI-based solutions can help them focus their resources on creating more compelling content.”

Although Grass Valley has no current AI-related products, Martin said the company plans to add AI functionality to upcoming product releases.

Brick Eksten, Imagine Communications

Brick Eksten, chief technology officer for playout & networking at Frisco, Texas-based Imagine Communications, recommends using AI to test supply chain management enhancements before integrating them into a facility’s workflow.

“In a machine learning/artificial intelligence solution, the system could learn enough about the content types [by watching content] and could experiment with various combinations in an offline environment, until you have sufficient confidence that it is providing better management of the supply chain in real time than manual methods, optimizing for cost and quality at the level of each individual piece of content,” he said.

Another potential role is in monitoring. “An AI-assisted multiviewer could not only provide more in-depth information about each signal, but also put that in context across all of the individual devices that make up that particular channel of content,” Eksten said. “Today we monitor by exception; tomorrow that monitoring will be more predictive and seamless.”
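Moving from exception-based to predictive monitoring essentially means alarming on where a signal metric is heading rather than where it is. Here is a deliberately simple sketch of that shift using linear extrapolation; the function names and the audio-level figures are invented for illustration.

```python
def predict_next(readings):
    """Naive linear extrapolation of the next reading from the last three."""
    window = readings[-3:]
    slope = (window[-1] - window[0]) / (len(window) - 1)
    return window[-1] + slope

def flag_if_trending_out(readings, limit):
    """Alarm before a threshold is crossed, not after (predictive monitoring)."""
    return predict_next(readings) > limit

# Hypothetical level history (arbitrary units) for one source in a multiviewer.
levels = [1.0, 1.2, 1.6, 2.2, 3.0]
print(flag_if_trending_out(levels, limit=3.5))  # → True: no reading has
# exceeded 3.5 yet, but the trend says the next one will.
```

A real system would learn the trend model per signal and per failure mode, but the principle—alert on the forecast, not the sample—is the same.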


As promising as AI technology is, many operations touched by AI still need to have a human overseer to ensure smooth operations.

Richard Heitmann, Aspera, an IBM company

“We’re already seeing artificial intelligence being used as a tool to create content like highlight clips, with Aspera being used for the ingest of video content and the automated delivery of the produced assets,” said Richard Heitmann, vice president of Aspera, an IBM company. “The natural next step to automated production is automatic publication of personalized media experiences. But we are still in the early days of the technology, and there is a human-review element that won’t go away anytime soon.”

IBM has been developing AI applications for decades, and Aspera has been leveraging that background for its broadcast products.

In April 2018, IBM partnered with the Masters to bring cognitive highlights to the golf tournament, according to Heitmann. “IBM’s AI technology quickly identified key highlights based on cheering, high fives, commentary and TV graphics such as banners within specific video frames,” he said. “As a result, video editors were able to use Aspera technology to distribute highlight reels at high speed in near-real-time for fans.”


For broadcasters, it’s all about what you do with your bandwidth—the more effectively a broadcaster uses its broadcast bandwidth, the more profitable it can be. For this reason, AI products now address bandwidth efficiency directly, including learning from one encoding session to improve the next.

Reinhard Grandl, Bitmovin

“AI plays an increasingly important role in video encoding, where it can significantly help improve workflows,” said Reinhard Grandl, director of product management for Austria-based Bitmovin. “By continuously learning the parameters used in previous encodes, AI-optimized settings can be applied to every new video file. Furthermore, every asset that will be encoded with our service helps to train this machine-learning model and makes the prediction for future encodings more accurate. This results in faster processing times and significantly higher quality with no increase in bandwidth.”
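The core of what Grandl describes—learning from past encodes so that settings for the next file can be predicted rather than probed—can be illustrated with the simplest possible model. This sketch is not Bitmovin's method; it fits a one-variable least-squares line mapping a hypothetical content-complexity score to the bitrate that hit a quality target in past encodes, then predicts a starting bitrate for a new asset. All numbers are made up.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (one feature)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Past encodes: (content-complexity score, bitrate in kbps that met the
# quality target). Both columns are invented for illustration.
history = [(0.2, 1200), (0.4, 2100), (0.6, 3000), (0.8, 3900)]
a, b = fit_line([h[0] for h in history], [h[1] for h in history])

def predict_bitrate(complexity):
    """Predicted starting bitrate (kbps) for a new asset."""
    return a * complexity + b

print(round(predict_bitrate(0.5)))  # → 2550
```

Real per-title encoding systems use far richer features (motion, texture, resolution ladder) and nonlinear models, but the workflow is the same: every completed encode becomes a training example that sharpens the next prediction.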

The savings from properly configured AI-driven encoding are substantial, Grandl said.

“Netflix, for instance, estimates that its use of AI to automate workflows and reduce customer churn saves the company around $1 billion annually,” he said. “This not only increases the quality of experience and quality of service for users, but also reduces the number of bits required to achieve the same quality stream. YouTube is also at the forefront of using AI to reduce overall video latency and encoding costs.”

Netflix and YouTube are noteworthy examples, but is there anything AI can do for a call-letter TV station?

Paul Shen, TVU

“Call-letter stations can receive immediate benefits from TVU Networks’ products with AI,” said Paul Shen, CEO of Mountain View, Calif.-based TVU Networks. “For example, the TVU Transcriber service is available today and ensures FCC compliance of any video content a station puts on air, on social media or any digital media platform. The AI engine in Transcriber detects the need for closed captioning in content and will automatically transcribe the missing speech as closed captions. In addition, TVU Transcriber uses AI to detect profanity and can automatically mute the audio.”
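The two jobs Shen describes—spotting speech that has no caption covering it, and flagging time ranges whose transcript contains a banned word so the audio can be muted—reduce to simple operations once a speech-to-text engine has produced timed segments. The following is a minimal sketch of that post-transcription logic, not TVU's implementation; the segment tuples, the caption format and the one-word profanity list are all placeholders.

```python
# Each segment is (start_sec, end_sec, transcribed_text), as a
# speech-to-text engine might emit them. Placeholder word list:
PROFANITY = {"darn"}

def needs_caption(segments, existing_captions):
    """Return speech segments with no caption covering them."""
    covered = {(s, e) for s, e, _ in existing_captions}
    return [seg for seg in segments if (seg[0], seg[1]) not in covered]

def mute_ranges(segments):
    """Return time ranges whose transcript contains a flagged word."""
    return [(s, e) for s, e, text in segments
            if any(w in text.lower().split() for w in PROFANITY)]

speech = [(0.0, 2.0, "Good evening"), (2.0, 4.0, "that darn storm")]
captions = [(0.0, 2.0, "Good evening")]

print(needs_caption(speech, captions))  # → [(2.0, 4.0, 'that darn storm')]
print(mute_ranges(speech))              # → [(2.0, 4.0)]
```

A production captioning tool would match overlapping rather than identical time ranges and handle word variants, but the detect-gap, transcribe, flag-and-mute pipeline follows this shape.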

The tantalizing ability to deliver targeted content to web viewers is also within reach of TV stations, Shen said.

[Read: Media 4.0: Using AI To Meet Viewers’ Preferences]

“With the TVU MediaMind Platform, all digital and broadcast production groups can truly collaborate to cover the same story, while allowing each group to customize and deliver the completed program based on viewer demographics,” he said. “As a result, a station can cost-effectively create targeted content and allow it to better serve digital and broadcast viewers using the same raw videos. This becomes a truly story-centric workflow.”

Not only will a properly configured AI system process video as it is ingested, it can also dig deep into existing library files and process those.

AI functionality has recently been integrated into Prime Focus Technologies’ Clear Media ERP media asset management (MAM) system, according to T Shobhana, vice president and global head of marketing & communications for the company. “[Clear] helps automatically recognize elements within audio and video, and generate associated metadata, making it easier to sort, locate and use content across all MAM workflows,” she said. “With Clear, content owners no longer have to rely only on manual effort to tag and catalog assets, as this is a time-consuming and expensive process.”

However, keep in mind that AI capability in MAM systems is not completely hands-off from a human standpoint—yet.

“All these functionalities require human review and quality control right now, but one of the key characteristics of AI and machine learning is the ability to learn and improve over time,” Shobhana said, “so we expect these functions to continue to evolve going forward.”

Over the past five years, artificial intelligence has moved out of the laboratory and into real products—you only have to go as far as Apple’s Siri and Amazon’s Alexa to find examples in the real world. The idea of a computerized assistant has now become real to millions, and that increases the pressure for similar machine aids in various professions and industries… including broadcasting.

It’s clear that the most efficient use of bandwidth and the ability to quickly create targeted programming are of great interest to broadcasters, and artificial intelligence is helping to make that possible.

Bob Kovacs

Bob Kovacs is the former Technology Editor for TV Tech and editor of Government Video. He is a long-time video engineer and writer, who now works as a video producer for a government agency. In 2020, Kovacs won several awards as the editor and co-producer of the short film "Rendezvous."