AI: The Great ‘Disrupter’ of 2023

Artificial intelligence was the great “disrupter” of 2023, not just on the world stage, but particularly in the media & entertainment business. 

(Read: AI named word of the year by Collins Dictionary)

Anyone using an Amazon Echo or similar smart device knows how AI has been in the business of “disrupting” lives and livelihoods since well before 2023; but it was OpenAI’s launch of ChatGPT—the first widely accessible generative AI application—at the end of 2022 that turned heads: in M&E, academia, government, Madison Avenue, anywhere creativity, critical thinking and originality are valued and monetized. 

It certainly turned heads in Hollywood and was among the leading issues behind one of the longest and most widespread labor disputes in the industry’s history. The Writers Guild of America sounded the alarm, walking off the job and onto the picket lines in May when it announced its first strike in 15 years.

Although the new work and financial realities resulting from the television industry’s increased focus on streaming services were the main complaint, AI was also a major concern, with both writers and actors—who launched their own strike by mid-summer—warning that their livelihoods were at stake if “guardrails” were not put in place. 

Actors were concerned that studios would use AI to scan their images into digital files with the goal of using their likenesses in future productions without their consent, while writers saw ChatGPT and panicked, with good reason. However, some in the industry took it in stride, saying it’s not the technology they fear but how it will be used. 

“I’m not worried about the technology,” comedy writer Adam Conover told TechCrunch at the beginning of the strike in May. “I’m worried about the companies using technology that is not in fact very good, to undermine our working conditions.”

Both the WGA and SAG-AFTRA—the actors’ union—settled with the Alliance of Motion Picture and Television Producers (AMPTP) last month. Under the new WGA contract, AI cannot be used to write or rewrite scripts, and AI-generated writing will not be considered source material, which means writers won’t be competing with AI for screen credits. Writers will be permitted to use AI tools if they want, but no employer will be allowed to mandate such use. 

For actors, studios will have to obtain “clear and conspicuous” consent from any performer whose digital replica they create with the intent of using their likeness for monetization. Studios will be required to compensate the performer for the number of days it would have taken to hire the performer in person, and the same terms apply when a background actor’s likeness is upgraded to portray a principal character.

Not everyone was happy, with one SAG-AFTRA national board member lamenting the loss of control over one's image.

“Consent at the time of employment isn’t consent,” Shaan Sharma told the L.A. Times. “It’s coercion. You don’t really have a choice. If you want to work, you’ve got to give them the right to replicate you.”

Actress-producer Justine Bateman, who has long been involved in Hollywood labor issues, said studios are favoring digital technology over creativity.

“Doing projects that don’t involve humans … you’re not in the film business anymore,” she said. “People who don’t want to have any humans involved have never really been on a set. They don’t know what it’s like to make a film.”

Double-Edged Sword
The double-edged sword of AI is felt most often in content creation. On the one hand, it can create chaos in copyright, compensation and, in some cases, the ability to distinguish fact from fiction; on the other, in the right hands with the right technologies, it can be a game-changer. 

“Today, AI's influence extends beyond traditional filmmaking into the realm of content creation,” Gary Adcock wrote on tvtechnology.com earlier this year. “This includes generating or enhancing articles, videos, images, and social media posts. The benefits are manifold, ranging from accelerated production and enhanced efficiency to superior quality and unparalleled personalization for audiences.”

Adcock goes on to say that “many people are worried about AI replacing human creativity. However, it's essential to acknowledge that true creativity stems from the profound human experience—emotions, thoughts, and expressions that machines cannot replicate. As technology advances, AI's role lies in complementing and amplifying human creativity, not supplanting it.”

AI was the dominant theme at the NAB Show in Las Vegas, with exhibitors introducing new AI advances in graphics, captioning, imaging, audio, media management and network management, offering more solutions to automate menial tasks. 

During his opening address, NAB President Curtis LeGeyt discussed the impact of AI on the broadcast business, raising issues that presaged the great Hollywood walkout a month later. 

“It is just amazing how quickly the relevance of AI to our entire economy—but, specifically, since we’re in this room, the broadcast industry—has gone from amorphous concept to real,” LeGeyt told the audience. 

But he also warned that “big tech [uses] their platforms to access broadcast television and radio content. That, in our view, does not allow for fair compensation for our content despite the degree to which we drive tremendous traffic at their sites.” LeGeyt cautioned that the use of AI could put such practices “on overdrive,” and urged legislation to institute some “guardrails.”

There Oughta Be a Law…
AI was also on Congress’s mind this year, with lawmakers focusing mainly on the technology’s impact on the federal government. The FCC, which for the most part cannot regulate broadcast content, concentrated on the use of AI in battling robocalls; however, it does have a stake in ensuring broadcasters are disseminating accurate information.

So perhaps that’s why the commission’s proposal last month to prioritize the processing of applications for license renewal, or for assignment or transfer of license, filed by radio and TV broadcast stations that provide locally originated programming was seen not only as promoting the value of local journalism, which NAB has touted for years, but also as a backdoor effort to encourage local stations to favor a thriving in-person local newsroom over an algorithm to deliver the news.

Currently, the savvy technophile can often spot the difference between content generated by humans and content generated by AI, and although that will become harder as the technology improves, AI still has a way to go. But with the 2024 U.S. presidential campaign in full swing, the threat of misinformation via AI is real and growing—with broadcasters at the forefront.

In a blog post for the Brennan Center for Justice on using generative AI in political campaigns, Christina LaChapelle and Catherine Tucker discussed the concerns over the use of AI in an unregulated environment:

“Given the current absence of AI regulations, there is growing apprehension that antidemocratic groups or other bad actors could exploit AI-driven advertising technology to unleash torrents of misinformation on the internet,” they wrote. “This very real possibility underscores the urgency to establish robust detection systems that can rapidly spot manipulated or fabricated content and bring it to voters’ attention.”

“From the perspective of well-intentioned campaigns, however, an issue that has received less attention in the press lies in AI’s capacity to inadvertently create false content,” they added. “Text generation tools like ChatGPT have been shown to ‘hallucinate,’ fabricating facts about individuals or events to fill in gaps in their knowledge. Recently, a mayoral candidate in Toronto used AI-generated images in a platform document, one of which depicted a person with three arms.”

It’s clear that wherever it takes our industry, AI is the great disrupter that has generated more hyperbole—justified or not—than just about any other recent tech development.

Tom Butts

Tom has covered the broadcast technology market for the past 25 years, including three years handling member communications for the National Association of Broadcasters, followed by a year as editor of the Video Technology News and DTV Business executive newsletters for Phillips Publishing. In 1999 he launched digitalbroadcasting.com for internet B2B portal Verticalnet. He is also a charter member of the CTA’s Academy of Digital TV Pioneers. Since 2001, he has been editor-in-chief of TV Tech (www.tvtech.com), the leading source of news and information on broadcast and related media technology, and is a frequent contributor to and moderator of the brand’s Tech Leadership events.