FCC’s Carr Opposes Effort to Require Disclosure of AI Content in Political Ads


WASHINGTON, D.C.—FCC Commissioner Brendan Carr has come out against a proposal from FCC Chairwoman Jessica Rosenworcel that would require disclosure of AI-created content in political ads airing on TV and radio.

“The FCC’s attempt to fundamentally alter the rules of the road for political speech just a short time before a national election is as misguided as it is unlawful,” Carr said in a statement. 

“There is no doubt that the increase in AI-generated political content presents complex questions, and there is bipartisan concern about the potential for misuse,” Carr said. “But none of this vests the FCC with the authority it claims here. Indeed, the Federal Election Commission is actively considering these types of issues, and legislators in Congress are as well. But Congress has not given the FCC the type of freewheeling authority over these issues that would be necessary to turn this plan into law.”

Carr also argued that the proposal for new FCC regulations “can only muddy the waters. AI-generated political ads that run on broadcast TV will come with a government-mandated disclaimer but the exact same or similar ad that runs on a streaming service or social media site will not? Consumers don’t think about the content they consume through the lens of regulatory silos. They just view content on screens. Will they conclude that the absence of a government warning on an online ad means that the content must be real? I don’t see how this type of conflicting patchwork could end well. Unlike Congress, the FCC cannot adopt uniform rules.

“Indeed, the FCC can only tilt the playing field,” he added. “Applying new regulations to candidate ads and issue ads but not to other forms of political speech just means that the government will be favoring one set of speakers over another. And applying new regulations on the broadcasters the FCC regulates but not on their largely unregulated online competitors only exacerbates regulatory asymmetries. All of this confirms that the FCC is not the right entity to consider these issues.”

Carr also stressed that the proposal raises some thorny administrative issues. “What does it mean to have ‘AI-Generated Content’ in a political ad?” he noted. “Is it everything? Is it nothing? Lawyers will undoubtedly end up telling their clients to just go ahead and slap a prophylactic, government-mandated disclosure on all political ads going forward just to avoid liability.”

The Rosenworcel proposal, which has been circulated to her colleagues at the agency, specifically said it was not intended to ban or restrict the use of AI content in ads.

Carr, however, argued: “I am also concerned that it is part and parcel of a broader effort to control political speech. Is the government really worried that voters will find these ads misleading in the absence of a regulator’s guiding hand? Or is the government worried that voters might find these ads effective? Imagine going after President Lyndon Johnson for his 1964 ‘Daisy Girl’ ad because voters might think that the child actually died in a nuclear strike. The type of government intervention envisioned by this plan would only do more harm than good.”

George Winslow

George Winslow is the senior content producer for TV Tech. He has written about the television, media and technology industries for nearly 30 years for such publications as Broadcasting & Cable, Multichannel News and TV Tech. Over the years, he has edited a number of magazines, including Multichannel News International and World Screen, and moderated panels at such major industry events as NAB and MIP TV. He has published two books and dozens of encyclopedia articles on such subjects as the media, New York City history and economics.