Major Tech Companies Pledge to Combat Deceptive AI Content in 2024 Elections


MUNICH—Amid growing concerns that deceptive AI-created content could disrupt the 2024 elections around the world, major tech companies have announced a new initiative to detect and combat deceptive AI content. 

The announcement was made at the Munich Security Conference (MSC) by 20 leading technology companies including Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI, TikTok, and X.

Faced with fears that AI content could interfere with this year’s global elections in which more than four billion people in over 40 countries will vote, the companies unveiled a “Tech Accord to Combat Deceptive Use of AI in 2024 Elections” that includes a set of commitments to deploy technology countering harmful AI-generated content meant to deceive voters. 

As part of the agreement, the signatories pledged to work collaboratively on tools to detect and address online distribution of such AI content, drive educational campaigns, and provide transparency, among other concrete steps. It also includes a broad set of principles, including the importance of tracking the origin of deceptive election-related content and the need to raise public awareness about the problem. 

Digital content addressed by the accord consists of AI-generated audio, video, and images that deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election, or that provide false information to voters about when, where, and how they can vote.

As of Feb. 16, the signatories are: Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap, Stability AI, TikTok, Trend Micro, Truepic, and X.

Participating companies agreed to eight specific commitments:

  • Developing and implementing technology to mitigate risks related to Deceptive AI Election Content, including open-source tools where appropriate
  • Assessing models in scope of this accord to understand the risks they may present regarding Deceptive AI Election Content
  • Seeking to detect the distribution of this content on their platforms
  • Seeking to appropriately address this content when it is detected on their platforms
  • Fostering cross-industry resilience to Deceptive AI Election Content
  • Providing transparency to the public regarding how the company addresses such content
  • Continuing to engage with a diverse set of global civil society organizations and academics
  • Supporting efforts to foster public awareness, media literacy, and all-of-society resilience

These commitments apply where they are relevant for the services each company provides.

“Elections are the beating heart of democracies. The Tech Accord to Combat Deceptive Use of AI in 2024 elections is a crucial step in advancing election integrity, increasing societal resilience, and creating trustworthy tech practices,” said Ambassador Christoph Heusgen, Munich Security Conference Chairman.

“Transparency builds trust,” said Dana Rao, general counsel and chief trust officer at Adobe. “That’s why we’re excited to see this effort to build the infrastructure we need to provide context for the content consumers are seeing online. With elections happening around the world this year, we need to invest in media literacy campaigns to ensure people know they can’t trust everything they see and hear online, and that there are tools out there to help them understand what’s true.”  

“Democracy rests on safe and secure elections,” said Kent Walker, president, Global Affairs at Google. “Google has been supporting election integrity for years, and today’s accord reflects an industry-wide commitment against AI-generated election misinformation that erodes trust. We can’t let digital abuse threaten AI’s generational opportunity to improve our economies, create new jobs, and drive progress in health and science.”

The announcement comes at a time when AI tools for creating images, audio and even video are becoming more sophisticated. Just this week, OpenAI, the company behind ChatGPT, introduced Sora, a new tool that uses generative AI to create videos from text.

In an early reaction to the accord, Lisa Gilbert, executive vice president of Public Citizen, said: “Without guardrails, the 2024 election will almost certainly be rife with deception and interference exacerbated by new tools using generative AI. We are happy to see these companies taking steps to voluntarily rein in what are likely to be some of the worst abuses. Committing to self-police will help, but it’s not enough.

“The AI companies must commit to hold back technology – especially text-to-video – that presents major election risks until there are substantial and adequate safeguards in place to help us avert many potential problems,” she added. “All of the companies should also affirmatively support pending laws, both federal and state, as well as needed regulations that will rein in political deepfakes, and not introduce dangerous new technologies until those legal protections are in place.”

George Winslow

George Winslow is the senior content producer for TV Tech. He has written about the television, media and technology industries for nearly 30 years for such publications as Broadcasting & Cable, Multichannel News and TV Tech. Over the years, he has edited a number of magazines, including Multichannel News International and World Screen, and moderated panels at such major industry events as NAB and MIP TV. He has published two books and dozens of encyclopedia articles on such subjects as the media, New York City history and economics.