Manipulated Video Raises Concerns About Misleading AI as Election Day Nears

ICARO Media Group
Politics
28/07/2024 22h26

The power of artificial intelligence (AI) to deceive and mislead has come into focus with the emergence of a manipulated video mimicking the voice of Vice President Kamala Harris saying things she did not say. This video, which was shared by tech billionaire Elon Musk on his social media platform X, has raised concerns as the United States approaches the presidential election in three months.

The video features visuals similar to a real campaign ad released by Harris, the likely Democratic presidential nominee, but replaces the voice-over audio with a convincing impersonation of Harris. The manipulated voice claims that Harris is a "diversity hire" and questions her ability to run the country. The video retains branding as "Harris for President" and incorporates authentic past clips of Harris.

Mia Ehrenberg, a spokesperson for the Harris campaign, said the American people want the real vision and leadership offered by Vice President Harris, not the false information propagated by Elon Musk and Donald Trump.

This widely shared video exemplifies how lifelike AI-generated content, including images, videos, and audio clips, has been used for both satire and political manipulation. The accessibility of high-quality AI tools has prompted concerns about the lack of significant federal regulation, leaving the responsibility of governing AI in politics to individual states and social media platforms.

The video also highlights the challenge of distinguishing appropriate uses of AI, particularly in the realm of satire. The original creator, Mr Reagan, disclosed the video's nature as a parody when posting it on YouTube and X. However, Elon Musk's post on X, which has garnered more than 123 million views, is captioned simply "This is amazing" with a laughing emoji and does not direct users to the original post or its disclosure.

While some users on X have suggested labeling Musk's post, no such label has been added so far. Questions have emerged about whether the post violates X's policies, which prohibit sharing synthetic, manipulated, or out-of-context media that could deceive or confuse people. X's policies do, however, make an exception for memes and satire, provided they do not cause significant confusion about the authenticity of the media.

As of now, neither Mr Reagan nor Musk has responded to requests for comment on the video. Two AI experts who reviewed the audio of the fake ad confirmed that AI technology was used to generate a significant portion of it. Hany Farid, a digital forensics expert at the University of California, Berkeley, said the video demonstrates the power of generative AI and deepfakes, and he emphasized the responsibility of AI companies to prevent their tools from being misused to cause harm.

Other observers stressed the video's potential to mislead. Rob Weissman, co-president of the advocacy group Public Citizen, argued that many people could be fooled by the video and may not immediately recognize it as a parody. Weissman highlighted the need for regulation of generative AI to address the kind of misinformation and manipulation the video represents.

This manipulated video is not an isolated incident, as other instances of generative AI deepfakes have been used to influence voters with misinformation or satire. The lack of comprehensive federal regulation has left most of the existing rules on AI in politics to be determined by individual states. Over one-third of states have already enacted laws regulating the use of AI in campaigns and elections.

Social media platforms, including YouTube, have also implemented their own policies on sharing synthetic and manipulated media. YouTube users, for instance, must disclose when they have used generative AI in their videos or risk suspension.

As the presidential election draws nearer, the issue of misleading AI-generated content becomes increasingly pertinent. Advocacy groups continue to call for Congress and federal agencies to take decisive action in regulating generative AI, safeguarding the accuracy and integrity of information shared during political campaigns.

The views expressed in this article do not reflect the opinion of ICARO, or any of its affiliates.
