Manipulated Video Raises Concerns About AI Misinformation in the 2024 Election

https://icaro.icaromediagroup.com/system/images/photos/16305151/original/open-uri20240728-55-b3qegc?1722211069
ICARO Media Group
Politics
28/07/2024 23h46

In a concerning development just three months ahead of the 2024 presidential election, a manipulated video featuring the voice of Vice President Kamala Harris saying fabricated statements has sparked renewed concerns about the deceptive power of artificial intelligence (AI) technology. The video gained significant attention after tech billionaire Elon Musk shared it on his social media platform X, raising questions about the potential for AI-generated content to mislead the public.

The video cleverly mimics the visuals of a real campaign ad released by Harris, the presumptive Democratic presidential nominee, with the voice-over audio replaced by a convincingly impersonated voice. In the manipulated video, Harris is made to say, "I, Kamala Harris, am your Democrat candidate for president because Joe Biden finally exposed his senility at the debate." It goes on to falsely assert that Harris is a "diversity hire" due to her gender and ethnicity, and claims she is ill-equipped to lead the country.

The video, which retains the branding of "Harris for President," also incorporates authentic past clips of Harris, further blurring the line between reality and manipulation. Mia Ehrenberg, a spokesperson for Harris's campaign, denounced the video as "fake, manipulated lies" propagated by Elon Musk and former President Donald Trump.

The widespread sharing of this video highlights the increasing prevalence of lifelike AI-generated content in political discourse. AI tools can now create realistic images, videos, and audio clips that may be used for humor or to deceive and manipulate public opinion. Yet with no significant federal regulations governing the use of AI in politics, the responsibility for oversight falls largely to states and social media platforms.

The uncanny realism of the AI-generated voice in the manipulated video has raised concerns among experts. Hany Farid, a digital forensics expert from the University of California, Berkeley, recognizes the power of AI-generated media and deepfakes, stating that "the video is that much more powerful when the words are in her voice." Farid urges AI companies to take more responsibility in ensuring that their technology is not misused to harm democracy or individuals.

Others warn about the video's persuasive reach. Rob Weissman, co-president of the advocacy group Public Citizen, argues that "most people will believe it to be real," and emphasizes that this video is precisely the kind of disinformation campaign that has raised alarm bells among experts.

This incident is not an isolated one; AI-generated deepfakes and misinformation have surfaced in elections globally. In Slovakia in 2023, fake audio clips impersonated a candidate discussing election rigging and price manipulation. In Louisiana in 2022, a satirical ad superimposed an actor's face onto a mayoral candidate, portraying him as an underachieving student.

While some social media platforms have implemented policies against synthetic and manipulated media, there is a pressing need for comprehensive federal legislation to regulate AI in politics. Currently, over one-third of the states have enacted their own laws to regulate AI use in campaigns and elections.

As the 2024 election approaches, the increasing prevalence of AI-generated content poses a significant challenge. The blurring lines between fact and fiction call for robust safeguards to protect public trust, democracy, and the integrity of the electoral process.

The views expressed in this article do not reflect the opinion of ICARO, or any of its affiliates.