States Urged to Take Action on AI and Deepfake Regulations Ahead of 2024 Election
ICARO Media Group
In the face of growing concerns over the impact of artificial intelligence (AI) and deepfake technology on political campaigns, experts are urging state lawmakers to prioritize the development of regulations. With just three states having enacted laws addressing AI and deepfakes in 2023, proponents argue that action at the state level is essential, given the absence of federal intervention and the potential benefits of letting states experiment with different approaches.
According to the National Conference of State Legislatures, Minnesota, Michigan, and Washington were the only states to pass laws addressing AI and deepfakes in 2023. These laws garnered bipartisan support, reflecting broad recognition of the issue's urgency. In another seven states, however, similar bills failed to pass or stalled in the legislative process.
The reasons behind the slow progress at the state level are varied. Experts point out that any regulation of AI and deepfakes must be reconciled with First Amendment rights and survive legal challenges, a significant hurdle. The rapidly evolving nature of generative AI and deepfake technology also leaves lawmakers struggling to understand the tools and respond effectively. Moreover, enforcing any regulations would require the cooperation of various parties, including major social media companies.
Despite these obstacles, advocates emphasize the need for states to start navigating these challenges promptly. Daniel Weiner, director of the elections and government program at the Brennan Center, highlights the potential benefits of effective policy solutions and urges lawmakers to prioritize the issue. The growing prevalence of deepfakes, videos that use AI to create realistic but false depictions of real individuals, has led some to warn that 2024 could become the first "deepfake election." A central concern is that voters will encounter political disinformation online without being able to tell what is real and what is fabricated.
The laws enacted in Minnesota, Michigan, and Washington fall into two categories: disclosure requirements and bans. Washington's law, enacted in May, mandates a disclosure on synthetic media used to influence an election. Minnesota's law, enacted in August, prohibits publishing deepfake media intended to influence an election within 90 days of the vote. Michigan's law, recently enacted, combines a ban on the distribution of materially deceptive media with a disclosure requirement; the ban does not apply if the media carries a disclosure stating that it has been manipulated.
In addition to state-level efforts, social media and tech giants have taken steps to address AI and deepfakes. Meta, which owns Facebook and Instagram, along with Microsoft and Google, has committed to requiring political ads on its platforms to disclose whether they were created using AI.
With the federal government largely inactive on the issue, experts emphasize the importance of state action in upcoming legislative sessions. Proposals in the U.S. Senate and House aimed at regulating AI and deepfakes in political campaigns have not advanced, and the Federal Election Commission has made only limited progress in its effort to regulate deepfakes in campaign ads. President Joe Biden's executive order in October called for stakeholders to address safety concerns surrounding deepfakes and tasked the Commerce Department with creating guidance on watermarking AI-generated content.
As the United States heads into a potentially chaotic 2024 election year, the use of AI and deepfakes in campaign ads raises concerns about the authenticity of political content. The potential for deepfakes to sway election outcomes, along with the possibility that candidates could dismiss authentic content as fake, underscores the need for comprehensive regulations and increased vigilance to safeguard the democratic process.
In light of these challenges, experts stress that states must act now to navigate the complexities posed by AI and deepfakes in political campaigns and to ensure the integrity and transparency of the electoral process.