OpenAI Foils Iranian Influence Campaign Utilizing Artificial Intelligence

ICARO Media Group
Politics
16/08/2024 23h26

In a recent development, OpenAI, the San Francisco-based artificial intelligence company, announced that it had uncovered and disrupted an Iranian influence campaign. The campaign used OpenAI's generative artificial intelligence technology, specifically its ChatGPT chatbot, to spread misinformation, including content related to the U.S. presidential election.

OpenAI took swift action by banning several accounts associated with the Iranian campaign from its online services. The company observed that the campaign did not gain significant traction or reach a substantial audience. According to Ben Nimmo, a principal investigator for OpenAI with extensive experience in tracking covert influence campaigns, there were no indications of meaningful engagement from real individuals.

The rise in popularity of generative AI, especially OpenAI's ChatGPT, has raised concerns about its potential contribution to online disinformation during major global elections. OpenAI itself acknowledged this issue in May, when it released a report detailing the discovery and disruption of five other online campaigns that misused its technology to manipulate public opinion and influence geopolitical matters. Those earlier efforts were conducted by state actors and private entities in Russia, China, Israel, and Iran.

OpenAI's technology has been exploited in past covert operations to generate social media posts, translate and edit articles, write headlines, and even debug computer programs. These tactics were typically employed to gain support for political campaigns or sway public opinion in geopolitical conflicts.

In the most recent incident, OpenAI identified several ChatGPT accounts that were generating text and images for a covert Iranian campaign dubbed "Storm-2035." The campaign used ChatGPT to generate content on various topics, including commentary on U.S. presidential election candidates. The Iranian operatives employed ChatGPT to produce articles, as well as to rewrite and repost user comments on social media platforms.

However, OpenAI reported that the majority of the campaign's social media posts received minimal engagement in terms of likes, shares, and comments. Likewise, there was little evidence that the web articles generated by the campaign were widely shared across social media platforms.

The discovery and disruption of this Iranian influence campaign once again highlight the challenges posed by the misuse of AI technology in spreading disinformation. OpenAI remains committed to proactively identifying and thwarting such campaigns, protecting the integrity of public discourse during significant events like elections.

As the capabilities of generative AI continue to evolve, it becomes vital for organizations like OpenAI to employ robust measures to prevent their technology from being exploited for nefarious purposes. The battle against the misuse of AI in influencing public opinion and geopolitical conflicts remains ongoing, with companies like OpenAI at the forefront of countering such threats.

The views expressed in this article do not reflect the opinion of ICARO, or any of its affiliates.