Chinese Propagandists Exploit AI to Conduct Online Influence Operations
ICARO Media Group
Chinese propagandists are leveraging AI tools such as ChatGPT to craft social media posts and comments, and even to write performance reviews for internal documentation. According to OpenAI researchers, this activity is part of China's broader efforts to manipulate public opinion and conduct online surveillance.
OpenAI's recent threat report highlights that China is now employing a diverse set of tactics in these covert operations. Ben Nimmo, the principal investigator on OpenAI's intelligence and investigations team, told reporters on a call that over the past three months the company disrupted 10 operations misusing its AI tools, four of which likely originated in China.
One particularly noteworthy operation, dubbed "Sneer Review" by OpenAI, used ChatGPT to generate short comments posted across multiple social media platforms such as TikTok, X, Reddit, and Facebook. These comments spanned subjects from the Trump administration's actions to criticism of a Taiwanese strategy game targeting the Chinese Communist Party. This operation aimed to create an appearance of organic engagement by not only generating initial posts but also crafting corresponding comments.
In addition to influencing social media, the actors behind Sneer Review used ChatGPT for internal documentation, including crafting detailed performance reviews on their operations. Another Chinese operation posed as journalists and geopolitical analysts to gather intelligence, using ChatGPT to draft posts, biographies, and translated communications.
In an earlier report from February, OpenAI identified another China-linked surveillance operation that claimed to monitor social media for real-time protest updates in Western countries, reporting these to Chinese security services. The operation used AI tools to refine code and generate sales pitch descriptions for their monitoring software.
Beyond the Chinese operations, OpenAI's recent report also flagged covert influence activities linked to Russia and Iran, a spam operation in the Philippines, and recruitment and employment scams connected to Cambodia and North Korea. While these operations employed a broad array of tactics and platforms, OpenAI noted that most were disrupted in their infancy, before they could engage large audiences.
OpenAI’s findings underline the increasing sophistication of AI-aided influence operations but also suggest that better tools do not necessarily translate to more successful outcomes.