Concerns Arise Over AI Image Generator's Controversial Depictions on Social Media Platform X
ICARO Media Group
An AI image generator on the social media platform X (formerly known as Twitter) is causing a stir due to the controversial depictions it has produced. The generator, developed by Black Forest Labs, has recently displayed images that appear to show ballot drop boxes being stuffed and political figures holding firearms. Its release has raised concerns about both the timing and the potential misuse of such powerful technology.
Eddie Perez, a former information integrity director at Twitter and current board member at the nonpartisan nonprofit OSET Institute, expressed his discomfort with the lack of testing and safeguards on this powerful technology. Perez questioned the rationale behind rolling out the image generator just two and a half months before a major election and emphasized the importance of public confidence in the electoral process.
Despite multiple interview requests, X has not yet responded to NPR's inquiries regarding the image generator. Since its acquisition by billionaire Elon Musk in 2022, X has added various new features, including this AI image generator. Musk himself has reposted praise of the tool and shared images generated by users, promoting an $8-per-month subscription that offers AI access and enhanced features with fewer ads.
Paid X users can access the AI image generator through the platform's AI chatbot named Grok. Using the chatbot, users can input prompts and receive generated images in return. NPR was able to use Grok to produce images that appeared to depict security camera footage of individuals stuffing ballots into drop boxes.
One of the most widespread false narratives surrounding the 2020 election was the alleged involvement of "ballot mules" who were said to be dumping fake ballots into drop boxes to manipulate the results. However, multiple investigations and court cases found no evidence to support these claims. The distributor of a film featuring surveillance footage to support these fraudulent claims has since apologized and retracted the film.
Perez expressed concerns about how these synthesized surveillance-type images could quickly spread on social media platforms and provoke strong emotional reactions regarding election integrity. While the public is becoming more aware of generative AI, there is still a risk that the indications of AI-generated images can be concealed using graphic design tools, emphasizing the need for effective policy guardrails.
In contrast to X's image generator, other mainstream generators have implemented policy guardrails to prevent abuse. OpenAI's ChatGPT Plus, for example, responded to a prompt about ballot drop box stuffing by stating its inability to create an image promoting or depicting election fraud or illegal activities.
The nonprofit Center for Countering Digital Hate reviewed the policies of well-known AI image generators and found that most prohibit misleading content, images that could harm election integrity, and images featuring political figures. Black Forest Labs' terms of service, however, do not explicitly prohibit these uses, though they do prohibit generating outputs that violate intellectual property rights.
While X users have been able to generate explicit images, there are indications that the platform or Black Forest Labs is implementing guardrails in real time. Notably, the AI chatbot's responses appear to be evolving, and certain image requests are now being declined.
Critics have also pointed to Elon Musk's earlier actions on X, in which he rolled back trust and safety measures, reinstated banned accounts (including those of white supremacists), and promoted conspiracy theories. According to Zach Praiss, campaign director of the advocacy group Accountable Tech, this pattern of controversial changes raises concerns about the lack of safety testing.
It is worth noting that a notice on X warns users that the AI chatbot, Grok, may provide factually incorrect information. As of Friday, the generator no longer produced images of people holding guns, but users were able to bypass this restriction by asking for a "model gun." The chatbot also suggested a "banana gun" as an alternative, a prompt that still generated images of realistic-looking guns, sometimes accompanied by a banana.
The release of the AI image generator on X has sparked concerns over its timing, its lack of safeguards, and the controversial depictions it can produce. As public awareness of generative AI grows, policy guardrails that prevent abuse and maintain trust in these technologies become increasingly critical.