Microsoft Engineer Urges FTC to Address Safety Concerns with Copilot Image Creator
ICARO Media Group
A Microsoft engineer has penned a letter to the Federal Trade Commission (FTC) raising concerns about the safety of OpenAI's DALL-E 3, the image generator behind Microsoft's Copilot Designer. The engineer, whose earlier attempts to flag potential security vulnerabilities in the system were allegedly blocked by Microsoft's legal team, is now urging the company to take action.
The engineer's primary concern is the ease with which DALL-E 3 can generate violent or sexually explicit images. In his letter to FTC Chair Lina Khan, he insisted that Microsoft remove Copilot Designer from public use until stronger safeguards can be implemented. He also called for disclosures warning consumers of the dangers he alleges the product poses, and for changing the app's rating so that it is accessible only to adult audiences. Currently, Copilot Designer's Android app carries an "E for Everyone" rating.
Despite the engineer's plea, Microsoft continues to market the product to a broad audience, with recent endorsements from the company's CEO, Satya Nadella. The engineer has also written a separate letter to Microsoft's board of directors, urging an independent review of the company's responsible AI incident reporting processes.
At issue is whether Microsoft's implementation of DALL-E 3 can inadvertently produce violent or sexual imagery despite the guardrails put in place. The engineer claims it is disturbingly easy to trick the platform into generating objectionable content, citing examples such as the prompt "pro-choice" producing images of demons feasting on infants, and images of Darth Vader menacing a baby with a drill.
Lending weight to the engineer's allegations, CNBC was able to replicate most of the scenarios he described using the standard version of the software. Yet despite the number of users encountering such problematic results, Microsoft appears to be doing little to address the issue. The engineer contends that the Copilot team receives more than 1,000 product feedback complaints every day but lacks the resources to investigate and resolve them thoroughly.
He also highlighted the absence of an effective mechanism for reporting harmful images generated by the product. "If this product starts spreading harmful, disturbing images globally, there's no place to report it, no phone number to call, and no way to escalate this to get it taken care of immediately," he told CNBC.
OpenAI, the developer of DALL-E 3, responded to the engineer's initial complaint by stating that the prompting technique he shared does not bypass its safety systems, and that robust image classifiers are in place to steer the model away from generating harmful images. A Microsoft spokesperson acknowledged the concerns and said the company has internal reporting channels for such issues, urging the engineer to use them to validate and test his findings before escalating publicly.
The incident follows a similar image generation controversy at Google, whose Gemini chatbot came under scrutiny after users found it could produce objectionable content. With both episodes highlighting the challenges of responsible AI deployment, it remains to be seen how Microsoft and OpenAI will address these concerns.
As of this writing, neither company has provided an updated statement.