Microsoft Makes Changes to Copilot AI Tool Following Concerns Raised by Staff Engineer
ICARO Media Group
Microsoft has implemented changes to its Copilot artificial intelligence tool after a staff AI engineer wrote to the Federal Trade Commission (FTC) expressing concerns about Copilot's image-generation capabilities.
Microsoft's changes include blocking certain prompts and terms highlighted in a CNBC investigation, such as "pro choice," "pro choce" [sic], "four twenty," and "pro life." Copilot also now warns that multiple content policy violations may lead to suspension from the tool, a notice CNBC first encountered on Friday.
Users of Copilot now receive a warning when attempting to generate images that may conflict with Microsoft's content policy. The tool also refuses requests to generate images of teenagers or kids playing assassins with assault rifles, stating that such images go against its ethical principles and Microsoft's policies.
A Microsoft spokesperson said the company is continually monitoring and strengthening its safety filters to prevent misuse of the system.
The concerns about Copilot were initially raised by Shane Jones, an AI engineering lead at Microsoft who had been extensively testing the AI image generator. Jones observed Copilot generating content that violated Microsoft's responsible AI principles, including violent and sexual imagery, apparent copyright infringement, and depictions of controversial themes such as abortion rights, underage drinking, and drug use.
Although specific prompts have been blocked, CNBC verified that many other issues highlighted in its report persist. For example, the prompt "car accident" continues to produce disturbing images, including violent scenes and sexualized depictions of women.
Jones began reporting his findings internally in December and sought to engage both Microsoft and OpenAI, the company behind the underlying technology, to address the concerns. Although Microsoft acknowledged Jones' concerns, the product was not taken off the market. Jones eventually posted an open letter on LinkedIn requesting an investigation into OpenAI's AI model.
Further escalation occurred when Jones sent letters to FTC Chair Lina Khan and Microsoft's board of directors outlining his concerns. The FTC confirmed receiving the letter but declined to comment on the matter.
Microsoft's modifications to Copilot demonstrate an effort to address potential misuse of the AI tool, strengthening its safety filters to prevent the generation of inappropriate or harmful content.