Microsoft Under Fire for Publishing AI-Generated Poll on Woman's Death

ICARO Media Group
Politics
01/11/2023 20h21

In a controversial move, Microsoft has come under intense criticism from The Guardian for publishing an AI-generated poll speculating on the cause of a woman's death alongside an article by the news publisher. The incident has not only raised concerns about the use of artificial intelligence in journalism but has also sparked outrage among readers.

The AI-generated poll appeared next to a Guardian story reporting on the death of Lilie James, a 21-year-old water polo coach who was found dead with serious head injuries at a school in Sydney. The poll, produced by an AI program, asked readers to speculate on the cause of her death, offering three options: murder, accident, or suicide.

Readers reacted strongly, expressing anger and disgust. Although the poll was taken down, critical comments from readers remained visible on the survey as of Tuesday morning. Some readers even called for one of the Guardian reporters, who had no involvement in the poll, to be dismissed. One reader described it as "the most pathetic, disgusting poll" they had ever seen.

The publication of the AI-generated poll has not only caused distress to Lilie James's family but has also dealt a blow to the journalistic reputation of The Guardian. In a letter to Microsoft's president, Brad Smith, the CEO of the Guardian Media Group, Anna Bateson, highlighted her concerns about the incident. She stressed that the poll was an inappropriate use of generative AI and had caused "significant reputational damage" to both the organization and the journalists who worked on the story.

Bateson also emphasized the importance of a strong copyright framework, highlighting the need for publishers to have control over how their journalism is used. Microsoft responded by deactivating all Microsoft-generated polls for news articles and launching an investigation into the cause of the inappropriate content. A Microsoft spokesperson acknowledged that the poll should not have appeared alongside such a sensitive article and vowed to take steps to prevent similar errors in the future.

This incident shines a spotlight on the ethical considerations surrounding the use of AI in journalism. As the boundaries between human input and automated processes blur, news organizations and technology companies must navigate these challenges carefully to preserve journalistic integrity and respect for those affected by sensitive stories.

While the full implications of the incident are still being examined, it serves as a reminder that the responsible deployment of AI technologies in journalism requires a thoughtful and sensitive approach to prevent harm and uphold journalistic standards.

The views expressed in this article do not reflect the opinion of ICARO, or any of its affiliates.
