OpenAI Investigating Reports of Private Conversations Being Leaked by ChatGPT
ICARO Media Group
OpenAI is investigating reports that its AI chatbot, ChatGPT, has leaked private conversations. Users have raised concerns that sensitive information, including usernames, passwords, unpublished works, and more, may have been exposed.
According to a report from Ars Technica, one of its readers discovered unfamiliar conversations in their chat history while using ChatGPT for an unrelated query. Screenshots sent to the publication showed that these leaked conversations contained various sensitive details.
In one instance, the leaked conversations appeared to come from someone using ChatGPT to troubleshoot problems with a support system used by employees of a pharmacy prescription drug portal. The exchange included the name of the app, the store number where the issue occurred, and a set of login credentials.
Another leaked conversation disclosed the name of a presentation someone was working on, along with details of an unpublished research proposal. These incidents highlight the risks of sharing confidential information with AI chatbots.
This is not the first time ChatGPT has been implicated in leaking information. In March 2023, a bug exposed chat titles to other users, and in November 2023, researchers showed that ChatGPT could be prompted to reveal private data used in training the model.
OpenAI responded to Ars Technica's report, stating that the user's account login credentials were compromised and that a bad actor then accessed the account to carry on the unauthorized conversations. The company clarified that the displayed chat history and files resulted from this misuse, not from ChatGPT showing another user's history. Consequently, OpenAI considers the situation a case of account compromise rather than a data leak.
While OpenAI investigates these incidents, users are strongly advised to refrain from sharing sensitive information with AI chatbots, especially services they do not develop or control. These reports serve as a reminder of the risks involved in entrusting confidential data to AI tools.
As AI technology advances, privacy and security will remain crucial. OpenAI has pledged to continue addressing vulnerabilities and improving user safety, and further updates will be provided as the investigation progresses.
In the meantime, users are encouraged to exercise caution when using AI tools like ChatGPT and to be mindful of the data they share in order to mitigate potential risks.