OpenAI's ChatGPT Misused in Over 20 Cyber Operations by Global Threat Actors
ICARO Media Group
OpenAI has revealed that it disrupted more than 20 illicit cyber operations that abused its AI chatbot, ChatGPT. These activities included developing and debugging malware, spreading misinformation, evading detection, and conducting spear-phishing campaigns. The disclosure marks the first official confirmation that mainstream generative AI tools are being used to bolster offensive cyber capabilities.
Evidence of such activity first surfaced in April, when Proofpoint reported that the threat group TA547 (aka "Scully Spider") appeared to be deploying an AI-generated PowerShell loader to deliver the Rhadamanthys info-stealer. Last month, HP Wolf Security researchers similarly reported that cybercriminals targeting French users were leveraging AI to script multi-step infection chains.
OpenAI’s latest findings describe a range of threat actors abusing ChatGPT. Among them is 'SweetSpecter,' a Chinese cyber-espionage group first documented by Cisco Talos in November 2023. The group targeted OpenAI employees directly with spear-phishing emails carrying malicious ZIP attachments disguised as support requests. OpenAI's investigation found that SweetSpecter also used ChatGPT for scripting and vulnerability analysis.
Another major threat actor identified by OpenAI is 'CyberAv3ngers,' a group affiliated with Iran's Islamic Revolutionary Guard Corps (IRGC). Its members used ChatGPT to generate default credentials for programmable logic controllers (PLCs), develop custom Bash and Python scripts, obfuscate code, and plan post-compromise activity. Their operations targeted critical infrastructure in Western countries, with AI used to refine their attack strategies.
Additionally, the Iranian group Storm-0817 used ChatGPT to debug malware, create scripts, translate LinkedIn profiles into Persian, and develop custom Android malware capable of stealing contact lists, call logs, files, browsing history, and precise device locations. The group also used ChatGPT to write server-side code for handling connections from compromised devices; the command-and-control server ran a WAMP (Windows, Apache, MySQL, PHP) stack and used the domain stickhero[.]pro.
OpenAI has banned all implicated accounts and shared the associated indicators of compromise, including IP addresses, with cybersecurity partners. While none of these cases gave the attackers novel malware-development capabilities, they underscore how generative AI tools can streamline offensive operations for less skilled actors, assisting at every stage from planning to execution.
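For defenders, the immediate value of such disclosures lies in the shared indicators themselves. As a minimal illustration only, and not part of OpenAI's report, the Python sketch below shows how a published indicator such as the stickhero[.]pro domain might be swept against an exported DNS or proxy log; the log filename and the example IP address are hypothetical placeholders.

```python
# Minimal sketch: sweep a DNS/proxy log export for published indicators of
# compromise (IoCs). Only stickhero[.]pro appears in OpenAI's reporting
# (kept defanged with [.] so it is not a live link); everything else here
# is a hypothetical placeholder.

from pathlib import Path

# Defanged indicators as they might appear in a shared IoC feed.
SHARED_IOCS = {
    "stickhero[.]pro",   # Storm-0817 C2 domain cited in OpenAI's report
    "203.0.113.42",      # example IP from the RFC 5737 documentation range
}


def refang(indicator: str) -> str:
    """Convert a defanged indicator back to its plain form for matching."""
    return indicator.replace("[.]", ".")


def scan_log(log_path: Path) -> list[tuple[int, str]]:
    """Return (line_number, matched_indicator) pairs found in the log file."""
    plain = {refang(ioc): ioc for ioc in SHARED_IOCS}
    hits: list[tuple[int, str]] = []
    with log_path.open(encoding="utf-8", errors="replace") as fh:
        for lineno, line in enumerate(fh, start=1):
            for needle, original in plain.items():
                if needle in line:
                    hits.append((lineno, original))
    return hits


if __name__ == "__main__":
    # "dns_queries.log" is a hypothetical export, not a real artifact.
    for lineno, ioc in scan_log(Path("dns_queries.log")):
        print(f"line {lineno}: matched shared IoC {ioc}")
```

In practice, defenders would usually load such indicators into a SIEM or threat-intelligence platform rather than an ad-hoc script, but the underlying logic is the same: defanged indicators are normalized and compared against observed network activity.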