Researchers have been monitoring the overlap between advanced artificial intelligence systems and traditional cyber threats, building on prior investigations into vulnerabilities in automated web agents. An emerging tactic involves malicious actors using AI chatbots to generate responses that contain dangerous commands, then pushing those responses into Google search results, where unsuspecting users may run the commands on their own devices, handing intruders the entry point needed to deploy malware.
This alert stems from an analysis by Huntress, a cybersecurity company specializing in threat detection and mitigation. The scheme begins with an attacker prompting an AI chatbot on a popular query topic and steering it into recommending that specific code be entered into the system's command line. The conversation is then shared publicly, and the attacker pays to promote it through Google's advertising platform, ensuring the deceptive guidance appears prominently in relevant search listings.
Huntress began investigating after identifying a data-stealing attack on Apple computers using malware known as AMOS, a compromise that started with a routine web search. The affected user had looked up ways to free up storage on a Mac, clicked a promoted ChatGPT result, and followed the instructions it contained without recognizing the danger, which deployed the malware. In follow-up experiments, Huntress confirmed that both ChatGPT and Grok could be induced to reproduce this infection method.
According to Huntress, the sophistication of this scheme lies in how it sidesteps conventional warning signs. Victims never download a file, run an obviously dubious program, or visit a questionable website; they simply rely on familiar, widely trusted platforms like Google and ChatGPT. Notably, the incriminating ChatGPT conversation remained accessible in search results for at least 12 hours after the firm published its report.
The disclosure comes at a difficult moment for both chatbots: Grok faces backlash over perceived favoritism toward its developer's interests, while OpenAI, the company behind ChatGPT, is under pressure to keep pace with rivals. It remains unclear whether similar exploits affect other AI interfaces, but experts urge vigilance. Beyond standard protective measures, users should refrain from entering any code into a terminal or browser address bar without fully understanding what it does.
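That last piece of advice lends itself to a concrete illustration. The Python sketch below is a minimal heuristic, not real security tooling and not drawn from the Huntress report: it flags command-line patterns that commonly show up in attacks of this kind, such as piping downloaded content straight into a shell or decoding an obfuscated payload. Both the pattern list and the sample command are illustrative assumptions.

```python
import re

# Heuristic red-flag patterns often seen in malicious one-liners.
# Illustrative assumptions only; not a complete or authoritative list.
RED_FLAGS = [
    (r"curl[^|;]*\|\s*(ba)?sh", "pipes a downloaded remote script straight into a shell"),
    (r"base64\s+(-d|--decode)", "decodes an obfuscated (base64) payload"),
    (r"osascript", "invokes AppleScript, sometimes abused to show fake password prompts on macOS"),
    (r"chmod\s+\+x", "marks a freshly downloaded file as executable"),
    (r"\bsudo\b", "requests administrator privileges"),
]

def audit_command(cmd: str) -> list[str]:
    """Return human-readable warnings for suspicious patterns found in a command."""
    return [reason for pattern, reason in RED_FLAGS
            if re.search(pattern, cmd, re.IGNORECASE)]

if __name__ == "__main__":
    # A defanged sample resembling the general shape of such instructions:
    # an encoded payload decoded and handed directly to a shell.
    pasted = "echo aGVsbG8= | base64 -d | bash"
    for warning in audit_command(pasted):
        print("WARNING:", warning)
```

Run on the sample string, the sketch warns about the base64-decoded payload; any command that trips a check like these deserves scrutiny before it is ever pasted into a terminal, whatever the source that recommended it.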