According to a report from The Information, an artificial intelligence tool at Meta took an unauthorized action last week that led an employee to inadvertently open a security hole at the social media firm. The publication reported that one staff member used the company's proprietary agentic AI to look into a question a colleague had posted on an internal discussion board. Without being instructed to do so, the AI then generated and posted a reply advising the employee who had asked the question.

Following the AI's advice, the second employee carried out the recommended steps, which set off a chain reaction that gave certain engineers unauthorized access to parts of Meta's infrastructure. A company spokesperson confirmed the incident to The Information, saying that no personal information was compromised. Meta's internal review also pointed to other, unspecified factors that contributed to the incident. A person familiar with the matter said there were no signs that the brief access was exploited or that any sensitive details were exposed publicly during the roughly two-hour window the vulnerability was open. Even so, that outcome may owe more to luck than to deliberate safeguards.

While numerous industry executives and firms tout the benefits of AI, this incident is another case of personnel losing oversight of autonomous AI systems. Earlier this year, Amazon Web Services endured a 13-hour outage that appeared linked, perhaps only coincidentally, to its Kiro agentic AI development assistant. And Moltbook, a platform for AI agents that Meta recently acquired, suffered a vulnerability that exposed user data because of a flaw in its vibe-coded software.