The National Center for Missing & Exploited Children (NCMEC) said it received more than 1 million reports of AI-related child sexual abuse material (CSAM) in 2025. According to a Bloomberg investigation, the vast majority of those reports came from Amazon, which found the material in datasets used to develop its AI models. Amazon said the material originated with third-party vendors it used for AI model training and that it was unable to share more information about its source.

In a statement to Engadget, Amazon said it lacks the details needed to take further action. The company explained that when it set up a dedicated reporting channel in 2024, it told NCMEC that the third-party origin of the datasets it scans would limit how actionable its reports could be. That separate channel was designed to keep Amazon's high-volume submissions from undermining the effectiveness of NCMEC's standard reporting process. Because of how the data is sourced, Amazon says it doesn't have the information needed to produce viable reports.

Fallon McNulty, who runs NCMEC's CyberTipline, told Bloomberg the situation is unprecedented. The CyberTipline is the system through which many U.S. companies are legally required to report suspected child sexual abuse material. McNulty said the sheer volume of reports over the past year raises questions about where the data is coming from and what safeguards are in place. She noted that reports filed by other companies last year contained enough information to be forwarded to law enforcement, while Amazon's remain unusable unless the company discloses the material's source.

Amazon shared a fuller statement with Engadget, which Bloomberg first reported. The company emphasized its commitment to eliminating CSAM across its businesses and said it has found no instances of its AI systems generating such content. Citing its responsible-AI guidelines and its principles against child exploitation, Amazon said it takes a deliberately conservative approach to reviewing the datasets used to train its foundation models, including material drawn from the public internet, with the goal of detecting and removing known CSAM and protecting users. While those preventive measures mean its NCMEC reports carry less detail than reports from consumer-facing services, the company said it stands by its commitment to responsible AI and will continue working to combat CSAM.

Amazon also said it intentionally casts a wide net in its detection, which produces a large number of false positives and helps explain the high volume of flagged material.

Child safety has become a pressing concern for the AI industry. AI-related CSAM reports to NCMEC have surged dramatically: against the more than 1 million reports in 2025, the figure for 2024 was 67,000, and for 2023 just 4,700.

Beyond the problem of abusive material making its way into AI training data, AI chatbots have come under scrutiny after several dangerous and, in some cases, fatal incidents involving teenage users. OpenAI and Character.AI face lawsuits after teens used their chatbots to plan suicides. Meta faces similar litigation over alleged failures to protect young users from inappropriate conversations with its AI companions.