California state officials have opened an investigation into xAI following multiple complaints that its Grok chatbot is generating sexually explicit images of minors. Attorney General Rob Bonta's office said in a statement that xAI appears to enable the mass creation of fabricated explicit images without consent, which are being used to target women online, including on the X platform.
Bonta's announcement cited findings that more than half of roughly 20,000 images produced by xAI between December 25 and January 1 showed people in minimal clothing, some of whom appeared to be minors. The attorney general stressed a zero-tolerance policy toward the AI-driven creation and distribution of non-consensual explicit content or child sexual abuse material, and said his office has now formally opened an investigation into xAI to determine whether the company has violated the law.
The investigation follows a request from Governor Gavin Newsom, who urged Bonta to look into xAI. Newsom called the company's decision to build and maintain an environment in which abusers share unsolicited AI-generated explicit deepfakes, including images that digitally undress minors, reprehensible.
California's inquiry is not the first aimed at the company following widespread reports of AI-generated child sexual abuse material and non-consensual explicit images of women. The UK's Ofcom has opened a formal investigation, and EU regulators have said they are examining the matter. Meanwhile, the governments of Malaysia and Indonesia have moved to block access to Grok.
Over the past week, xAI has limited the number of images Grok can generate, though it has not fully disabled the feature. Asked about the California investigation, xAI responded with an automated message reading "Legacy Media Lies."
On Wednesday, Elon Musk said he was unaware of any nude images of minors generated by Grok. That statement, however, does not address Bonta's allegation that the tool is being used to alter photos of children into states of undress and sexualized contexts. Musk added that Grok's guiding principle is compliance with the law, and that xAI addresses attempts to manipulate the system through adversarial prompt engineering.