Authorities in Australia are contemplating tougher measures to prevent minors from interacting with AI conversational tools. According to Reuters, officials might compel digital marketplaces to restrict access to AI applications that lack age-verification mechanisms for limiting adult-oriented material, with a potential enforcement date of March 9.

"The eSafety office intends to deploy every available authority in cases of violation," stated a spokesperson for the commissioner in comments provided to the news outlet. Such measures might target intermediary platforms, including web search tools and application distribution sites that serve as primary entryways to specific offerings.

A Reuters analysis revealed that among 50 prominent text-generating AI chat platforms operating locally, just nine had rolled out or announced intentions for age verification systems. The investigation noted that 11 others had applied across-the-board content restrictions or intended to prohibit Australian users entirely, meaning many had taken no public action just one week before the national cutoff. Non-adherence could result in penalties for AI developers reaching as high as A$49.5 million (about $35 million).

Global discussions continue over who should bear the burden of shielding kids from risky online material. In the United States, for example, companies like Apple and Google are advocating for content hosts to handle this duty rather than distribution channels. While the Australian statements regarding universal store involvement remain tentative for now, the approach fits with the government's recent expansive prohibition on social networks and interactive online spaces for those younger than 16, signaling a firm commitment from policymakers.