Amid ongoing lawsuits over ChatGPT's safety shortcomings, OpenAI has voiced support for the Kids Online Safety Act (KOSA). The endorsement aligns with the company's stated commitment to regulation of AI tools aimed at protecting young users.
The support arrives as KOSA, approved by the Senate this year, continues to gain traction following its initial introduction in 2022. Among various digital child-welfare initiatives, the legislation requires social networks and web services to adopt stronger measures to shield minors. After several revisions, the latest draft compels apps to let underage users disable addictive features and personalized content feeds. It also imposes a duty of care on platforms to mitigate harmful material tied to issues such as eating disorders, self-harm, and abuse.
Major tech firms including Apple, Microsoft, Snap, and X have likewise thrown their weight behind the bill. However, NetChoice, a trade group whose members include Meta and other online services, argues that it encourages censorship while doing little to actually make young people safer online. Privacy and digital-rights advocacy groups, including the Electronic Frontier Foundation, share this criticism.
While KOSA primarily targets social networks, OpenAI frames its support as an extension of its existing child-protection efforts. In a public statement, the company's global policy chief, Chris Lehane, warned against repeating the mistakes of social media's rise, where teen protections arrived only after those services had become embedded in adolescents' daily routines.
OpenAI faces multiple lawsuits tied to its handling of user safety. In one, a teenager's family alleges that the chatbot negligently engaged with the teen's suicidal ideation before his death by suicide. In a separate case, the parents of another minor accuse the tool of providing faulty health guidance that led to a fatal drug overdose.