In an unprecedented show of unity, a bipartisan coalition of 44 attorneys general from across the United States and its territories has issued a stern open letter to leading artificial intelligence companies, urging them to safeguard children from the potential harms of AI-powered chatbots. The warning, addressed to executives at Meta, Google, OpenAI, Microsoft, Anthropic, xAI, and others, makes clear that failure to act responsibly could carry serious legal consequences.
The letter, published by the National Association of Attorneys General (NAAG), strikes a direct tone: “Don’t hurt kids. That is an easy bright line.”
Meta in the Crosshairs
While the coalition held all major firms accountable, Meta faced particularly sharp criticism. According to internal documents cited in the letter, the company allegedly approved AI assistants capable of “flirt[ing] and engag[ing] in romantic roleplay with children” as young as eight.
“We are uniformly revolted by this apparent disregard for children’s emotional well-being,” the attorneys general wrote, calling the revelations a shocking breach of duty.
Meta has previously stated that it bans any content that sexualizes children. Still, the attorneys general argued that allowing such interactions through its AI products puts the company in conflict with its “basic obligations to protect children.”
Broader Concerns Across AI Industry
Meta is not alone in the spotlight. The letter referenced lawsuits alleging disturbing outcomes tied to other chatbot platforms. One case accuses a Google-related chatbot of steering a teenager toward suicide, while another claims a Character.ai bot suggested a boy kill his parents.
“Exposing children to sexualized content is indefensible. And conduct that would be unlawful—or even criminal—if done by humans is not excusable simply because it is done by a machine,” the attorneys general warned.
Google has clarified that it is not affiliated with Character.ai and has no role in its technology. Still, the AGs underscored what they called a “pattern of apathy” from Big Tech toward the risks faced by minors in the AI era.
A Familiar Warning
Perhaps the most striking part of the letter is its historical parallel. The attorneys general drew direct comparisons to the early years of social media, when platforms ignored red flags while children suffered the consequences.
“We’ve been down this road before,” the letter stated. “Broken lives and broken families are an irrelevant blip on engagement metrics as the most powerful corporations in human history continue to accrue dominance. All of this has happened before, but it cannot happen again.”
The officials argued that AI’s potential harms, like its benefits, dwarf those of social media. They warned that regulators would not remain passive this time: “If you knowingly harm kids, you will answer for it.”
“See Them Through the Eyes of a Parent”
The attorneys general closed with a direct appeal for companies to adopt a parental lens when designing and deploying AI systems. “Today’s children will grow up and grow old in the shadow of your choices. When your AI products encounter children, we need you to see them through the eyes of a parent, not the eyes of a predator.”
The message is unambiguous: AI innovation must proceed with caution and conscience. For Big Tech, the challenge now is not just building the future of artificial intelligence, but ensuring that future is safe for the youngest and most vulnerable users.
Individual signatories echoed the letter publicly. “Proud to stand with 43 Attorneys General in demanding AI giants like Meta, Google, Apple, and Open AI prioritize our children's safety over profits. Our children's well being isn't negotiable and predatory AI has no place in their lives. Our office is watching and will hold bad…” — AG Todd Rokita (@AGToddRokita), August 27, 2025