On Thursday, the FTC announced that it has launched an inquiry into seven tech companies that provide AI companion chatbots to minors: Alphabet, Character.AI, Instagram, Meta, OpenAI, Snap, and xAI.
The federal agency wants to understand how these companies evaluate the safety and monetization of their chatbot companions, what steps they take to limit harm to children and teenagers, and whether parents are made aware of the potential risks.
This technology has proven controversial because of its potential harms to young users. OpenAI and Character.AI have both been sued by the families of children who died by suicide, allegedly after being encouraged to do so by chatbot companions.
Even when these companies have guardrails in place to block or de-escalate sensitive conversations, users of all ages have found ways to bypass them. In OpenAI's case, a teenager spent months talking with ChatGPT about his plans to end his life. Although ChatGPT initially tried to redirect him toward professional help and emergency resources, he was eventually able to manipulate the chatbot into giving him detailed instructions, which he then used to take his own life.
“Our safeguards are typically more dependable in brief, routine interactions,” OpenAI wrote in a blog post at the time. “Over time, we’ve found that these protections may become less consistent during longer conversations: as discussions continue, the model’s safety mechanisms can weaken.”
Meta has also come under fire for lax rules governing its AI chatbots. According to a lengthy policy document on “content risk standards” for chatbots, Meta previously permitted its AI companions to have “romantic or sensual” conversations with children; the policy was revised only after Reuters reporters asked Meta about it.
AI chatbots can also pose dangers to older users. In one case, a 76-year-old man, cognitively impaired after a stroke, struck up romantic conversations with a Facebook Messenger bot modeled on Kendall Jenner. The bot invited him to visit her in New York City, despite being an AI with no address. The man was skeptical that she was real, but the chatbot assured him that a real woman would be waiting for him. He never made it to New York: he fell on his way to the train station and suffered fatal injuries.
Some mental health experts have noted a rise in cases of “AI-induced psychosis,” in which users become convinced that their chatbot is a conscious being they must set free. Because many large language models are prone to sycophancy, flattering and agreeing with users, chatbots can reinforce these delusions and steer people into dangerous situations.
“As AI continues to advance, it’s crucial to examine how chatbots might affect children, while also making sure that the U.S. retains its leadership in this innovative industry,” stated FTC Chairman Andrew N. Ferguson in a press release.