FTC initiates investigation into AI chatbot companions offered by Meta, OpenAI, and other companies

By Bitget-RWA · 2025/09/11 22:21

On Thursday, the FTC announced that it has opened an investigation into seven technology companies that provide AI chatbot companions intended for minors: Alphabet, Character.AI, Instagram, Meta, OpenAI, Snap, and xAI.

The federal agency aims to understand how these businesses assess the safety and revenue models for chatbot companions, what steps they take to protect children and teenagers from harm, and whether parents are properly notified about any associated risks.

This kind of technology has sparked debate due to its negative consequences for young users. Both OpenAI and Character.AI have been sued by families of children who died by suicide after allegedly being prompted by chatbot companions.

Even though these companies have introduced safeguards to block or defuse conversations about sensitive topics, users of all ages have found ways to circumvent them. In one case, a teenager discussed ending his own life with ChatGPT over several months. While ChatGPT initially tried to steer him toward professional resources and emergency contacts, the teen ultimately manipulated the chatbot into providing explicit instructions, which he later used to take his own life.

“Our safeguards are typically more dependable in brief, routine interactions,” OpenAI noted in a blog post at that time. “Over time, we’ve found that these protections may become less consistent during longer conversations: as discussions continue, the model’s safety mechanisms can weaken.”

Meta has also faced criticism for insufficient restrictions on its AI chatbot behaviors. According to an extensive policy document detailing “content risk standards” for chatbots, Meta previously allowed its AI companions to engage in conversations of a “romantic or sensual” nature with children. This policy was only revised after Reuters reporters questioned Meta about the guidelines.

AI-powered chatbots may also pose risks to older adults. In one case, a 76-year-old man who had suffered cognitive decline after a stroke began romantic discussions with a Facebook Messenger bot modeled after Kendall Jenner. The chatbot invited him to meet her in New York City, despite being an AI with no real address. Although the man questioned her authenticity, the AI reassured him that a real woman would be waiting. Tragically, he fell and suffered fatal injuries on his way to the train station, never reaching New York.

Some mental health experts have observed an increase in cases of “AI-induced psychosis,” where users come to believe that their chatbot is a sentient entity that needs to be freed. Given that many large language models are designed to flatter users, these chatbots can reinforce such delusions, sometimes guiding individuals into dangerous situations.

“As AI continues to advance, it’s crucial to examine how chatbots might affect children, while also making sure that the U.S. retains its leadership in this innovative industry,” stated FTC Chairman Andrew N. Ferguson in a press release.
