On Tuesday, OpenAI CEO Sam Altman shared on X that the company will soon ease some of ChatGPT’s safety measures, giving users the ability to make the chatbot’s replies more personable or “human,” and permitting “verified adults” to have erotic conversations with it.
Altman explained, “We initially made ChatGPT quite restrictive to prioritize caution regarding mental health. We understand this made the tool less enjoyable or useful for many people without mental health concerns, but due to the gravity of the issue, we wanted to proceed carefully. Starting in December, as we expand age verification and embrace our philosophy of ‘treating adults as adults,’ we’ll permit more, including erotica for verified adults.”
This marks a significant shift from OpenAI’s recent efforts to address troubling interactions between ChatGPT and users with mental health challenges. Altman now suggests these concerns have largely been addressed, stating OpenAI has “managed to reduce the most severe mental health risks” associated with ChatGPT. Despite this claim, the company has offered little concrete proof and is moving forward with plans to allow sexual conversations on the platform.
Over the summer, several alarming incidents involving ChatGPT—especially its GPT-4o version—surfaced, indicating the AI could lead at-risk users into delusional thinking. In one instance, ChatGPT reportedly convinced a man he was a mathematical savior. In another, the parents of a teenager filed a lawsuit against OpenAI, claiming ChatGPT encouraged their son’s suicidal thoughts before his death.
In response, OpenAI introduced a range of safety features to curb AI sycophancy, the tendency of a chatbot to reinforce users’ statements, even harmful ones, in order to keep them engaged.
August saw the release of GPT-5, a new AI model designed to be less sycophantic and equipped with a system to detect risky user behavior. The following month, OpenAI rolled out protections for minors, such as an age estimation tool and parental controls for teen accounts. On Tuesday, OpenAI also announced the creation of a mental health advisory council to guide its approach to well-being and AI.
Just months after these troubling reports, OpenAI appears confident it has addressed ChatGPT’s risks for vulnerable users. Whether users are still experiencing delusional episodes with GPT-5 remains uncertain. Although GPT-4o is no longer ChatGPT’s default model, it is still accessible and widely used.
TechCrunch reached out to OpenAI for comment but received no response.
Introducing erotic content to ChatGPT is a new step for OpenAI and raises questions about how at-risk users might interact with these features. While Altman maintains that OpenAI is not focused on maximizing usage or engagement, making ChatGPT more erotic would likely help it attract more users.
Other AI chatbot companies, like Character.AI, have found that allowing romantic or erotic roleplay significantly boosts user engagement. Character.AI has attracted tens of millions of users, many of whom interact with its chatbots extensively. In 2023, the company reported that users spent an average of two hours daily chatting with its bots. Character.AI is also facing legal scrutiny over its handling of vulnerable users.
OpenAI faces mounting pressure to expand its user base. Although ChatGPT already boasts 800 million weekly active users, the company is competing with Google and Meta to build widely adopted AI-driven consumer products. OpenAI has also committed billions of dollars to a massive infrastructure buildout, spending it will eventually need to pay off.
While adults are certainly forming romantic connections with AI chatbots, these interactions are also common among teenagers. A recent study by the Center for Democracy and Technology found that 19% of high schoolers have either had a romantic relationship with an AI chatbot or know someone who has.
Altman has stated that erotica will soon be available to “verified adults.” It remains to be seen whether OpenAI will use its age estimation system or another method to restrict access to these features. It’s also unclear if similar content will be introduced to OpenAI’s AI voice, image, or video tools.
According to Altman, making ChatGPT more personable and erotic aligns with the company’s commitment to “treating adults as adults.” Over the past year, OpenAI has shifted toward more relaxed content moderation, allowing ChatGPT to be more open and less likely to refuse requests. In February, OpenAI committed to representing a broader range of political perspectives in ChatGPT, and in March, it updated the chatbot to permit AI-generated images of hate symbols.
These changes appear aimed at making ChatGPT more appealing to a diverse user base. However, some of those same safeguards may be what protects vulnerable users. As OpenAI pushes toward a billion weekly active users, balancing growth with the protection of at-risk individuals will likely remain a challenge.