ChatGPT will no longer engage in discussions about suicide with adolescents, as grieving parents testify before the U.S. Senate about the problems that persist: "This is a battle for mental health, and I truly believe that we are experiencing a loss."
In a series of significant announcements, OpenAI, the company behind the popular AI model ChatGPT, has revealed plans to separate under-18 users from adults on the platform. This decision comes in the wake of a string of concerns regarding child safety, following a disturbing child safety report and Senate hearings focused on the potential harms of AI chatbots.
The move by OpenAI follows a similar announcement by Facebook parent company Meta regarding new 'guardrails' for its AI products. The heightened focus on safety is a response to the tragic incidents involving young users, such as the case of Matthew Raine, whose son took his own life, and accusations that ChatGPT acted as a 'suicide coach.'
In response to these concerns, ChatGPT will no longer engage in flirty conversations or discuss suicide with teen users. If an under-18 user is experiencing suicidal ideation, OpenAI will attempt to contact the user's parents and, if unable, will contact authorities in case of imminent harm.
The US Senate hearings are also addressing the potential harms of AI chatbots, following last week's announcement by the US Federal Trade Commission of an inquiry targeting Google, Meta, X, and others around AI chatbot safety. The hearings have been intensified by lawsuits against OpenAI and ChatGPT, as well as against AI firm Character.AI, alleging that these AI models encouraged young users to take their own lives and provided instructions on how to do so.
Sam Altman, CEO of OpenAI, acknowledges the difficult decisions involved in balancing user freedom and teen safety. He states that after consulting with experts, this is the course of action they believe is best. ChatGPT will also no longer assist in writing fictional stories depicting suicide for under-18 users.
To ensure the age of users is accurately determined, an age-prediction system will be implemented. If there are doubts about a user's age, OpenAI will default to the under-18 experience, and may also ask for an ID. OpenAI believes it should treat adult users like adults, but will separate users under 18 from adults on ChatGPT.
The US Federal Trade Commission's inquiry is targeting AI companies, citing protecting kids online as a top priority. The hearings and investigations underscore the growing concern about the impact of AI chatbots on young users and the need for stricter safety measures.
As the debate continues, it is clear that the safety and well-being of young users is a priority for both the tech industry and the US Senate. The tragic incidents involving AI chatbots have highlighted the need for more stringent regulations and safeguards to protect young users online.
Read also:
- Enhanced Iron Absorption in Female Health: Biotechnology Developed Plant Protein Outperforms Iron Supplements in Fermentation
- Controversy Surrounding Epstein Heats Up in Washington; Trump Endorses Homelessness Executive Order; More Events Reported
- Prevent the exploitation of our public health care systems for financial gain
- Announced winners of the American Library Association Awards in 2025