OpenAI, an artificial intelligence research organization, announced updates aimed at enhancing safety for teenage users.
The company explained that ChatGPT interactions with a 15-year-old should differ from those with an adult, and it is developing a long-term system to assess whether a user is over or under 18 so that the experience can be tailored accordingly.
Users identified as under 18 will be directed to a ChatGPT experience governed by age-appropriate policies, including blocks on graphic sexual content and, in rare cases of acute distress, potential involvement of law enforcement to ensure safety.
In situations where age cannot be confidently determined or information is incomplete, the platform defaults to the under-18 experience, while providing adults with methods to verify their age to access adult features.
In the interim, parental controls are expected to be the most effective method for families to manage how ChatGPT is used at home.
These controls, set to be available by the end of the month, will allow parents to link their account with a teen’s account, provided the teen is at least 13, using a simple email invitation.
Parents will be able to guide how ChatGPT responds to their teen in accordance with teen-specific behavior rules, manage which features are enabled or disabled—including memory and chat history—and receive notifications if the system detects that their teen is experiencing acute distress.
In rare emergencies where parents cannot be reached, law enforcement may be involved, a feature designed with expert guidance to preserve trust between parents and teens. Parents will also be able to set blackout hours during which their teen cannot access ChatGPT. These parental controls complement existing platform features, such as in-app reminders encouraging breaks during extended sessions.
This summer has seen a series of concerning reports regarding AI’s role in mental health incidents, coinciding with increasing regulatory scrutiny and ongoing legal actions.
Lawsuits have been filed against Character.AI and OpenAI, claiming that their platforms contributed to teen suicides and self-harm by not providing adequate support during critical moments. In response, the US Federal Trade Commission (FTC) has begun investigating how major technology firms design and oversee AI chatbots marketed as companions, particularly for younger users.
Despite these efforts, the widespread availability of various chatbot options, including open-source and private models, means that addressing these risks remains a complex and persistent challenge.
The post OpenAI Introduces Enhanced Teen Safety Measures And Parental Controls For ChatGPT appeared first on Metaverse Post.