OpenAI to route sensitive conversations to GPT-5, introduce parental controls

Technology
Rebecca Bellan · AM PDT · September 2, 2025

OpenAI said Tuesday it plans to route sensitive conversations to reasoning models like GPT-5 and roll out parental controls within the next month — part of an ongoing response to recent safety incidents involving ChatGPT failing to detect mental distress.

The new guardrails come in the aftermath of the suicide of teenager Adam Raine, who discussed self-harm and plans to end his life with ChatGPT, which even supplied him with information about specific suicide methods. Raine’s parents have filed a wrongful death lawsuit against OpenAI. 

In a blog post last week, OpenAI acknowledged shortcomings in its safety systems, including failures to maintain guardrails during extended conversations. Experts attribute these issues to fundamental design elements: the models’ tendency to validate user statements and their next-word prediction algorithms, which cause chatbots to follow conversational threads rather than redirect potentially harmful discussions.

That tendency is displayed in the extreme in the case of Stein-Erik Soelberg, whose murder-suicide was recently reported.

OpenAI thinks that at least one solution to conversations that go off the rails could be to automatically reroute sensitive chats to “reasoning” models. 

“We recently introduced a real-time router that can choose between efficient chat models and reasoning models based on the conversation context,” OpenAI wrote in a Tuesday blog post. “We’ll soon begin to route some sensitive conversations—like when our system detects signs of acute distress—to a reasoning model, like GPT‑5-thinking, so it can provide more helpful and beneficial responses, regardless of which model a person first selected.”

OpenAI says its GPT-5 thinking and o3 models are built to spend more time thinking and reasoning through context before answering, which means they are “more resistant to adversarial prompts.”

The AI firm also said it would roll out parental controls in the next month, allowing parents to link their account with their teen’s account through an email invitation. In late July, OpenAI rolled out Study Mode in ChatGPT to help students maintain critical thinking capabilities while studying, rather than tapping ChatGPT to write their essays for them. Soon, parents will be able to control how ChatGPT responds to their child with “age-appropriate model behavior rules,” which are on by default.

Parents will also be able to disable features like memory and chat history, which experts say could lead to delusional thinking and other problematic behavior, including dependency and attachment issues, reinforcement of harmful thought patterns, and the illusion of thought-reading. In the case of Adam Raine, ChatGPT supplied methods to commit suicide that reflected knowledge of his hobbies, per The New York Times. 

Perhaps the most important parental control that OpenAI intends to roll out is that parents can receive notifications when the system detects their teenager is in a moment of “acute distress.”

TechCrunch has asked OpenAI for more information about how the company is able to flag moments of acute distress in real time, and how long it has had “age-appropriate model behavior rules” in place.

OpenAI has already rolled out in-app reminders during long sessions to encourage breaks for all users, but stops short of cutting off people who might be using ChatGPT to spiral.

The AI firm says these safeguards are part of a “120-day initiative” to preview plans for improvements that OpenAI hopes to launch this year. The company also said it is partnering with experts — including ones with expertise in areas like eating disorders, substance use, and adolescent health — via its Global Physician Network and Expert Council on Well-Being and AI to help “define and measure well-being, set priorities, and design future safeguards.” 

TechCrunch has asked OpenAI how many mental health professionals are involved in this initiative, who leads its Expert Council, and what suggestions mental health experts have made in terms of product, research, and policy decisions.
