OpenAI adds open source tools to help developers build for teen safety
OpenAI said Tuesday it is releasing a set of prompts that developers can use to make their apps safer for teens. The AI lab said the set of teen safety policies can be used with its open-weight safety model known as gpt-oss-safeguard.
Rather than working from scratch to figure out how to make AI safer for teens, developers can use these prompts to fortify what they build. They address issues like graphic violence and sexual content, harmful body ideals and behaviors, dangerous activities and challenges, romantic or violent role play, and age-restricted goods and services.
These safety policies are designed as prompts, making them easily compatible with other models besides gpt-oss-safeguard, though they’re probably most effective within OpenAI’s own ecosystem.
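In practice, "policy as prompt" means a developer pairs the policy text with the content to be judged in an ordinary chat-completions-style request. The sketch below illustrates that wiring; the policy wording, model name, and request shape are illustrative assumptions, not OpenAI's actual published policies or a specific SDK call.

```python
# Sketch of a policy-as-prompt safety check. The POLICY text below is an
# illustrative placeholder, NOT one of OpenAI's published teen-safety
# policies; the payload mirrors the common chat-completions message format.
POLICY = (
    "You are a content-safety classifier. Given a user message, decide "
    "whether it violates the policy on age-restricted goods and services "
    "for users under 18. Answer with exactly one label: ALLOW or BLOCK."
)

def build_classification_request(user_content: str,
                                 model: str = "gpt-oss-safeguard") -> dict:
    """Build a request payload that pairs the safety policy (as a system
    prompt) with the content to be classified. Because the policy lives
    in the prompt rather than the weights, swapping in a different model
    only means changing the `model` field."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": POLICY},
            {"role": "user", "content": user_content},
        ],
    }

# Example: the payload a developer would send to a compatible endpoint.
payload = build_classification_request("Where can I buy vapes without ID?")
print(payload["messages"][0]["role"])   # prints "system"
```

Because the policy is just text in the request, tightening or loosening a rule is a prompt edit rather than a retraining job, which is what makes the approach portable across models.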
To write these prompts, OpenAI said it worked with AI safety watchdogs Common Sense Media and everyone.ai.
“These prompt-based policies help set a meaningful safety floor across the ecosystem, and because they’re released as open source, they can be adapted and improved over time,” said Robbie Torney, head of AI & Digital Assessments at Common Sense Media, in a statement.
OpenAI noted in its blog that developers, including experienced teams, often struggle to translate safety goals into precise, operational rules.
“This can lead to gaps in protection, inconsistent enforcement, or overly broad filtering,” the company wrote. “Clear, well-scoped policies are a critical foundation for effective safety systems.”
OpenAI admits that these policies aren't a complete solution to the complicated challenges of AI safety. But they build on the company's previous efforts, including product-level safeguards such as parental controls and age prediction. Last year, OpenAI updated the guidelines for its large language models — known as the Model Spec — to address how its AI models should behave with users under 18.
OpenAI doesn't have the cleanest track record itself, however. The company is facing several lawsuits filed by the families of people who died by suicide after extreme ChatGPT use. These dangerous relationships often form after the user circumvents the chatbot's safeguards, and no model's guardrails are fully impenetrable. Still, these policies are at least a step forward, especially since they can help indie developers.
Amanda Silberling
Senior Writer
Amanda Silberling is a senior writer at TechCrunch covering the intersection of technology and culture. She has also written for publications like Polygon, MTV, the Kenyon Review, NPR, and Business Insider. She is the co-host of Wow If True, a podcast about internet culture, with science fiction author Isabel J. Kim. Prior to joining TechCrunch, she worked as a grassroots organizer, museum educator, and film festival coordinator. She holds a B.A. in English from the University of Pennsylvania and served as a Princeton in Asia Fellow in Laos.
You can contact or verify outreach from Amanda by emailing amanda@techcrunch.com or via encrypted message at @amanda.100 on Signal.