ChatGPT Can Now Reach Out to a 'Trusted Contact' After Conversations Concerning Self-Harm
Despite expert advice against relying on chatbots for mental health questions and concerns, people are turning to AI programs like ChatGPT for help. The company has faced criticism for how its products have handled certain mental health issues—including episodes where users died by suicide following conversations with ChatGPT. As part of a campaign to address these problems, OpenAI is now rolling out a voluntary safety check system for users who might be concerned about their thoughts.
As reported by Mashable, OpenAI just launched "Trusted Contact," a new feature that lets you choose a trusted person in your life to connect to your ChatGPT account. The idea isn't to share your conversations or collaborate on projects within ChatGPT; rather, if the chatbot thinks your personal chats are veering in a concerning direction with regard to self-harm, ChatGPT will reach out to your Trusted Contact, letting them know to check in on you.
How ChatGPT's Trusted Contact works
To set up the feature, choose someone in your life who is 18 years old or older. (The contact must be 19 or older in South Korea.) ChatGPT will send that person an invitation to become your Trusted Contact: They have one week to respond before the invite expires. Of course, they can also decline the invitation if they don't want to participate.
If the contact agrees, the feature kicks in. In the future, if OpenAI's automated system thinks you're discussing harming yourself "in a way that indicates a serious safety concern," ChatGPT will let you know that it may reach out to your Trusted Contact, but it will also encourage you to reach out to that contact yourself, offering "conversation starters" to break the ice.
While that's happening, a team of "specially trained people" at OpenAI reviews the situation. (It's not all automated, it seems.) If this team concludes that the concern is serious, ChatGPT will alert your Trusted Contact via email, text, or an in-app notification in ChatGPT if they have an account. OpenAI says the notification itself is quite limited: it only shares general information about the self-harm concern and advises the contact to reach out to you. It won't include any chat transcripts or summaries, so your general privacy should be preserved, all things considered.
OpenAI says it's working to review safety notifications in under one hour, and that it developed the feature with guidance from clinicians, researchers, and mental health and suicide prevention organizations. The feature is, of course, entirely voluntary, so users will need to enroll themselves (and a contact) if they feel it would help them. As long as they do, however, this could be a helpful way for friends and family to check in on people when they're struggling, assuming they're sharing those thoughts with ChatGPT.
Disclosure: Ziff Davis, Lifehacker's parent company, in April 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.