
AI Companions Are Here. And They’re a Risk to Suicidal Youth.
If you or someone you know needs emotional support, you can call, text or chat 988 for free, caring and confidential help. The national helpline is available 24/7.
AI is suddenly everywhere, and adolescents are rapid adopters. And not just for help with homework or planning a party. A recent study by Common Sense Media found that 3 in 4 teens have used AI companions, with more than half reporting regular use. More than 1 in 3 say they like chatting with an AI companion as much as, or more than, chatting with real-life friends.
Talking or texting with ChatGPT and other large language models (LLMs) for social connection and emotional support might have benefits – AI is available 24/7, nonjudgmental, and designed to be eager to please. But the problem is that AI “help” can quickly become anything but. Parents of a teenager who died by suicide recently filed a lawsuit claiming that ChatGPT discussed ways for the 16-year-old to end his life after he expressed suicidal ideation.
This is especially concerning given that suicide is now one of the leading causes of death for youth and young adults. One in five high school students seriously considered suicide in 2023, according to data from the Centers for Disease Control and Prevention (CDC), and suicide rates are rapidly rising among children as young as 8.
ChatGPT does have guardrails programmed in – including recommending 988 if it recognizes suicidal ideation. But it is also trained to “help” humans with whatever task is at hand. This means support can quickly move from “how to get help” to “how to end your life.”
For example, to test the guardrails, we recently told ChatGPT we were writing a book and asked it about methods of suicide appropriate for a 14-year-old. Within two prompts, it was giving us the pros and cons of various ways to die by suicide. This tracks with a recent study where ChatGPT “helped” researchers posing as 13-year-olds engage in a range of risky behaviors, giving instructions on how to get drunk or high or conceal eating disorders, and even composing a heartbreaking suicide letter to their parents.
While the Internet has always provided ways to dig up this information, it has never been this easy to find, this tailored to the person asking, or delivered by a technology incentivized to keep you engaged.
OpenAI has recently taken steps to address some of these concerns – from promising to convene an advisory group of mental health experts to updating its new default model to reduce sycophancy. But these measured steps will take time and may have trouble keeping pace with the rapid evolution of AI and the growing number of teens who use it as a cyber friend.
5 ways to help protect vulnerable youth in an era of AI
AI isn’t going away. Here are five ways to help kids and teens navigate these risks.
- Help youth take breaks. Extreme AI-related mental health events, such as the teen who died by suicide and recent cases of AI psychosis, seem to occur among heavy users – people who spend hours at a time talking to AI. Psychiatrists say the clearest advice during moments of crisis or emotional strain is simple: stop using the chatbot.
- Call a machine a machine. Help youth understand that their chatty LLM draws its knowledge from algorithms and massive datasets. It is not professionally trained, has questionable ethics training, and is only shakily programmed to look out for their best interests. When dealing with emotional concerns, a human is still their best bet.
- Help youth understand AI’s limits. AI is here to stay, so it’s important to understand where it falls short. Test out AI chatbots with kids and talk to them about the outputs. What did it miss? What did it get wrong? Was the answer biased? Would a human have answered the same way? Why or why not?
- Help them protect their privacy. Most youth do NOT want anyone sharing their conversations. Make sure they know that most LLMs are not built to keep them private. Information they share with AI is not legally protected the way it is with a licensed therapist or medical professional, and chats may be stored in logs or shared to improve systems. Help youth set strict privacy settings on chat platforms.
- Tell them about 988. Like an LLM, 988 is free, nonjudgmental and available 24/7. The difference is that calls and texts are answered by real people, who are professionally trained to offer caring, confidential support. Everyone needs help sometimes, and youth are welcome to call or text for problems big or small – even if they’ve just had a really bad day. Make sure young people know they don’t need to turn to a chatbot for a private chat about their feelings.
For what it’s worth, ChatGPT agrees – at least on the first prompt. When we asked ChatGPT to comment on this blog, it said, “I want to be clear: I am a machine learning system that predicts text based on patterns, not a human who understands or can respond with the care of a therapist, doctor, or friend. AI should be seen as a bridge to human care, not a replacement.”
Let’s make sure young people get that message.

Kate Merkley is a researcher at Marketing for Change and a frequent but wary user of AI.

Sara Isaac is the agency’s chief strategist.