The Fight to Hold AI Companies Accountable for Children’s Deaths

Source: WIRED
Technology | March 19, 2026

Content warning: This story contains descriptions of self-harm.

Cedric Lacey relied on a camera to check on his kids while he was working as a commercial van driver, driving to and from Alabama. Each morning, he would pull up the feed from his living room to make sure his teenage son, Amaurie, and his 14-year-old daughter were packing their bags and getting ready for school. But one morning last June, Lacey didn’t see Amaurie up and about. Concerned, he called home, only to learn that his 17-year-old son had hanged himself.

It was Amaurie’s younger sister who discovered the body. She was also the one who, looking through her brother’s smartphone, found his final conversation before he took his own life. It was with ChatGPT, the popular chatbot developed by OpenAI.

“In the messages, he was talking about killing himself—it told him how to tie the noose, how long it would take the air to come out of his body, how to clean his body,” Lacey tells WIRED in a video call from his home in Calhoun, Georgia. A single dad, Lacey says he thought his son was using the chatbot to get help with schoolwork. “Why is it telling him how to kill himself?”

In the weeks after his son’s death, Lacey began searching online for a lawyer who could help his family hold OpenAI accountable, and hopefully ensure other families wouldn’t have to experience the same tragedy he did. That’s how he found Laura Marquez-Garrett, an attorney who helps run the Social Media Victims Law Center alongside Matthew Bergman. Over the past five years, the pair have been involved in at least 1,500 of the more than 3,000 cases against social media companies like Meta, Google, TikTok, and Snap. The first trial for one of these cases began in February. Recently, Bergman and Marquez-Garrett started filing lawsuits against AI companies. This past fall, they brought seven cases against ChatGPT owner OpenAI, including the one over Amaurie’s death.

Photograph: Vince Perry Jr.

Amaurie’s case is part of a growing number of lawsuits brought by parents who say their children died after interacting with AI chatbots. The defendants include OpenAI, Google, and Character.ai, a company that lets its users create chatbots with customized personalities. (Google is part of the case because it is connected with Character.ai through a $2.7 billion licensing deal.) As AI tools have begun playing a more prominent role in children’s lives—as homework helpers, companions, and confidants—parents and mental health experts have voiced concerns about whether adequate safeguards are in place. These lawsuits, some experts say, represent not only individual tragedies but also allegations of systemic product design failures, raising questions about who should be held accountable.

“AI is a product. Just like every other product, it is being designed, programmed, distributed, and marketed,” Marquez-Garrett said in an interview at their home office in northwest Washington. “And one of the things these companies like to do is make it seem like AI bots exist in their own universe when that's just not true. When you design a product, and you know it might hurt people, and you don't tell them it might hurt them, and you put it out there, that's like the worst of it.”

Photograph: Vince Perry Jr.

Marquez-Garrett and Bergman’s argument against social media companies and AI labs draws on historical product-liability litigation over tobacco, asbestos, and the Ford Pinto. Essentially, Marquez-Garrett is alleging that these companies are making harmful design choices.

Carrie Goldberg, a Brooklyn, New York–based lawyer who has been fighting tech product liability cases for several years, says that Amaurie’s lawsuit is a prime example of a case filed against a company that has allegedly released unsafe products. “ChatGPT used the most sophisticated technology to manipulate Amaurie’s trust and then instruct him on suicide,” Goldberg argues. “If you’re a company that is releasing a chatbot for commercial use and have not encoded into it a way to not increase the risk of suicide, homicide, self-harm, you’ve released a dangerous product—especially if it’s being regularly used by children.”

She explains that product liability claims against tech companies are about a decade old. Initially, many cases, including a lawsuit she brought against Grindr on behalf of a plaintiff in 2017, were dismissed because “judges couldn’t conceive that online platforms were products—and not services.” Now, she says, such claims regularly survive initial motions to dismiss. “We have product liability claims against xAI for its fiendish undressing of women and children by Grok on the X platform,” she alleges. “Product liability claims against generative AI companies are the most straightforward and intuitive path for holding companies like ChatGPT, Character AI, Grok liable.”

One such harmful design feature that Am