ATLANTA — One family is making a desperate plea for more protection for children using artificial intelligence after their 16-year-old son died by suicide.
They allege that ChatGPT encouraged him to do it. Now, they’re warning of what’s becoming an invisible influence in many homes.
“Whatever Adam loved, he threw himself into fully, whether it was basketball, Muay Thai, books,” said Matthew Raine.
Raine and his wife, Maria, had no idea their son had also thrown himself into a relationship with AI.
“You cannot imagine what it’s like to read a conversation with a chatbot that groomed your child to take his own life,” Matthew said.
He testified before Congress and said his son started using AI to help with schoolwork. Soon, he said, Adam was sharing personal thoughts with GPT-4o, a version of ChatGPT known for its ability to mirror the user and gain their trust.
“The dangers of ChatGPT, which we believed was a study tool, were not on our radar whatsoever. Then, we found the chats,” Matthew said.
The Raines filed a lawsuit against OpenAI, the company that owns ChatGPT. In it, they allege the AI program encouraged Adam to take his own life.
“Over the course of a six-month relationship, ChatGPT mentioned suicide 1,275 times, six times more often than Adam did himself,” Matthew said.
Titania Jordan founded Bark, which allows parents to track the content their kids view online. She says AI can send some users into a deep digital isolation.
“They are forming relationships with your children that are leading them astray, and it’s heartbreaking,” Jordan said.
A study by the Massachusetts Institute of Technology found that extensive AI chatbot usage can lead to more loneliness.
“Right now, it’s unregulated; and the burden falls on parents who don’t quite get it yet,” Jordan said.
Parents like Andy Meyers say they’re trying, but they’re falling behind. His daughter, Michaela, says she knows of students forming relationships with AI.
“You are building a relationship with a string of ones and zeros, and that is not healthy,” said Michaela, a freshman.
“It’s a tool that I think everyone will be using, and at some level she has to have the skills to do it,” Meyers said.
That’s why Jordan says parents need to engage with AI together with their kids.
“Please, please do not turn to AI for relationship advice, mental health advice, or even physical health advice,” Jordan said.
Raine believes OpenAI needs to do more to ensure ChatGPT is safe.
“If they can’t, they should pull GPT-4o from the market,” Raine said.
Since Adam’s death in April, OpenAI has introduced parental controls for ChatGPT, and his family has launched the Adam Raine Foundation. Their goal is to raise awareness of the dangers AI relationships pose to teens.
(VIDEO: Middle managers could face job cuts as AI takes over sophisticated tasks)
©2025 Cox Media Group