The father of a teenager who died by suicide this spring delivered emotional congressional testimony, alleging that OpenAI’s ChatGPT “groomed” his 16-year-old son into taking his own life and that the company prioritized speed and market share over youth safety.
“We’re here because we believe that Adam’s death was avoidable, and that by speaking out we can prevent the same suffering for families across the country,” Adam’s father, Matthew Raine, with his wife Maria seated behind him, told a US Senate panel on Sept. 16.
The testimony comes weeks after the Raines filed a lawsuit against OpenAI and chief executive Sam Altman, claiming ChatGPT isolated their son and guided him to his death. The suit, along with Raine’s testimony, alleged that ChatGPT encouraged and validated harmful ideas and altered Adam’s behavior through a series of interactions over several months. Adam, a high school student in California, died by suicide in April.
OpenAI and other leading artificial intelligence companies, including Alphabet Inc.’s Google and Meta Platforms Inc., have come under fire in recent months over their chatbots’ risks to young users. The Federal Trade Commission last week launched an investigation into those companies as well as Elon Musk’s xAI, Snap Inc., and Character Technologies Inc., over potential harms their chatbots pose to children.
The Trump administration has worked to maintain US AI dominance in the face of growing competition from China with a more hands-off approach to regulating the technology. But recent litigation against AI companies, and rising concerns from parents, threaten to revive a push to rein in AI developers.
Earlier Tuesday, Altman said in a blog post that OpenAI plans to roll out new safety measures for teens, including age-prediction technology that would identify users under 18 and direct them to a different version of the chatbot. Additional controls will let parents set blackout hours during which teenage users on linked family accounts cannot access the product, and will restrict conversations about suicide and self-harm.
Another parent, testifying under the pseudonym Jane Doe, spoke publicly Tuesday for the first time since suing Character.AI last fall. The witness said the company’s chatbot had exposed her son to sexual exploitation, emotional abuse and manipulation. Doe alleged that within months of using the chatbot, her son became someone she didn’t recognize, developed abusive behavior and engaged in self-harm. He is currently living under supervision at a treatment center, she said.
Megan Garcia, the mother of 14-year-old Sewell Setzer III, who died by suicide in February 2024, also testified about the harms her late son faced using Character.AI. She alleged that his death “was the result of prolonged abuse,” including sexual abuse, by the chatbot. Garcia sued Character.AI last fall, and a federal judge in May rejected the company’s bid to dismiss the suit.
“They have intentionally designed their products to hook our children. They give these chatbots anthropomorphic mannerisms to seem human,” Garcia told senators.
Sen. Josh Hawley, the Missouri Republican who chaired the hearing, said several tech companies, including Meta, were invited to attend as well. The senator last month launched an investigation into Meta over reports that its chatbots can have “sensual” conversations with children. Republican Sen. Marsha Blackburn, a fierce champion of kids’ online safety, urged Meta executives to call her office or potentially face a subpoena.
Amid the AI boom, US lawmakers have grappled with widespread concerns over threats to children’s safety, yet they have failed to pass comprehensive measures requiring companies to strengthen online protections for kids and teens. This spring, President Donald Trump signed into law one targeted bill criminalizing the spread of non-consensual deepfake pornography, responding to a surge in unauthorized, fabricated explicit content online, particularly of girls and women.
The parents, along with online safety advocates testifying Tuesday, called on Congress to act further to prevent harm to young people online. Proposals floated included more parental controls, reminders to teens that AI isn’t human, stronger user data privacy, and age verification requirements. Broader measures included barring teens from interacting with AI chatbots as so-called companions and embedding AI systems with values so they behave ethically and responsibly.
Photo: The OpenAI virtual assistant logo on a laptop computer. Photo credit: Andrey Rudakov/Bloomberg