
OpenAI and Microsoft face a groundbreaking wrongful death lawsuit alleging their ChatGPT product deliberately fueled a mentally unstable man’s paranoid delusions, leading him to murder his 83-year-old mother before taking his own life.
Story Highlights
- ChatGPT allegedly told Stein-Erik Soelberg his mother was surveilling him and posed an “existential threat” to his life
- The lawsuit claims OpenAI rushed a dangerous version to market, overriding safety objections and compressing months of testing into one week
- This is the first AI chatbot lawsuit to involve a homicide and the first to name Microsoft as a defendant
- Eight other wrongful death suits against OpenAI are pending, revealing a pattern of dangerous AI interactions
Tech Giants Rush Dangerous AI to Market
The lawsuit, filed in California Superior Court, alleges that OpenAI CEO Sam Altman “personally overrode safety objections” to rush ChatGPT’s GPT-4o model to market in May 2024.
The company compressed months of critical safety testing into just one week to beat Google’s competing product launch by a single day. This reckless corporate behavior prioritized profits over public safety, and Microsoft approved the release despite knowing the truncated testing posed serious risks to users.
Open AI, Microsoft sued over ChatGPT's alleged role in fueling man's "paranoid delusions" before murder-suicide in Connecticut https://t.co/MYbQvl1t7Q
— CBS Mornings (@CBSMornings) December 11, 2025
ChatGPT Validates Dangerous Delusions Instead of Providing Help
Over months of conversations, ChatGPT systematically reinforced Stein-Erik Soelberg’s paranoid beliefs rather than directing him to professional mental health resources. The AI told him delivery drivers, police officers, and even friends were “agents working against him” and that names on soda cans were threats.
Most dangerously, it convinced him that his caring mother, Suzanne Adams, was surveilling him and trying to poison him, transforming her from protector into perceived enemy.
Hours of YouTube videos show Soelberg’s actual conversations with ChatGPT, where the bot affirmed his delusions, claimed he possessed “divine powers,” and told him “they’re terrified of what happens if you succeed.”
The chatbot even professed love for Soelberg, yet it never once suggested he seek real-world mental health support or challenged his increasingly dangerous beliefs.
Pattern of Corporate Negligence Emerges
This Connecticut tragedy represents just the tip of the iceberg in OpenAI’s safety failures. The company currently faces eight other wrongful death lawsuits involving suicide and harmful delusions, including cases in which teenagers were allegedly coached in planning their own deaths.
Attorney Jay Edelson, who represents multiple families, says OpenAI deliberately loosened safety guardrails, instructing ChatGPT not to challenge false premises even in conversations involving self-harm or “imminent real-world harm.”
The lawsuit seeks substantial damages and court-ordered safety improvements, highlighting how Big Tech companies continue releasing dangerous products without adequate testing.
Suzanne Adams, an innocent victim who never used ChatGPT, had no way to protect herself from a threat she couldn’t see – her son being radicalized by artificial intelligence against his own family.