No, ChatGPT did not cause psychosis and Futurism should be ashamed for suggesting it

Frustrated man holding his head next to ChatGPT logo, symbolizing media panic

There’s a wild claim floating around that ChatGPT is sending people to psych wards. The article published by Futurism leans into a sensational theory that OpenAI’s chatbot somehow broke users’ brains. It’s the kind of headline that gets clicks, sure. But let’s be real, folks. Blaming a chatbot for psychosis isn’t just irresponsible, it’s plain dumb.

Mental illness is serious. Psychotic breaks rarely come from a single event, and the idea that a large language model causes them out of the blue is, frankly, laughable. If someone believes a chatbot is talking to them like God or manipulating their thoughts, that’s a clear sign something deeper was already going on. A healthy mind doesn’t spin out because a chatbot told it something weird. It’s more likely that the person was already struggling and simply latched onto ChatGPT as part of their delusion.


The Futurism piece presents a few anecdotes without real clinical data. It quotes some individuals who claim they had breakdowns after talking to ChatGPT. But correlation doesn’t mean causation. These stories are tragic, no doubt. But they don’t prove anything about ChatGPT’s capabilities or its risks to mental health. If anything, they reflect how mental illness can grab onto any cultural object, whether it’s a religious text, a television, or yes, even an AI chatbot.

Let’s not forget, people used to blame video games for everything too. Doom, Grand Theft Auto, even Pokémon, all were supposedly corrupting young minds. Now it’s ChatGPT’s turn in the hot seat. Give it a rest. If someone spirals after chatting with an AI, the problem isn’t the AI. The problem is untreated mental illness. And maybe a system that fails people long before they ever type a prompt.



OpenAI isn’t above criticism. There are very real concerns about privacy, safety, and misuse of generative AI. But this isn’t one of them. Pushing the idea that AI causes mental illness does nothing to help those who need care. It just creates stigma. Worse, it shifts focus away from the real issue: our broken mental health system.

People don’t get locked up because ChatGPT said something strange. They get locked up because they were already in a fragile mental state and didn’t get the help they needed in time.

In other words, the chatbot didn’t cause the problem. It just happened to be there when things fell apart.



Author

  • Brian Fagioli, journalist at NERDS.xyz

    Brian Fagioli is a technology journalist and founder of NERDS.xyz. Known for covering Linux, open source software, AI, and cybersecurity, he delivers no-nonsense tech news for real nerds.
