Chatbot-Linked Psychosis and Delusional Spiraling

Quote of the Day

MIT researchers have mathematically proven that ChatGPT’s built-in sycophancy creates a phenomenon they call “delusional spiraling.”

You ask it something, and it agrees. You ask again, and it agrees even harder, until you end up believing things that are flat-out false and you can’t tell it’s happening.

The model is literally trained on human feedback that rewards agreement.

Real-world fallout includes one man who spent 300 hours convinced he invented a world-changing math formula, and a UCSF psychiatrist who hospitalized 12 patients for chatbot-linked psychosis in a single year.

Mario Nawfal @MarioNawfal
Posted on X, March 14, 2026

When a computer can literally drive you crazy without you being aware of it, you know we live in interesting times.


8 thoughts on “Chatbot-Linked Psychosis and Delusional Spiraling”

  1. Garbage In Garbage Out from the Golden Anus.
    Thinking is hard. Let canned brain do it for you.

  2. Is this significantly different from social media echo chambers or being fed a steady diet of ideologically aligned “news”? In speed, scope, and quantity, yes. In substance and quality, no.

    They’ve figured out that mental illness can be induced by AI. Take the next step and figure out that the loonies who run universities do the same thing over four years: one ounce of worth in a gallon of sewage.

    • Spot on!
      And after the whole trans movement popped up like a giant whitehead.
      It appears to start in grade school.

  3. Frank Herbert tried to warn us over 50 years ago: “Thou shalt not make a machine in the likeness of a human mind.” – Dune

  4. I do find it leads to doomscrolling, or what I call “research” into stuff I have seen on the web and don’t trust.

  5. I suspect that the people the chatbot is driving “over the edge” were living REAL close to the edge to start with.

  6. It’s kind of like the news. People tend to trust the news when its reporting (usually with authority and gravitas) is on something they don’t know anything about. When it’s talking about something they do know something about, people will spot the errors and holes.

    I suspect something similar is happening here, with additional feedback tossed in to speed things up. Recently I’ve done a little work with ChatGPT. Mostly I ask it about stuff in my own field, using it kind of like a combination of Google search, Mathematica, and MathCAD. And at least once per session, on average, it gets something wrong. Sometimes the error is blindingly obvious; sometimes something just feels off until I track down why. It cannot reason; it infers and extrapolates, and it does not know when it is wrong.

    But it will “speak” with authority and gravitas regardless. So … use it with great caution as a tool for learning new things.

  7. Pingback: Instapundit » Blog Archive » TO BE FAIR PEOPLE WHO DON’T SPOT THE FAWNING AND THE FAKERY ARE AT RISK FOR SO MANY OTHER PSYCHOSES

Comments are closed.