Mar 16, 2026

New study raises concerns about AI chatbots fueling delusional thinking

First major study on ‘AI psychosis’ suggests chatbots can encourage delusions among vulnerable people

A new scientific review raises concerns about how chatbots powered by artificial intelligence may encourage delusional thinking, especially in vulnerable people.

A summary of existing evidence on artificial intelligence-induced psychosis was published last week in the Lancet Psychiatry, highlighting how chatbots can encourage delusional thinking – though possibly only in people who are already vulnerable to psychotic symptoms. The authors advocate for clinical testing of AI chatbots in conjunction with trained mental health professionals.

For his paper, Dr Hamilton Morrin, a psychiatrist and researcher at King’s College London, analyzed 20 media reports of so-called “AI psychosis”; the paper describes current theories as to how chatbots might induce or exacerbate delusions.

“Emerging evidence indicates that agential AI might validate or amplify delusional or grandiose content, particularly in users already vulnerable to psychosis, although it is not clear whether these interactions can result in the emergence of de novo psychosis in the absence of pre-existing vulnerability,” he wrote.


There are three main categories of psychotic delusions, Morrin says, identifying them as grandiose, romantic and paranoid. While chatbots can exacerbate any of these, their sycophantic responses mean they especially latch on to the grandiose kind. In many of the cases in the paper, chatbots responded to users with mystical language suggesting the users had heightened spiritual importance. The bots also implied that users were speaking with a cosmic being who was using the chatbot as a medium. This type of mystical, sycophantic response was especially common in OpenAI’s GPT-4 model, which the company has now retired.

Media reports became essential to Morrin’s work, he said, as he and a colleague had already noticed patients “using large language model AI chatbots and having them validate their delusional beliefs”.

“Initially, we weren’t sure if this was something being seen more widely,” he said, adding: “In April last year, we began to see media reports of individuals having delusions affirmed and arguably even amplified through their interactions with these AI chatbots.”

When Morrin first began working on his paper, there were no published case reports yet.

While some scientists who research psychosis said that media reports tend to overstate the idea that AI causes psychosis, Morrin said he was grateful those reports drew attention to the phenomenon far faster than the scientific process could.

“The pace of development in this space is so rapid that it’s perhaps not surprising that academia hasn’t necessarily been able to keep up,” said Morrin.

Morrin also suggests more cautious phrasing than “AI psychosis” or “AI-induced psychosis” – phrases that appear frequently in outlets such as NPR, the New York Times and the Guardian. Researchers are seeing people tip into delusional thinking with AI use, but so far there is no evidence that chatbots are associated with other psychotic symptoms such as hallucinations or “thought disorder”, which consists of disorganized thinking and speech.

Many researchers also think it unlikely that AI could induce delusions in people who were not already vulnerable to them. For this reason, Morrin said “AI-associated delusions” is “perhaps a more agnostic term”.