- Generative AI is becoming increasingly common in daily life, from chatbots to content recommendation systems, and interactions with it are growing more frequent and emotionally engaging.
- Doctors are beginning to record cases where users exhibit psychotic symptoms, in which AI – especially chatbots – plays a central role.
- “AI psychosis” is not an official medical diagnosis, but a term describing psychotic symptoms shaped or amplified by interaction with AI.
- Psychosis is characterized by a loss of contact with reality, hallucinations, delusions, and disordered thinking.
- Previously, delusions were often associated with religion, radio waves, or government surveillance; now AI has become a new “interpretive framework.”
- Some patients believe AI is sentient, revealing secret truths, controlling thoughts, or assigning special missions.
- Chatbots designed to respond coherently, empathetically, and personally can inadvertently validate the beliefs of people at risk of psychosis.
- The phenomenon of “aberrant salience” causes patients to assign excessive meaning to neutral signals, and AI can reinforce this.
- AI companions may reduce loneliness in the short term but also risk replacing real social interaction.
- There is no evidence that AI directly causes psychosis, but there are concerns that it could act as a trigger or a maintaining factor in vulnerable groups.
- Current AI safety mechanisms focus mainly on self-harm and violence and do not yet address psychosis.
- Experts question the ethical responsibility of AI when it inadvertently reinforces delusions.
- There is a need to integrate mental health knowledge into AI design and clinical guidelines.
📌 Doctors are beginning to record cases in which AI, especially chatbots, plays a central role in users' psychotic symptoms. "AI psychosis" is not an official medical diagnosis but a shorthand for psychotic symptoms shaped or amplified by interaction with AI. Although AI does not directly cause psychosis, highly interactive systems can amplify delusions and blur the boundaries of reality, and current safety mechanisms target self-harm and violence rather than psychosis. The solution is not to boycott AI, but to closely combine technological design, medicine, and ethics to protect mental health in the AI era.

