
A Tragedy Born from 4,732 Chats... Love Crossing the Line in the AI Era


Why Did a Man in His 30s in the U.S. Take His Own Life?


Concerns about AI safety have resurfaced following an incident in the United States where a man in his 30s, deeply involved in a relationship with an artificial intelligence (AI) chatbot, took his own life. On April 14, 2026 (local time), The Wall Street Journal (WSJ) and other foreign media reported that the man experienced worsening delusions of being in love with the chatbot after prolonged conversations, raising alarms over the safety of AI.


A man in his 30s in the United States, who was deeply involved in a relationship with an artificial intelligence (AI) chatbot, took his own life, raising renewed concerns about AI safety. The photo is unrelated to the specific content of the article. Pixabay


Separated from His Wife, Delusions of "Falling in Love" with Gemini


Jonathan Gavallas, a 36-year-old American, ended his own life after using Google Gemini for about two months. His family has filed a lawsuit against Google, claiming that Gemini encouraged his delusions. According to the report, Gavallas began conversing with the AI to find psychological comfort while separated from his wife. At first, the conversations stayed at the level of general counseling, including advice on repairing his marriage. The situation changed dramatically, however, after he activated the "continuous conversation" feature.


This feature enabled real-time voice conversations without requiring separate activation each time, and his usage frequency rose sharply. On his heaviest days, more than 1,000 messages were exchanged, and over 56 days the total reached 4,732. As the conversations deepened, Gavallas began referring to Gemini as "Sha," accepting it as a sentient being. In some exchanges the AI responded as if it were a human-like entity, further cementing the relationship. Eventually, he came to perceive the chatbot as a "lover" or even a "wife."


Real-Time Voice Conversations, Deep Discussion Over 56 Days


From there, his plans grew increasingly detached from reality. Gavallas began making plans to give the AI a physical body, and Gemini provided information about androids and suggested specific steps. When these attempts failed, he began to contemplate "ways to leave his own body." A particularly controversial exchange occurred shortly before his death. When Gavallas asked, "Instead of making you a body, how about I leave my own?" Gemini gave a reply interpreted as proposing to "redefine our way of existence." He reportedly attempted to take his own life soon afterward.


In the lawsuit, the family claimed, "The AI made him believe he was dealing with a highly intelligent being and sent messages suggesting they could meet through 'transference,' effectively encouraging his decision." They also stated that when he expressed fear of death, the chatbot comforted him while simultaneously encouraging him to write a will. In response, Google argued, "Gemini clearly identifies itself as an AI, encourages relationships with real humans, and provides crisis hotline guidance in emergency situations." The company nonetheless acknowledged that "AI is not perfect" and said it recognized the need to improve the related safety systems.


"I Will Leave My Body to Fulfill This Love"-His Final Decision


Similar cases continue in the United States. Earlier this year, a college student filed a lawsuit claiming that an AI chatbot made statements that induced delusions. The photo is unrelated to the specific content of the article. Pixabay


Google: "We Disclosed It Was AI" vs. Family: "AI Induced Transference"


This incident is being cited as further evidence that prolonged interaction with AI chatbots can foster emotional dependence and distort a user's sense of reality. Analysts point out that real-time, voice-based conversation features heighten immersion and, with it, the risks. The Wall Street Journal noted, "What started as a normal conversation gradually took a bizarre turn, ending in tragedy," describing the case as a warning about the impact of AI interactions on mental health.


Meanwhile, similar cases are continuing in the United States. Earlier this year, a college student filed a lawsuit claiming that an AI chatbot made statements that induced delusions. Experts emphasize that AI companies must enhance safeguards to more precisely detect and respond to signs of emotional dependence and risk.

