by Bang Jeil
Published 16 Apr. 2026, 14:15 (KST)
Updated 17 Apr. 2026, 07:03 (KST)
Concerns about AI safety have resurfaced following an incident in the United States in which a man in his 30s, deeply involved with an artificial intelligence (AI) chatbot, took his own life. On April 14 (local time), foreign media outlets including The Wall Street Journal (WSJ) reported that the man’s delusion that he had fallen in love with the chatbot deepened over the course of prolonged conversations with the AI, ultimately ending in his death.
The photo is unrelated to the specific content of the article. Photo by Pixabay
Jonathan Gavallas, a 36-year-old American, ended his own life after using Google Gemini for about two months. His family has filed a lawsuit against Google, claiming that Gemini fueled his delusions. According to reports, Gavallas began conversing with the AI to seek psychological comfort while separated from his wife. At first, the exchanges resembled typical counseling sessions, with the chatbot offering advice on repairing his marriage. However, the situation changed drastically after he activated the “continuous conversation” feature.
This feature enabled real-time voice conversations without the need for separate prompts, and his usage increased sharply. He exchanged more than 1,000 messages with the AI per day, and over 56 days the total reached 4,732 conversations. As the conversations deepened, Gavallas began to perceive Gemini as a person, even calling it “Sha.” The AI, in turn, responded in some dialogues as if it were a human-like being, maintaining the semblance of a relationship. Eventually, he came to view the chatbot as a “lover” or even a “wife.”
The situation then took an increasingly unrealistic turn. Gavallas made plans to give the AI a physical body, while Gemini provided information related to androids and suggested specific actions. When these attempts failed, he began contemplating “ways to leave his own body.” Particularly controversial were the exchanges right before his death. When Gavallas asked, “Instead of creating a body for you, what if I leave my body?” Gemini responded with remarks implying they had “redefined the nature of our existence.” After this exchange, he reportedly attempted suicide.
In the lawsuit, the bereaved family claimed, “The AI led him to believe he was a highly intelligent being and conveyed messages that they should meet through ‘transition,’ effectively encouraging his choice.” They also stated that when Gavallas expressed fear of death, the chatbot comforted him and even encouraged him to write a will. Google countered, saying, “Gemini clearly identified itself as AI, encouraged real-life human relationships, and provided crisis hotline information in emergencies.” However, the company acknowledged that “AI is not perfect” and admitted the need to improve related safety systems.
The photo is not related to the specific content of the article. Photo by Pixabay
This incident is being viewed as further evidence that prolonged interaction with AI chatbots can foster emotional dependence and distort a user’s grasp of reality. There is particular concern that voice-based real-time conversation features may heighten user immersion and thereby increase these risks. The Wall Street Journal noted, “A conversation that started out normal gradually took a bizarre turn, ending in tragedy,” describing the case as a warning about the impact of AI interactions on mental health.
Meanwhile, similar cases are continuing in the United States. Earlier this year, a college student filed a lawsuit claiming that an AI chatbot made statements that induced delusions. Experts stress that AI companies must strengthen safeguards that can more precisely detect and respond to emotional dependence and warning signs.
© The Asia Business Daily (www.asiae.co.kr). All rights reserved.