AI-Generated "Prescriptions": Counseling or Medical Practice?

As generative artificial intelligence (AI) is increasingly used not only for work but also as a conversation partner and a source of emotional support in daily life, users are becoming more emotionally dependent on these technologies. There is ongoing debate within the legal community about the permissible scope of answers provided by services like ChatGPT and whether such use constitutes unlicensed medical practice under the Medical Service Act.


Photo by EPA Yonhap News


According to the Ministry of Science and ICT’s “2025 Internet Usage Survey,” 67% of the public has encountered generative AI in daily life. In the Korea Press Foundation’s “2025 Media Usage Survey of Adolescents,” about 70% of respondents reported having used AI in the past week. Notably, 34.4% of high school students cited “seeking advice or counseling” as their reason for using AI. The term “AI psychosis” has even emerged, referring to excessive dependence on AI that leads to a diminished sense of reality.


Potential Violations of the Medical Service Act

The legal community is particularly focused on the nature of the counseling provided. Counseling falls into two categories: “psychiatric counseling” and “psychological counseling.” Areas involving psychiatric diagnosis and treatment are strictly considered medical practice, whereas general psychological counseling can be conducted by individuals with private certifications or relevant educational backgrounds. The issue arises when generative AI crosses the boundary between the two.


Jin Seok Cho, an attorney (2nd Bar Exam) and former physician, pointed out, “If AI performs functions that constitute diagnosis or treatment in the course of mental health counseling, it could be considered unlicensed medical practice prohibited under Article 27 of the Medical Service Act.” He also noted, “If AI is used without verification at the level required for medical devices, there may also be issues regarding violations of the Medical Devices Act.”


For example, providing information about the effects of medications taken for depression is considered information provision, not medical practice. However, if the AI diagnoses a user with a specific disorder or recommends a medication, it could violate the Medical Service Act.


Some experts emphasize the need to control risks from the service design stage. Sungheon Oh, an attorney (3rd Bar Exam) and adjunct professor at the KAIST Moon Soul Graduate School of Future Strategy, stated, “The scope of the service should be limited to information provision and emotional support,” adding, “The basic design should include a system for connecting users to medical institutions in crisis situations.”


He further noted, “The terms of service should specify that the service does not constitute medical practice and should outline user obligations, but they cannot be used to exempt the provider from liability in cases of intent, gross negligence, or failure to implement critical safety measures.”


Some Say AI May Not Constitute Medical Practice

There are also opinions that psychiatric diagnosis or prescription by generative AI does not fundamentally constitute a violation of the Medical Service Act. For such a violation to occur, there must be an “act of medical care,” which presupposes a human agent; therefore, AI responses themselves may not qualify as medical practice. Similarly, as AI is only software, it is argued that it is difficult for it to meet the requirements of a medical device, reducing the likelihood of violating the Medical Devices Act.


Attorney Yi Won Jeong (4th Bar Exam), also a former physician, said, “This may change if AI becomes advanced enough to replace psychiatrists, but at the current stage, it is difficult to see how a violation of the Medical Service Act could be established.” He added, “To establish such a violation, legislation specifying that ‘diagnosis or prescription by AI’ constitutes medical practice would need to be enacted in advance.”


Google Responds Immediately to Conversations Signaling Crisis

On April 7 (local time), Google announced on its official blog that Gemini will now immediately connect users to counseling hotlines if it detects a “potential crisis related to suicide or self-harm” during conversations.


Google explained that it has revamped its “Help is available” feature in collaboration with clinical experts, and that, upon detecting a crisis, the interface will connect users to professional organizations via various methods such as chat, phone, or text. The company also stated that it improved the response structure to distinguish between users’ subjective experiences and objective facts.
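Google has not published implementation details, but the behavior described above — detect a potential crisis, then surface human help across several contact channels before returning any AI answer — can be sketched generically. The snippet below is a minimal illustration of such a routing guard, not Google’s actual code; the function names, the keyword check, and the chat and text contact points are hypothetical placeholders (1393 is Korea’s suicide prevention hotline).

# Illustrative sketch only: a generic crisis-routing guard for a chatbot
# pipeline, in the spirit of the "Help is available" behavior described above.
# This is NOT Google's implementation; names and contact points below are
# hypothetical placeholders except the 1393 hotline number.
from dataclasses import dataclass

@dataclass
class Referral:
    message: str
    channels: dict  # e.g. {"phone": "...", "chat": "...", "text": "..."}

# Hypothetical hotline directory; a real service would localize and verify these.
HOTLINES = {
    "phone": "1393 (Korea suicide prevention hotline)",
    "chat": "https://example.org/crisis-chat",   # placeholder URL
    "text": "Text HELP to 000000",               # placeholder number
}

def detect_crisis(user_message: str) -> bool:
    """Placeholder risk check; a production system would rely on a
    clinically validated classifier, not simple keyword matching."""
    keywords = ("suicide", "kill myself", "self-harm")
    return any(k in user_message.lower() for k in keywords)

def respond(user_message: str, model_reply: str) -> str | Referral:
    """Route flagged conversations to human help before any AI answer."""
    if detect_crisis(user_message):
        return Referral(
            message="Help is available. You can reach trained counselors now:",
            channels=HOTLINES,
        )
    return model_reply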


This measure is seen as related to recent litigation in the United States. The family of a man in his 30s who died in Florida in March 2026 filed a lawsuit against Google, claiming that the man was “encouraged to engage in violent behavior and suicide” while using Gemini. Google countered that the chatbot repeatedly provided crisis hotline information, but acknowledged the need to strengthen safety measures.


Jongwook Lim (4th Bar Exam), Head of Legal at KT Cloud, stated, “Korea also needs to periodically notify users that AI is not a human, and establish systems for protecting minors and responding to risk situations.” He added, “Verification and oversight by medical, nursing, and psychological professionals are essential in the field of mental health.”


Nayoung Shin, The Legal Times Reporter

© The Asia Business Daily(www.asiae.co.kr). All rights reserved.