Following revelations that police used ChatGPT conversation history as evidence in the 'motel serial death' case, the legal community is paying close attention to the extent to which conversations with AI can be admitted as evidence of a crime. Whether questions posed to AI can prove a user's intent is expected to become a key issue in court going forward.


The Asia Business Daily DB


Questions About Murder Posed to ChatGPT

On February 12, the Gangbuk Police Station in Seoul referred a woman in her 20s, identified as Ms. A, to the prosecution under detention in connection with the 'motel serial death' case, on charges of causing the deaths of two men by giving them drinks laced with drugs. On February 22, police upgraded the charges against Ms. A from inflicting bodily injury resulting in death to murder, and on March 4 they announced that Ms. A had been assessed as a psychopath. Ms. A reportedly entered questions into ChatGPT such as, "What happens if you take sleeping pills with alcohol?", "How much is dangerous to take together?", and "Can it be fatal?"


A police official told the Law Times, "We changed the charges after comprehensively considering the types of drugs, lethal doses, circumstances of death, and the nature of the suspect's actions," adding, "The ChatGPT conversation history was also used as part of the evidence supporting the upgraded murder charge."


Records of AI conversations have also been used as circumstantial evidence in the United States. According to CBS News, in October 2025, the Charlotte-Mecklenburg Police arrested a school counselor in North Carolina on charges of attempted murder for trying to poison someone else's drink. Police stated that they secured evidence of the suspect using ChatGPT to search for "poisonous drug combinations that can cause death" and actually making such a purchase.


According to ABC News, the U.S. Department of Justice arrested Jonathan Rinderknecht, a rideshare driver, in Los Angeles in October 2025 on charges of starting a wildfire that claimed twelve lives. Prosecutors cited evidence that, before the fire, he had asked ChatGPT to "create images showing a dystopian city burning," stating, "Seeing the extremely disturbing images he uploaded to ChatGPT, we could gain some insight into his mindset months before the incident."


AI Conversation History Restored Through Digital Forensics

As such, it is becoming increasingly common for investigative agencies to restore and review a suspect's AI conversation history through digital forensics and use it as evidence of criminal intent. Lee Taehun, a lawyer at YK Law Firm and a former prosecutor (4th class of the bar exam), stated, "Investigative agencies believe that AI conversation records can prove conditional intent (dolus eventualis), so the number of cases in which such records are checked will increase."


A lawyer from the criminal team of a major law firm also commented, "When attending police investigations, it is common to see officers questioning suspects based on AI conversation history."


Distinguishing Evidentiary Admissibility and Probative Value

Experts, however, caution that the evidentiary admissibility and probative value of AI conversation history must be carefully distinguished and assessed.


Lee Seongyeop, Professor at Korea University Graduate School of Technology & Management, stated, "AI conversation records may have significance as indirect evidence, but by themselves, they have limitations in directly proving the objective facts of the crime."


Kang Mingoo, Managing Partner at Doule Law Firm (14th class of Judicial Research and Training Institute), also said, "Even if records are collected lawfully, their legal character is fundamentally similar to a suspect's prior statement and may be subject to the hearsay rule."


Limits of Digital Forensics

Since most AI conversation history is stored on cloud servers, securing the original records can be problematic. Koo Taeon, a former prosecutor (24th class) and former Vice President of the Korea Artificial Intelligence Law Association, now a lawyer at Lin Law Firm, commented, "Securing the original AI conversation record could become a key issue in the future." He added, "If the original record is stored on overseas servers like OpenAI's, and investigators do not take measures to preserve the data early on, there is a risk that original logs may be lost if the service is terminated or the account is deleted. Therefore, it may be necessary to consider including such requirements in existing investigative guidelines as part of investigators' duty of care."


If conversation history with AI has been deleted, there are limits to what even digital forensics can restore. A former chief prosecutor explained, "The data itself is stored on the server, and some of it remains on the device when accessed, but once it is deleted, full restoration of the content cannot be guaranteed."


Jeon Youhyung, Director at the Korea Digital Forensic Center, explained, "Because of the way SSD storage handles deleted data, conversation history that has been deleted generally cannot be restored."


There are also concerns about digital privacy violations, since forensic processes may sweep up information irrelevant to the charges. However, as with KakaoTalk forensics, it is understood that only records relevant to the investigation are collected, under the principle of 'selective seizure.'




Reporter: Na Young Shin, Law Times


※This article is based on content supplied by Law Times.

This content was produced with the assistance of AI translation services.

© The Asia Business Daily(www.asiae.co.kr). All rights reserved.
