Generative AI Privacy Policies Found Lacking... Personal Information Commission Calls for Greater Transparency

Personal Information Commission Holds Meeting to Improve Privacy Policies
Representatives from 11 Companies Including Meta, OpenAI, Naver, and SK Telecom Attend

The personal information processing policies of generative artificial intelligence (AI) companies have been found lacking in terms of appropriateness, readability, and accessibility.


The Personal Information Protection Commission held a "Discussion Forum for Improving Personal Information Processing Policies in the Generative AI Field" on the 4th at the Korea Press Center in Jung-gu, Seoul. Representatives from 11 generative AI companies and related experts attended the forum. Photo by Kyungjo Noh



The Personal Information Protection Commission announced the results of its assessment of personal information processing policies across seven sectors at the "Meeting for Improvement of Personal Information Processing Policies in the Generative AI Sector," held on March 4, 2026, at the Korea Press Center in Jung-gu, Seoul.


Since 2024, the commission has been evaluating the processing policies established and released by leading services that utilize new technologies (such as AI and autonomous driving) or handle large-scale sensitive or personal information.


Last year, the assessment covered areas such as connected cars, edtech, smart homes, generative AI, telecommunications, reservation and customer management, and healthcare applications. The average score was 71 points, which was 13.1 points higher than the 57.9 points recorded in the first year for sectors like big tech and online shopping.


However, when looking at each category, the generative AI sector scored 36.6 points on the appropriateness index, 3.7 points lower than the average. Its readability (12.5 points) and accessibility (11.2 points) scores were the lowest among all sectors evaluated. On all three indices, overseas operators trailed domestic operators by as much as 16.3 points, scoring 28.3, 10.8, and 9.0 points compared with 44.6, 14.3, and 13.4 points for domestic companies, respectively.


According to the commission, some generative AI services listed the “items of personal information processed” in overly broad terms or failed to specify the legal basis for processing. Some services described the retention period for personal information ambiguously. There were also instances where third-party provision was disclosed using vague terms such as “partners” or “service providers,” without clearly identifying the recipients.


A commission official stated, "In addition, there were cases where guidance on exercising data subject rights was provided only in English, or where handling of personal information-related complaints was delayed. Some mobile apps also required users to log in or go through multiple steps to access the privacy policy, indicating a need for improvement in accessibility."


The commission has begun supporting generative AI companies in drafting more concrete and user-friendly privacy policies. At the meeting, generative AI companies and experts discussed topics such as how to document the processing of prompt inputs and their use in training, clarifying the legal basis for processing, ensuring alignment with global policies, and making procedures for users to exercise their rights more effective. The eleven attending companies were Google, Meta, Microsoft, OpenAI, Naver, Kakao, SK Telecom, LG Uplus, NC AI, Scatter Lab, and RYUTEN Technologies.


Song Kyunghee, Chairperson of the Personal Information Protection Commission, stated, "The information handled by AI is expanding beyond text to include location and movement paths, voice and video data, as well as behavioral patterns and contextual information. The scope and methods of data processing have become more complex, making it increasingly difficult for ordinary users to understand."


She added, "In such a complex data environment, transparency that helps users anticipate how their personal information will be processed is a key foundation for building social trust. I hope that discussions about privacy policies, which serve as a gateway to understanding the AI ecosystem, will go beyond mere notification and strengthen efforts toward responsible, human-centered AI."


Based on the discussions at this meeting, the commission plans to revise its privacy policy guidelines for generative AI companies, publish the updated version next month, and hold a briefing session. For companies that require improvement, recommendations will be issued, followed by a reassessment in 2027.

© The Asia Business Daily(www.asiae.co.kr). All rights reserved.