by Yun Seulgi
Published 4 Feb 2025, 16:06 (KST)
Kim Myung-joo, the inaugural director of the AI Safety Research Institute, raised concerns about the personal data protection practices of the generative artificial intelligence (AI) service DeepSeek, saying, "It is not clearly stated how long the data will be used, for what purposes, or how it will be handled if transferred to another company."
On the 4th, appearing on YTN Radio's 'Wise Radio Life', Kim said, "In principle, there is no problem with an AI using users' information to provide its services," adding, "What matters greatly, however, is the process by which that information is transferred to third parties."
He explained, "ChatGPT, for example, may transfer data to other companies in order to provide certain shared services. If ChatGPT were to shut down, the companies acquiring it would take over that data. In DeepSeek's case, however, there is no information on how personal data would be handled."
Kim further noted, "DeepSeek's personal information protection policy even states that personal information may be transferred to third parties in the event of an acquisition, which raises the concern that such a third party could be the Chinese government," adding, "Information held by DeepSeek could be transferred outside the company without any visibility, used by public institutions, and even repurposed in other AI services, which is a major cause for concern."
Kim said that because DeepSeek has undergone the Chinese government's 'Party Character' (黨性) inspection, it is likely a biased generative AI, and that his institute is testing for such bias. He pointed out, "China was the first country in the world to regulate AI: it established 24 regulations on generative AI in August 2023, and a very important part of these is the Party Character test."
He continued, "If an AI's answers contradict China's socialist system, it is blocked from responding and not approved. The fact that DeepSeek was released publicly means it passed the Party Character test. It naturally avoids or refuses to answer questions that are uncomfortable for China, which means it is not a fair AI by global standards." Kim added, "It reflects bias from the perspective of the Chinese state," and "We plan to identify and report such issues."
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.