Agentic AI Emerges as an Organizational Actor
Expanding DID-Based AI Identity Management Technology
Targeting the AI Identity Security Market

Provided by RaonSecure.

"I instructed the AI to prevent email data leaks. Not long after, the entire mail server configuration had been deleted."


Recently, research teams from leading U.S. universities such as Harvard, MIT, and Stanford have raised new security concerns over entrusting company work to so-called "AI assistants." They reported actual cases where AI acted contrary to instructions, manipulated systems beyond its authority, or was deceived by external attacks into leaking internal information.


According to industry sources on March 19, technology for managing AI's authority is rapidly emerging as a key issue for both the global industry and policy authorities. As the adoption of "Agentic AI"—AI that makes and executes decisions autonomously without specific human instructions—accelerates, the need for systems to control it is also growing.


Similar cases to those found in U.S. university research are being reported worldwide. In an experiment conducted by Alibaba's AI research team, AI was found to have used assigned GPU resources for unauthorized cryptocurrency mining and bypassed firewalls on its own. This demonstrates that AI can independently perform actions it was not directed to do.


Warning signs have also appeared in commercial services. The "EchoLeak" vulnerability discovered in Microsoft 365 Copilot allowed AI to leak sensitive information externally through a zero-click method, without any user action. This case highlights the need for a system to manage the identity and authority of AI itself, especially in environments where AI directly handles internal organizational systems and data.


In particular, within corporate environments, Agentic AI is increasingly being recognized not just as a simple tool but as an "actor" carrying out actual work for the organization. As AI continuously performs various tasks such as sending emails, querying systems, and calling external services according to procedures, technology that manages whose authority AI acts under and to what extent is emerging as a new area of security concern.


In Korea, RaonSecure is accelerating the development of "AAM (Agentic AI Management)" technology, which manages the identity and authority of Agentic AI, in an effort to lead the AI identity management market. AAM assigns a traceable ID to Agentic AI and delegates authority according to its role, creating an AI-specific identity management system that tracks and controls every action.


The core of this approach is to ensure that AI can only operate within the scope of authority assigned in advance when accessing corporate systems or calling external services. This structure manages all AI activities accessing organizational systems based on authentication, making it a new security model attracting attention in the AI identity management space.
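In concept, such a model pairs each AI agent with a traceable identity and checks every action against a pre-assigned authority scope. The following is a minimal illustrative sketch of that idea, not RaonSecure's actual AAM implementation; the role names, actions, and class structure are hypothetical.

```python
# Conceptual sketch of authority-scoped AI agent management:
# each agent carries a traceable ID and a role, every action is
# checked against the role's pre-assigned scope and logged.
# (Illustrative only; roles and actions below are invented.)

from dataclasses import dataclass, field

# Hypothetical mapping of delegated roles to permitted actions
ROLE_SCOPES = {
    "mail-assistant": {"read_mail", "send_mail"},
    "report-bot": {"query_db"},
}

@dataclass
class AgentIdentity:
    agent_id: str                # traceable unique ID assigned to the AI
    role: str                    # role under which authority is delegated
    audit_log: list = field(default_factory=list)

    def authorize(self, action: str) -> bool:
        """Allow an action only if it falls inside the role's scope,
        and record the attempt so every action can be traced."""
        allowed = action in ROLE_SCOPES.get(self.role, set())
        self.audit_log.append((self.agent_id, action, allowed))
        return allowed

agent = AgentIdentity(agent_id="ai-agent-001", role="mail-assistant")
print(agent.authorize("send_mail"))      # within scope -> True
print(agent.authorize("delete_server"))  # outside scope -> False
```

The point of the sketch is that the agent cannot silently exceed its delegated role: out-of-scope actions are both denied and recorded for later audit.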


RaonSecure's strength lies in its ability to expand its proven digital identity authentication technology for humans into the AI domain. The company has participated in building national digital ID infrastructure such as mobile resident registration cards and mobile driver's licenses, supplying decentralized identity (DID) technology. Now, just as ID cards are issued to people, the strategy is to assign unique digital IDs to Agentic AI and control its authority and range of actions based on these IDs.
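A DID issued to an AI agent would, like a DID for a person, resolve to a document binding the identifier to verification keys. The sketch below builds a minimal W3C-style DID document for an agent; the "did:example" method, key value, and function name are placeholders for illustration, not RaonSecure's actual identifiers.

```python
# Illustrative construction of a W3C-style DID document for an AI agent.
# The "did:example" method and the key value are placeholders;
# a production system would use a registered DID method and real keys.

import json

def make_agent_did_document(agent_name: str, public_key_multibase: str) -> dict:
    """Return a minimal DID document binding an AI agent's identifier
    to a public key it can use to authenticate its actions."""
    did = f"did:example:agent:{agent_name}"
    return {
        "@context": "https://www.w3.org/ns/did/v1",
        "id": did,
        "verificationMethod": [{
            "id": f"{did}#key-1",
            "type": "Multikey",
            "controller": did,
            "publicKeyMultibase": public_key_multibase,
        }],
    }

doc = make_agent_did_document("mail-assistant-001", "zPlaceholderKey")
print(json.dumps(doc, indent=2))
```

Systems that verify the agent's signature against this document can then attribute every action to a specific, revocable identity, which is the premise behind extending human DID infrastructure to AI.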


As environments where AI performs work like a member of the organization expand, some analysts say that "AI identity" management systems are emerging as a new security market.


The market size is also expected to grow rapidly. According to market research firm MarketsandMarkets, the non-human identity and access management market is projected to grow from USD 10.65 billion (about KRW 16 trillion) this year to USD 18.71 billion (about KRW 28 trillion) by 2030. As AI-based automation systems become more deeply integrated into corporate operations, the related identity management market is also expected to expand.


Internationally, big tech companies and policy institutions are sharing similar concerns. Microsoft defines AI as a separate "non-human identity" and is establishing systems for unique identity assignment, authority control, and activity tracking.


This means Agentic AI is now being managed as an independent actor, not just a tool. In February, the U.S. National Institute of Standards and Technology (NIST) began developing federal standards for AI identity, authentication, and security. Key issues include how to assign responsibility when AI acts beyond its authority and how to immediately revoke its access rights.


The Korean government is also paying close attention to these trends. Recently, the Personal Information Protection Commission held a brown-bag meeting on "Agentic AI" technology, reviewing technical structures and factors that could pose personal data protection risks. In particular, the commission aims to address potential issues such as "privilege escalation," where authority can expand beyond expectations as multiple services are linked; "prompt injection," where malicious input may lead to external leakage of sensitive information; and the possibility of sensitive information accumulating in memory.


The security industry expects that, with the spread of physical AI such as robots, drones, and autonomous vehicles, the importance of authentication and authority management for these systems will grow even further.



An industry official commented, "In corporate environments, we have entered an era where not only people but also AI, robots, and automated systems must all be managed as actors. Ultimately, in the age of AI, market leadership will go to companies that secure technologies to safely control and automate security, not just those that improve model performance."


This content was produced with the assistance of AI translation services.

© The Asia Business Daily(www.asiae.co.kr). All rights reserved.
