[THE VIEW] The Future of Coexisting with Emotional AI

Breakthroughs in Emotion Recognition Technology
Finding Balance Between Technological Innovation and Ethical Challenges


‘Affective AI’ is a technology that analyzes human facial expressions, voice, and text to recognize and respond to emotions, making communication between humans and machines more natural. Research into affective AI began years ago, but recent rapid advances in AI have made its applications concrete and driven a surge in real-world use cases.


For example, in customer service, AI analyzes a customer’s tone and word choice so it can respond appropriately to angry clients. In healthcare, it is used to detect depression or stress by analyzing patients’ facial expressions and voices. OpenAI’s ‘GPT-4’ model reads emotional cues in users’ speech patterns and text, producing natural responses that have set a new standard for conversational AI. More recently, smart-glasses technology that monitors drivers’ drowsiness and emotional state in real time has been commercialized, improving driver safety.
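At its simplest, the customer-service scenario above reduces to classifying tone from word choice and routing the reply accordingly. The sketch below is purely illustrative: the keyword lists and the `classify_tone` and `route_reply` functions are invented for this example, and real affective AI systems use trained models over audio, text, and facial features rather than keyword matching.

```python
# Illustrative keyword-based tone classifier. Real systems replace this
# with trained models; the word lists here are invented for the example.
ANGRY_WORDS = {"terrible", "unacceptable", "furious", "refund", "worst"}
CALM_WORDS = {"thanks", "appreciate", "great", "helpful"}

def classify_tone(message: str) -> str:
    """Return a coarse tone label based on word choice."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    angry = len(words & ANGRY_WORDS)
    calm = len(words & CALM_WORDS)
    if angry > calm:
        return "angry"
    if calm > angry:
        return "calm"
    return "neutral"

def route_reply(message: str) -> str:
    """Choose a response strategy from the detected tone."""
    tone = classify_tone(message)
    if tone == "angry":
        return "escalate to human agent with apology"
    if tone == "calm":
        return "standard automated reply"
    return "ask clarifying question"
```

Even this toy version shows why bias matters: a word list built for one culture or language will misread customers from another, which is exactly the limitation discussed below.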


However, there are clear ethical challenges that must be addressed before affective AI can be widely adopted. A major issue is bias and cultural limitations. If the data AI learns from is skewed toward certain groups or incomplete, the emotion analysis results can be distorted. For instance, facial expressions or gestures in one culture may have completely different meanings in another, but AI may fail to understand this properly and make incorrect judgments.


Increasing technological dependence is also a concern. As affective AI spreads across industries and daily life, people may come to rely too heavily on technology for emotional interpretation and communication. This can weaken human empathy and hinder direct conversation and relationship building. If companies depend excessively on affective AI in customer service, for instance, the authenticity and trust conveyed by real human interaction may be lost.


Privacy infringement is another serious concern. Emotion analysis requires collecting and processing sensitive personal data. Users’ facial expressions, voices, and biometric signals are used as AI training data, and if such data is collected without consent or used maliciously, individuals’ privacy can be severely violated. Especially since this data could be exploited for targeted marketing or political purposes, it poses a threat to personal autonomy and freedom.


To address these issues, affective AI must be developed on an ethical foundation. Increasing transparency in data collection and usage is key: users should be clearly informed about how their data is used, which builds trust. Training data should be designed to encompass diverse groups and cultures, reducing bias and enabling fairer judgments. Strong legal measures are also needed to protect emotional data and prevent misuse. On top of these safeguards, the potential risks posed by affective AI should be assessed proactively, and clear accountability structures established.
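The transparency and consent principles above can be made concrete in code. The following is a minimal sketch of purpose-limited, consent-gated collection of emotion data; the `ConsentRegistry` class, the purpose labels, and `collect_emotion_sample` are all hypothetical names invented for illustration, not any real framework's API.

```python
# Hypothetical consent-gated pipeline for sensitive emotion data.
# All names here are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class ConsentRegistry:
    """Tracks, per user, which processing purposes were consented to."""
    grants: dict = field(default_factory=dict)  # user_id -> set of purposes

    def grant(self, user_id: str, purpose: str) -> None:
        self.grants.setdefault(user_id, set()).add(purpose)

    def allows(self, user_id: str, purpose: str) -> bool:
        return purpose in self.grants.get(user_id, set())

def collect_emotion_sample(registry: ConsentRegistry, user_id: str,
                           sample: dict, purpose: str) -> dict:
    """Store a sample only if this exact purpose was consented to.

    Tagging each record with its purpose prevents data gathered for,
    say, health monitoring from silently flowing into other uses such
    as targeted marketing.
    """
    if not registry.allows(user_id, purpose):
        raise PermissionError(f"no consent from {user_id} for {purpose!r}")
    return {"user": user_id, "purpose": purpose, **sample}
```

The key design choice is that consent is checked per purpose, not once globally, mirroring the purpose-limitation principle behind most data-protection regimes.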


Affective AI holds the potential to revolutionize human-technology interaction. Especially in environments like Korea, which adopt digital technologies rapidly and have high social connectivity, it can create new opportunities in fields such as customer service, healthcare, education, and emotional marketing. For this technology to be trusted, however, it must go beyond simply improving emotion recognition accuracy and carefully reflect the emotional characteristics and cultural context of Korean society. The development of affective AI should not be mere data learning; it should incorporate social context and the subtle differences in human relationships.


Ultimately, for affective AI to become a trusted social tool beyond technological innovation, it must move toward establishing ethical standards in the Korean context and promoting human-centered interactions. The ultimate goal of affective AI should not be to replace humans but to help understand and regulate human emotions more deeply.


Yunseok Son, Professor at University of Notre Dame, USA

© The Asia Business Daily(www.asiae.co.kr). All rights reserved.