Published 05 Nov. 2024 11:30 (KST)
Generative artificial intelligence (AI) is establishing itself as an innovative tool for enhancing work productivity across industries. It is used to improve efficiency in fields such as marketing, software development, and customer service, and companies are adopting it rapidly. For example, Copilot, developed by GitHub in the United States, helps software developers reduce the time spent writing repetitive code so they can focus on more important tasks. The way Copilot saves coding time demonstrates the potential of AI as an assistive tool.
However, as AI adoption accelerates, problems are also emerging. The most serious concerns the accuracy and reliability of the information AI provides. When results are generated from incorrect or biased data, users may find it difficult to filter out the errors, raising the risk of poor decision-making.
Recently, a global company used an AI-based report generation tool to automatically create customer reports, but the reports were submitted with incorrect statistics, causing significant damage to trust. The case highlights the need to review and correct AI-generated information before it is used.
Another problem is excessive dependence on generative AI. If companies uncritically follow the directions AI recommends, their critical decision-making processes may weaken. Organizations that rely too heavily on AI and fail to exercise human judgment and creativity could put their competitiveness at risk. To prevent this, some companies are strengthening internal training alongside AI adoption to optimize collaboration between AI and humans.
For instance, an advertising company in the United Kingdom trains its employees to exercise creative judgment and critical thinking even when using AI tools. The policy aims to keep AI in a complementary role rather than letting its output be accepted as is.
In this context, one of the most important challenges in AI adoption is ensuring transparency and security. Because generative AI learns from large datasets, it inevitably comes into contact with personal and sensitive information, creating the risk that such data could be collected without authorization or leaked externally; data protection policies and transparent management are therefore essential. The European Union (EU) has introduced AI regulation, the AI Act, to address these issues, strengthening transparency and data protection standards for AI systems. These rules have a significant impact on the global market, and other countries are preparing similar legal frameworks.
To maximize work efficiency through AI, clear guidelines on its use are necessary. Rather than accepting AI decisions unconditionally, organizations will increasingly need a collaborative model that combines AI output with human judgment and critical thinking. Instead of emphasizing only the advantages of AI, using it responsibly and establishing complementary ways for humans and AI to cooperate is a challenge that companies and society must address together.
Yunseok Son, Professor at the University of Notre Dame, USA
© The Asia Business Daily (www.asiae.co.kr). All rights reserved.