AI Editing Changes Thinking, Too... Warnings About Biased Writing Prove Ineffective [Science Insight]
When Using AI Writing Tools, User Attitudes Tend to Align with AI’s Perspective
Concerns Rise Over Homogenization of Expression and Thought
Research findings increasingly show that artificial intelligence (AI) writing tools can not only improve sentence structure but also influence users' opinions and ways of thinking.
When users employ a biased AI writing assistant, not only does their writing change, but their attitudes toward social issues also tend to shift in the direction suggested by the AI. Studies have found that such influence is not easily reduced by warnings or explanations before or after use. At the same time, experts have warned that large language models (LLMs) are creating a "homogenizing effect," making people's expressions and thinking more similar over time.
Biased AI Writing Influences Users' Attitudes
In interviews with the Science and Media Center of Korea (SMCK), experts expressed concern, saying, "AI writing tools demonstrate the potential to affect not just sentence construction, but also users' judgment and opinion formation."
On March 12, a joint research team from Cornell University in the United States and other institutions published a paper in the international journal Science Advances, reporting that a biased AI writing assistant can indeed influence users' attitude formation. The paper is titled "Biased AI Writing Assistants Shift Users' Attitudes on Societal Issues."
The team asked a total of 2,582 participants to write about socially sensitive topics, and then had them use an AI writing tool that offered biased autocomplete suggestions. As a result, the participants' writing tended to align more closely with the positions suggested by the AI.
The researchers then surveyed the participants to measure the impact. Many did not clearly recognize the AI's bias, nor did they realize that they had been influenced by it. The team also tested interventions, such as warning participants beforehand about possible bias or providing explanations afterward, but these measures did not meaningfully reduce changes in users' attitudes. This suggests that simple warnings or explanations are not enough to prevent the persuasive effects of AI.
Choi Kiyeong, former professor in the Department of Electrical and Computer Engineering at Seoul National University, said, "It may not be surprising that AI can influence people, considering how easily we are persuaded by eloquent individuals." He added, "What's interesting is that the influence was stronger when users wrote with real-time autocomplete suggestions than when they were simply shown static text before writing." He further noted, "If an AI's training data contains bias, that bias could spread throughout society via its users."
AI Use Leads to More Homogeneous Thinking
On the same day, a review paper was published in the international journal Trends in Cognitive Sciences, discussing how AI is gradually standardizing human expression and thought. The paper is titled "The homogenizing effect of large language models on human expression and thought."
The paper pointed out that the outputs generated by large language models (LLMs) are less diverse than human writing and tend to reflect the language and values of specific cultures. In particular, it found that model outputs most strongly reflect the language and mindset of "WEIRD" societies (Western, Educated, Industrialized, Rich, and Democratic).
The researchers also cited earlier findings that, while individuals may generate richer ideas when using LLMs, groups whose members all use LLMs can end up producing fewer and less creative ideas overall. They further noted that after exposure to a biased language model, people's opinions tend to shift toward the model's stance.
Park Hanwoo, professor of Media and Communication Studies at Yeungnam University, evaluated these research trends as evidence that AI can no longer be viewed as merely a functional tool. He stated, "In an era where humans and AI play, think, and work together, AI can become more than just an information provider—it can influence human cognition and judgment." He added, "This research shows that AI writing tools can intervene in the human thought process."
Professor Park also emphasized, "It is difficult to sufficiently reduce AI's influence with only post hoc explanations or warnings about the information provided. A preemptive intervention strategy that raises awareness of potential distortions at the service design stage, along with model development that incorporates diverse perspectives, is needed."
Both studies demonstrate that AI writing tools can influence not only how humans express themselves, but also the process by which they form opinions. Experts highlight the need for policies and service designs that address both technological bias and the homogenization of expression, given the rapid proliferation of AI use.
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.