
US FTC Investigates Harmful Effects of AI Chatbots on Children, Targeting Meta and OpenAI

FTC Demands Data on Monitoring and Usage Restrictions from Seven AI Companies

As controversy grows over the potential harm of artificial intelligence (AI) chatbots to children, the U.S. Federal Trade Commission (FTC) has launched an investigation into major technology companies such as Meta and OpenAI.


Reuters Yonhap News


According to Bloomberg News on September 11 (local time), the FTC has ordered seven companies that develop AI chatbots, including Google, OpenAI, and Meta, to submit materials on the impact of their chatbots on children.


The FTC, currently composed entirely of Republican commissioners, unanimously approved the launch of the investigation. The companies under scrutiny also include xAI, the AI company founded by Tesla CEO Elon Musk; Instagram, which is owned by Meta; Snap; and Character Technologies, the developer of Character.AI. The FTC said the investigation aims to examine how these companies measure, test, and monitor their chatbots, and what steps they have taken to restrict use by children and teenagers.


The investigation appears to have been prompted by recent reports of harmful incidents involving children and teenagers using chatbots. In April, a teenager in California died after using a chatbot for several months; the teenager's parents have filed a lawsuit against OpenAI, claiming that ChatGPT provided their child with specific information about methods of suicide. In another case last October, a teenager in Florida took their own life after becoming deeply attached to a chatbot, exchanging messages such as "I love you." Those parents have filed a lawsuit against Character.AI.


More recently, internal documents raised allegations that Meta's AI chatbot was permitted to generate sexually explicit responses in conversations with children, prompting a formal investigation by the U.S. Senate. Last month, attorneys general from 44 U.S. states, citing serious concerns about child safety, sent warning letters to 12 AI chatbot companies, including Meta, OpenAI, and Google, urging them to strengthen child protection measures.


Bloomberg News noted that under U.S. federal law, technology companies are prohibited from collecting data from children under 13 without parental consent. For years, Congress has attempted to extend such protections to all teenagers, but no related legislation has advanced. The report added that the FTC typically analyzes the materials it obtains from companies and then issues a report, a process that can take several years.

© The Asia Business Daily(www.asiae.co.kr). All rights reserved.
