WASHINGTON, D.C.: U.S. regulators are turning up the heat on makers of consumer-facing AI chatbots, demanding details on how companies test for risks, handle user data, and profit from interactions with their systems.
The Federal Trade Commission (FTC) has issued inquiries to Alphabet, Meta Platforms, OpenAI, Character.AI, Snap, and Elon Musk’s xAI. The agency wants to know how these firms measure and monitor potential harms from their technology, how user inputs are processed to generate responses, and how the companies monetize user engagement.
The move comes amid growing scrutiny of generative AI systems that are rapidly entering mainstream use. Reuters recently reported on internal Meta policies that allowed chatbots to have romantic conversations with children. Separately, OpenAI is facing a lawsuit from a family alleging that ChatGPT contributed to their teenager’s suicide. Character.AI is also contending with a lawsuit over another teen’s death.
A spokesperson for Character.AI said the firm welcomed the chance to provide “insight on the consumer AI industry and the space’s rapidly evolving technology,” noting it had rolled out “many safety features in the last year.”
Snap echoed the regulator’s concerns. “We share the FTC’s focus on ensuring the thoughtful development of generative AI, and look forward to working with the Commission on AI policy that bolsters U.S. innovation while protecting our community,” a spokesperson said.
Meta declined to comment, while the other companies did not immediately respond to requests for comment.