FTC investigating OpenAI for possible ‘reputational harm’ caused by ChatGPT
The Federal Trade Commission has reportedly opened an investigation into OpenAI, the maker of the popular AI tool ChatGPT, over claims that the chatbot has harmed consumers through its data collection practices and by producing false information about individuals, according to an FTC demand letter.
The FTC earlier this week sent a 20-page request for records about how OpenAI addresses risks related to its AI models. The agency is investigating whether the company engaged in unfair or deceptive practices that resulted in “reputational harm” to consumers, according to the letter, which was reported by the Washington Post.
The FTC called on OpenAI to provide detailed accounts of all consumer complaints it had received regarding ChatGPT making “false, misleading, disparaging or harmful” statements about individuals.
Since its release in late 2022, ChatGPT has astounded users by writing short college essays, cover letters, and even a weirdly passable Seinfeld scene in which Jerry needs to learn the bubble sort algorithm.
If the FTC finds that a company has violated consumer protection laws, it can fine the company or require it to follow a consent decree dictating how the company handles data. In the past few years, the FTC has emerged as the federal government’s top cop for Big Tech companies like Meta, Amazon and Twitter, levying large fines against the tech giants for alleged violations of consumer protection laws on their respective platforms.
The investigation comes as demand for ChatGPT explodes within congressional offices and as generative AI pilot programs built on similar tools pop up across the federal government and in many industries in the private sector.
The State Department, the National Science Foundation, the Justice Department and the Department of Veterans Affairs have all announced generative AI related pilot projects or research initiatives in the past few months.
OpenAI and the FTC didn’t immediately respond to requests for comment.