FTC vs. OpenAI & ChatGPT over accuracy and privacy concerns
The agency is particularly concerned about third-party plug-ins
The FTC (Federal Trade Commission) has opened an investigation into OpenAI and its acclaimed artificial intelligence system, ChatGPT, based on concerns that ChatGPT may be harming users by generating and publishing false information.
As AI continues to advance, ensuring transparency and accuracy in these technologies is crucial, and the FTC wants to determine whether OpenAI has met those expectations.
The Wall Street Journal has reported that the FTC has sent a civil subpoena to OpenAI, initiating a thorough investigation. The commission is focused on determining whether OpenAI has employed unfair or deceptive practices related to the privacy and security of user data. To gather information, the FTC has asked OpenAI to respond to a detailed questionnaire.
Product Description and Functioning
The FTC seeks a detailed explanation of how OpenAI's artificial intelligence models operate, including ChatGPT and DALL-E, as well as how they interact with third parties through plug-ins.
The agency also wants information on the origin of the data used to train GPT-4, along with OpenAI's procedures for refining the model and combating the spread of misinformation.
One of the FTC's primary points of interest is the steps OpenAI has taken to mitigate the risk of its models generating false, misleading, or derogatory statements about real people.
The concern stems from ChatGPT's tendency to fabricate information and cite non-existent studies in support of it, which can have negative consequences for users.
Complaint Logging and User Safety
The FTC has asked OpenAI to share ChatGPT-related complaint logs, especially those involving the publication of false, derogatory, or harmful statements about individuals.
The agency has also requested information about a security incident that exposed personal information belonging to ChatGPT users. With this, the FTC seeks to address concerns about the privacy and security of OpenAI's products.
The CAIDP complaint and legal implications
The FTC's investigation stems from a complaint filed in March by the Center for AI and Digital Policy (CAIDP). According to CAIDP, the GPT-4 model that powers ChatGPT poses a risk to privacy and public safety, and violates federal consumer protection law because it has not undergone independent evaluation.
If OpenAI is found to have violated the law, the FTC has the authority to impose fines or place restrictions on future versions of the chatbot.