OpenAI, the American artificial intelligence company behind ChatGPT, has been fined €15 million (approximately $15.7 million) by Italy's data protection authority, the Garante. The penalty stems from findings about the company's handling of personal data.
The investigation, which began in March 2023, uncovered significant flaws in how OpenAI collected and processed user data. According to the Garante's December 20 statement, OpenAI failed to adequately notify the authority of a data breach it suffered in 2023 and did not establish a proper legal basis for processing user data to train its AI chatbot, in violation of its transparency obligations and its duty to adequately inform users.
One of the more alarming findings involved OpenAI's lack of age verification mechanisms, which put minors, especially those under the age of 13, at risk of accessing inappropriate content generated by the AI. The agency stated: "OpenAI has not provided mechanisms for age verification, with the consequent risk of exposing minors under 13 to responses unsuitable for their level of development and self-awareness."
To address these issues, the Garante has ordered OpenAI to run a six-month public awareness campaign across Italian radio, television, and online media. The campaign is intended to educate the public about ChatGPT's data collection practices and about users' rights under the European Union's General Data Protection Regulation (GDPR), including how they can exercise control over their data and opt out of having their information used for AI training.
OpenAI's cooperative attitude during the investigation was cited as a factor in the relatively reduced fine. During this period, OpenAI also moved its European headquarters to Ireland, making the Irish data protection authority the lead regulator overseeing its compliance with EU rules.
This ruling does not occur in isolation: regulatory scrutiny of AI technologies is intensifying globally. The Garante's decision follows Italy's earlier move, in 2023, to become the first Western country to temporarily ban ChatGPT over privacy concerns. After OpenAI implemented certain transparency measures, the ban was lifted and the chatbot was reinstated on April 29, 2023.
OpenAI has said it intends to appeal what it called a "disproportionate" decision by the Italian watchdog. The case reflects the broader scrutiny generative AI technologies are facing on both sides of the Atlantic, as governments and regulatory bodies across jurisdictions work toward comprehensive frameworks for the responsible use of AI.
Meanwhile, the European Union's AI Act adds another layer of regulation, setting comprehensive rules for artificial intelligence systems. Together with the Garante's ruling, it points to more stringent compliance requirements for companies like OpenAI going forward.
Overall, the Garante's action serves both as an enforcement of users' rights and as part of the larger global conversation about the ethical and responsible use of AI. Industry experts and stakeholders will be watching closely as OpenAI pursues its appeal and implements the corrective measures the regulator has mandated.