Italy's data protection authority, known as Garante, has fined OpenAI, the maker of ChatGPT, €15 million ($15.66 million) over its handling of personal data.
The fine, announced recently, concludes an investigation opened nearly a year earlier, when the authority found that the generative artificial intelligence application had processed users' personal data to train its service in violation of the European Union's General Data Protection Regulation (GDPR).
According to the Garante, OpenAI failed to notify the authority of a security breach that occurred in March 2023. The authority also determined that the company lacked an adequate legal basis for processing personal data to train ChatGPT.
The Garante further found that the company had violated the principle of transparency and failed to provide users with required information. It raised particular concern over the absence of an age verification mechanism, noting that this could expose children under 13 to potentially inappropriate responses.
Alongside the financial penalty, OpenAI has been ordered to run a six-month communication campaign across radio, television, newspapers, and the internet. The campaign is meant to explain how ChatGPT works, what data it collects from users and non-users alike, and the rights people have under the GDPR, including the right to object to the processing of their data and to have it rectified or deleted.
"Through this communication campaign, users and non-users of ChatGPT will have to be made aware of how to oppose generative artificial intelligence being trained with their personal data and hence be effectively enabled to exercise their rights under GDPR," the Garante added.
This incident marks a significant moment, as Italy was the first country to impose a temporary ban on ChatGPT back in March 2023, primarily due to concerns over data protection. Almost four weeks later, access to the AI model was restored after OpenAI addressed the issues raised by the Garante.
Following the recent fine announcement, OpenAI expressed its intent to appeal, describing the penalty as disproportionate. The company argues the fine is nearly 20 times the revenue generated from its services within Italy during the relevant time period. OpenAI emphasized its commitment to providing beneficial artificial intelligence solutions, asserting they prioritize users' privacy rights.
Interestingly, the ruling coincides with a recent opinion from the European Data Protection Board (EDPB), which stated that an AI model trained on unlawfully processed personal data does not violate the GDPR in operation if that data is anonymized before the model is deployed. The EDPB clarified, "If it can be demonstrated the subsequent operation of the AI model does not entail the processing of personal data, the GDPR would not apply. Therefore, the unlawfulness of the initial processing should not impact the model's subsequent operation."
Earlier this month, the Board also released new guidelines on transferring data to countries outside the European Union in compliance with GDPR standards. The guidance states, "Judgments or decisions from third countries' authorities cannot automatically be recognized or enforced within Europe," reinforcing the need for compliance with EU data protection regulations.
With scrutiny and regulatory action around data privacy on the rise, the OpenAI case underscores the challenges tech companies face in navigating GDPR compliance.