Technology
20 March 2025

Privacy Group Noyb Files Complaint Against OpenAI Over Defamatory AI Claims

A Norwegian user discovers ChatGPT falsely claiming he was convicted of child murder, raising legal concerns.

OpenAI is facing a new privacy complaint in Europe, with its AI chatbot ChatGPT under scrutiny for generating alarmingly false information. Privacy rights advocacy group Noyb is at the center of this controversy, supporting a Norwegian individual who discovered ChatGPT fabricating a heinous narrative that claimed he was convicted of murdering his children.

This latest complaint highlights ongoing concerns about the inaccuracies propagated by AI systems. Previously reported issues included ChatGPT generating false personal data, such as incorrect birth dates and biographical details. The stakes have now been raised considerably: the fabricated narrative in this case included chilling claims of child murder.

“The GDPR is clear. Personal data has to be accurate,” stated Joakim Söderberg, data protection lawyer at Noyb, emphasizing that users have the right to have erroneous information rectified. Under the European Union’s General Data Protection Regulation (GDPR), individuals are entitled to access their personal data and to have inaccurate data corrected.

Confirmed violations of the GDPR can lead to serious repercussions, with penalties as steep as 4% of an entity’s global annual turnover. Regulators have already shown a willingness to act: Italy’s data protection authority temporarily blocked access to ChatGPT in spring 2023 and later fined OpenAI €15 million for unlawful data processing.

Noyb’s latest complaint may draw further attention to the need for stricter regulation of generative AI tools. The organization has filed a formal complaint with the Norwegian data protection authority, calling for intervention given the disturbing nature of the AI-generated content.

To illustrate its point, Noyb shared screenshots of the exchange, in which ChatGPT falsely claimed that Arve Hjalmar Holmen, the individual in question, had been sentenced to 21 years in prison for killing his children. While the conviction was entirely fabricated, the response wove in accurate details about Holmen, such as the number and gender of his children, lending the falsehood a disturbing veneer of realism.

A spokesperson for Noyb remarked, “We did research to ensure that this wasn’t just a mix-up with another person,” underscoring the thoroughness of the investigation. A search of newspaper archives turned up no basis whatsoever for the accusations against Holmen.

This case serves as a stark reminder of the risks associated with AI language models, which generate text by predicting the most statistically likely next word based on patterns in vast training datasets. Such fabrications, known as hallucinations, can have dire consequences when they attach criminal allegations to real, identifiable people.
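To make that mechanism concrete, here is a minimal, purely illustrative Python sketch; the prompt, word list, and probabilities are invented, and this is not how OpenAI’s production models work. It shows why a system that samples the statistically likely next word can produce fluent text with no built-in check on whether the resulting claim is true.

```python
import random

# Hypothetical next-word probabilities a language model might assign
# after the prompt "The defendant was". The numbers are invented for
# illustration; real models score tens of thousands of candidate tokens.
next_word_probs = {
    "convicted": 0.45,   # common continuation in crime-report text
    "acquitted": 0.20,
    "questioned": 0.20,
    "released": 0.15,
}

def sample_next_word(probs: dict[str, float]) -> str:
    """Sample a word in proportion to its probability.

    The choice reflects only statistical plausibility in the training
    data; nothing here checks whether the resulting claim is true.
    """
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

prompt = "The defendant was"
print(prompt, sample_next_word(next_word_probs))
# e.g. "The defendant was convicted" -- fluent and plausible, but
# potentially defamatory if attached to a real person's name.
```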

Notably, following an update to ChatGPT’s underlying AI model, the chatbot appears to have stopped generating potentially defamatory stories about Holmen, instead searching the web for more accurate information. Yet concerns linger that the false data may still be retained within the model itself.

“The fact that someone could read this output and believe it is true is what scares me the most,” Holmen said, drawing attention to the personal emotional toll such hallucinations can impose.

Despite ChatGPT's recent updates to mitigate risks, controversies surrounding fabricated narratives highlight the pressing need for AI developers to recognize their responsibility under existing data protection laws. Noyb's lawyer Kleanthi Sardeli remarked, “Adding a disclaimer that you do not comply with the law does not make the law go away,” reiterating AI companies' obligations to adhere to GDPR requirements.

The complaint against OpenAI is not isolated; Noyb points to other instances where ChatGPT has fabricated harmful information about individuals. Previous reports have documented cases involving a mayor wrongly implicated in a bribery scandal and a journalist falsely accused of child abuse.

In the present complaint, Noyb argues that hallucinations of this kind can cause significant reputational damage to the people affected. “If hallucinations are not stopped, people can easily suffer reputational damage,” added Sardeli.

OpenAI has been contacted regarding the complaint but has yet to respond publicly. For now, Noyb is pursuing the matter through the Norwegian data protection authority, contending that OpenAI’s U.S. entity should also be held responsible for the ramifications of its AI product on European users.

This complaint follows a previous Noyb-initiated GDPR case from April 2024 that raised similar issues. That earlier complaint was filed in Austria but subsequently referred to Ireland’s Data Protection Commission (DPC) for handling. The DPC’s review remains ongoing, with no definitive timeline communicated for its conclusions.

As privacy rights organizations push for clearer regulations and protections regarding how AI systems handle personal data, the outcomes of these complaints could set significant precedents in the evolving landscape of generative AI technology.