OpenAI is under fire once again: the data protection organization noyb has filed a new complaint against the company, centered on a disturbing case of artificial intelligence hallucination. The complaint, submitted on March 20, 2025, to Norway's data protection authority, Datatilsynet, alleges that ChatGPT, the popular AI chatbot, falsely portrayed a Norwegian user as a convicted child murderer.
The complainant, Arve Hjalmar Holmen, wanted to find out what ChatGPT would say about him. To his horror, he discovered that the AI generated a completely fabricated narrative: according to ChatGPT, Holmen had murdered two of his children and attempted to kill a third. In a chilling detail, the chatbot even claimed that Holmen had been sentenced to 21 years in prison for these fictitious crimes.
Holmen's case underscores a growing concern about the accuracy of information produced by AI systems. He described his alarm at the discovery: "The fact that someone could read this content and believe it to be true scares me the most." His fear stems from the damage such falsehoods can inflict on a person's reputation and personal life.
As AI systems become increasingly intertwined with daily life, incidents like Holmen's highlight the responsibility of tech companies to ensure the accuracy of their outputs. The complaint points to the General Data Protection Regulation (GDPR), which requires that personal data be accurate and gives individuals the right to have inaccurate data rectified: "The GDPR is clear here. Personal data must be correct. If this is not the case, users have the right to rectify it." In noyb's view, it is not enough for OpenAI to simply show users a disclaimer that the chatbot can make mistakes; that caveat does not discharge the legal obligation to keep personal data accurate. The case thus raises questions about the moral and legal obligations of AI developers to monitor and adjust their systems to prevent similar occurrences.
noyb, founded by data privacy activist Max Schrems, is demanding the deletion of what it terms the "defamatory output" and urging OpenAI to modify the ChatGPT model so that it can no longer generate such harmful misinformation. The organization has also suggested that an administrative fine would deter future breaches of data protection regulations.
This isn't the first time OpenAI has faced scrutiny over its AI products. Previous complaints have highlighted the chatbot's tendency to deliver misleading and inaccurate information, especially where personal data is concerned. ChatGPT has repeatedly generated false statements about individuals while giving them no way to have the errors corrected, which can cause significant reputational harm.
In light of these ongoing challenges, the need for robust regulation and ethical frameworks governing AI technologies becomes increasingly pressing. Holmen's case exemplifies how easily misinformation can proliferate in the digital age, leaving innocent individuals exposed to defamatory falsehoods.
As regulatory bodies grapple with how best to address the rapid advancements in AI, the Holmen incident could catalyze new standards and practices that hold tech companies accountable. The balance between innovation and ethical responsibility remains a pressing discussion in the realm of artificial intelligence.
Holmen's experience serves as a cautionary tale for both developers and users of AI technologies, and a reminder that data accuracy and user rights must be built into these systems from the start. As technological capabilities expand, so too must our frameworks for accountability and transparency in AI.
Only time will tell how OpenAI will respond to the complaints filed by noyb and whether significant changes will be made to safeguard users from the pitfalls of AI misinformation. With calls for action echoing across the privacy advocacy space, companies operating in this growing sector must remain vigilant in addressing shortcomings to build trust and ensure a safe digital environment for all users.