Norway Complaint: ChatGPT Fabricates Crime, Allegedly Violating GDPR

Norwegian citizen Arve Hjalmar Holmen was left stunned by a response generated by ChatGPT in which the chatbot falsely accused him of committing a crime against his three children. The fabricated account was interwoven with accurate details about his residence and family, which made the false claim all the more credible and damaging.
The incident has become the subject of a formal complaint, filed by the privacy advocacy group noyb with Norway's data protection authority, alleging that OpenAI violated the accuracy principle set out in Article 5 of the General Data Protection Regulation (GDPR).
The core argument of the complaint centers on the premise that personal data, even when processed by a generative model, must remain accurate. Noyb emphasized that the law offers no exceptions for neural networks—errors involving personal data constitute a violation, regardless of technological limitations. The organization contends that in Holmen’s case, OpenAI interwove fictional and factual elements, exacerbating the severity of the breach.
The issue is further compounded by the company’s failure to provide users with a means to correct false information. Noyb has previously noted that, unlike traditional services, ChatGPT does not allow erroneous content to be edited or rectified. Instead, OpenAI merely added a disclaimer to the chatbot interface, warning users about potential inaccuracies—a move legal experts argue does not absolve the company of its obligations.
The complainants are seeking a directive from the Norwegian regulator that could compel OpenAI to revise its models, restrict the processing of Holmen's personal data, or face financial penalties. OpenAI may nonetheless have grounds for a defense: noyb acknowledged that newer, internet-connected versions of ChatGPT no longer generate the false information about Holmen, suggesting that the issue has been resolved at least at the user-facing level.
Nevertheless, the complaint underscores that a link to the original exchange remains accessible, implying that the inaccurate data may still reside within OpenAI’s systems and could have been used for further model training. According to noyb, this constitutes an ongoing violation, notwithstanding outward corrections. The legal team stresses that removing visibility does not equate to ceasing data processing—companies cannot merely conceal a mistake without addressing it at the source.
The final decision lies with Norwegian regulators, yet the case already highlights the mounting pressure on AI developers to uphold the accuracy and legality of personal data processing. As generative models evolve, they demand new frameworks for accountability and the protection of human rights—for even if the falsehood originates from a machine, there must be someone to answer for it.