ChatGPT Slammed with Privacy Complaint Over False Defamatory Claims

justineanweiler.com – OpenAI is facing another privacy complaint in Europe over its viral AI chatbot’s tendency to hallucinate false information — and this one might prove tricky for regulators to ignore.
Privacy rights advocacy group Noyb is supporting an individual in Norway who was horrified to find ChatGPT returning made-up information claiming he’d been convicted of murdering two of his children and of attempting to kill the third.
Earlier privacy complaints about ChatGPT generating incorrect personal data have involved issues such as an inaccurate birth date or wrong biographical details. One concern is that OpenAI does not offer a way for individuals to correct false information the AI generates about them; typically, OpenAI has instead offered to block responses to such prompts. But under the European Union’s General Data Protection Regulation (GDPR), Europeans have a suite of data access rights that includes a right to rectification of personal data.
Another component of this data protection law requires data controllers to make sure that the personal data they produce about individuals is accurate — and that’s a concern Noyb is flagging with its latest ChatGPT complaint.
“The GDPR is clear. Personal data has to be accurate,” said Joakim Söderberg, data protection lawyer at Noyb, in a statement. “If it’s not, users have the right to have it changed to reflect the truth. Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn’t enough. You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true.”
Confirmed breaches of the GDPR can lead to penalties of up to 4% of global annual turnover.
Enforcement could also force changes to AI products. Notably, an early GDPR intervention by Italy’s data protection watchdog, which saw ChatGPT access temporarily blocked in the country in spring 2023, led OpenAI to change the information it discloses to users. The watchdog subsequently fined OpenAI €15 million for processing people’s data without a proper legal basis.
Since then, though, it’s fair to say that privacy watchdogs around Europe have adopted a more cautious approach to GenAI as they try to figure out how best to apply the GDPR to these buzzy AI tools.
Two years ago, Ireland’s Data Protection Commission (DPC), which has a lead GDPR enforcement role on a previous Noyb ChatGPT complaint, urged against rushing to ban GenAI tools, suggesting regulators should instead take the time to work out how the law applies.
And it’s notable that a privacy complaint against ChatGPT that’s been under investigation by Poland’s data protection watchdog since September 2023 still hasn’t yielded a decision.
Noyb’s new ChatGPT complaint looks intended to shake privacy regulators awake when it comes to the dangers of hallucinating AIs.
Understanding GDPR and Its Relevance
Understanding the General Data Protection Regulation (GDPR) is essential in today’s digital landscape, especially as AI technologies become more deeply integrated into data processing and decision-making. The GDPR, a comprehensive data protection law enacted by the European Union, protects individuals’ rights over their personal data: it requires that personal data be processed lawfully, fairly, and transparently, and it gives individuals greater control over their personal information. A key principle of the GDPR is data accuracy, which obliges organizations to ensure that the personal data they hold is accurate and kept up to date. This is particularly relevant for automated systems like AI, which must balance innovation with strict compliance with these rules.
The recent case involving OpenAI’s ChatGPT underscores the GDPR’s significance. In this incident, ChatGPT falsely accused a Norwegian man of a grave crime, prompting legal action under the GDPR’s data accuracy provisions. As Noyb argues, OpenAI allegedly breached the regulation by mixing factual details with fabricated narrative, thereby failing to maintain the data accuracy required by Article 5(1)(d). Such breaches underline the need for companies to enforce rigorous accuracy checks and reaffirm the GDPR’s role in holding organizations accountable for the integrity of data processed by AI systems.
AI hallucinations, in which AI systems produce incorrect or misleading information, further complicate GDPR compliance. They pose serious challenges to data protection, potentially propagating misinformation and causing reputational harm. Privacy experts argue that ensuring data accuracy is critical to averting such risks: developers must design and implement AI systems capable of generating factually accurate output, in line with GDPR mandates. That alignment protects individual rights and fosters trust in AI technologies, which is why the regulation places responsibility on AI developers to regularly audit and update the data their systems generate and use.
The GDPR’s relevance extends beyond legal frameworks, influencing the broader societal and ethical considerations associated with AI technologies. It compels organizations to prioritize ethical AI development, ensuring systems are transparent and accountable. By embedding GDPR principles in AI systems, organizations can enhance public confidence and encourage the responsible development of AI technologies. This is especially vital as AI continues to permeate sectors from healthcare to finance, where the stakes of data inaccuracies are especially high. The GDPR thus remains a cornerstone in balancing technological progress with human rights protection in the era of advanced AI.
The Role of noyb in Data Protection
Noyb, the European Center for Digital Rights, plays a pivotal role in safeguarding data protection rights, especially by challenging the practices of tech giants like OpenAI. Its vigilance in monitoring and identifying potential GDPR violations underscores its commitment to enforcing data accuracy and accountability among AI developers. By filing complaints and litigating strategically, Noyb acts as a key advocate for individuals’ data rights in an increasingly digital world. The organization brings to light issues of data mismanagement and misinformation, pushing for corrective action and improvements to AI systems to prevent future inaccuracies and the spread of false information. With broad influence across Europe, Noyb’s actions in cases like the recent OpenAI incident demonstrate its critical role in driving improvements to digital privacy standards and policies.
Given the growing prevalence of AI systems, Noyb’s intervention is not only timely but essential for setting a precedent in data protection enforcement. Its complaint against OpenAI emphasizes the need for rigorous oversight and compliance with GDPR requirements, particularly concerning data accuracy and rectification. The ChatGPT incident has highlighted a flaw in AI models: ‘hallucinations,’ in which AI-generated content falsely implicates individuals in serious allegations. Noyb’s proactive stance is that these systems should be held to the same standards of accountability as other data processors, with genuine corrective measures rather than superficial fixes like output blocking. By pushing for systemic change in how AI technologies are developed and regulated, Noyb signals to technology companies the importance of transparency and diligence in model development, so that data subjects’ rights are protected at all levels.