ChatGPT Faces Privacy Complaint Over Defamatory Hallucinations

Elena Morales | March 20, 2025

A privacy rights group has filed a complaint against OpenAI after ChatGPT generated a false and defamatory story, reigniting concerns over AI-generated misinformation and privacy violations.


ChatGPT Under Fire for Generating Defamatory Misinformation

Artificial intelligence continues to revolutionize industries, but concerns over its accuracy and ethical implications persist. OpenAI’s ChatGPT has come under scrutiny again after reportedly fabricating a false and defamatory story about an innocent individual. A European privacy advocacy group has now filed a formal complaint, arguing that such AI-generated misinformation poses significant risks to privacy and personal reputations.

Complaint Filed Over AI Hallucinations

According to recent reports, the privacy rights organization NOYB (None of Your Business) has lodged a legal complaint against OpenAI with European data protection authorities. The complaint follows an incident where ChatGPT allegedly invented a fictional narrative falsely accusing a real person of child murder. The AI model, which generates text based on vast amounts of data, sometimes produces “hallucinations,” or completely fabricated information that appears factual.

Privacy experts argue that these AI-generated hallucinations highlight serious flaws in OpenAI’s model, particularly concerning the reliability of chatbot responses. NOYB contends that OpenAI has failed to implement adequate safeguards to prevent such harmful misinformation from being disseminated.


Legal and Ethical Implications

The incident has reignited debates around AI regulation, misinformation, and privacy. Under the European Union's General Data Protection Regulation (GDPR), individuals have the right to request correction or deletion of inaccurate personal data. However, enforcing such rights against generative AI models remains a challenge, as these systems do not "store" information in the same way as traditional databases; false statements emerge from statistical patterns in the model's weights rather than from a retrievable record that can simply be edited or erased.

Legal experts warn that this case could set a precedent for AI accountability. If OpenAI is found liable for defamatory content generated by ChatGPT, it could lead to stricter AI regulations worldwide, affecting tech companies developing large language models.

Growing Concerns Over AI Misinformation

This is not the first time AI-generated content has caused controversy. Several instances of ChatGPT fabricating events, misquoting sources, or producing biased content have raised concerns among lawmakers, researchers, and businesses. While OpenAI has taken steps to minimize inaccuracies, AI’s inherent unpredictability remains a pressing issue.

The risk of misinformation becomes particularly dangerous when it involves false accusations, medical advice, or financial recommendations. Experts suggest that AI companies must prioritize transparency, accountability, and stricter safeguards to prevent harm.

OpenAI’s Response and Future Challenges

In response to the growing backlash, OpenAI has reaffirmed its commitment to improving ChatGPT’s accuracy and mitigating the risks associated with AI hallucinations. The company has continuously updated its model with reinforcement learning techniques, fine-tuning it to reduce misinformation. However, completely eliminating hallucinations remains a difficult challenge in AI development.

As governments worldwide push for stricter AI regulations, OpenAI and other tech giants face mounting pressure to ensure their AI systems operate within legal and ethical boundaries. The outcome of this privacy complaint could influence how AI-generated content is regulated in the future, shaping the next phase of artificial intelligence governance.


Elena Morales is Lead Editor at Gloobeam.com, bringing over a decade of experience in journalism, editorial leadership, and global news coverage. With a background in political analysis and investigative reporting, Elena has worked for top-tier media outlets across North America and Europe. Her expertise spans politics, law, and business, making her a key figure in shaping Gloobeam's commitment to delivering accurate, timely, and insightful news. Known for her sharp editorial eye and dedication to unbiased reporting, Elena leads a team of journalists focused on bringing the world's most important stories to the forefront. Outside of work, she's passionate about travel, photography, and advocating for press freedom.
