Privacy Complaint Targets OpenAI Over Potential EU Law Breach

OpenAI, a prominent artificial intelligence (AI) developer, is facing a privacy complaint from Noyb, an Austrian data rights advocacy group. The complaint, filed on April 29, alleges that OpenAI’s chatbot, ChatGPT, provided false information about a person and that OpenAI failed to correct it — an inaction that Noyb claims may violate EU privacy regulations. The complainant, an unidentified public figure, asked ChatGPT for information about themselves and received consistently incorrect responses. OpenAI reportedly refused to rectify or delete the data, claiming this was not possible, and declined to disclose information about its training data or their sources.

Maartje de Graaf, a data protection lawyer at Noyb, emphasized that if a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals, and that technology must adhere to legal requirements rather than the other way around. Noyb has filed the complaint with the Austrian data protection authority, urging it to investigate how OpenAI processes personal data and ensures its accuracy in its large language models (LLMs). De Graaf added that companies currently seem unable to make chatbots like ChatGPT comply with EU law when processing data about individuals.

Noyb, also known as the European Center for Digital Rights, is an organization based in Vienna, Austria, whose objective is to enforce European General Data Protection Regulation (GDPR) law through strategic court cases and media initiatives. This is not the first time chatbots have drawn criticism in Europe. In December 2023, two European nonprofit organizations published a study revealing that Microsoft’s Bing AI chatbot, since renamed Copilot, provided misleading or inaccurate information about local and political elections in Germany and Switzerland: it supplied incorrect answers, misquoted its sources, and gave inaccurate information about candidates, polls, and scandals. Google’s Gemini AI chatbot also reportedly generated historically inaccurate imagery, although that incident did not occur specifically within the EU; Google apologized and committed to updating its model in response.

The regulatory and ethical challenges surrounding AI chatbots are drawing growing attention as the tools become more prevalent in daily life. The complaint against OpenAI underscores the need for stricter enforcement of privacy regulations and for accuracy in AI-generated data. Organizations like Noyb are actively working to ensure compliance with data protection laws, emphasizing transparency and accountability in AI systems, and the outcome of this complaint could shape ongoing discussions and potential legal measures aimed at regulating AI technologies.

Dedra Mulligan
