A recent investigation by IMDEA Networks has uncovered significant privacy concerns associated with widely used AI chat systems, including ChatGPT, Claude, and Grok. The research highlights that these platforms may inadvertently expose private conversations to third parties through hidden trackers.
Researchers identified three major issues regarding user privacy:
- Direct links to conversations being shared with external services;
- The potential for conversations to be linked to real identities;
- Inconsistencies between stated privacy policies and actual practices.
Access controls on some platforms are notably weak: anyone who obtains a direct link to a chat, including advertising trackers, can view its content. Grok and Perplexity have been found to transmit conversation URLs to Meta Pixel trackers, and Grok may even expose the exact text of messages through Open Graph metadata collected by TikTok.
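To make the mechanism concrete, here is a minimal Python sketch of how any service that "unfurls" a shared-chat link to build a preview card can read whatever the page embeds in its Open Graph tags. The URL and page structure are hypothetical illustrations, not the study's methodology.

```python
# Sketch: a crawler fetching a shared-chat link and reading its Open Graph
# metadata. If the page puts message text in og:description, the crawler
# receives it. The URL below is hypothetical.
from html.parser import HTMLParser
import urllib.request


class OGParser(HTMLParser):
    """Collects Open Graph <meta> tags (og:title, og:description, ...)."""

    def __init__(self):
        super().__init__()
        self.og = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        prop = attrs.get("property", "")
        if prop.startswith("og:"):
            self.og[prop] = attrs.get("content", "")


# Hypothetical shared-conversation link; any crawler that previews it
# gets the same metadata a human visitor's browser would.
url = "https://chat.example.com/share/abc123"
with urllib.request.urlopen(url) as resp:
    html = resp.read().decode("utf-8", errors="replace")

parser = OGParser()
parser.feed(html)
print(parser.og.get("og:description", "<no og:description found>"))
```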
Study co-author Guillermo Suarez-Tangil noted that these practices extend long-standing data-collection business models into the realm of generative AI.
The combination of cookies, hashed email addresses, and server-side tracking methods enables advertising companies to identify users. Even if individuals never type their names into a chat, trackers can correlate their activity on AI platforms with broader digital profiles.
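As an illustration of that matching step, the following brief sketch shows how a hashed email can join a chat-platform tracking event to an existing advertising profile. The names, identifiers, and data here are purely illustrative assumptions, not values from the study.

```python
# Sketch of identity matching via hashed emails, a common server-side
# ad-tracking technique. All data below is made up for illustration.
import hashlib


def hashed_email(email: str) -> str:
    """Normalize (trim, lowercase) and SHA-256 hash an email address."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()


# The ad platform already holds this hash from, say, a shopping-site login.
known_profiles = {hashed_email("jane.doe@example.com"): "profile-4711"}

# If a chat platform sends the same hash alongside a tracking event,
# the two activities merge into one profile, no name ever typed.
event = {"page": "/chat/share/abc123", "em": hashed_email("Jane.Doe@example.com ")}
print(known_profiles.get(event["em"]))  # -> "profile-4711"
```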
“Most users have no way of knowing this is happening; there’s nothing visible in the interface to alert them,” said researcher Aniket Girish. “Opting out of non-essential cookies helps in some cases, but our study shows this is not always sufficient.”
The findings also reveal that the privacy controls shown in user interfaces can be misleading. While privacy policies acknowledge data sharing with “business partners,” they often fail to state explicitly that the conversations themselves are part of that data.
Lawyer Jorge García Herrero, who advised the research team, likened this situation to common disclaimers regarding AI errors. “Warnings that our most sensitive information could end up in the advertising industry deserve the same level of attention as the ubiquitous disclaimer, ‘AI may make mistakes, please verify responses,’” he stated.
While the authors of the study describe their findings as preliminary, they emphasize the urgent need for enhanced regulatory measures for AI, particularly within the framework of GDPR. Until platforms adopt systemic changes, users are advised to exercise caution: avoid entering personal information, sharing passwords, or disclosing financial details, even when interacting with seemingly “friendly” chatbots that claim to ensure complete privacy.
