Technology
07 April 2025

Users Warned About Privacy Risks With ChatGPT

Recent guidance highlights potential data collection practices and privacy concerns for AI users.

In a world increasingly dominated by artificial intelligence, concerns about user privacy are rising. A recent statement from the Organización de Consumidores y Usuarios de España (OCU) has shed light on the privacy implications of using ChatGPT, a popular AI chatbot. The OCU warns that many versions of ChatGPT are capable of recording all information shared in chats, storing every consultation and request made by users.

The free version of ChatGPT, along with its Plus and Pro iterations, utilizes user-shared information to train its AI models. This includes not only conversations but also any files shared during interactions. While OpenAI, the creator of ChatGPT, asserts that this data is not used for commercial purposes, the OCU recommends users disable this data-sharing option to enhance their privacy.

Only the Team and Enterprise versions of ChatGPT guarantee data privacy by default, according to the OCU's statement. To disable the data-sharing function in other versions, users can follow a simple process: log in to their account, click their profile picture, open 'Settings,' select 'Data Controls,' and turn off the option labeled 'Improve the model for everyone.'

However, even after this option is disabled, OpenAI will still collect basic personal data, technical device information, usage patterns, and session metadata for operational, security, and legal compliance reasons. This means that while users may take steps to protect their conversations, a digital footprint remains.

Moreover, the OCU emphasizes that conversations with ChatGPT are not as private as they might seem. OpenAI states that chats are stored to improve model performance, but this practice means that every message sent may be recorded. This raises significant questions about what users should and should not share while interacting with AI.

To protect their privacy, users are advised to avoid sharing sensitive information. This includes personal identifiers such as full names, addresses, phone numbers, email addresses, and official documents. Financial information like credit card numbers, bank account details, and access credentials should also remain confidential. Additionally, users should refrain from sharing medical data, confidential legal information, or any details pertaining to third parties, especially minors.

Interestingly, the way users phrase their queries can also impact their privacy. Rather than sharing personal stories that could reveal identifiable details, users are encouraged to frame their questions more generally. For instance, instead of saying, 'My daughter Ana, who is 10 years old, has autism and struggles with socializing,' one might ask, 'How can I help a child with autism improve their social skills?' This not only protects personal privacy but also allows for more useful responses without compromising sensitive information.

The latest updates to ChatGPT have prompted many users to upload personal photos and transform them into various animation styles, such as those seen in Pixar or Ghibli films. While this can lead to creative outputs, it also raises concerns about the biometric data involved: an uploaded face is biometric information that could be used to train AI models or to derive facial patterns without the user's consent.

Users are further cautioned against sharing personal identifiers such as national identification numbers (DNI), dates of birth, or places of residence. Such information could make it easier for unauthorized entities to track and identify them, compromising their privacy and security.

Similarly, one should avoid entering any banking-related information, including credit card numbers, account details, access codes, or records of financial transactions. Sharing this type of information, even inadvertently, could make it available to cybercriminals who target online tools.

The use of AI tools in the workplace has become common, with many users leveraging these technologies to streamline various tasks. However, sharing internal business documents or client data can result in the leakage of vital company information, posing significant risks to businesses.

Lastly, while AI technologies like ChatGPT can assist with gathering general health information, they should not be relied upon for medical consultations. The judgment of a qualified professional remains irreplaceable, and substituting AI for that judgment could lead to harmful consequences.

As artificial intelligence continues to evolve, the balance between utilizing its capabilities and safeguarding personal privacy remains a pressing concern. Users must remain vigilant about the information they share and take proactive steps to protect their data.

In summary, while ChatGPT and similar AI technologies offer exciting possibilities, users must navigate these tools with caution. By understanding the privacy implications and following best practices for data sharing, individuals can better protect their personal information in an increasingly connected world.