The rapid advancement of technology has transformed society, bringing both remarkable benefits and significant vulnerabilities. Among the most notable innovations is ChatGPT, an application developed by OpenAI that has gained widespread popularity. However, as users embrace this tool, they must also grapple with the implications of data privacy and security.
ChatGPT's latest update allows users to replicate the artistic style of Hayao Miyazaki, a feature that has quickly gone viral on social media. To create images in the Ghibli style, users simply upload a photo to ChatGPT and specify their request. While this feature has delighted many, it has also raised concerns about the potential risks associated with sharing personal images.
ESET Latinoamérica warns that participating in this trend can lead to fraud or identity theft if the technology is compromised or if users accept unfavorable privacy conditions. Fabiana Ramírez, a security researcher at ESET, emphasizes that biometric data, which can include facial recognition and other personal identifiers, is increasingly used for various purposes, from access control to banking authentication.
"A cybercriminal could impersonate individuals using stolen biometric data," Ramírez stated, highlighting the serious repercussions of compromised personal information. Financial data, such as credit card information, can fetch between $8 and $22 on the dark web, while social media accounts, like Facebook, can sell for around €3.40, making them lucrative targets for hackers.
In May 2024, the Australian company Outabox suffered a significant data breach that exposed sensitive biometric data from systems used in bars and clubs across Australia. The breach revealed that data collected included facial recognition information, scans of driver’s licenses, signatures, club membership details, addresses, birth dates, and timestamps of visits to various establishments. This incident underscores the vulnerability of biometric information, which, unlike passwords, cannot be easily changed once compromised.
Daniel Arias, a delivery manager at Business IT, noted, "Biometric information cannot be changed like a password and is vulnerable to theft or leakage." The incident at Outabox serves as a stark reminder of the potential consequences of inadequate data security measures.
Furthermore, David Gonzales, a security researcher at ESET Latinoamérica, pointed out that generative AI models rely heavily on vast amounts of data, much of which contains personal, sensitive, and confidential information. As AI technology continues to evolve, the mass collection of data without informed consent poses significant challenges, particularly regarding privacy and security.
The Organization of Consumers and Users (OCU) has also raised alarms about ChatGPT, warning that the application stores user data obtained through conversations, documents, and photos. The OCU recommends that users disable ChatGPT's model-training setting to enhance their privacy. If left enabled, this option allows OpenAI to use shared information, even from paid plans, to train its AI models.
According to the OCU, the only paid versions of ChatGPT that ensure complete data privacy are the Team version ($25/month) and the Enterprise version, which has a negotiable cost. In contrast, the free version and the Plus ($20/month) and Pro ($200/month) versions permit the use of conversations and uploaded files to improve the AI's performance.
To disable the data-sharing option, users can follow a few simple steps: access their ChatGPT account, click on their profile picture, select 'Settings', then 'Data Controls', and turn off 'Improve the model for everyone'. Once this option is disabled, conversations will no longer be used to train the AI, according to the OCU.
However, even with this option disabled, OpenAI continues to collect personal information for operational, security, and legal reasons. This includes basic data such as names, email addresses, payment methods, and technical details like IP addresses.
The OCU also highlights the new European AI regulation that came into effect in February 2025, which prohibits manipulative practices based on user vulnerabilities, such as age, disability, or socio-economic status. The OCU urges the Agencia Española de Supervisión de Inteligencia Artificial (AESIA) and other authorities to bolster resources and inform consumers about their rights to ensure that tools like ChatGPT comply with privacy laws.
As technology continues to evolve, the importance of safeguarding personal information cannot be overstated. Users must remain vigilant about the data they share and take proactive steps to protect their privacy. This includes reviewing privacy policies, ensuring that shared information is legally protected, and using official applications to minimize risks.
With the rise of generative AI and its integration into everyday applications, the potential for misuse of personal data is a growing concern. The trend of creating Ghibli-style images through ChatGPT may be fun, but it serves as a reminder of the underlying risks associated with sharing personal information online.
In conclusion, as users navigate the exciting yet perilous landscape of new technologies, understanding the implications of their digital footprint is essential. By taking informed steps to protect their data, users can enjoy the benefits of innovation while safeguarding their privacy.