On April 16, 2025, Norton issued a warning regarding the privacy risks associated with using artificial intelligence (AI) to generate images in the style of Studio Ghibli. As AI technology advances, its ability to produce highly sophisticated content has captivated users worldwide. However, hidden risks are emerging that many may overlook, primarily the inadvertent exposure of personal information.
Experts from Norton, including Iskander Sanchez-Rola, the Director of AI and Innovation, caution that while these AI-generated images can be personalized and entertaining, they also pose significant risks to user privacy. For instance, an image styled like a Ghibli character featuring a child in front of a school could unintentionally reveal the name of that school—an identifiable detail that could be visible to many online.
Historically, earlier AI models struggled to accurately process text within images, often distorting or obscuring written content. However, recent advancements have enabled these systems to reproduce text with high fidelity, making sensitive information such as school names, street signs, and logos far more visible.
While the entertainment value of AI-generated images is undeniable, users must remain aware of the potential risks involved in sharing sensitive information through these tools. As AI technologies become increasingly integrated into content creation, it's crucial for consumers to understand how their personal data might be used or exposed.
In his remarks, Sanchez-Rola emphasized the importance of vigilance when using AI tools. "If you were once cautious about posting certain photos for privacy reasons, you should be even more careful now," he stated. He outlined several key areas of concern that users should consider:
- Enhanced Natural Language Processing: If the chat history option is not disabled, interactions with AI models may be automatically saved and used to improve the system, potentially being stored indefinitely.
- Sharing with Third Parties: Some platforms may share user data with a "select group of trusted service providers," which, while not selling user data, still raises concerns about the potential for your information to be transferred to unknown companies.
- Data Storage and Retention: Data is often "de-identified" to anonymize information, but these records are securely stored, adhering to local data protection regulations.
The risks associated with using AI tools continue to evolve, with cybercriminals now able to exploit these technologies to expose personal data, creating realistic phishing attacks and sophisticated malware. Such developments highlight the necessity for users to be proactive in protecting their information.
To help users navigate these risks, Sanchez-Rola offers several best practices:
- Avoid Sharing Sensitive Information: Users should refrain from sharing private or confidential details when interacting with AI tools.
- Review Privacy Policies: Always check how your information is treated and stored on AI platforms.
- Use Strong Passwords: Ensure that your accounts are secured with robust, unique passwords.
- Stay Informed: Keep up with the latest trends in AI and cybersecurity to avoid scams or threats.
- Utilize Security Software: A reliable cybersecurity solution, such as Norton 360 Deluxe, can help protect devices from malware threats.
As AI tools hold the potential to transform the digital landscape, users are urged to employ them cautiously. Protecting one’s privacy when using these technologies is paramount to ensuring a safe online experience.
In a related development, on the same day, Anthropic announced significant enhancements to its AI, Claude, aimed at improving productivity for business users within Google Workspace. These updates, introduced on March 26, 2025, include a new "Ricerca" (Search) function that allows Claude to conduct multiple searches and provide comprehensive answers with appropriate citations, similar to ChatGPT's Deep Research.
The integration of Claude with Gmail and Calendar enriches the contextual awareness of the AI concerning user work and scheduling. This feature enables users to retrieve meeting notes from past sessions and identify relevant documents for additional information.
Furthermore, Claude enhances transparency by providing in-line citations, allowing users to verify the sources of the information presented. For Claude Enterprise administrators, an option to enable cataloging is now available, improving the quality and accuracy of information retrieval through Retrieval Augmented Generation (RAG) techniques.
The search functionality is currently in beta for Max, Team, and Enterprise plans in the United States, Japan, and Brazil. The beta version of the Google Workspace integration is accessible to all paid Claude users, although administrators must enable it for their domains before users can connect their accounts.
Additionally, Claude's web search functionality, initially launched in the US, has now expanded to users in Brazil and Japan, allowing a wider range of users to benefit from its capabilities, especially for organizations relying on Google Workspace.
On April 15, 2025, Trend Micro announced its Gold sponsorship of the OWASP Top 10 for LLM and Gen AI Project, a significant initiative aimed at addressing emerging AI security risks. This partnership underscores Trend Micro's commitment to advancing AI security, ensuring a secure foundation for the transformative power of AI.
As generative AI continues to reshape industries at an unprecedented pace, securing these powerful systems is essential for responsible innovation. The OWASP Top 10 for LLM and Gen AI Project, launched in May 2023, seeks to tackle urgent concerns regarding adversarial attacks, data leakage, prompt injection, and governance risks in generative AI applications.
The growth of this initiative—from a small group of security professionals and AI researchers to over 600 contributing experts from more than 18 countries—highlights its critical importance. Trend Micro's sponsorship aids in sustaining momentum for research and development of essential security frameworks.
For Trend Micro, this partnership is strategically valuable as it aligns with the Trend Vision One roadmap, which prioritizes addressing the OWASP Top 10 for LLM and Gen AI vulnerabilities. This ensures that their platform remains compliant with the latest security standards.
In conclusion, as AI technologies evolve, the importance of safeguarding personal privacy and enhancing security measures cannot be overstated. Users must remain vigilant and informed to navigate the complexities of AI safely.