The rapid rise of generative artificial intelligence (AI) tools in professional settings is reshaping how businesses operate, but it’s also stirring significant concerns about data security and privacy. From Google Drive’s integration of AI-powered features to the emergence of offline AI applications like AI Edge Gallery, companies and users are grappling with the balance between innovation and protecting sensitive information.
A recent study by cybersecurity firm Harmonic Security revealed that 8.5% of employee queries to generative AI tools, including popular platforms like ChatGPT, Copilot, and Gemini, contain sensitive data such as client information, authentication credentials, and confidential internal documents. As adoption surges, employees who are often unaware of the risks inadvertently disclose critical information.
One key factor amplifying these risks is the widespread use of free versions of AI tools, which typically lack robust security features. These free platforms may even use the input data to further train their models, raising the stakes for companies sharing proprietary or personal data. Adding to the complexity, many specialized AI platforms have sprung up, often developed by small teams leveraging open-source models. While innovative, these platforms might not match the stringent security protocols of established enterprise solutions, creating potential vulnerabilities for users.
Amid these challenges, international standards like ISO/IEC 42001 have become increasingly important. Published in 2023 as the first international management system standard for AI, the framework emphasizes transparency, accountability, and data protection. It encourages organizations to adopt rigorous governance practices, including risk assessments, continuous monitoring, and stakeholder engagement, to safeguard their AI deployments.
To mitigate risks, companies are advised to train employees on best practices when inputting data into AI tools, restrict the use of unsecured free versions, and prioritize enterprise-grade solutions with strong security guarantees. Implementing real-time monitoring systems to detect possible data leaks and adopting AI governance standards like ISO/IEC 42001 can further reinforce defenses against inadvertent data exposure.
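As an illustration of the monitoring idea, a minimal pre-submission filter might scan each outgoing prompt against regular-expression patterns for common sensitive-data shapes before the text is allowed to leave the company. The sketch below is hypothetical; the patterns, names, and blocking policy are assumptions rather than any vendor's implementation, and real data-loss-prevention tooling uses far richer detection.

```python
import re

# Hypothetical patterns for common sensitive-data shapes; enterprise DLP
# tools use far more sophisticated detection than these toy expressions.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in the prompt."""
    return [name for name, rx in PATTERNS.items() if rx.search(prompt)]

prompt = "Summarize this: client jane@corp.com, card 4111 1111 1111 1111"
hits = scan_prompt(prompt)
if hits:
    print(f"Blocked before submission: matched {hits}")
else:
    print("Clean: forward to the AI provider")
```

In practice, such a check would sit at a network proxy or API gateway so it applies to every AI tool employees use, not just a single client.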
Meanwhile, Google Drive has been at the forefront of integrating AI functionalities to enhance user experience. Features such as video transcription, improved search within videos, and faster synchronization speeds have been rolled out, primarily targeting paying subscribers and professional users. Central to these innovations is Google’s Gemini AI, which helps summarize documents and videos, enabling users to quickly extract key points or draft emails based on document content.
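To make the summarization workflow concrete, the sketch below calls Google's public Gemini API through the google-generativeai Python SDK. This is only an illustration of Gemini-style document summarization, not how Drive integrates Gemini internally, and the model name and file path are assumptions.

```python
import google.generativeai as genai

# Assumes a valid API key; "gemini-1.5-flash" is an assumed model choice.
genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")

# Hypothetical local document standing in for a Drive file's contents.
with open("meeting_notes.txt", encoding="utf-8") as f:
    document_text = f.read()

response = model.generate_content(
    "Summarize the key points of this document in three bullets:\n\n"
    + document_text
)
print(response.text)
```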
However, these capabilities come with privacy concerns. For instance, Gemini’s ability to analyze meeting videos and generate summaries means it could access sensitive corporate information. Last year, Gemini faced accusations of analyzing Google Drive documents without explicit authorization, though Google clarified that it does not store data from these summaries. Still, such incidents have fueled skepticism and caution among users about entrusting highly sensitive or personal information to AI tools.
In a bid to address privacy issues, an experimental application called AI Edge Gallery has emerged, offering a novel approach to AI usage. Available now on Android via GitHub and planned for iOS, AI Edge Gallery allows large language models (LLMs) to run entirely offline on smartphones. This means user data never leaves the device, significantly enhancing privacy and control over personal information.
AI Edge Gallery is more than just an offline chatbot. It ships with four default models: three variants of Google’s Gemma-3 and one from Alibaba’s Qwen series, with the option to import additional models from platforms like Hugging Face. The app offers three main features: 'Ask Image,' which analyzes images locally without sending data elsewhere; 'Prompt Lab,' catering to creatives and developers with tools for text synthesis, rewriting, and code generation; and 'AI Chat,' a traditional chatbot experience that does not require an internet connection.
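To make the offline premise concrete, the sketch below runs a quantized model entirely on-device using llama-cpp-python as a stand-in runtime. AI Edge Gallery ships its own on-device inference stack, so this is not the app's code; the model file name is hypothetical and assumes the weights were downloaded once in advance.

```python
from llama_cpp import Llama

# Load a locally stored, quantized model; after this point no network
# connection is needed, so prompts and answers never leave the device.
llm = Llama(model_path="models/gemma-3-1b-it-q4.gguf", n_ctx=2048)

out = llm.create_chat_completion(
    messages=[{"role": "user",
               "content": "Explain on-device inference in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

Once the weights are on disk, prompts, images, and generated text all stay on the handset, which is the privacy property the app is built around.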
Despite its promise, AI Edge Gallery has limitations. The speed of responses depends heavily on the smartphone’s hardware, and the models vary in size from about 555 MB to over 4.4 GB, demanding significant storage space. Google has optimized these models for mobile use by reducing parameters and ensuring compatibility with ARM chips, but this sometimes results in less nuanced or creative outputs compared to cloud-based versions. Currently in alpha and targeting early adopters, the app lacks real-time voice interaction and is not yet widely available through official app stores.
The developments in AI tools—from cloud-based services like Google Drive’s Gemini to offline solutions like AI Edge Gallery—highlight a broader tension between harnessing AI’s transformative potential and safeguarding data privacy. As businesses increasingly rely on AI to streamline operations and boost productivity, the imperative to implement strong data governance frameworks and educate users becomes more urgent.
In the evolving landscape of AI, the question isn’t just about what these technologies can do, but how safely and responsibly they can be integrated into daily workflows. By adopting standards like ISO/IEC 42001, investing in secure AI platforms, and exploring innovative privacy-preserving applications, organizations can navigate the challenges ahead while unlocking the benefits of AI innovation.