Business
26 February 2025

AI Usage Soars Among Enterprises, Raising Data Security Concerns

With employees relying on public GenAI assistants, companies grapple with risks and compliance issues.

A recent survey conducted by TELUS Digital has revealed alarming insights about the use of generative AI (GenAI) tools by enterprise employees. According to the findings, nearly 70% of workers access these AI assistants, including popular platforms such as ChatGPT and Microsoft Copilot, through personal accounts—raising significant concerns over data security and compliance.

Perhaps most troubling is the fact that over half (57%) of employees admitted to entering sensitive information—such as personal and company data—into these public GenAI tools. This has fueled the spread of 'shadow AI', where employees' use of unauthorized tools creates hidden enterprise risks and leaves IT and security managers unaware of potential threats.

The survey, conducted among 1,000 adults working at large companies, revealed specifics about the types of data employees are entering. Some 31% reported sharing personal data, including names and contact information, 29% entered unreleased product details, and 21% admitted to inputting customer information. Financial data wasn’t immune either, with 11% of respondents disclosing confidential company financial details, like revenue and budgets.

This risky behavior occurs even though 29% of employees acknowledged company policies prohibiting the input of sensitive information. Yet, there seems to be little enforcement—42% said there are no repercussions for not adhering to these guidelines. Alarmingly, many companies are not providing their employees with adequate training or guidelines for using GenAI safely; only 24% indicated their companies require mandatory AI training, and 44% claimed their firms lack any AI policies.

Despite these issues, employees find GenAI beneficial for enhancing their productivity. A staggering 60% of respondents said using AI helps them work faster, and 84% expressed their desire to continue using such tools. The motivations range from increased creativity to the ability to handle repetitive tasks more efficiently, showcasing the undeniable impact AI has on workplace performance.

Bret Kinsella, General Manager of TELUS Digital's Fuel iX™ platform, emphasizes the productivity benefits of GenAI, noting, "Generative AI is proving to be a productivity superpower for hundreds of business tasks." He highlights the challenges presented by employees who choose to utilize personal AI tools when their companies fail to provide proper resources.

Further complicating matters is the inherent risk posed by the non-deterministic nature of AI models. Outputs can be unpredictable, and there is always the danger of employees prompting AI to generate inappropriate or harmful content. This unpredictability underlines the importance of maintaining regulatory compliance—especially as new legislation, such as the European Union's AI Act, requires organizations to maintain risk management systems and data governance frameworks.

Hesham Fahmy, Chief Information Officer at TELUS, shares insights on the responsibility to safeguard both employee and customer data. "Our commitment to secure and responsible AI meant we needed a solution...while always maintaining privacy and security guardrails to protect customer trust," he stated. The Fuel iX platform is purpose-built to offer organizations the flexibility and compliance needed to implement GenAI securely.

The TELUS survey reflects a broader tension at play across many enterprises: the desire to embrace AI's potential for operational efficiency set against the urgent need for secure protocols. Without effective risk management strategies, companies could find themselves exposed to compliance pitfalls and security vulnerabilities.

Interestingly, the survey also noted how employees are supplementing company-provided AI initiatives with personal tools. Even among those who have access to corporate GenAI assistants, over 22% also made use of personal accounts. This shadow usage presents additional risks: sensitive data can end up in unauthorized tools, and organizations lose visibility and control over how their information flows.

Given these insights, companies must take proactive measures to educate their workforce about the risks associated with GenAI. Implementing comprehensive training programs and clear guidelines can help employees navigate the complex AI terrain safely and responsibly. The consensus is clear: to fully reap the benefits of AI, organizations need to balance empowerment and security.

Moving forward, businesses aiming to integrate GenAI must weigh both the operational advantages and the necessary precautions. Fuel iX serves as one potential solution, offering tools to secure the use of AI within corporate boundaries. The challenge lies not only in deploying such technologies but also in ensuring compliance with rapidly changing regulations.

By fostering awareness and providing adequate tools for security, organizations can not only mitigate risks but also cultivate trust among employees and customers alike. The road to successful AI integration is fraught with challenges, but the potential rewards make it a venture worth pursuing.