On January 21, 2026, two new reports shed light on a growing tension in the digital world: while technology is making shopping and work more convenient, it’s also raising big questions—and even bigger worries—about privacy. According to the Capgemini Research Institute’s annual global consumer trends report, about two-thirds of consumers say technology has improved their shopping experience. But there’s a catch: many of those same people feel that over-personalization and tracking have crossed the line from helpful to invasive.
It’s a classic modern dilemma. On the one hand, shoppers love the convenience of tailored recommendations and digital assistants that remember their preferences. On the other, they worry about just how much companies know about them—and what’s being done with all that personal data. Seven in ten consumers told Capgemini they’re concerned that their information could be used for hyper-personalized content, making them feel exposed. And just over half said they’d be willing to switch retailers if it meant getting stronger privacy protections.
These findings echo a well-known phenomenon dubbed the "privacy paradox": most people say they care about privacy but still end up sharing their data, often because they feel they have no real choice, according to a 2023 study by Google and Carnegie Mellon University. It's a trade-off: convenience and personalization versus privacy risk. How people resolve it depends on context, incentives, social norms, and perceived benefits, as a 2025 study by researchers at Princeton University, the University of Oxford, and Luohan Academy found.
But the story doesn’t end with consumer preferences. The rise of artificial intelligence—especially generative AI—has added new layers of complexity. More than three-quarters of consumers want to set boundaries for digital assistants, Capgemini found. And two-thirds say they trust AI more when it explains why it’s making certain recommendations or taking specific actions. Transparency, it seems, is key to winning trust.
Yet, trust is in short supply. Seventy-one percent of consumers worry about how generative AI is using their personal information. And two-thirds expect brands to clearly disclose when advertising is generated by AI. As Nikos Bartzoulianos, group chief marketing officer for Electrolux Group, put it in the Capgemini report, the most important responsibility for businesses is "respecting and protecting users’ privacy and sensitive data." Companies that fail to do so risk losing customers to competitors who take privacy more seriously.
For businesses, these findings aren’t just theoretical. They have real-world implications for how companies design their digital experiences and manage customer relationships. According to DataGuard, brands can’t assume that what people say about privacy matches how they actually behave. Instead, the focus should be on earning trust through data transparency, informed consent, and robust security policies that reduce friction and uncertainty. It’s about making privacy a core part of the customer experience, not an afterthought.
But while consumers are wrestling with privacy choices, businesses face their own set of challenges, especially as they adopt new AI tools and systems. MCP (Model Context Protocol) servers and AI browser plug-ins have become common in the workplace, connecting AI models to everything from databases to ticketing systems and shuttling data between models and the applications they touch. These tools bring significant risks, however, if they are not properly configured and secured.
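For readers unfamiliar with the plumbing, here is a minimal sketch of such a connector, written against the FastMCP interface from the official MCP Python SDK; the tool name, database, and query are invented for illustration, and the exact SDK surface should be treated as an assumption that may shift between versions.

```python
# Minimal MCP server sketch: exposes a read-only ticket lookup as a tool
# an AI model can call. The database and schema are hypothetical.
import sqlite3

from mcp.server.fastmcp import FastMCP  # official MCP Python SDK

mcp = FastMCP("ticketing-connector")

@mcp.tool()
def lookup_ticket(ticket_id: int) -> str:
    """Return the status of a support ticket by its numeric ID."""
    conn = sqlite3.connect("tickets.db")  # hypothetical internal database
    try:
        row = conn.execute(
            "SELECT status FROM tickets WHERE id = ?", (ticket_id,)
        ).fetchone()
    finally:
        conn.close()
    return row[0] if row else "not found"

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```

Innocuous as it looks, a connector like this sits directly between a language model and internal systems, which is exactly why configuration matters.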
A recent report from LayerX, also released on January 21, 2026, revealed that 20% of employees use AI browser extensions at work, and a whopping 58% of those extensions have high or critical permissions enabled. That’s a big red flag for data security. According to LayerX, 32% of data leaks happen because of session-memory leaks, auto-prompting to third-party models, and the mixing of cookies or identities. In plain English: if these tools aren’t set up correctly, sensitive information can slip through the cracks—sometimes without anyone noticing until it’s too late.
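To picture what "high or critical permissions" looks like on the ground, here is a hypothetical audit sketch (an illustration, not tooling from the LayerX report): it scans locally installed Chrome extension manifests and flags permissions commonly treated as high-risk, such as access to all URLs, cookies, or the debugger interface.

```python
# Hypothetical audit sketch: flag locally installed Chrome extensions whose
# manifests request permissions commonly rated high- or critical-risk.
import json
from pathlib import Path

# Permissions that grant broad access to page content, cookies, or network
# traffic (an illustrative list, not an official risk taxonomy).
HIGH_RISK = {"<all_urls>", "tabs", "cookies", "webRequest",
             "history", "clipboardRead", "debugger"}

# Default extension directory for Chrome on Linux; adjust per OS and profile.
EXT_DIR = Path.home() / ".config/google-chrome/Default/Extensions"

for manifest_path in EXT_DIR.glob("*/*/manifest.json"):
    manifest = json.loads(manifest_path.read_text(encoding="utf-8"))
    requested = set(manifest.get("permissions", []))
    requested |= set(manifest.get("host_permissions", []))  # Manifest V3
    risky = requested & HIGH_RISK
    if risky:
        name = manifest.get("name", manifest_path.parent.parent.name)
        print(f"{name}: high-risk permissions {sorted(risky)}")
```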
Andras Cser, VP and Principal Analyst for Security and Risk Management at Forrester, weighed in on the growing problem. "AI Agents in browsers present an increasing challenge for identity security and fraud management professionals," he said. Since these agents act differently from both humans and traditional machine identities, they need specialized risk assessments. The challenge is not just about authenticating and authorizing agents, but also about managing and protecting the information that flows between eCommerce portals, other sites, and the human users of AI agents. As Cser put it, "these are new challenges that need productized solutions."
The technical details are a bit wonky, but here's the gist: MCP servers connect AI models to the tools and data they need, and AI browsers and extensions move that data around. If authentication and configuration aren't handled carefully, an MCP server can't tell its users apart and ends up granting everyone the same access privileges, often those of a single powerful service account. That opens the door to data exfiltration, where sensitive data is pulled out of secure systems, and lets low-privilege users (or malicious actors) reach high-value information. This is the classic "confused-deputy" problem: a privileged intermediary is induced to exercise its authority on behalf of a caller who shouldn't have it, so data gets exposed or systems get altered without proper oversight.
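The pattern is easier to see in code. Below is a deliberately simplified, self-contained sketch (every name in it is hypothetical, not drawn from any real MCP server): the unsafe handler answers any caller with the server's own blanket authority, while the scoped handler resolves the real user from a token and enforces only that user's privileges.

```python
# Sketch of the confused-deputy risk in an MCP-style tool handler.
# Everything here is hypothetical; the point is the authorization pattern.

RECORDS = {"rec-1": {"owner": "alice", "body": "salary data"}}
TOKENS = {"tok-alice": "alice", "tok-bob": "bob"}  # stand-in auth store

def fetch_record_unsafe(record_id: str) -> dict:
    # Confused deputy: every caller is served with the server's own
    # blanket authority, so a low-privilege user gets high-value data.
    return RECORDS[record_id]  # no identity check at all

def fetch_record_scoped(record_id: str, user_token: str) -> dict:
    # Safer: resolve the actual caller, then enforce their privileges.
    user = TOKENS.get(user_token)
    if user is None:
        raise PermissionError("unknown caller")
    record = RECORDS[record_id]
    if record["owner"] != user:
        raise PermissionError(f"{user} may not read {record_id}")
    return record

print(fetch_record_unsafe("rec-1"))              # leaks to any caller
print(fetch_record_scoped("rec-1", "tok-alice")) # allowed: alice owns it
# fetch_record_scoped("rec-1", "tok-bob") would raise PermissionError
```

The design point is that the server must carry the caller's identity through to every downstream query, rather than substituting one shared credential for all of them.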
What’s worse, a breach of either an MCP server or an AI browser doesn’t just affect one part of the system. It can create a wide, unmonitored path between user desktops, cloud applications, and back-end databases. Essentially, once the wall is breached, attackers—or even just careless users—can move data freely between different parts of an organization, making containment and recovery much harder.
These risks aren’t just theoretical. The LayerX report found that AI browsers can make more sensitive company and customer data available to AI models, while MCP servers provide deeper access to internal systems and databases. A breach of one could easily lead to a breach of the other, amplifying the impact. As more companies embrace AI-powered tools, the need for strict security protocols and user-specific configurations becomes even more urgent.
For consumers and businesses alike, the way forward is anything but simple. Consumers want the benefits of smart technology—convenience, personalization, and efficiency—but not at the cost of their privacy. Businesses, meanwhile, must balance the drive to innovate with the responsibility to protect customer data and maintain trust. The stakes are high: just over half of consumers say they’d switch retailers for better privacy, and brands that fail to adapt could find themselves losing both customers and credibility.
Ultimately, the digital world is being reshaped by powerful new tools and shifting expectations. Transparency, informed consent, and robust security are no longer optional—they’re essential. Companies that can navigate this tricky landscape, respecting both the promise of technology and the rights of their users, will be the ones who come out ahead.