At the bustling halls of CES 2026 in Las Vegas, the future of artificial intelligence (AI) was the talk of the town. Yet, amid dazzling product launches and bold predictions, one question lingered: can consumers truly trust the AI systems increasingly woven into their daily lives? Samsung Electronics, in its Tech Forum series, brought this question to the forefront with a panel titled "In Tech We Trust? Rethinking Security & Privacy in the AI Age." The session, held at The Wynn, gathered global experts to dissect the evolving relationship between trust, privacy, and the rapid spread of AI technologies.
The panel featured a heavyweight lineup: Allie K. Miller, CEO of Open Machine; Amy Webb, CEO of the Future Today Strategy Group; Zack Kass, Global AI Advisor at ZKAI Advisory and former Head of Go-To-Market at OpenAI; and Shin Baik, AI Platform Centre Group Head at Samsung Electronics. Each brought a unique perspective, but all agreed on one thing—trust is no longer a nice-to-have in AI; it's a must.
According to Samsung, trust must be built into AI systems from the ground up. Their approach, dubbed "trust-by-design," emphasizes transparency, user control, and security as the pillars of trustworthy AI. As Allie Miller put it, "When it comes to AI, users are looking for transparency and control. They want to be drivers of their own personalised experiences — to understand whether an AI model is running locally or in the cloud, to know their data is secure and to clearly see what is powered by AI and what is not. That level of visibility builds confidence." (Samsung Newsroom)
This philosophy is more than talk. Samsung showcased how its on-device AI keeps personal data local whenever possible, only tapping into the cloud for greater speed or scale when necessary. This hybrid approach, they argue, gives users flexibility without sacrificing privacy. The company also highlighted its Knox security platform, a system now embedded in billions of devices, and Knox Matrix, which enables products to authenticate and protect one another in a cross-device security framework. Shin Baik explained, "Trust in AI starts with security that’s proven, not promised. For more than a decade, Samsung Knox has provided a deeply embedded security platform designed to protect sensitive data at every layer. But trust goes beyond a single device — it requires an ecosystem that protects itself." (Samsung Newsroom)
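To make the hybrid idea concrete, here is a minimal sketch of how a routing policy of the kind described above might decide between local and cloud execution. Every name in it (InferenceRequest, routeInference, the userAllowsCloud flag) is a hypothetical illustration, not a Samsung API.

```typescript
// Hypothetical sketch of a hybrid on-device / cloud routing policy.
// None of these names correspond to Samsung APIs; they illustrate the
// "keep personal data local, escalate to the cloud only when needed" idea.

type ExecutionTarget = "on-device" | "cloud";

interface InferenceRequest {
  prompt: string;
  containsPersonalData: boolean; // e.g. messages, photos, health or location data
  requiresLargeModel: boolean;   // task exceeds what the local model handles well
  userAllowsCloud: boolean;      // an explicit, user-visible setting
}

/** Prefer local execution; escalate to the cloud only with user consent. */
function routeInference(req: InferenceRequest): ExecutionTarget {
  // Personal data stays on the device unless the user has opted in to cloud use.
  if (req.containsPersonalData && !req.userAllowsCloud) {
    return "on-device";
  }
  // Use the cloud only when the task genuinely needs more scale or speed.
  if (req.requiresLargeModel && req.userAllowsCloud) {
    return "cloud";
  }
  return "on-device";
}

// Example: a summarisation request over private messages, with cloud use disabled.
console.log(
  routeInference({
    prompt: "Summarise my unread messages",
    containsPersonalData: true,
    requiresLargeModel: true,
    userAllowsCloud: false,
  })
); // -> "on-device"
```

The point of such a policy is exactly the visibility Miller described: the user-facing setting, not an opaque heuristic, determines whether data ever leaves the device.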
Yet, as AI becomes more invisible—anticipating needs, curating routines, and operating autonomously—earning user trust is anything but simple. The panelists agreed that predictability, transparency, and meaningful user choice are essential. Users want visible signals of control rather than mysterious "black box" systems. Partnerships with industry giants like Google and Microsoft, Samsung argued, are also key for strengthening shared security research and interoperability across the AI ecosystem.
But what happens when these ideals are not met? The risks are real and immediate, as a recent DW report highlighted. On January 9, 2026, it was revealed that some ChatGPT conversations containing personal information had appeared in Google search results, exposing user data to the public. The incident was a sobering reminder that even the most advanced platforms can inadvertently compromise user trust if privacy is not prioritized at every step. The report detailed how the conversations became publicly accessible, explained the privacy risks involved, and offered guidance for users to check whether they were affected. (DW)
This breach is not an isolated event but part of a broader pattern. As AI platforms proliferate, so do concerns about data privacy and the security of personal information. The incident with ChatGPT serves as a cautionary tale for the entire industry, reinforcing the urgency of robust privacy protections and transparent data handling practices.
Meanwhile, 2026 is shaping up to be a landmark year for data privacy regulation, as outlined in a comprehensive analysis by Osano. Children’s privacy and safety have become a primary focus following the sweeping age assurance requirements introduced in 2025 in jurisdictions such as the UK and Australia. In the UK, the Online Safety Act’s controversial age verification provisions took effect in the summer of 2025, while Australia imposed a blanket social media ban for under-16s, forcing platforms to verify users’ ages. In the U.S., California’s CCPA amendments, effective from the start of 2026, now classify the data of consumers under 16 as sensitive personal information. These changes interact with California’s Digital Age Assurance Act, which comes into force in 2027 and under which the age signals app developers receive count as "actual knowledge" of a user’s age.
But this push for greater protection comes with its own risks. As Osano points out, "Protecting children online is all well and good, but it requires businesses to know who is a child and who isn’t – that requires the collection of additional information (often sensitive information like biometric data). This violates the principle of data minimization and increases the risk of data privacy incidents." (Osano) In regions where age verification falls on individual service providers, the likelihood of over-collection and under-secured data rises, setting the stage for potential breaches involving children’s data in 2026.
Consent management is also in flux. With "consent fatigue" plaguing users, there’s a growing movement toward browser- and device-level privacy preference signals, such as the Global Privacy Control (GPC). California’s Opt Me Out Act, effective January 1, 2027, will require browsers to natively support universal opt-out mechanisms. Most comprehensive privacy laws across the U.S. now require businesses to honor such signals, with only a handful of states exempt. The EU is considering similar measures, as the Digital Omnibus package proposes amendments to require universal preference signals under GDPR.
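For readers curious what honoring such a signal looks like in practice, here is a minimal sketch of a server-side check for the Global Privacy Control header. The GPC proposal defines the "Sec-GPC: 1" request header and a navigator.globalPrivacyControl property in the browser; the helper names below (hasGpcSignal, allowThirdPartyTracking) are hypothetical and framework-agnostic.

```typescript
// Minimal sketch: detecting and honoring the Global Privacy Control (GPC) signal.
// The GPC proposal defines the "Sec-GPC: 1" request header; the function names
// here are illustrative only.

type RequestHeaders = Record<string, string | undefined>;

/** True when the request carries a GPC opt-out signal (header value "1"). */
function hasGpcSignal(headers: RequestHeaders): boolean {
  return headers["sec-gpc"] === "1";
}

/** Decide whether third-party tracking tags may be loaded for this request. */
function allowThirdPartyTracking(headers: RequestHeaders): boolean {
  if (hasGpcSignal(headers)) {
    // Treat GPC as a legally significant opt-out (e.g. CCPA "Do Not Sell or Share")
    // and suppress tracking before the page is rendered.
    return false;
  }
  // Otherwise fall back to the user's recorded consent preferences (not shown).
  return true;
}

// Example: an incoming request with the GPC header set.
console.log(allowThirdPartyTracking({ "sec-gpc": "1" })); // -> false

// Browser-side equivalent (runs in the page, not on the server):
// if ((navigator as any).globalPrivacyControl) { /* suppress tracking beacons */ }
```

The appeal of signals like GPC is precisely that they shift the burden away from per-site consent banners: the check above runs once per request, with no user interaction at all.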
Regulators are no longer satisfied with superficial compliance. In late 2025, Tractor Supply was fined $1.35 million under the CCPA for a non-functional opt-out webform that failed to stop third-party tracking, leaving consumers with the false impression their data was protected. As Osano noted, "Regulators now expect businesses to have seamless consent management. Enforcement actions increasingly focus on exceptions, edge cases, and 'privacy theater.'" (Osano) The lesson is clear: consent mechanisms that work in practice, not just on paper, are now a regulatory expectation rather than a bonus.
Legislators are also aiming to reduce the compliance burden without sacrificing protections. The EU’s Digital Omnibus package and the UK’s Data Use and Access Act are both advancing, with significant updates expected throughout 2026. These efforts seek to simplify regulations, provide new lawful bases for data processing, and relax certain requirements, all while maintaining robust privacy safeguards.
In the U.S., consumers are increasingly aware of—and willing to exercise—their privacy rights. The California Privacy Protection Agency reported that, of over 8,000 complaints received by late 2025, a significant majority involved requests to delete or limit the use of sensitive personal information. This trend is expected to accelerate, with businesses needing to document and efficiently process subject rights requests to avoid regulatory scrutiny.
As AI and privacy regulations evolve in tandem, the stakes for trust, security, and transparency have never been higher. The events of CES 2026, the ChatGPT data exposure, and the shifting regulatory landscape all point to a future where trust is not just desirable—it’s non-negotiable. The technologies and companies that prioritize user choice, clear communication, and genuine security will be the ones to earn and keep public confidence as AI becomes ever more invisible, yet ever more influential, in daily life.