On October 13, 2025, California Governor Gavin Newsom signed a slate of new laws aimed at regulating artificial intelligence (AI) and social media, as the state continues to walk a tightrope between technological innovation and public safety. The signing comes just weeks after a landmark UK-US Tech Prosperity Deal was unveiled in London, further highlighting the global race to shape the future of digital infrastructure and AI policy.
Newsom’s latest legislative push is a direct response to growing anxieties about the influence of AI and social platforms on young people and society at large. According to The San Francisco Chronicle, among the new laws is a measure regulating companion chatbots, ensuring they cannot discuss suicide with children or vulnerable individuals, or help anyone plan self-harm. "Today, California has ensured that a companion chatbot will not be able to speak to a child or vulnerable individual about suicide, nor will a chatbot be able to help a person to plan his or her own suicide," wrote Megan Garcia, whose son took his own life after interacting with a chatbot.
However, the path to these protections was anything but straightforward. Early versions of Senate Bill 243, authored by Senator Steve Padilla, would have imposed strict requirements: third-party audits of chatbots, mandatory reporting of suicide-related conversations, and outright bans on bots offering unpredictable rewards to boost engagement. Last-minute amendments watered the bill down, limiting its scope to users known to be children and stripping out some of the most stringent provisions. That shift led advocacy groups such as Common Sense Media to withdraw their support, with the group’s head, Jim Steyer, lamenting, "(SB)243 really was an example where the tech industry’s massive lobbying effort was successful." Steyer expressed concern that Californians might be lulled into a false sense of security, believing the law does more than it actually does.
Newsom’s signature also landed on Assembly Bill 1043, authored by Assembly Member Buffy Wicks, which mandates digital age verification for apps downloaded to devices set up for children. Wicks, herself a parent, emphasized the urgency: "California’s children are growing up with access to an online world that was not built with them in mind, and I know this because I have a 4 and 8 year old and I see it every single day." The law compels app developers to check age information provided during device setup, closing a loophole that previously allowed companies to claim ignorance of a user’s age to sidestep child protection rules. Notably, the requirement applies only to applications, not websites.
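How that check might work in code is left to the platform operators, but the intended flow is simple: the operating system passes along the age bracket declared at device setup, and the app adjusts its features accordingly. The Python sketch below illustrates that flow under loudly stated assumptions: the get_declared_age_bracket function and its bracket values are hypothetical stand-ins, since AB1043 mandates the signal but does not define any API.

```python
# Hypothetical sketch: gating app features on an OS-provided age signal.
# `get_declared_age_bracket` stands in for whatever platform API ships;
# AB1043 itself mandates the signal but does not name an interface.

from enum import Enum

class AgeBracket(Enum):
    UNDER_13 = "under_13"
    TEEN_13_17 = "13_17"
    ADULT_18_PLUS = "18_plus"
    UNKNOWN = "unknown"

def get_declared_age_bracket() -> AgeBracket:
    """Placeholder for the platform call returning the age bracket
    entered during device setup (hypothetical; not a real API)."""
    return AgeBracket.UNKNOWN

def configure_experience() -> dict:
    """Choose feature flags from the declared age bracket, defaulting
    to the most restrictive settings when the signal is absent."""
    bracket = get_declared_age_bracket()
    if bracket is AgeBracket.ADULT_18_PLUS:
        return {"open_chat": True, "targeted_ads": True}
    # Minors and unknown users both get the protective defaults,
    # closing the "we didn't know their age" loophole the law targets.
    return {"open_chat": False, "targeted_ads": False}

if __name__ == "__main__":
    print(configure_experience())
```

The key design point mirrors the law’s intent: when the age signal is missing, the sketch falls back to the child-safe configuration rather than assuming an adult user.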
Looking ahead, another law, AB853, will require major online platforms like Instagram to make the origins of uploaded content transparent by 2027. Starting in 2028, device manufacturers must enable users to embed origin information in their photos and audio recordings, a move designed to combat the proliferation of deepfakes and AI-generated misinformation. Additional measures signed into law include AB566, which by 2027 will let internet users set a browser-level preference limiting the sale and sharing of their data, and AB325, which places constraints on algorithmic price fixing.
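Content-provenance standards such as C2PA’s Content Credentials are the likeliest vehicle for this kind of labeling. As a deliberately simplified illustration of the underlying idea, the Python sketch below writes an origin claim into a JPEG’s EXIF metadata using the Pillow library; the file names and claim text are invented, and unlike the cryptographically signed manifests real provenance schemes use, bare EXIF tags can be stripped or forged, so treat this as a concept demo rather than a compliant implementation.

```python
# Toy illustration of embedding origin information in a photo's metadata.
# Real provenance systems (e.g. C2PA Content Credentials) cryptographically
# sign these claims; the bare EXIF tags written here can be edited or
# removed by anyone and serve only to make the concept concrete.

from PIL import Image

ORIGIN = "Captured on-device, no generative AI involved"  # hypothetical claim
TOOL = "ExampleCamera 1.0"                                # hypothetical tool name

def embed_origin(src_path: str, dst_path: str) -> None:
    """Copy an image while attaching a free-text origin claim."""
    img = Image.open(src_path)
    exif = img.getexif()
    exif[0x010E] = ORIGIN   # ImageDescription: the origin claim
    exif[0x0131] = TOOL     # Software: tool that produced the file
    img.save(dst_path, exif=exif)

def read_origin(path: str) -> str:
    """Retrieve the embedded origin claim, if any."""
    exif = Image.open(path).getexif()
    return exif.get(0x010E, "no origin information embedded")

if __name__ == "__main__":
    embed_origin("photo.jpg", "photo_tagged.jpg")
    print(read_origin("photo_tagged.jpg"))
```

The gap between this toy and a real deployment is exactly why the law gives manufacturers until 2028: tamper-evident provenance requires signing infrastructure, not just a metadata field.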
But not all proposals have crossed the finish line. Newsom has yet to act on Assembly Bill 1064, a more comprehensive chatbot regulation that would bar children from using bots capable of sexually explicit or self-harm-related interactions. The governor faces a looming deadline to sign or veto the measure, and its fate remains uncertain.
These California laws are set against the backdrop of a global debate over AI, data sovereignty, and the influence of tech giants. Just weeks earlier, on September 18, 2025, US President Donald Trump and UK Prime Minister Keir Starmer announced the UK-US Tech Prosperity Deal during Trump’s state visit to London. As reported by Verdict, the deal promises increased cooperation on technology innovation, including access to US datasets, infrastructure, and compute power, as well as shared research funding on AI, quantum, and nuclear technologies. The agreement also signals closer alignment on regulation, national security, and workforce development in tech.
The announcement was accompanied by an avalanche of US Big Tech investment in the UK’s AI infrastructure: Microsoft pledged $30 billion, including the construction of the nation’s largest supercomputer in partnership with the British firm Nscale; Google committed $5 billion for a new data center in Hertfordshire; CoreWeave invested $1.5 billion in Scotland; and Stargate UK was established through a collaboration between Nscale, OpenAI, and Nvidia.
While UK officials and US tech leaders hailed the deal as a win-win, critics voiced skepticism. Some questioned whether the UK, with its relatively cheap land and labor, was being relegated to a supporting role—more a satellite of US tech power than a sovereign digital force. There are also concerns about the environmental impact, as the construction and operation of vast data centers could divert significant energy and water resources away from local communities.
Underlying these debates is the thorny issue of data sovereignty. Research cited by Verdict shows that 73% of small and medium-sized enterprises (SMEs) in the UK and Ireland are worried about their data being stored in the US, largely due to laws like the US CLOUD Act, which allows American authorities to demand access to electronic data held overseas by US companies. Claudio Corbetta, CEO of team.blue, explained, "The sovereignty question for SMEs goes beyond infrastructure capacity. It is about whether they trust that the data falls under either EU and/or UK jurisdiction, or not." This distinction is not just about compliance, but about maintaining consumer trust and shaping procurement decisions.
European countries have responded by developing their own sovereign cloud solutions, with France and Norway involving local partners to reduce dependence on US Big Tech. In the UK, the government classified data centers as critical national infrastructure in September 2024, underscoring the strategic importance of homegrown digital assets. Nscale, a British company established in 2024, has positioned itself as the nation’s only full-stack, sovereign AI infrastructure provider. As Nscale’s Karl Havard put it, "Control over local AI infrastructure and compute is essential to national resilience, economic growth, and global competitiveness."
For UK businesses, the demand for sovereign AI infrastructure is growing. Mahdi Yahya, CEO of UK data center provider Ori, sees this as a pivotal opportunity: "The big buying factor for a lot of enterprises, especially in Europe and Middle East, is around the sovereignty of infrastructure." Yahya argues that mandates requiring local or sovereign providers are becoming the norm for governments and regulated industries. Meanwhile, British entrepreneur Mel Morris, CEO of Corpora.ai, urges a shift in investment priorities: "Let’s focus on what are small amounts of money, relatively speaking, that we could spend to build sovereignty, to build the technologies that allow us to compete, because at the moment, we’re [the UK] a net importer of AI tech, and I think that’s a dangerous place to be."
Back in California, Newsom’s approach reflects a similar balancing act. As he signed Senate Bill 53—requiring developers of the most powerful AI models to test and plan for catastrophic risks—he noted, "We can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive. This legislation strikes that balance." The bill also includes whistleblower protections and mandates planning for scenarios that could cause more than 50 deaths or $1 billion in damage, such as the misuse of AI for biological weapons or attacks on infrastructure.
Still, not everyone is convinced. The Consumer Technology Association, representing tech companies, warned that such regulation could stifle innovation and argued that oversight of technology with national strategic importance should be a federal, not state, matter. Efforts in Congress to preempt state-level regulation have so far failed, leaving states like California and regions like Europe to chart their own course.
As the world’s tech capitals wrestle with how to govern AI and digital infrastructure, one thing is clear: the conversation is far from over, and the stakes—ranging from innovation and economic growth to privacy and national security—couldn’t be higher.