On September 4, 2025, Senator Josh Hawley (R-Mo.) took the stage at the National Conservatism Conference and issued a clarion call that’s echoing through the halls of Congress and Silicon Valley alike: repeal Section 230 of the Communications Decency Act. For those who haven’t followed the legal intricacies of tech regulation, Section 230 is the 1996 law that shields online platforms from lawsuits over content their users create. In the age of social media, it’s been a legal backbone for giants like Facebook, Google, and Twitter. But now, with the rapid rise of artificial intelligence, especially large language models (LLMs), Hawley and a growing group of lawmakers see it as a shield that’s outlived its usefulness and, perhaps, become a danger in itself.
"The AI large language models have already trained on enough copyrighted works to fill the Library of Congress 22 times over," Hawley declared, his voice rising with conviction. "Let me just put a finer point on that—AI’s LLMs have ingested every published work in every language known to man already." According to reporting by Nexstar Media and other outlets, Hawley’s comments were directed not just at the tech companies, but at the authors and creators whose work he said had been swept up in this digital maelstrom without so much as a by-your-leave.
Looking out across a room filled with writers and thinkers, Hawley didn’t mince words. "As I look out across the room and see many authors, all of your works have already been taken. Did they consult you? I doubt it. Did they compensate you? Of course not. This is wrong. This is dangerous." His remarks, reported by multiple news organizations, were a pointed critique of the way artificial intelligence models are trained. The process, as Hawley describes it, involves scraping vast swaths of the internet—including copyrighted books, articles, and other creative works—to teach machines how to generate human-like text.
Hawley’s solution? Assign property rights to specific forms of data and create legal liability for companies that use it without permission. "Open the courtroom doors," he said. "Allow people to sue who had their rights taken away from them, including suing companies and actors and individuals who use AI." This isn’t just about AI, in his view—it’s about restoring a sense of ownership and justice to the people whose intellectual property fuels the digital economy.
It’s not the first time Section 230 has come under fire. Earlier in 2025, Senators Dick Durbin (D-Ill.) and Lindsey Graham (R-S.C.) signaled their intention to introduce legislation to sunset the law. Past proposals to roll back or repeal Section 230 have foundered amid concerns about free speech, innovation, and the unintended consequences of exposing tech companies to a flood of lawsuits. But with AI’s explosive growth, the debate has taken on new urgency.
Hawley’s criticisms go beyond legal theory. In July, during a Judiciary Committee hearing, he singled out Meta—the parent company of Facebook and Instagram—for what he called "willfully" pirating "droves of copyrighted content" to train its AI models. According to Hawley, Meta employees repeatedly warned management that their actions might cross the line into illegality. "They knew exactly what they were doing. They pirated these materials willfully, as the idea of pirating and copyrighted works percolated through Meta, to take one example. Employee after employee warned management that what they were doing was illegal. One Meta employee told management and I quote now, 'This is not trivial.' And she shared an article asking what is the probability of getting arrested for using torrents, illegal downloads, in the United States. Another Meta employee shared a different article saying that downloading from illegal repositories would open Meta up to legal ramifications. That’s a nice way of saying that what they were doing was exactly, totally, 100 percent barred by copyright law. Did Meta management listen? No. They bulldozed straight ahead."
It’s a damning accusation, and one that’s resonated with authors. Bestselling novelist David Baldacci, for instance, described his own sense of violation when he learned his books had been used to train AI. "Every single one of my books was presented to me… in three seconds. It really felt like I had been robbed of everything of my entire adult life that I had worked on." For writers who have spent years—sometimes decades—crafting their work, the idea that it could be vacuumed up by a machine in seconds, without acknowledgement or payment, is understandably galling.
The concerns aren’t limited to a single party or ideology. While Hawley, a Republican and former Missouri attorney general, is among the most vocal critics, Democrats like Durbin have also raised alarms about the unchecked power of tech companies. There’s a growing consensus that the law needs to catch up with the technology, though there’s still plenty of debate about the best way to do it. Should Section 230 be repealed entirely, or just reformed? Should new intellectual property rights be created for data and digital content? And how do you balance the rights of creators with the need for innovation and free expression online?
Critics of repeal warn that opening the courtroom doors too wide could have unintended consequences. If every tech company could be sued over every piece of content on its platform, some argue, the result could be a chilling effect on speech, a wave of frivolous lawsuits, and stifled innovation. Others worry about the practicalities of enforcing new property rights in a world where data flows freely across borders and platforms.
Still, the sense of urgency is real. AI is moving fast, at times outpacing the lawmakers and courts trying to keep up with it. The stakes are high, not just for tech companies and authors, but for anyone who creates or consumes content online. As Hawley put it, "This is wrong. This is dangerous." Whether Congress will heed his call remains to be seen, but the debate is far from over.
For now, Section 230 remains on the books, and tech companies continue to rely on it as they build ever more powerful AI systems. But with lawmakers on both sides of the aisle signaling a willingness to revisit the law, and with public concern about the use of copyrighted material mounting, change could be on the horizon. The next chapter in the battle over tech regulation, copyright, and artificial intelligence is just beginning—and it promises to be a contentious one.
As the dust settles from Hawley’s speech, one thing is clear: the conversation about who owns our digital future is only getting started.