19 August 2025

AI Education Divide Widens As Misinformation Grows

Students and educators grapple with digital gaps, rising misinformation, and the urgent need for media literacy in the age of artificial intelligence.

On a breezy July afternoon in Princeton, New Jersey, a classroom buzzed with the energy of 30 high schoolers, many of whom had never before touched the world of artificial intelligence. These students, hailing from low-income families and communities often overlooked by the tech world, were gathered for a three-week, all-expenses-paid AI summer camp at Princeton University. Their mission was clear: learn how AI works, tackle real-world problems, and—perhaps most importantly—begin to close the digital divide that’s been quietly widening across North America’s education landscape.

This divide isn’t new. According to NPR, research has long shown that affluent suburban schools are more likely to offer computer science and technology classes than their counterparts in poorer, urban, or rural areas. Now, as artificial intelligence rapidly transforms the way we live and work, that gap is manifesting in even starker terms. Robin Lake, director of the Center on Reinventing Public Education at Arizona State University, told NPR, “The AI divide is starting to show up in just about every major study that I'm seeing.” Her research demonstrates that affluent districts are far more likely to provide AI training to teachers, while students in low-income and rural schools often lack both access to AI tools and the classroom rules that would teach them how to use such technology responsibly.

But the story doesn’t end with lack of access. Across Canada, a different trend is emerging. On August 19, 2025, several leading Canadian universities—including McGill University, University of Toronto, and York University—announced new initiatives embracing AI tools to enhance learning. According to a Studiosity survey, 78 percent of Canadian students used AI to study or complete their schoolwork in late 2024. Meanwhile, the Pan-Canadian Report on Digital Learning revealed that 41 percent of educators incorporated generative AI into student learning activities in 2024, a dramatic jump from just 12 percent a year earlier. These numbers paint a picture of a continent grappling with both the promise and the pitfalls of AI in education.

Yet, as AI becomes more ubiquitous in classrooms, new challenges are surfacing—chief among them, the threat of misinformation. According to a recent analysis by Essence, media literacy experts warn that AI’s ability to generate convincing, but sometimes false, information is making it harder than ever for students and the public alike to separate fact from fiction. “We often fall for [misinformation] because it’s done so well,” said Arionne Nettles, a digital journalism professor at Florida A&M University. Nettles emphasized that, in the past, a quick Google search could help verify a claim. But now, “Google is going to give you an automatic AI answer that pulls from everything. It’s pulling from misinformation and real information that can give you the wrong answer.”

That’s a sobering thought, especially as the United States faces what many are calling a literacy crisis. The National Assessment of Educational Progress (NAEP) reported in 2024 that the average reading score for grade 4 students was two points lower than in 2022—and five points lower than in 2019. Meanwhile, ProLiteracy estimates that 59 million American adults read at or below Level 1, indicating only elementary literacy skills. This troubling trend has been compounded by recent federal funding cuts that led to the shutdown of the Corporation for Public Broadcasting, a move that has left many educators and advocates scrambling to fill the gap in quality children’s programming and media literacy resources.

Into this vacuum, controversial organizations like PragerU have stepped. PragerU, a right-wing media group, has gained permission to be shown in K-12 public school classrooms in several states—including Florida, Louisiana, Texas, Oklahoma, and Arizona. Their content, which critics argue is rife with historical inaccuracies and “pro-American” propaganda, has reignited debates about the role of media literacy in education. Animated clips from PragerU’s children’s division, for example, have been accused of downplaying the destructive effects of enslavement and rewriting history in a “kid-friendly” format. As Essence reports, this has alarmed academics and researchers who see young users as especially vulnerable to such misinformation, particularly when it’s delivered through slick, easily accessible social media channels.

So, how should educators and families respond? Nettles suggests that media literacy must be treated as a learned skill. “Find a few reliable media sources that you can use to fact-check other information that you read elsewhere,” she advised. And before sharing a sensational story, “pause and see where it’s published. If you can’t confirm where a piece of information came from, don’t share it. It’s not worth it.”

Back at Princeton’s AI4ALL summer camp, students like Esraa Elsharkawy from Texas and Anthony Papathanasopoulos from Oregon are putting these lessons into practice. Esraa, who once viewed AI with suspicion, now sees it as a tool for empowerment. “Now I'm very supportive of AI 'cause I believe AI is a tool and that if we use AI as a tool to do, like, simple things, then we'll have, like, I guess, clearer minds to, like, think of, like, things that are way ahead of our league right now, like solving cancer, for example,” she told NPR. Anthony, whose rural community was devastated by wildfires, is eager to explore how AI-powered drones could help map and monitor forests to prevent future disasters. “I think having a rural background is really important to understanding how AI can be used,” he said.

Olga Russakovsky, a Princeton computer science professor and co-founder of AI4ALL, believes that bringing diverse voices into AI development is essential. “There's so much that this technology can do. There's so many problems in the world that it can address. And what we want to make sure is that this technology really benefits everybody,” she said. Esraa echoed this sentiment, noting the importance of representation: “Like, as a woman, Muslim hijabi, I'm not really represented in a lot of things. And so I wanted to, like, be one of the people who change AI, who shape AI for the future.”

As the digital and AI divides persist, stories like these offer a glimmer of hope. The challenge ahead is clear: ensure that every student—regardless of background or zip code—has the skills, resources, and critical thinking tools needed to thrive in an AI-powered world. With thoughtful investment, innovative programs, and a renewed commitment to media literacy, the next generation just might be up to the task.