Grand Pinnacle Tribune

Intelligent news, finally!
Education · 7 min read

AI Use Surges Among Students, Raising New Concerns

As artificial intelligence becomes a fixture in classrooms and homes, researchers and educators debate its impact on learning, creativity, and childhood development.

Artificial intelligence is no longer just a futuristic concept or a tool reserved for tech-savvy adults—it’s rapidly becoming part of everyday life for children and teenagers across the United States. Recent studies and expert reviews released in March 2026 paint a complex picture of this trend, highlighting both the promise and the perils of AI’s growing role in youth education, creativity, and development.

According to a new study by the Rand Corporation, released on March 17, 2026, students in middle and high school are using AI for homework at unprecedented rates. The numbers are striking: between May and December 2025, the percentage of middle schoolers turning to AI for homework jumped from 30% to 46%. For high schoolers, usage soared from 49% to 63% over the same period. These figures suggest that AI tools are quickly becoming a staple in the academic toolkit of American youth.

But as AI’s presence grows, so too does skepticism. Rand’s research found that, by December 2025, more secondary students expressed concern that relying on AI for schoolwork could harm their critical thinking skills—an uptick from just ten months prior. This concern isn’t just academic; it strikes at the heart of what it means to learn and think independently in an age of intelligent machines.

Despite the surge in AI usage, school policies appear to be lagging behind. Only about one in three middle and high school students reported that their school had a schoolwide rule about AI use. Even fewer—just 9% of middle schoolers and 7% of high schoolers—said their schools explicitly permitted them to use AI for homework. The gap between student behavior and institutional guidance is widening, leaving many educators scrambling to catch up.

Interestingly, most students told Rand researchers that they didn’t see using AI for brainstorming or fact-checking as cheating. It’s a subtle but important distinction: while copying answers verbatim from a chatbot might cross a line, using AI as a springboard for ideas or as a digital encyclopedia feels more like a modern extension of traditional study aids. The question of what constitutes fair use versus academic dishonesty is becoming increasingly nuanced—and contentious.

The challenge of keeping up with AI’s rapid integration into classrooms isn’t unique to school administrators. According to a report published this month by Stanford University’s SCALE Initiative, the pace at which AI tools are being adopted in schools is far outstripping researchers’ ability to study their effectiveness. Of the more than 1,100 research papers on AI in K-12 education that the initiative analyzed, only 20 actually evaluated the impact of AI tools on students or teachers. The SCALE Initiative’s analysis notes, “Most discussions about AI in education focus on new tools, predictions about the future, or opinions about what schools should do next. While the research is early, it’s important that these education system decisions are grounded in evidence: What does the current causal evidence actually show?”

Among the handful of rigorous studies, a few trends have emerged. AI tools designed with guardrails—such as tutoring programs that provide hints or guided reasoning—appear to be more effective than generative chatbots that simply spit out answers. These guided tools can help students engage more deeply with material, rather than just shortcutting the learning process. On the teaching side, AI can help educators spend less time preparing lessons, improve instructional quality through automated insights, and offer real-time suggestions for more targeted student questioning.

Still, many questions remain unanswered. One glaring gap is the lack of high-quality causal studies on how AI actually affects student learning and development in real-world classrooms. As Heather Schwartz, vice president of Rand’s Education, Employment and Infrastructure division, put it in a statement on March 17, “These findings suggest schools need to be explicit about when and how AI can be used. Students are already using these tools. The question is whether schools can help them use AI in ways that deepen, rather than erode, their critical thinking.”

Meanwhile, a separate review study published earlier this month in the journal Pediatrics by experts at Children’s Hospital of Philadelphia (CHOP) took a broader look at AI’s impact on child development. The CHOP team, led by pediatrician Dr. Robert Grundmeier, found that AI is now woven into everything from children’s toys and games to social media, behavioral health programs, and school classrooms. Nearly two-thirds of teens surveyed by the Pew Research Center in fall 2025 reported having used an AI chatbot—a testament to how deeply embedded these tools have become in daily life.

For the youngest children, ages five and under, interactive AI storytelling programs and toys can support language development and vocabulary, and even foster better family interaction. Grundmeier described how an AI tool could help an exhausted parent generate an individualized bedtime story, promoting engagement even when creativity is running low. “So you as a parent can use an AI tool fairly easily to help generate an individualized story that you can then read to your child and promote that engagement in your household,” he explained.

Yet, there are risks. Young children may struggle to distinguish between AI and genuine human interaction. As Grundmeier cautioned, “When you’re interacting with artificial intelligence, although it can appear to be empathic, it can in many ways pretend to be human, it fundamentally is not human. It is just a lot of mathematics happening behind the scene.”

As children grow older and encounter AI at school or home, the technology’s potential to tailor education and address learning gaps becomes more apparent. However, experts warn of the dangers of “de-skilling”—where students lose abilities they once had due to overreliance on AI—and “never-skilling,” where they never learn certain skills at all because the AI does the work for them. Grundmeier summed up this risk: “We might worry about ‘never-skilling,’ meaning that they never learn how to do a particular task because they’ve actually asked the AI to do it for them as opposed to using the AI as a tool to help them learn.”

Teenagers, for their part, face a different set of challenges. As they turn to AI for information, they may struggle to identify misinformation or false interpretations. This is especially dangerous when consulting AI chatbots about sensitive mental health issues. Grundmeier noted, “There’s research that shows that some of these AI tools when discussing mental health care topics, they can provide really very bad advice, which really speaks to the need for more guardrail development to help ensure that these tools really support positive interactions and, if there’s an interaction that’s going badly, can transition to make sure that people are getting the help they need.”

On the positive side, teens are also using AI in creative ways—coaching themselves through difficult conversations, improving relationships, and exploring new interests. Parents, however, often feel out of their depth. “I hear a lot from parents is this idea of, ‘I don’t really understand this, it scares me. My child is getting exposed to it, but I don’t know how to guide them,’” Grundmeier shared.

In response to these challenges, states like Pennsylvania are launching AI literacy programs, safety standards, and reporting tools to help families and educators navigate this new landscape safely. The hope is that with clear guidance, robust research, and thoughtful safeguards, AI can be harnessed to support—not supplant—the critical thinking, creativity, and resilience of the next generation.
