In classrooms across the United States, a quiet revolution is underway. The humble book report, once a rite of passage for generations of students, is rapidly vanishing. Take-home essays, too, are becoming relics of a pre-digital era. The culprit? Artificial intelligence. As of September 2025, educators from coast to coast report that student use of AI tools like ChatGPT has reached such heights that assigning traditional homework is, in the words of one teacher, "like asking students to cheat."
"The cheating is off the charts. It's the worst I've seen in my entire career," says Casey Cuny, a 23-year veteran English teacher and the 2024 California Teacher of the Year, according to the Associated Press. Cuny, who teaches at Valencia High School in Southern California, no longer wonders if students will outsource their work to AI. "Anything you send home, you have to assume is being AI'ed." For him and many of his colleagues, the pressing question now is how schools can adapt, because the teaching and assessment tools that have worked for generations are suddenly obsolete.
This transformation is not just about technology—it’s about the very nature of learning and integrity. As AI becomes more sophisticated and more deeply woven into daily life, it is reshaping how students learn, how teachers teach, and even what it means to cheat. "We have to ask ourselves, what is cheating?" Cuny reflects. "Because I think the lines are getting blurred."
Cuny’s solution has been to bring nearly all writing assignments back into the classroom. He now monitors student laptop screens from his own desktop, using software that lets him lock down their screens or block access to certain sites. But he’s not just playing defense. He’s also integrating AI into his lessons, teaching students how to use it as a study aid—"to get kids learning with AI instead of cheating with AI."
It’s a pattern playing out far beyond Southern California. In rural Oregon, high school teacher Kelly Gibson has also shifted to in-class writing. She’s introduced more verbal assessments, asking students to talk through their understanding of readings. "I used to give a writing prompt and say, 'In two weeks, I want a five-paragraph essay,'" Gibson told the Associated Press. "These days, I can't do that. That's almost begging teenagers to cheat."
The temptation is real. Take a once-typical English assignment: explain the relevance of social class in "The Great Gatsby." Many students now turn first to ChatGPT for help brainstorming. In seconds, the AI provides a list of essay ideas, examples, and quotes. It even offers to draft introductions or outline paragraphs. The process is so frictionless that the line between legitimate help and outright cheating can seem to disappear.
Students themselves are often unsure when AI use crosses that line. College sophomore Lily Brown, a psychology major at an East Coast liberal arts college, relies on ChatGPT to help outline essays and summarize dense readings. As she explains, "Sometimes I feel bad using ChatGPT to summarize reading, because I wonder, is this cheating? Is helping me form outlines cheating? If I write an essay in my own words and ask how to improve it, or when it starts to edit my essay, is that cheating?" Her class syllabi typically say, "Don't use AI to write essays and to form thoughts," but that leaves a lot of gray area—and students like Brown are left to navigate it largely on their own.
Admitting to any AI use can feel risky. Students often shy away from asking teachers for clarity, fearing that they could be labeled cheaters. Meanwhile, schools tend to leave AI policies up to individual teachers, leading to a patchwork of rules—even within the same building. Some teachers welcome tools like Grammarly, an AI-powered writing assistant, for grammar checks. Others ban it, noting that it can also rewrite sentences. "Whether you can use AI or not depends on each classroom. That can get confusing," says Valencia 11th grader Jolie Lahey. She credits Cuny with teaching her class how to use AI for studying—uploading guides to ChatGPT, having the chatbot quiz them, and explaining mistakes. But this year, her teachers have strict "No AI" policies. "It's such a helpful tool. And if we're not allowed to use it, that just doesn't make sense," Lahey says. "It feels outdated."
Initially, many schools tried to ban AI outright after ChatGPT’s late 2022 debut. But as the technology has matured and spread, attitudes have shifted. The term "AI literacy" has become a back-to-school buzzword, with new emphasis on balancing the strengths of AI with its risks. Over the summer of 2025, several colleges and universities convened AI task forces to draft detailed guidelines and provide new instructions to faculty and students alike.
The University of California, Berkeley, for example, emailed all faculty new guidance instructing them to "include a clear statement on their syllabus about course expectations" around AI use. The guidance offered three sample statements: one for courses that require AI, one for those that ban it completely, and one for those that allow some use. "In the absence of such a statement, students may be more likely to use these technologies inappropriately," the email warned, emphasizing that AI is "creating new confusion about what might constitute legitimate methods for completing student work."
Carnegie Mellon University has witnessed a surge in academic responsibility violations due to AI, but often students aren’t even aware they’ve done anything wrong. Rebekah Fitzsimmons, chair of the AI faculty advising committee at Carnegie Mellon’s Heinz College, points to cases like a student learning English who wrote an assignment in his native language and used DeepL, an AI-powered translation tool, to convert it to English. Unbeknownst to him, the platform also altered his language, triggering an AI detector. Enforcing academic integrity has become more complicated, Fitzsimmons notes, since AI use is hard to spot and even harder to prove. Faculty are now more hesitant to point out violations because they don’t want to accuse students unfairly. Students, for their part, worry that if they’re falsely accused, there’s no way to prove their innocence.
This summer, Fitzsimmons and colleagues helped draft new guidelines striving for greater clarity. Faculty have been told that a blanket ban on AI "is not a viable policy" unless instructors change the way they teach and assess students. Many have responded by eliminating take-home exams and returning to pen-and-paper tests or "flipped classrooms," where homework is completed in class.
Emily DeJeu, who teaches communication courses at Carnegie Mellon’s business school, has replaced homework writing assignments with in-class quizzes done on laptops using a "lockdown browser" that prevents students from leaving the quiz screen. "To expect an 18-year-old to exercise great discipline is unreasonable," DeJeu says. "That's why it's up to instructors to put up guardrails."
Across the country, educators are adapting. They’re integrating AI as a study aid while striving to maintain academic integrity. The lines may be blurry, and the rules in flux, but one thing is clear: the classroom is changing—fast. For students, teachers, and parents alike, the challenge is not just keeping up, but figuring out what learning really means in the age of artificial intelligence.