Across college campuses in the United States, a profound shift is underway in how professors and institutions respond to the rapid advance of artificial intelligence (AI) in the classroom. From Austin to Vermont and Atlanta, educators are rethinking their roles, their syllabi, and even the very purpose of education as AI tools like ChatGPT, Claude, and Gemini become ever more accessible and powerful. The result? A patchwork of policies and philosophies, each reflecting deep questions about academic freedom, student learning, and the future of higher education.
At St. Edward’s University in Austin, the approach stands out for its embrace of academic freedom. Unlike other local schools that impose blanket restrictions on AI use, St. Edward’s allows individual professors to determine their own policies. “Why is it important for professors to have that kind of academic freedom? Because every professor is an expert in a tiny, tiny piece of a field,” explained Gentry Atkinson, a computer science professor who teaches AI algorithms at all levels, according to reporting by The Austin Chronicle. “We trust our professors to understand the environment they’re working in.”
This decentralized approach has led to a mosaic of classroom experiences. Writing and rhetoric professor Drew Loewe, for instance, actively incorporates AI into assignments but requires students to disclose how they used the technology. He says he has seen a measurable improvement in student performance. “I think educators need to learn to build their own tools, some of which I’ve done, that are programmed in ways that never give an answer,” Loewe said. “It keeps asking questions, working with them, making them think and develop.”
Meanwhile, English literature professor Brian Sheerin takes a starkly different tack. For Sheerin, AI undermines the core goals of his teaching. “I think for the kinds of classes I have, AI is not really the most appropriate tool,” he told The Austin Chronicle. “It kind of defeats the whole point of what I’m trying to teach, which is creative thought, creative expression and critical thinking.” Sheerin pointed out that, while AI can analyze literature or generate impressive art, the real value lies in the process—learning how to analyze and create, not just producing an end product. “AI can generate a painting for you and it’ll look better than anything you can do. But that’s not the point, the point is to build the skills of painting yourself, even if the end results don’t look very good.”
Atkinson, for his part, sees the current controversy as part of a familiar cycle. “No one being angry about the internet made the internet go away, right?” he noted. “It was there, it was something that had to be incorporated into every classroom because it was something that was immediately incorporated into every workplace.”
While the United States is still wrestling with how best to integrate AI into education, other countries—like China—have already normalized such tools in university settings. But within the U.S., the debate is far from settled, with each institution and even each department forging its own path.
At Middlebury College in Vermont, professors are also adapting to AI’s classroom presence, but with a nuanced, context-dependent approach. Professor Megan Mayhew-Bergman, for example, promotes a practical, non-sanctimonious attitude toward AI. “I want to make sure we're operating in reality, giving students what they need in this moment, and also pointing our writers and readers to a personal ethos that reflects the world they want to live in,” she wrote in a campus-wide email, as reported by The Middlebury Campus. Mayhew-Bergman plans to foster dialogue about AI through creative writing chats and analog reading and writing hours.
Professor Timothy Billings, also at Middlebury, draws a line between augmentation and automation. “Education specialists make a distinction between two fundamental uses for generative AI in coursework: 1) augmentation, i.e., uses that ‘augment’ or enhance learning, such as providing explanations of difficult material; and 2) automation, i.e., uses that ‘automate’ or substitute intellectual labor, such as the generation of written assignments or other material for submission. To put it simply, augmentation is allowed; automation is not,” Billings wrote in his syllabus.
Language instruction, too, has had to evolve. Lecturer Mairead Harris in the Chinese Department explained that beginners are barred from using AI, while advanced students may use generative technologies for limited purposes. In her 300-level class, students handwrite their initial essays, then type them up, using AI only at the sentence level and submitting a log of their interactions. “What our goal is for beginning language, is we want students to develop their own Chinese voice in their brain,” Harris emphasized. “They need time to be wrong.”
STEM fields bring their own set of concerns. Professor of Biology Gregory Pask highlighted worries about AI’s imprecision and environmental impact. “I think a lot of people in STEM are looking at the inaccuracies and they might also be more in touch with the environmental impacts,” Pask told The Middlebury Campus. He stressed that the act of designing experiments and troubleshooting is where real learning happens.
This diversity of approaches is mirrored nationwide. According to NPR, a national survey in 2025 found that about 40% of higher education administrators and 30% of instructors now use generative AI daily or weekly, a dramatic increase from just 2% and 4%, respectively, in the spring of 2023. Georgia State University professor G. Sue Kasun, for instance, uses Google’s Gemini AI chatbot for brainstorming course ideas and creating grading rubrics, but always verifies that outputs align with her learning objectives. “There were suggestions of offering different choices like having students generate an image, having students write a poem. And these are things that I could maybe think of but we have limits on our time, which is probably our most valuable resource as faculty,” Kasun told NPR.
New research from Anthropic, the company behind the Claude chatbot, reveals that professors worldwide are using AI for curriculum development, lesson design, research, grant writing, budget management, grading, and even building custom interactive learning tools. Of 74,000 higher education conversations with Claude analyzed by Anthropic, 57% related to curriculum development and 13% to academic research. Professors tended to automate tedious administrative tasks, while collaborating with AI on more creative teaching work. “It’s helping write the code so that you can have an interactive simulation that you as an educator can share with students in your class for them to help understand a concept,” explained Drew Bent, education lead at Anthropic.
But not everyone is convinced that AI’s growing role in academia is an unmitigated good. Grading, for instance, remains contentious: about 7% of the analyzed conversations involved grading, yet many faculty rated grading as the task AI handled least effectively and worried about its impact on the educational process. Marc Watkins, a lecturer at the University of Mississippi, voiced concerns to NPR: “This sort of nightmare scenario that we might be running into is students using AI to write papers and teachers using AI to grade the same papers. If that's the case, then what's the purpose of education?” Watkins also warned against using AI in ways that could devalue professor-student relationships.
Kasun echoed the call for more institutional guidance, lamenting, “We are here, sort of alone in the forest, fending for ourselves.” Bent agreed, saying that while tech companies should partner with universities, “Us as a tech company, telling educators what to do or what not to do is not the right way.”
As AI becomes ever more embedded in academic life and policy, the choices being made now—by institutions, departments, and individual professors—will shape not just how students learn today, but the very nature of higher education for years to come.