Education
06 February 2026

Universities Chart New Paths For Responsible AI Use

Marquette and Dartmouth unveil major initiatives to address ethical, educational, and operational challenges posed by artificial intelligence across campus life.

On February 5, 2026, two prominent American universities—Marquette University and Dartmouth College—announced major new initiatives to address the rapid rise and complex challenges of artificial intelligence (AI) in higher education. Their approaches, while distinct in focus and philosophy, reflect a growing consensus: AI is reshaping the academic landscape, and universities must adapt quickly, thoughtfully, and ethically.

At Marquette University, the launch of a university-wide Artificial Intelligence Task Force marks a significant institutional push to guide the responsible use of generative AI across all corners of campus life. According to Marquette Today, the task force brings together faculty and staff from a wide array of disciplines to examine how emerging AI tools are being employed in teaching, research, operations, and student success and wellness. The group is more than a think tank: it has been charged with the practical mission of identifying where existing policies, procedures, and support structures need to be updated or adapted, so that AI adoption at Marquette is effective, responsible, and aligned with the university’s Catholic, Jesuit mission.

The structure of Marquette’s AI Task Force is both broad and deep. It consists of an executive committee and five specialized work groups: teaching and learning; clinical-based teaching and learning; research; operations; and student success and wellness. The principles guiding their work are clear: uphold Jesuit mission and values, ensure leadership and campus alignment, promote ethical, inclusive, and responsible AI use, and maintain robust data management and security.

Faculty and staff engagement is central to Marquette’s approach. In the coming week, all university employees will receive a survey seeking input on how AI is—or isn’t—being used in their areas, from classrooms to clinical settings and administrative offices. The findings will help pinpoint where additional support, policy updates, or new resources may be needed as the university navigates the evolving AI landscape. This emphasis on consultation and transparency is meant to clarify shared expectations and identify concrete next steps as AI technologies continue to advance.

The research workgroup, chaired by Dr. Jeanne Hossenlopp, vice president for research and innovation, is forming two focused task groups: one dedicated to AI tools for research, and another to compliance, ethics, and responsible use. Hossenlopp is actively seeking interested faculty and staff to join these efforts, a move that underscores the university’s commitment to broad-based participation and interdisciplinary collaboration. The task force’s membership list reads like a who’s who of Marquette’s academic and administrative leadership, with representation from the Raynor Library, School of Dentistry, Business Administration, Nursing, Health Sciences, Law School, and more.

Meanwhile, at Dartmouth College, the focus is on equipping students with the skills and ethical grounding to thrive in an AI-driven world. As reported by the Valley News, Dartmouth has rolled out a new AI concentration within its engineering program, reflecting both student demand and the urgent need for AI talent in the workforce. Dean Doug Van Citters of the Thayer School of Engineering noted, “94% of employers are facing a shortage in AI talent, according to the World Economic Forum.” Dartmouth’s new concentration, he said, is designed to ensure students don’t simply jump into AI careers without a proper foundation. “We really want to be leaders in how we teach not just AI, but teach responsible use, and its responsible construction,” Van Citters emphasized.

The AI concentration aims to provide a comprehensive educational foundation, covering not just technical skills but also ethical considerations, user and programmer intentions, and the limitations of the technology. Gene Santos, director of the Master of Engineering Program, explained that while many required AI courses already exist, the new concentration will expand their scope and accessibility. Funding for the initiative comes from Dartmouth’s operating budget, special endowments, and engineering scholarships, though the university is still assessing long-term funding needs. Recruitment for the new program is underway, with the engineering applicant pool described as strong and many AI-related classes already oversubscribed.

AI’s reach at Dartmouth extends well beyond engineering. At the Geisel School of Medicine, students are using an AI patient actor tool—a website that allows them to practice communication skills required for real-world medical encounters. Dr. Thomas Thesen, associate professor of medical education and one of the tool’s developers, explained that students can select a condition and personality for the AI chatbot, then interact with the “patient” by speaking into their computer. The tool, which has logged over 10,000 student encounters and attracted 155 educators, provides feedback based on customizable rubrics. “There’s a whole skill set about breaking bad news, like a cancer diagnosis or a son died, or something like that,” Thesen said. “There are whole frameworks on how you structure this conversation in the best way. So the AI is good for that.”
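To make the mechanics concrete: a tool like this is, at its core, a role-played chat loop around a large language model, with the student’s chosen condition and personality slotted into a system prompt. The sketch below is purely illustrative, not Dartmouth’s actual implementation; it assumes the OpenAI Python client with an API key in the environment, and a hypothetical text-only interface standing in for the real tool’s voice layer. All names and the model choice are this article’s own placeholders.

    # Illustrative sketch only, NOT Dartmouth's implementation: a role-played
    # "standardized patient" chat loop. Assumes the OpenAI Python client
    # (pip install openai) and an OPENAI_API_KEY set in the environment;
    # function names and the model choice here are hypothetical.
    from openai import OpenAI

    client = OpenAI()

    def patient_prompt(condition: str, personality: str) -> str:
        # Slot the student's chosen condition and personality into a role prompt.
        return (
            f"You are a standardized patient presenting with {condition}. "
            f"Your personality is {personality}. Stay in character, answer only "
            "what the student asks, and never volunteer a diagnosis."
        )

    def run_encounter(condition: str, personality: str) -> None:
        messages = [{"role": "system",
                     "content": patient_prompt(condition, personality)}]
        while True:
            student = input("Student: ")  # stands in for the tool's speech-to-text step
            if student.lower() in {"quit", "exit"}:
                break
            messages.append({"role": "user", "content": student})
            reply = client.chat.completions.create(model="gpt-4o-mini",
                                                   messages=messages)
            answer = reply.choices[0].message.content
            messages.append({"role": "assistant", "content": answer})
            print("Patient:", answer)  # stands in for the tool's text-to-speech step

    run_encounter("a newly discovered lymphoma", "anxious and talkative")

A production tool would wrap the same loop in speech recognition and synthesis, log the transcript, and score it against an instructor-supplied rubric; the core pattern, though, is just a parameterized system prompt plus accumulated conversation history.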

Despite its benefits, the AI patient tool is not without drawbacks. Thesen noted it can’t fully replicate the nuances of human interaction: body language, eye contact, and genuine social cues are missing. Students who get nervous talking to people, he added, do not get authentic practice when they can rehearse alone in a comfortable environment. The tool is not intended to replace real actors entirely, and a team of students is working to correct potential racial biases. “Large language models are built with all the internet data, the good, the bad and the ugly. So they reflect how society thinks, and society is racist, right?” Thesen said.

The tool itself is a local creation, developed by Dartmouth’s DALI Lab. Colin Wolfe, a junior in computer science and engineering, worked on the voice-to-voice function last spring. “It’s nice to know that some of your work has been deployed and is helping Geisel med students,” Wolfe remarked. Yet Wolfe also expressed concerns about academic integrity, noting a rise in handwritten exams in the computer science department to counter dishonest use of generative AI for coursework. “It is sort of disheartening because people come to my office hours and sometimes they don’t understand a reasonably important foundational piece of knowledge they should know starting from like week one,” he said.

James Dobson, associate professor of English and Creative Writing, teaches a course titled “Critical AI” that explores both the cultural critique and technical fundamentals of AI. Dobson highlighted the mixed messages students receive: some faculty strictly prohibit AI, while others require its use for certain assignments. “We have uncertainty on the faculty side about what we think is appropriate. Students are using it all over the place with little sense of how it works and the tools they need to have some agency involved,” Dobson explained. The result? A patchwork of policies and a student body sometimes unsure of what’s acceptable.

Both Marquette and Dartmouth are grappling with the promise and peril of AI in higher education. Their efforts—one focused on institutional alignment and ethical oversight, the other on curricular innovation and practical tools—reflect the fast-moving, sometimes messy reality of AI’s integration into academia. As AI continues to evolve, these universities are betting that thoughtful engagement, broad participation, and a commitment to ethical principles will help steer the ship through uncertain waters.