Artificial intelligence (AI) is no longer just the stuff of science fiction or the exclusive domain of tech giants. It’s reshaping the way college students learn, the way societies prepare for conflict, and even the way we think about ethics in our rapidly digitizing world. This month, two events—one on a university campus in Atlanta and another in the academic journals of peace research—are shining a spotlight on the promise, peril, and pressing ethical questions that come with AI’s expanding influence.
On February 26, 2026, Oglethorpe University will host an event titled “On Mutual Ground: Ethics and the Future of AI” in Lupton Auditorium. Scheduled for 1:00 p.m., this gathering is more than just another academic panel—it’s a campus-wide dialogue designed to help students grapple with the ethical responsibilities surrounding artificial intelligence and its far-reaching implications for learning, work, and society. The event is part of the university’s ongoing On Mutual Ground series, which aims to foster open, inclusive conversations about challenging topics that matter to today’s students.
“According to a number of studies, AI use is most prevalent in the ages of people who match our student population, but there is little to nothing in the way of training or education on how to use AI in an ethical or helpful—as opposed to AI doing all of the work for someone—way,” said Dr. Jamie Iredell, a lecturer in Core Studies and a committee member for the event series. “We wanted a discussion that was student-focused, and that could help our students navigate this new and potentially problematic territory.”
Oglethorpe’s event will feature two experts who bring both technical know-how and ethical insight to the table. Dr. Edward L. Queen, director of the D. Abbott Turner Program in Ethics and Servant Leadership at Emory University, has dedicated much of his career to developing and implementing ethics programs across businesses, non-profits, and government agencies. He’s also an advisor on AI ethics for Merlynn Intelligence Technologies, making him a go-to voice on how emerging technologies intersect with questions of right and wrong.
Joining him is Avoilan Bingham, Atlanta Seed General Manager at Drive Capital and president of Atlanta Tech Week. Bingham is recognized as one of the region’s most prominent champions of artificial intelligence, serving as an organizer for the AI Tinkerers Atlanta chapter—a global community of engineers, developers, and tech experts passionate about building the next generation of generative AI tools. His focus is on helping industries adapt and develop business solutions as digital transformation accelerates.
But why is this conversation so urgent right now? As Dr. Iredell points out, today’s college students are at the epicenter of AI’s educational revolution. Whether they’re using AI-powered tools for research, encountering AI in the classroom through digital learning platforms, or collaborating with classmates who rely on AI for everything from essay drafting to data analysis, students are being shaped by these technologies—sometimes without even realizing it. And it’s not just the classroom that’s changing. The workforce is evolving at breakneck speed, with employers now expecting new hires to be familiar with AI, not just as a novelty, but as an everyday productivity tool.
While Oglethorpe’s event zeroes in on the ethical dilemmas facing students and educators, a recent publication from the Peace Research Institute Oslo (PRIO) is taking the conversation global. On February 12, 2026, PRIO released a paper discussing how artificial intelligence can support peace by improving conflict forecasting, scenario modeling, and decision making. The paper, authored by Håvard Hegre and previously published in Spanish by the Barcelona Centre for International Affairs (CIDOB), explores AI’s potential to strengthen the information basis for monitoring, anticipating, and ultimately taking action to prevent armed conflict.
“Artificial intelligence is able to perform complex tasks such as visual perception, recognition of meaning in text and decision making,” the PRIO paper notes. These capabilities aren’t just technical feats—they’re tools that can provide governments, NGOs, and peacekeepers with the kind of risk analysis and scenario planning that could save lives. By harnessing AI’s ability to sift through vast amounts of data, identify patterns, and forecast potential flashpoints, decision makers can move from reactive to anticipatory action, reducing the suffering caused by war before it even begins.
PRIO argues that such applications can help reduce the suffering caused by armed conflict through anticipatory action grounded in better risk analysis. In other words, AI isn’t just about making our lives easier or our businesses more efficient; it could be a key player in the quest for global peace.
Of course, these advances come with their own set of ethical quandaries. Who decides what data is fed into AI systems? How do we ensure that algorithms aren’t perpetuating bias or misinformation? And who is accountable when an AI-driven decision has real-world consequences, whether in a classroom or on the battlefield?
These are precisely the kinds of questions that Oglethorpe’s On Mutual Ground series aims to tackle. The event’s format is designed to be inclusive and educational, encouraging students to voice their concerns, share experiences, and learn from experts who are shaping the future of AI policy and practice. As Dr. Queen’s work demonstrates, ethics isn’t just a theoretical exercise—it’s about molding citizens who are informed, engaged, and ready to navigate the gray areas that technology inevitably creates.
Bingham’s perspective adds another layer to the conversation, emphasizing the need for industries to keep pace with technological innovation. As president of Atlanta Tech Week and a leader at Drive Capital, he’s seen firsthand how businesses can harness AI to solve problems and drive growth. But he’s also quick to point out that this requires a new kind of literacy—one that blends technical skills with ethical reasoning.
“We’re at a crossroads,” said Bingham in a previous address to Atlanta’s tech community. “The choices we make about how we develop and use AI will shape not just our companies, but our communities. It’s up to all of us to ensure that innovation goes hand in hand with responsibility.”
As both the Oglethorpe event and the PRIO paper make clear, the stakes couldn’t be higher. From college campuses to conflict zones, artificial intelligence is rewriting the rules of engagement. The challenge now is to ensure that as we embrace the power of AI, we don’t lose sight of the human values that must guide its use.
In a world where technology is moving faster than our ability to regulate or even fully understand it, conversations like these aren’t just timely—they’re essential. Whether you’re a student, a policymaker, or just someone trying to make sense of the headlines, the future of AI is a story that belongs to all of us.