
April 17, 2025 - 12:00pm
AI has taken the world by storm. It is reshaping the future of work and education through automation, research, and problem-solving, and it can do it all within seconds.
During a panel discussion, “AI and the Humanities,” professors from the University of Oregon explored the impact of AI. The panel was sponsored by the Oregon Humanities Center as part of the center’s 40th anniversary events on the topic of “Humanities Matter(s).” The panel featured Ramón Alvarado, assistant professor of philosophy and data ethics coordinator; Mattie Burkert, associate professor of English and director of the Digital Humanities program; Colin Koopman, professor of philosophy; and host Paul Peppis, professor emeritus of English and former director of the Oregon Humanities Center. The panelists moved beyond superficial discussions of AI's capabilities and dove into the underlying assumptions and potential consequences of these technologies.
From their in-depth discussion came five key takeaways about AI:
1. Understand AI on a deeper level
As AI grows, it’s important to look beyond its novelty and understand how it affects important parts of our lives in ways we can’t always see. Although it may seem lighthearted to use AI to see what you would look like as a Pixar character or to write a fun short story, AI's impact goes deeper than just generating text and images. Alvarado emphasized the need to look at how AI processes unseen data.
“If you think that this thing can draw a good picture of you, wait until you see what it can do with your health data, your insurance data or your credit score. You won't be able to see it and that's even more concerning than what we can see,” he said.
2. Recognize AI's biases
Alvarado pointed out that "bias can happen at different stages of the machine learning pipeline." AI bias happens when machine learning systems produce unfair or inaccurate results because of the data they are trained on. If the training data reflects human biases, the system's outputs will reproduce them. Users should keep in mind that although AI can be a very helpful tool, bias still exists.
3. Use AI as a knowledge-enhancing technology
The panelists suggested viewing AI primarily as a tool for enhancing knowledge and understanding, one that guides and supports human decision-making rather than replacing it. For example, Burkert highlighted how AI can transform humanities research through document recognition:
"You can see the basic concept here is that you could take something in historically specific handwriting, in a 16th-century secretary hand from a court scribe, and basically recognize it as easily as you can contemporary typography. That’s exciting and a game-changer for a lot of the kind of work I do,” she said.
While AI can make previously inaccessible historical texts available, opening new research possibilities, the panel also cautioned that AI is only as reliable as the data it processes.
4. Consider AI's impact on society
Along with AI’s technological advancements come potential ethical challenges. Peppis noted that AI raises "urgent issues of trust and democracy, safety and security, privacy, civil rights and civil liberties." Because AI systems rely on technologies like facial recognition and store personal data, Peppis’ insight underscores the importance of addressing these concerns thoughtfully to ensure AI serves society in a way that upholds trust and fundamental rights.
5. Integrate AI responsibly in education
UO policy encourages using AI as a tool to assist with learning rather than replacing human effort. Students can use AI for brainstorming, generating outlines and organizing ideas, but according to the panelists, they should take AI-generated information with a grain of salt, and never use it to replace their own thinking and writing.
"Rather than working ourselves up trying to prove to the world what AI cannot do, we need to focus our attention on what AI can do, almost certainly will do, and in many domains, is already doing,” Koopman said. “What AI is doing happens to just be what we know how to make it do, which is also quite obviously that which we know how to make ourselves do."
Koopman's perspective encourages us to move beyond speculative debates about AI's limitations and instead focus on its practical applications.
In a rapidly evolving workforce, keeping these five key takeaways in mind can help ensure AI is used ethically and positively.
You can view the entire panel discussion here.
—By Harper Wells, College of Arts and Sciences