Thomas Cook’s final semester at Metropolitan State University of Denver serendipitously paralleled the rapid proliferation of easily accessible artificial intelligence. The Art major with a minor in Digital Media saw an opportunity to use today’s tools for his final project, with the support of Professor Kelly Monico — abiding by her requirement that no more than 50% of the project be AI-generated.
Building off a previous course project in which he combined the text-based generator InferKit with the text-to-image AI program Midjourney to produce two distinct comic books, Cook saw the tech as a potentially invaluable creative resource and a way to question its broader social impact.
“It was the perfect time to experiment with something new,” he said. “AI is so powerful. And it’s here, so it’s important to understand all the potential it brings — both the good and the bad.”
AI-generated image courtesy of Shutterstock
While some educators are embracing the potential of AI technology, many others are on edge.
The use of programs such as OpenAI’s ChatGPT — which instantly generates blocks of text from user prompts — is, after all, ripe for misuse in academia: The program has passed the bar exam’s evidence-and-torts portion, cleared all three sections of the U.S. Medical Licensing Exam and even completed a final exam in the University of Pennsylvania’s prestigious Wharton School MBA program.
MSU Denver, like other universities, is trying to walk the line between academic integrity and innovation. The University is kicking off a series of related conversations, beginning with “AI and the Future of the Academy” on Feb. 28, followed by “Creativity and Originality: A ChatGPT Conversation.”
“The cows are out of the barn,” said Shaun Schafer, Ph.D., associate vice president for Curriculum, Academic Effectiveness and Policy Development. “From a policy standpoint, we live in a world where this (technology) exists – now, what should we do with it?”
One of the biggest challenges has been the rapid emergence of AI. Schafer noted he first received an email about ChatGPT at the end of November; today, it’s a common conversation in education circles, with responses ranging from embracing its use to an outright ban in New York City’s public schools.
“I’ve never seen this pace of change within the academy,” Schafer said. “It took 50 years for the printing press to get from Germany to England, then another 400 before the advent of radio and almost 100 to where we have internet everywhere. We have to be ready for change we don’t expect.”
Though ChatGPT is having a moment, it is the product of an iterative process, and it is ultimately only as good as the way it’s used, said Steve Geinitz, Ph.D., assistant professor of Computer Sciences at MSU Denver.
“I think one of the biggest differences is the user interaction,” he said. “ChatGPT has used a reinforcement approach, where humans are interacting with the program and its huge data sets to get more desirable output.”
At its basic level, ChatGPT can be understood as a computer program — specifically, a machine-learning model — trained from all the information available on the internet up to a certain date, with the ability to create sentences in a natural-sounding format.
The underlying architecture of such machine-learning models uses neural networks, loosely modeled on how the human brain functions: billions of simple units take inputs, transform them and transmit them across successive layers, building up complexity.
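That layered signal-passing can be illustrated with a toy sketch (the weights below are made up for illustration; a real model such as ChatGPT chains many such layers and learns billions of parameters from data):

```python
import math

def layer(inputs, weights, biases):
    """One fully connected layer: each unit combines all inputs
    with its own weights, adds a bias and applies a nonlinearity."""
    return [
        math.tanh(sum(w * x for w, x in zip(unit_weights, inputs)) + b)
        for unit_weights, b in zip(weights, biases)
    ]

# Toy network: 2 inputs -> 3 hidden units -> 1 output.
hidden = layer([0.5, -1.0],
               [[0.2, 0.8], [-0.5, 0.1], [0.7, -0.3]],
               [0.0, 0.1, -0.1])
output = layer(hidden, [[1.0, -1.0, 0.5]], [0.0])
```

Stacking dozens or hundreds of these layers, and training the weights on vast amounts of text, is what lets the model turn a prompt into natural-sounding sentences.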
It has its limitations, however. Its ability to incorporate updated, real-time information is limited. And there is the potential for abusive language, as seen with Microsoft’s decommissioned Twitter chatbot Tay and the procedurally generated “Seinfeld”-esque Twitch stream Nothing, Forever.
And though Microsoft is betting big with Bing’s incorporation of ChatGPT’s forthcoming fourth version, Geinitz views the upgrade as “more power, not necessarily a sea change” in how it functions.
He does see the potential for positive academic implementation of AI, with caveats. Geinitz noted he has used GitHub’s Copilot as an aid in code analysis, as well as Elicit, an AI research assistant built on language models, for a literature review.
“It requires time to think about the best way to integrate these tools in our classes — but we can,” he said.
Geinitz and Schafer agreed that building policies around AI will require careful evaluation and flexibility to address how the technology is used in varying disciplines. In addition to coding analysis and a kick-start for art, some faculty members view it as “akin to the move from the slide rule to the calculator,” Schafer noted.
At the center of the debate, however, are AI’s ethics and efficacy in terms of validating work as authentic. Schafer noted that conversations around academic integrity and enforcement mechanisms are not new but will likely enter new terrain.
It’s a lot to tackle, but as Schafer noted, MSU Denver, with its history of challenging conventional thought in and outside the classroom, is the right place to have the conversation.
“We’re in a fun discovery phase and trying to see where the appropriate boundaries are,” he said. “The fundamental starting point is that we still want to produce creative, resilient minds and ensure that they’re equipped for whatever comes next. If AI is going to be a tool in that process, we want to figure out how best to use it.”
Monico, the Communication Design professor leading Cook’s 4D Foundations class, sees AI as a starting point and helpful tool to get out of creative blocks.
“It’s like a sketch,” she said. “I know I’m going to be able to make so much more of my own work because of it. … For me, it’s a technology that’s here to stay — so how do we use it to enhance our practice?”
Cook echoed that sentiment, as well as the larger concerns around workforce disruption.
“People are understandably worried about jobs being replaced, but it’s not the AI doing the replacing; it’s the people using AI,” Cook said. “We still need specific skill sets. And any tool can be good or bad, depending upon how it’s used.”