March 13, 2023

ChatGPT — Do we Adapt or Resist?

The growth rate of ChatGPT, the popular artificial intelligence chatbot, has been unprecedented for a consumer app, sparking a moral panic

Need to draft a cogent executive summary? Give me three minutes. Want to compose a Valentine’s Day card in the voice of Taylor Swift? Easy-peasy. Craft a snappy message for a dating app? Oh, yeah. Need to write a university essay on the plot of Middlemarch for tomorrow morning’s class? No sweat.

Like the rest of the world, we’re talking about ChatGPT — the artificial intelligence chatbot that reached 100 million users within two months of its November 2022 launch. After years of various iterations and developers honing its accessibility and human-natural language abilities, ChatGPT was released to the public late last fall by OpenAI, a private company backed by Microsoft.

And we haven’t stopped talking about it — and with it — since.

Just as we once feared that calculators would replace the need to learn math, or that laptops and cellphones would prove to be irresistible distractions in the classroom, ChatGPT has forced academic circles into a moral and technological panic — sparking heady conversations about honesty, manipulation and academic integrity. It all boils down to this: How will we develop AI to support human goals, and how best do we educate people to use these new technologies in effective and ethical ways?

Dr. Sarah Elaine Eaton, MA’97, PhD’09, a Werklund School of Education associate professor and academic integrity specialist, believes the panic will subside over time, but that this technology does herald “a brave new world . . . one with polarizing views.” In fact, Eaton says, “this is the most exciting creative disruption to hit higher education and society in a generation. The last thing that had this kind of impact was the internet. But this is a game-changer, honestly.”

Dr. Sarah Elaine Eaton, MA’97, PhD’09, a Werklund School of Education associate professor, and PhD candidate Beatriz Moya discuss the advantages and concerns surrounding this controversial AI technology

We recently caught up with Eaton and PhD student Beatriz Moya — whose research intersects academic integrity with the scholarship of teaching, learning and leadership — to discuss the advantages and concerns surrounding this controversial AI technology.

The Good, the Bad and the Ugly

  • Accuracy. The jury is still out, but experts say the best results are only accurate 50 to 60 per cent of the time. “If ChatGPT doesn’t know the answer, it makes stuff up,” explains Eaton. Remember: it’s a chatbot, not a sentient being, so it cannot differentiate between fake news and legitimate references. In other words, it just aggregates content from the Web. Adds Moya: “If you want to write an email or create an outline for a presentation, ChatGPT could likely do a very good job. But anything that requires complex critical thinking is still best done by humans. Most of the writing I’ve seen that is produced by ChatGPT is at a Grade 5 level. So, the odds of you getting an A on your university paper are slim.”
  • New ways of assessing knowledge. Rather than defaulting to essay-writing, perhaps a greater range of tools to assess students’ knowledge will emerge. “An essay might be considered ‘fit for purpose’ in, say, a subject like English or communications or journalism, but perhaps not in nursing or engineering,” Eaton says. “The question becomes, ‘Why are we having students write essays if writing essays is not part of what they should be expected to do at the end of their degree?’”
  • What are some other innovative ways we could use to assess knowledge-acquisition? “Perhaps we will see more oral presentations or video recordings,” adds Moya. “Things where students can explain the rationale behind their work — because, with AI, it becomes even more important to be very transparent about our learning process. I think we are moving into a space where assessments can be more creative with more of a focus on the process. It’s not only what you need to learn, but how you learn. AI has brought us to a tipping point and the conversations we are now having about assessments and higher-order skills are very exciting and desperately needed.” In other words, we may shift the object of evaluation from the “product” to the “process” of student learning.
  • Cheating, plagiarism. Not unlike term-paper mills or contract cheating, Eaton worries that students might “abdicate their learning responsibility to the assignments themselves. I think these AI tools can be used as a starting point, as an assistant, but they can also replace actual learning. So, I worry about that.” Adds Moya: “In academia, it’s critical to know where the information is coming from. If we are to produce knowledge that is reliable, that builds from others’ work, we need legitimate sources and, as we know, the internet is full of just the opposite.” However, Eaton is quick to point out, “even as someone who allegedly specializes in plagiarism, I don't feel threatened by this technology. I think it's exciting, and I think it creates a lot of opportunities for people who may not favour writing as their way of communication. There are some of us who are trained and it’s natural to us. But there are so many reluctant writers out there, or people with learning disabilities such as dysgraphia, for example, or people who don’t speak English well. And this gives them power that they didn’t have before.”
  • Social justice and equity. Most data comes from developed countries, points out Moya, “which means much of the world is not represented in the answers that ChatGPT provides.” Underscoring this point is what Eaton refers to as “standard American English,” adding, “many of these tools have the power to diminish non-standard English voices and vernacular. Even the stylistic preferences that get flagged in grammar checks are, in some ways, sending a message to people who may not be critical language thinkers that there are errors when there are not. We need to be cognizant of this.”

There’s no question that ChatGPT has ushered in a whole new world of communication, so it’s no wonder it feels like we’re at an inflection point. It’s too early to know if this chatbot (and others following in its wake) will disrupt human-digital interaction the way Google did some years ago, or the Gutenberg printing press did centuries ago. But we do know the adaptation process began unfolding in academic circles within mere weeks of its launch.

The answers to the issues we’re grappling with will not materialize overnight, but Eaton, Moya and many other academics believe we shouldn’t perceive ChatGPT as a threat, but rather analyze its advantages and use them to achieve educational objectives. Might this chatbot be the partner that helps us learn more, work smarter and faster? Time will tell.


  • Are you a University of Calgary alumnus, faculty member or student? Interested in academic integrity and artificial intelligence? We invite you to participate in our survey; click here to take part. This study has been approved by the University of Calgary Conjoint Faculties Research Ethics Board (REB22-0137).
  • Mark June 8 on your calendars. Dr. Phillip Dawson of Deakin University is coming to Mathison Hall to discuss the role generative artificial intelligence can play in learning assessment. With tools like ChatGPT now a part of life, work and civic engagement, Dawson believes educators must learn from previous technology panics and shift from worrying about new technologies to embracing them and even incorporating them into learning outcomes. Fortunately, the field of education has a long history of success when it comes to making this transition. Learn more here.