The first time I used artificial intelligence (AI) for something serious, I thought all my problems were solved. I was working on my senior honors thesis, a data-heavy project that required me to go back and forth between ArcGIS Pro (software for mapping and analyzing spatial data) and RStudio (a programming environment for statistical analysis). As is typical for such projects, I frequently found myself stuck on the next analytical step.
So I turned to ChatGPT-3.5, the popular and free AI chatbot run by startup OpenAI. For statistical research projects, I already spend plenty of time googling solutions to software challenges; why not use a Google that can talk back?
For my first 10 minutes of ChatGPT usage, I was wowed. The AI knew both pieces of software well, and it not only had thorough answers to my questions but could also respond to clarifying questions and follow-up inquiries unique to my own project. Amazing, I figured. It should be much smoother sailing from here on out.
After another five minutes, I realized that I had been led down a rabbit hole: I was attempting to use an ArcGIS tool that did not perform how ChatGPT had promised. It had hallucinated functionalities of the tool that did not actually exist.
“I apologize for any confusion earlier,” it told me when confronted.
I learned a lesson: while artificial intelligence can be impressively powerful, there is a real learning curve in figuring out its limitations and using it effectively. Though AI quickly displayed its limitations, I was far more taken aback by its potential benefits.
Even if we confine ourselves to the realm of undergraduate coursework, the potential capabilities of AI seem head-spinning — more so than I think many of us realize.
Take the results of an informal experiment recently run by Harvard student Maya Bodnick. For a blog post on the website Slow Boring, Bodnick had seven of her Harvard instructors, spanning disciplines, grade typical writing assignments she sent to them. Bodnick told her professors that each submission could have been written either by Bodnick herself or by AI, but she actually used unedited output from GPT-4 (the newer, paid version of the chatbot) for all seven essays. ChatGPT’s essays earned a 3.57 GPA, including three A grades. It was that easy.
A couple of months later, Corey Robin, a professor of political science at Brooklyn College, discussed a similar topic in The Chronicle of Higher Education. Robin described writing as an essential part of the learning process — a way of “ordering one’s world.” But after seeing the essays ChatGPT could write with a few iterative prompts and nudges, Robin came to believe that nearly all of the essential value in the writing process could be replaced by AI-generated output. He resigned himself to assigning only in-class writing instead of take-home essays.
These examples represent only one extreme of AI use, in which we directly substitute AI-generated output for our own work. There exists a far richer spectrum of beneficial, insightful or productive uses — again, even if we restrict ourselves to the realm of undergraduate coursework.
You could imagine many ways to make AI complement, rather than substitute for, your own efforts. Take my earlier example of having ChatGPT help write code for statistical analysis on a large-scale research project: though it occasionally makes mistakes, AI helps me pass more smoothly over the countless little technical roadblocks of such projects. There are plenty of entirely different tasks — brainstorming research ideas, editing essays, summarizing a journal article — where AI could both meaningfully assist students and expand coursework’s educational richness.
At the level of policy, however, we are only in the most preliminary stages of grappling with AI. Students are certainly going to use AI widely, if they do not already; ensuring that they use it well is a daunting task with no obvious path to implementation.
At the institutional level, Macalester has remained mostly hands-off in establishing rules around AI use. Led by the DeWitt Wallace Library, a group of Macalester faculty and staff has created an extensive list of resources and guidelines for faculty. It is full of informative research on topics like AI ethics and use cases, offering ideas for different course policies and in-class activities dealing with AI. Yet its purpose is more suggestive than prescriptive, and these ideas will take time to be meaningfully implemented in the classroom.
Individual professors have the opportunity to experiment, and some are doing so. As my colleague Emma Salomon reported for The Mac Weekly earlier this semester, some professors have built AI into their courses or encouraged students to use it in specific and thoughtful ways. Take one example of a clever AI-based learning opportunity: in computer science professor Paul Cantrell’s class, students asked ChatGPT to write a paragraph of their final paper, and then critiqued its output.
These are interesting first steps, and I am curious to see what we discover in the coming semesters. But for the most part, it seems that students are flying blind in their early experimentation with AI use.
If this is not already enough to grapple with, keep in mind that we are essentially playing catch-up with AI tools that are already outdated. I have only used ChatGPT-3.5, which has been available for a year. It has none of the constantly expanding package of additional features built into the more advanced, and paywalled, GPT-4 — and that’s ignoring all the other products built by OpenAI’s competitors. AI’s creators are improving these programs at a breakneck pace, and I don’t know that we can adjust fast enough.
As I’ve watched AI begin to bloom, transform and permeate, I’ve come back to the words of journalist Ezra Klein, who described living under these rapid changes as “the difficulty of living in exponential time.” As Klein wrote, “there is a natural pace to human deliberation. A lot breaks when we are denied the luxury of time.”
In our little Macalester world, we have yet to face any obvious disruption from AI. But inevitably, disruption will happen. It might just happen sooner, more powerfully and more unpredictably than we’re ready for.