Posted on Feb 13, 2025

It is a Wednesday night in early February. About 40 of us sit around tables in one of our conference rooms. We’ve gathered for the first of four meetings to help craft a university policy around AI use in academic work. As a writing professor, I’ve been awash in research, anecdotes, white papers, and jeremiads about AI and student writing. AI is a vast, extraordinary frontier and the most incendiary topic in academic circles these days. It has spooked the herd.

An expert on the big, white screen Zooms in to give us a useful AI primer. I know that AI can be used for reading the MRI of my knee to help my doctor with her diagnosis. Its algorithms can be deployed on the logistical ju-jitsu of transport routes for goods I order online and have shipped to my doorstep. It is in my phone, calling itself Siri, and asking if it can help find a nearby Vietnamese restaurant. What I also know is that my imagination does not have sufficient scope to grasp its range of uses now and its potential for the future.

But we’re not gathered to express awe at AI’s capabilities. We are here to talk about a specific AI subset—Large Language Models (LLMs), which generate writing. The expert on the screen explains that LLMs mine acres of digitized language scraped off the internet and draw mathematical correlations to predict what words should follow other words to create the neighborhoods we know as phrases, sentences and paragraphs. That’s the how. We are here in this meeting to talk about what to do about it.

Since the LLM chatbot ChatGPT was publicly launched in November 2022, it (and the many other LLMs with lesser-known names) has seismically shifted the tectonic plates beneath the instruction and creation of writing. The equivalent of yelling “FIRE” in a crowded movie theater is to yell “AI” in a gathering of university writing professors. The responses follow the same trajectory: disbelief, panic, and an immediate dash toward our idea of safety.

In our AI working group we discuss the merits of using LLMs; the terms convenient and timesaving lead the pack. We acknowledge the perfunctory nature of some of the writing in our daily lives: the bureaucratic emails and the form letters. We talk about LLM utilitarianism in synthesizing lengthy documents written in opaque language, generating outlines, and outsourcing the repetitive and monotonous tasks associated with writing. Most of the conversation centers on what this new technology will give us. I wonder about what it might take away.

I am grateful to LLMs. Not for the beige and bloodless LLM-generated writing students have submitted as original work, but for the prompt to re-examine my own relationship to writing, an admittedly difficult process. What parts of this process am I willing to forfeit? And how might those forfeitures affect the other tasks I attend to as I ignite the alchemy that moves abstract thought into language?

“When we turn to an LLM to write for us, we are also inviting it to undertake the more fundamental task of articulation, and this is no small thing. Indeed, given the centrality of language to the human condition, we should wonder about the degree to which the outsourcing of the labor of articulation is the outsourcing of a fundamentally human activity.” So writes L.M. Sacasas in his Substack blog, The Convivial Society. Sacasas is a critic of digital technology who probes its moral and social consequences. He posits that the outsourcing of articulation could lead to the bankruptcy of our hearts, minds and imaginations.

That fundamental task of articulation is what I undertake when I begin to write and step into the filmy abstract swirl of my thoughts to prowl through my vocabulary. I articulate when I alight on an association, when I construct a metaphor. I feel it in the often-agonizing searching, in the missteps, in the rifling through the palimpsests of the knowledge I have amassed.

Like Sacasas, I subscribe to the idea of knowledge as interwoven relationships. He writes: “I think of knowledge as something more personal, something that emerges within us as we take in the world from our own unique perspective but also as members of particular communities. In doing so, we construct relationships among the things we have come to know (and not merely know about), these relationships are shaped by our history and our desires. And this knowledge, carried within, shapes our ongoing encounters with the world, building a cascading experience of ‘understanding in light of’, a poetic knowledge.”

The third AI meeting is next week. I’m unsure if I will attend. Instead of writing policy, I am more interested in unspooling questions. With the reality of LLMs and their capabilities, how can we remake meaning in teaching and learning writing? How have we institutionally contributed to a focus on having written rather than writing? What does it mean for our ideas of who we are when we ask machines to generate writing that we say represents who we are?

My writing pleasures are manifold. My writing agonies are persistent. What I have come to know is that it is in the doing of the thing—the doing of all of the things—where I most vividly encounter the essence of who I am.