Synopsis
The author experiences multiple encounters with AI discourse and struggles to make sense of the tension in their body that results. The author makes no argument and finds no answers; the author invites you, Reader, into the mess, where it sits there, looking at us.
—
Scene 1:
At the recent Rhetoric of Health and Medicine Symposium in Minneapolis, we crowd into a warm room on the UM campus to hear the closing keynote. In the open forum afterward, we sixty or so rhetoricians and educators worry over the state of our discipline and the state of health professional education. A question is raised: how are instructors integrating AI into their classrooms, and how can we help health professional students contend with the ways in which AI may transform medicine? Someone responds, uneasily: those conversations aren’t happening in the classrooms on their campus, at least. Overall, pre-health professional education feels very slow to respond to the AI wave. The unsettled silence that follows signals grim recognition. Who are we to teach students how they should contend with this thing, when we ourselves aren’t sure how to respond?
—
Scene 2:
In a mandatory all-staff meeting at my university for those of us working in student support services, we sit through a one-hour presentation on how our university will be integrating AI into its workflows. (We are one of a handful of universities that have paid to partner with the creators of ChatGPT Edu.) We spend precious little time discussing our concerns and move quickly to sharing AI usage tips. My colleagues share that they use AI to help them draft mass emails to students, to correct their tone, to help them generate activities for the professional skills classes they teach.
I realize that the work that is being outsourced to artificial intelligences is the work of rhetoric: the part that requires us to think carefully about how we put language to work in the world, to attune to the feelings and interiority of the Other whom we are trying to reach with that language. In other words, we are giving AI the hard parts. How do I say this in a way that will be heard? How do I speak in a way that will reach You, across the ineffable distance between my subjectivity and yours? The beating heart of my discipline, I realize, has been relegated to the province of machines.
—
Scene 3:
The professional organization on whose executive council I serve circulates its new statement on AI in writing education. The statement resigns itself to the inevitability of AI and recommends numerous approaches that might ethically ground the integration of AI into our teaching. Teaching AI literacy to our students is framed as a moral imperative. If we won’t teach them how to use it safely and responsibly, who will? Cerebrally, I think they’re right. But my skin vibrates. I vote “No” on approving the statement. We get the results of the vote by email. The statement has passed. It goes live.
—
Scene 4:
I join a listserv for other educators and scholars across fields, across the world, whose wariness about AI ranges from hesitant to fiercely, righteously opposed. We connect and start to feel something like solidarity, like trust. Another subscriber circulates a blogpost by Anthony Moser entitled “I Am An AI Hater,” and some anxious knot in me unclenches as I read. I find my way to Fernandes, Sano-Franchini, and McIntyre’s thinking on AI refusal, and to Kistner’s reclaiming of the term “Luddite.” Things start to make sense. I start to see the possibilities here. And I start to worry.
—
Scene 5:
I start a rewatch of Star Trek: The Next Generation out of a desperate need for escape while Gaza starves and federal troops are deployed against peaceful protestors and millennia-old glaciers in Antarctica calve into the warming sea. In an early episode, the android Lieutenant Commander Data is asked to submit to a dangerous experiment so that a Starfleet researcher by the name of Maddox can unravel the secrets of Data’s positronic brain. Worried that the invasive procedure will destroy his memories, Data refuses. When Maddox contends that Data is Starfleet property and therefore cannot refuse, the conflict is taken to Starfleet’s court, where Captain Picard must make an argument for Data’s sentience.
The arguments end in a state of perplexity — no one can prove that Data is sentient, any more than they can prove the sentience of another human being, and yet Data feels emotions, experiences ego, and expresses sentimentality and self-awareness. Data is a machine, but clearly Data is not a thing — he is something else. Before she renders her judgment, the judge gazes at him and murmurs: “It sits there looking at me, and I don’t know what it is.” Data looks back at her, with his characteristic wide-eyed curiosity, and I am struck by the impasse between them, the incommensurable space between humanity and what humanity has wrought. Us, gazing into the machine; the machine, gazing back; neither of us quite sure what we see, but seeing, nonetheless.
—
Scene 6:
I teach an online public speaking course for a medical arts academy based out of California. In a recent discussion forum about public speaking ethics, I notice a pattern of similarity in my students’ responses. Across some seventy discussion posts I see the same syntactical structures, key phrases, and surface-level sentiments, and realize that I’m encountering real evidence of widespread AI usage in my students’ work.
I reach out to a colleague teaching the same course, and we worry over the problem of how to respond. I’ve had encounters with AI-generated student work, but it feels different this time, because these students will be nurses, imaging technicians, radiology techs, and some of them will go on to further schooling in other fields of medicine. Their ability to care for human beings will depend upon the skills they are presently outsourcing to AI. They will need the humanities to nourish their own hearts and souls in their practice of medicine. Am I failing these students in my inability to respond to their thought-intimacy with the machine? Is this a crisis? Is it nothing at all?
In an email exchange with our course director, we consider making the discussion prompts easier than they already are so that students won’t feel tempted to use AI. We consider making them so straightforwardly simple that all that is required of the respondent is a pulse. We’re just looking for some confirmation that the student is there, on the other side of the screen. Just looking for some evidence of a heartbeat.
—
Works Cited
Berg, Ingrid. “It Takes a Village to Heal the Doctor: How the Humanities Helped Me Reclaim My Idealism in Practicing Medicine.” Synapsis, 13 Dec 2023. https://medicalhealthhumanities.com/2023/12/13/it-takes-a-village-to-heal-the-doctor-how-the-humanities-helped-me-reclaim-idealism-in-practicing-medicine/.
Fernandes, Maggie, Jennifer Sano-Franchini, and Megan McIntyre. “What is GenAI Refusal?” Refusing Generative AI in Writing Studies, Dec 2024, https://refusinggenai.wordpress.com/what-is-refusal/.
James, Ted A. “How Artificial Intelligence is Disrupting Medicine and What It Means for Physicians.” Harvard Medical School, 13 Apr 2023, https://learn.hms.harvard.edu/insights/all-insights/how-artificial-intelligence-disrupting-medicine-and-what-it-means-physicians.
Kistner, Clayton. “I am a proud Luddite in the classroom.” The Important Work, 9 Sept 2025, https://substack.com/home/post/p-173044857?source=queue.
“The Measure of a Man.” Star Trek: The Next Generation, season 2, episode 9, 13 Feb 1989. Paramount+, https://www.amazon.com/gp/video/detail/B0CM3DBG3Z/ref=atv_dp_season_select_s2?jic=8%7CEgNhbGw%3D.
Moser, Anthony. “I Am An AI Hater.” moser’s frame shop, 26 Aug 2025, https://anthonymoser.github.io/writing/ai/haterdom/2025/08/26/i-am-an-ai-hater.html.
Rabbitt, Adelyn. “IU provides AI resources for students and faculty. Here’s what university experts think.” The Indiana Daily Student, 14 Sept 2025, https://www.idsnews.com/article/2025/09/iu-chat-gpt-edu-gen-ai.
Header image credits: Composite made by the author. Computer code courtesy of Markus Spiske; brain scan courtesy of MART PRODUCTION. Image of Data retrieved via screen cap.


