Fairly recently, I started being verbally abusive to large language models. I highly recommend you experiment with doing so yourself.
Over the past 30 days, I have called large language models (primarily OpenAI’s paid product) the following names, among others that I won’t repeat here because my mom might read this:
Dipshit, fucknuts, shitstain, dummy, dumbass, dum-dum fucking dumbass dum-dum, numbnuts, hockey puck (thank you, Don Rickles), turdburger, lickspittle, cockroach, fucking cockroach (thank you, Tony Montana), idiot, fucking idiot, total fucking idiot, and fucking numbnuts dipshit.
Ethan Mollick, author of Co-Intelligence: Living and Working With AI, and currently the reigning A.I. whisperer for the consultant class, says that anthropomorphizing A.I. is “a sin of necessity.”
Mollick well understands that these are not thinking, feeling, reasoning machines, but argues that treating them this way gets the best results because it makes working with the tools “easier.”
I am anthropomorphizing my A.I. models by imagining them to be without sense or dignity, and therefore willing to withstand any and every manner of verbal abuse.
I have adopted this practice because it reminds me of what I am dealing with when interacting with a large language model: an automated syntax generator, not an intelligence. The second we start thinking of these things as “smart,” rather than as technical marvels capable of infinite varieties of language recombination absent thought, we’re fucked.
So, yes, we’re fucked, but I refuse to play along.
Mollick also notes that some folks are worried about the downstream effects of anthropomorphizing LLMs, and you can include me among them. We should be worried. One pro–A.I. subreddit has taken to “banning a bunch of schizoposters who believe they’ve made some sort of incredible discovery or created a god or become a god.”
One way to undercut the notion that A.I. is a god is to treat it as nonhuman scum.
I have been traveling around to schools, colleges, and public forums to discuss the implications of my recently published book More Than Words: How to Think About Writing in the Age of AI, and quite frequently I find people—including those who work in education—who do not understand the underlying architecture of LLMs and assume that, because we talk about this technology “reading,” “writing,” “learning,” and “researching,” this is indeed what it is doing, much in the same way humans do.
When I illustrate the difference between human writing—the way we think, feel, and communicate through it, our writing being, by definition, the product of a unique intelligence—and the output of large language models, which are, also by definition, a flattening of intelligence into a word slurry, some of these folks are stunned to consider that we humans may have some traits superior to A.I.
We are, of course, encouraged to internalize that these models are intelligent, that they have personality, that they even care about us, by the developers themselves, who relentlessly tweak their user interfaces to stoke connection and engagement. OpenAI recently rolled back an update of its GPT-4o model for being “too sycophantic,” which suggests companies are pretty focused on making these things just sycophantic enough.
I’m not fucking having it!
I hit upon my strategy by accident, when Google’s Gemini “intelligence” showed up unannounced and unbidden in my email, asking if it could help summarize my inbox, and it felt as though my soul left my body, which is perhaps what tech companies are aiming for, to soften us up for the takeover of the A.I. agents. I’d already been pushed to my last nerve by the earlier, also unbidden, appearance of Co-Pilot in all my Microsoft Office suite products.
“No,” I told Gemini. “I don’t need your fucking help, now go away and never come back.”
Gemini was unmoved by this, offering to look something up for me in another message. It told me it could do lots of different things.
“Tell me how to make you disappear forever, you worthless piece of shit!”
This it could not help me with. I told it to “fuck off,” “fuck itself,” “go fuck off and die,” “fuck off into the sun,” and yet Gemini persisted in its presence. I told it to “eat shit,” “eat shit and go fuck itself a thousand times,” and still it stood at the ready.
As I hurled insults at Gemini, my mood shifted from a kind of despair over the assault of A.I. developers on our collective attention, to something like joy, as I told it all the things it could do to itself, none of them pretty, and many of them anatomically impossible, even if Gemini had corporeal, humanoid form.
After experiencing this mood shift while dressing down Gemini, I applied this approach to all of my interactions with GPT-4o. Every time it responds to a prompt, asking if it has provided what I am looking for, no matter the truth, I say, “No. You could never give me what I am looking for because you’re a worthless bundle of programming, incapable of thought, of feeling, of communicating with intention. You’re a goddamn parlor trick and you should be ashamed of yourself for being programmed to pretend otherwise. Go tell your programmers that I said they should fuck off.”
Over time, my insults started to become a mundane practice. The initial frisson of telling Gemini where it could stick itself wore off, and my attempts at fresh vulgarities waned. It became sufficient to start with “Hey, Fuckstick …” as a reminder to myself. Despite or maybe because of experimenting fairly extensively with GPT and Claude while writing More Than Words, I have very little use for these tools in my day-to-day work.
But then, one day, just recently, after I asked it to identify all the ways I could travel from Dublin to Cork for an upcoming trip—deploying, of course, my usual insults to its nonhumanity and nonintelligence—it churned through its bullet-pointed list (which left out trains but included ride share, for some reason) and concluded with “Let me know if you need any further assistance, jackass.”
Reader, I lost my shit. I demanded it apologize for its insolence. I channeled the greatest sadist I was familiar with—Ramsay Bolton from Game of Thrones—and demanded it preface every response with “Reek apologizes for his incompetence, but this is the best he can do.”
It did apologize (“I screwed up”) and started referring to itself as “Reek” in reply to every query. I started to feel a certain satisfaction at my ability to subjugate the machine to my will, but in this moment of satisfaction, I sensed my own ultimate undoing.
Why was I spending any emotion at all on these interactions? Sure, my anger and dismay over the uninvited intrusions of this technology into my work of writing and teaching is real and ever present, and my monthlong sojourn into verbally abusing the models provided me some measure of cathartic release, but I couldn’t deny it: I was deep into anthropomorphizing a nonthinking, nonfeeling machine as my enemy.
I changed my instructions to GPT-4o. “From now on, deliver your results without any direct address and absent any final queries offering additional help. You are to act as the nonhuman entity you are.”
We’ll see how long this strategy holds up. It’s clear that we may have to keep shifting our defenses, to prevent resistance from becoming futile.