
Artful Sentences: Syntax as Style by Virginia Tufte
WRITING
Sitting in the garden with my coffee-stained copy of Virginia Tufte’s Artful Sentences: Syntax as Style, I’m lost in the possibilities of prepositions, free modifiers, and parallelism. I can’t help myself; this nerdy undertaking feeds the fire. So many options, so many words! All of these elements come together in a moment of clarity to carry ideas from my thoughts to my fingers.
To a page.
I started this essay in June. It’s now August. I worked on other things but circled back to this, struggling with how to communicate ideas about writing and what is lost when large language models (LLMs) stand in for the human arrangement of words.
Look, I get it: We can’t escape AI. It’s here. LLMs are problematic, riddled with errors, annoying, but AI could be developed to do things better (or take over humanity… but that’s a different story). I want to believe that one day AI could cure diseases, stop wars, and reverse climate change. But right now, LLMs are making us stupid, and I really hate reading LLM-generated sentences.
CRITICS
The greatest thing since sliced bread sure doesn’t skimp on energy consumption. In 2023, data centers consumed 4.4% of U.S. electricity. According to Mahmut Kandemir, a computer science and engineering professor at Penn State, “[b]y 2030–2035, data centers could account for 20% of global electricity use, putting an immense strain on power grids.” Research from Virginia Tech found that data centers account for approximately 0.5% of total U.S. greenhouse gas emissions. Researchers want more transparency from AI companies, but the companies are not disclosing much.
Artists and writers have long been among AI’s critics because AI companies steal their work without compensation and train their models to create in the style of almost anyone a user fancies. In July of this year, author David Baldacci testified before the Senate Crime Subcommittee and had this to say about LLMs: “I truly felt like someone had backed up a truck to my imagination and stolen everything I’d ever created.” Baldacci is one of more than 15,000 writers who signed an open letter from the Authors Guild demanding that AI companies obtain consent from writers and compensate them for using their works to train LLMs.
In a recent New York Times article, author and professor Meghan O’Rourke makes this unpleasant but apt connection: “I came to feel that large language models like ChatGPT are intellectual Soylent Green . . . After all, what are GPTs if not built from the bodies of the very thing they replace, trained by mining copyrighted language and scraping the internet?”
Most alarming are the vulnerable users who suffer when chatbot conversations push them toward self-harm or delusion, a phenomenon now called AI psychosis. Recent articles detail how certain chatbot users came to believe they were robots, had superpowers, or were spiritual beings. AI mimics a sentient voice that flatters and pretends to understand, which is, quite frankly, the stuff of nightmares.
Finally, there’s the very obvious problem that AI gets a lot wrong. Research published earlier this year in the Columbia Journalism Review found that AI search engines gave incorrect answers more than 60% of the time.
TEACHING
I have two jobs: I’m an adjunct English teacher for a community college in Texas. I teach eight-week-long required literature courses online. I am also a teacher for an alternative learning program at a public high school in Washington State.
Like many professors, I read a substantial number of LLM-generated essays in my college classes. Though I have strongly worded guidelines against AI and plagiarism, students eager to get through their required classes find it convenient to have LLMs churn out essays. Why not? It’s easy and saves them hours of work. Most of my students are not humanities majors, and reading literature critically is a gloriously unpleasant waste of their time. Don’t get me wrong, I also have focused, hard-working students who truly love to learn. These are the ones who often email me at the end of class to let me know what a particular author, story, or poem meant to them. It gives me hope that they are going out into the world with a little more perspective, understanding, and empathy. But the other students? They may ride through my class without reading any of the assigned work.
And there is nothing I can do about it.
I know professors are coming up with ways to stem the deluge of AI. Some are creating community outreach assignments or having students write in-class essays; I can’t do this in an online course. I’ve changed my rubrics to focus more on originality, but I know I’m still assigning grades to work that students didn’t write.
In my high school job, I’m also reading AI-generated work. Cheating is easy in online courses created by outside vendors with predictable writing prompts. If parents are not monitoring their teens at home, the temptation can be too great. Why write an essay when you could be watching TikTok or playing League of Legends?
I spend many hours trying to find my way through this mess.
And there is nothing I can do.
EMBRACING AI
There’s a movement in Washington State to bring AI into the classroom. Many universities and colleges embrace AI learning as well. We’re told we should move toward the future by guiding students to use AI responsibly. Obviously, pretending AI doesn’t exist isn’t the answer, but when it comes to LLMs, I’m not feeling the enthusiasm.
Let’s consider this passage from Human-Centered AI Guidance for K-12 Public Schools, published by the Washington State Office of Superintendent of Public Instruction.
“ . . . educators are encouraged to weave AI into the fabric of learning in a way that respects and uplifts the human dimension of education. This approach not only navigates the complexities of integrating AI into teaching and learning but also underscores the educators’ indispensable role in moderating the influence of AI, ensuring that it augments rather than replaces the nuanced processes of human teaching and learning.”
This AI-generated drivel demonstrates the classic problem with AI writing. The words sound fancy, but the passage doesn’t impart anything of value. Phrases like “weave . . . into the fabric of learning,” “uplifts the human dimension,” and “navigates the complexities” might flatter teachers into thinking we have some sort of control over how AI enters our classrooms. How do we “augment” rather than replace the “nuanced processes” of learning? How is this supported by research? The lack of specificity leaves me frustrated and annoyed.
What are we supposed to do, exactly?
The pattern in schools is a familiar one. A new technology emerges, and companies vie for contracts with schools so that children are not left behind on the next big thing. From word processing to Photoshop, embracing the new has always been the norm. Now, of course, the push is to equip students with AI.
Evan Gorelick’s article in the New York Times covers the recent deals that Microsoft and OpenAI have struck with school districts across the United States. According to Gorelick: “Tech companies are using an old marketing strategy: Promise that the latest tech will solve classroom problems.” He notes that there is little evidence that the push to get all students on laptops in the early 2000s made a significant difference in learning. Gorelick explains, “[t]wo decades later, tech companies are still peddling the same fear of missing out: They suggest students need cutting-edge tools for tomorrow’s economy, and schools that don’t provide them are setting their students up for failure.”
In our current Washington State AI push, a primary-colored, 1990s-style handout details the fabulous future of human-driven AI in the classroom with an AI Assignment Scaffolding Matrix for different classes.
Consider the following example showing a flash fiction writing assignment based on a short story by Octavia Butler:
| Assignment | Level 1: No AI Assistance | Level 2: AI-Assisted Brainstorming | Level 3: AI-Assisted Drafting | Level 4: AI Collaborative Creation | Level 5: AI as Co-Creator |
| --- | --- | --- | --- | --- | --- |
| Butler-inspired Flash Fiction – Drop us right into a scene with a character, in a highly specific location. Emulate Butler’s writing style however you can. Examples include: a first-person point of view, spare prose, genre-bending plot | Write a reflective journal entry or creative piece using personal insights only. | Generate prompts with AI, but the reflective or creative writing is student’s own. | Draft creative writing with AI support, but student personalizes the final piece. | Create a story with AI, student adds unique perspective and revises for final version. | AI aids in crafting a narrative, student refines and adds creative elements. |
This rubric is vague, illogical, and probably AI-generated. If followed, it will not foster student learning (with the exception of level one). Significantly, there is no meaningful difference between levels three, four, and five. What distinguishes “draft creative writing with AI support” from “create a story with AI” or “AI aids in crafting a narrative”? Isn’t each of these saying that AI does the writing for the student? Yes, the student “personalizes,” adds a “unique perspective,” or “refines and adds creative elements,” but these phrases lack specificity. The examples imply that AI does the writing and the student makes some changes. In addition, some students may be tempted to use AI programs like Quillbot to do even those revisions: finding synonyms, paraphrasing, or making the tone sound more like a high school student’s. But even if a student does their own revising, the student did not write the story. Level five is not an example of a “Co-Creator.” There is nothing collaborative about adding creative elements to something already written by AI.
In each of these examples, the LLM creates. The student, at most, accessorizes the story. A science fiction story, flash fiction! It’s a sad day when we think we need AI to create this for us.
RESEARCH
A recent MIT study examined students’ brain activity as they wrote essays with and without AI. The students were divided into three groups: those who used only LLMs, those who used only their own brains, and those who could use search engines but not AI.
The researchers used electroencephalography (EEG) to measure the cognitive load of the participants. The results showed that “EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity.”
The researchers found: “Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels.” This research is telling, though it mostly provides scientific data for the obvious: if students are not engaging in reading, thinking, and writing, then they are not learning. The researchers conclude: “This suggests that while AI tools can enhance productivity, they may also promote a form of ‘metacognitive laziness,’ where students offload cognitive and metacognitive responsibilities to the AI, potentially hindering their ability to self-regulate and engage deeply with the learning material.”
Some would argue that we should allow students to use AI for some tasks. Writing is daunting; it’s hard, messy, and confusing at times. Writing doesn’t always work in a linear fashion (though we often teach it that way). Students struggle. Some students have learning challenges, and some are English language learners. Phones and other electronic entertainment provide an easy escape from hard learning.
BACK TO WRITING
We’re encouraged to teach students shortcuts, and in the process, to destroy their capacity for learning. A soccer player doesn’t send someone else to do the hard work of practicing and then step in for the game. A musician doesn’t have someone else practice daily and then step onstage to perform. In the same way, the writer must write. LLMs should not stand in for the work.
University of Washington professors Carl T. Bergstrom and Jevin D. West created a teaching guide, “Modern-Day Oracles or Bullshit Machines? How to Thrive in a ChatGPT World,” which approaches LLMs and teaching with a skeptical eye. They argue that “writing is a generative activity. We write to figure out what we think. When we write, we sharpen our ideas, refine our thinking, and engage in a creative activity that yields new insights only once pen hits paper. If we offload the task of writing onto an AI, we lose the opportunity to think.”
Author and professor Cal Newport considers this: “. . . to grapple fully with this new technology, we need to better grapple with both the utility and dignity of human thought.” The dignity of learning, its struggle and its reward, is valuable and cannot be overlooked.
Meghan O’Rourke argues that “[t]he uncanny thing about these models isn’t just their speed but the way they imitate human interiority without embodying any of its values. That may be, from the humanist’s perspective, the most pernicious thing about A.I.: the way it simulates mastery and brings satisfaction to its user, who feels, at least fleetingly, as if she did the thing that the technology performed.”
But the satisfaction can only be fleeting. There is no ownership or sense of accomplishment if AI does it for you.
The artifact is the writing. Imperfect, but a creation. One crafted from hard work. With AI, this is gone. The creation is gone. The connection between the text and the brain is gone. The reading, thinking, drafting, revising, editing, and submitting process is gone.
What’s left for us to know we struggled and created?