The first time I ever used ChatGPT to help me write, I had an existential meltdown. Feeling simultaneously underwhelmed and horrified by the edits I'd tasked ChatGPT with producing, I ran into my partner's office and declared my fears. "Civilizations might literally fall," I prophesied dramatically. Not because AI will outsmart and overrun us, nor because the displaced workforce will cause civil unrest (although that one's stressing me out, too). No, my fear was that with the advent of generative AI, we will lose one of our most essential and human skills: the ability to write.
This idea haunted me for months. While there are many brilliant people in the world who are not brilliant writers, writing is perhaps our most powerful forcing function for formulating clear and structured thought. In speaking, we rely on volume, gestures, circumstances, and the generosity of our conversational partners to help us communicate what we mean. In more artistic forms of expression, we leave much open to interpretation, allowing the imprecision itself to create space for wonder and connection. But in the truly singular challenge of ordering words on a blank page, we are inevitably confronted with the difficulty and importance of trying to make sense.
Being able to make sense matters. Not just for persuading others, making well-reasoned decisions, and creating stable societal truths, but also for our individual ability to feel understood by others. It is agonizing to search for the words for your experiences and come up short. And it is dangerous to keep living in a world where "I am hurting" is so often imprecisely and treacherously articulated as "I am angry."
Without a doubt, I believe that the rise of societies where people cannot reason or self-express would pose an existential threat to any chance of a peaceful, equitable world.
The good news is I don't believe we're doomed to living through that catastrophic rise. At least not because of generative AI.
Surviving AI Writing Panic
Over the past few months, I have dedicated considerable time to understanding how AI tools can and will be used to help people write. I've played with every tool I can find on the market, and I've dug up every resource I can about how educators, businesses, and policymakers are using them too. Little by little, my answer-seeking side quest has managed to chip away at my fear. However, it wasn't until I read Paul Graham's recent post on AI and writing that I realized how radically my perspective has changed.
Not only do I believe generative AI will not be the death of sound reasoning and writing, but I am cautiously optimistic that AI might help us nurture generations of better thinkers and writers than our educational system has been able to produce in a very long while.
“Good Writing” Is Not One Skill
The first thing that helped subdue my fear was some critical reflection on the nature of writing as a skill. We are quick to call people good writers or bad writers, but writing is not actually one monolithic thing. Anyone who writes a lot can tell you that "writing" is actually a multistep process of planning, drafting, editing, and revising. Lots of "good writers" are at least half-decent at all of these things, but crucially, plenty are not.
Some writers plan with clarity and ease but are terrible editors of their own and others' work; others would sooner be raked over hot coals than write an outline, yet can edit the hell out of a haphazard first draft (I am potentially projecting). And while the outputs from all these people might be labeled "good writing," the skills that get them there are radically different. Drafting and planning are highly imaginative, while editing is highly analytical: skill profiles we often place in opposition. Even editing and self-editing are different skills, the latter being mostly a test of humility and restraint.
"Good writing" is also highly contextual. A leader who can draft a clear and professional memo might be incapable of articulating the feeling of raindrops on skin. Meanwhile, best-selling novelists worldwide might loathe and struggle to write an Amazonian one-pager.
I'm belaboring the point because it's my newsletter and I can, but what I mean to say is that it's important for us to consider the skill of "writing" with more nuance if we're to understand how AI might sharpen or weaken it.
How People Actually Write With AI
Here's my challenge to the doomsayers: Ask fifty of your friends and colleagues how they're using ChatGPT to write (I did), and you'll find it's not as skill-less as you might dread. When I talked to people in my life and in online communities about where AI chatbots fit into their writing processes, most described a coach or companion invited into only parts of the journey. And ask anyone who considers themself a good writer whether they would ever put out untouched copy from ChatGPT or even the more personable Claude, and they will recoil at the thought. There is a resounding consensus amongst writers and nonwriters alike that AI writing is "fine but not great."
What encourages me about that judgment is not just hopeful naivety that technology will never write as well as humans. It likely will. What encourages me is that the very existence of that critique suggests that while people might not be drafting on their own, AI-generated writing has given rise to a new age of writers who are engaging in active, scrutinizing editing. And of all the writing subskills that develop our ability to think, editing—asking whether something actually sounds persuasive and makes sense—is perhaps the most valuable one that's seldom taught.
Moreover, successfully prompting AI to write for you, even if you make it draft and edit on its own and never review the output, still demands the ability to identify the appropriate tone, format, and objective for your writing. Ironically, getting passable writing out of AI tools requires feeding them precise and purposeful writing.
None of this is to say people can't become debilitatingly reliant on AI in their writing process as models advance. But what I've observed and personally experienced is that most people are not trying to eradicate writing from their lives entirely: at worst, they're looking for help supplementing their weaker writing skills while still exercising the others; at best, they're seeking instructional coaching that can actively develop their skills. Indeed, AI writing assistants are often used to unstick, unblock, and ultimately expose writers to new words, structures, rules, and examples of decent writing they would otherwise never encounter.
AI vs Effort: The Surprisingly Optimistic Part
There is such a sad defeatism about humans and where we're headed in the assumption that everyone will aim to reduce friction and minimize learning at all costs. And if that underlying pessimism is correct, we probably have bigger problems than people writing AI-assisted emails anyway.
However, the second revelation that pacified my fears about AI writing was that the historical and contemporary behaviors of institutions committed to cultivating critical thought and expression do not point to some human telos of cutting every intellectual corner we can.
My most personally resentment-laden data point for how educational institutions adopt and leverage technology is the perpetually hesitant integration of calculators into mathematics education. Despite the calculator's ability to dropkick the need for learning arithmetic into oblivion, educators at every level have historically been thoughtful about ensuring technology does not displace foundational understandings of basic math. From my first time seeing a fraction to my last differential equations exam, I longed for a teacher who was pro-scientific calculator. I would have even settled for not having to calculate exponents in my head on quizzes. Woefully, my hero never came.
I have never met a teacher who was genuinely careless about introducing students to technology that might undermine basic skill development. I don't know a language instructor who can't spot a Google-translated assignment or a math professor who never forced arithmetic—even at the highest levels of collegiate math, where they surely believed we could multiply.
When you turn over rocks to try to uncover nefarious uses of AI in writing education, you are inundated instead with inspiring and heartwarming examples of how individuals and institutions are using AI not just to help people become better writers but to introduce joy and emotional safety into writing so that more people may come to actually love it.
You'll find the 10 million students who use Quill's technology to learn everything from basic grammar constructs to how to critically "read for evidence." You'll find 60% of American school districts using NoRedInk to achieve rapid and meaningful improvements in standardized testing—even among students with special needs. And you'll discover initiatives like the Philadelphia Writing Project and the National Writing Project that are embracing the transformative potential of AI to unlock the personalized writing instruction that the vast majority of Americans are not afforded in schools today. And if you really don't believe that AI is making writing more accessible and more fun for kids and adults, spend a couple of hours in Sudowrite and tell me it didn't light sparks in the cobwebbed creative corners of your brain.
I've been the writing type since before I could read my own handwriting or pronounce half the consonants in my name (shoutout to the speech therapists of Washoe County). I also believe in my heart of hearts—if not purely out of self-affirmation—that being "good at writing" is one of the highest-leverage skills a person can have and something anyone can master if they try. To me, writing is precious, sacred, and essential.
So, know that I do not worry lightly or passively about the future of writers in the world. But after months of losing sleep, irreverently dismissing Grammarly Pro recommendations, doom-spiraling about the fate of civilization, and bickering with ChatGPT about tasteful, intentional uses of fragment sentences, I will leave you with this: when I asked my writing GPT to critique this newsletter, it did regretfully tell me that I need to "tighten my midsection." But more importantly, it told me I did a good job. And for the millions of people for whom writing has only ever been boring, painful, and out of reach... I will happily stare down all the potential doom and decay, knowing that next time they write, AI will make it a little more likely that they can walk away with the feeling they did a good job too.
Thank you—always—for reading. Talk soon.