What remains for us?
Technomoral virtues, Lewisian activities, and humanism in the age of AI
“The first action to be taken is to pull ourselves together. If we are all going to be destroyed by an atomic bomb, let that bomb when it comes find us doing sensible and human things - praying, working, teaching, reading, listening to music, bathing the children, playing tennis, chatting to our friends over a pint and a game of darts - not huddled together like frightened sheep and thinking about bombs. They may break our bodies (a microbe can do that) but they need not dominate our minds.”
— C. S. Lewis. “On Living in an Atomic Age”. 1948.
“Moral intention is insufficient without some accompanying insight into the type of being that is worthy of my becoming in this world…a lifelong effort to attend to, or be mindful of, my moral development and courageously direct it along the path I have chosen. The enduring exercise of this intention may not lead one to moral perfection, but does lead one by degrees to a more graceful, relaxed, harmonious, and joyful way of life.”
— Shannon Vallor, Technology and the Virtues. 2016.
A friend who teaches English composition at a highly ranked four-year college in the United States recently submitted his final grades for the semester. Alongside the flood of sunny relief that accompanies these things, he mentioned a student who so clearly used ChatGPT in her essay that her quotes were entirely fabricated. Part of the class, a semester-long close reading of Morrison’s Jazz, was to keep a notebook of quotes as they read. This student had completed that task each week, dutifully, and then failed to muster a single one of those quotes in her essay.
I was flabbergasted. Cheating with ChatGPT, sure, but not even bothering to paste quotes from your existing notes into the prompt? The fake quotes were, as my friend said, “like Morrison but without the subtext.” They felt hollow, as if kicking the seemingly solid wall of words would result in a clanging sound. But this is not an essay about ChatGPT’s use on college campuses. We are all aware, I hope, that both chatbots and college freshmen can produce mediocre essays. The former is a matter of technology. It is the latter I am concerned with, given that it betrays something much scarier.
When I arrived at college just over 10 years ago now, I remember packing into the school’s largest auditorium for a pep talk by the dean about “following our passion” and taking courses that “fired our curiosity”. Sure, I thought, but my family and I had made sacrifices to get here, and I wasn’t going to waste years of tuition on following my curiosity. I promptly ignored the dean and got on with my STEM degree. I reassured myself that I was simply being pragmatic, learning quantitative “hard” skills that would be directly applicable to the job market in a few years.
In some ways, I wasn’t wrong. I did get a high-paying, white-collar job in my field less than three months after graduation. Those choices set me up well to be accepted to my top-choice graduate schools, to have a stable financial foundation, and to give me the liberty now to choose my job relatively freely, albeit within the bounds of late-stage capitalism.
However, in other, deeper ways, my focus - certainly my own choice, but born of a heyday of “learn to code” extremism - hollowed out the core of my education, and more than that, set curiosity and creativity at the edge of my life, rather than at its center. This newsletter, the recent workshop I ran, my writing: all of it was secondary to my “serious” scientific work. I was told this over and over again by my mentors and teachers, my advisors and family members, and eventually, by myself.
Ironic, then, that every day now I use a technology that can replicate many of the “hard” skills I spent years learning in a matter of minutes, and I am left wondering: what remains for me to do?
Virtue ethics posits that morality and the definitions of a good person, a good life, and a good society are more “like a language that is spoken and invented in real time”1 than a set of hard-and-fast rules. Virtue ethicist Shannon Vallor studies how technology reshapes human character and functions as a moral practice. In her book, Technology and the Virtues, she argues that there are qualities (“technomoral virtues”) that allow humans to respond flexibly to emerging technologies while retaining their character and living well and wisely. She draws on the traditions of Aristotle, Confucius, and the Buddha to propose these virtues; they include honesty, empathy, and humility.
I work as a machine learning researcher, and the field is currently expanding and progressing at such a pace that we speak in matters of weeks, perhaps months. Statements that apply today may well be obsolete six months from now. I regularly use new tools and feel a sense of reality slipping, of the future being yanked unceremoniously into the present, as they do things I didn’t think possible. Casey Newton calls it “AI vertigo”.
Lately, I have found myself slipping into bouts of “AI ennui”2: what does it mean to work, to create, amidst technology that can do many of these acts as well as, if not better than, you? What is the point of learning, of building skills? Which skills still matter? Which activities are still valuable? At its worst, this AI ennui can make me want to stop writing, stop coding, stop creating. Even as tools like Claude Code make projects easier to bring to fruition, they also make me wonder: what does it mean to create something myself anymore?
Last autumn, I attended a workshop of the Royal Historical Society focused on creating a policy for AI and education. I was invited by my friend Adam Budd to listen to historians at various career stages discuss how AI is being applied in their field. They highlighted the importance of practices such as interacting with the archive, reading primary historical evidence, and crafting written and oral arguments, which have not changed in many decades and remain core to their work. In addition, they mentioned fears that students do not value struggle, deep focus, or challenging practice towards mastery. AI tools are one route around these frictional, difficult, discipline-focused practices, but one that leaves us impoverished in skills that must be hard-won, if they are to be acquired at all.
I often have this experience with writing - that it is both difficult and necessary, that I am forming my own thoughts as I write the piece. There is no escaping that struggle, not without losing the inherent value of the activity.
These, to me, are the activities to which C.S. Lewis refers in the opening quote of this essay. The things we would want to be doing “when the bomb dropped”, regardless of their outcome: gathering with friends and loved ones, interacting with art, music, or literature, learning something interesting and new. C. Thi Nguyen discusses “striving games” in his book Games: Agency As Art. By his definition, striving games invert the means and the end: we choose an end that enables the means we want to experience. A striving player would be fundamentally disappointed if they were simply handed the outcome of the game; the process is the point.
I would argue that activities in this category remain important independent of any advances in large language models’ capabilities. Even if a model could (say in the form of a humanoid robot) hug our children, or plant our favorite spring flowers in the garden, or read a beach read in the sun, we would not want to cede these activities to it. They are the heart of life: our curiosity, our agency, our very souls.
The fear I have is that the structures we exist within - our education, our employment, our economy - often squeeze these activities to the margins, and in doing so, encourage us to substitute easy but insufficient technologies for the harder work of thinking, learning, creating, and living. We are tired; we (I) often want to cede choices to others. I have certainly let the practicality daemon guide me, rather than my own curiosity. And now, when the endpoint of that practicality has led me to a field that could very well be automated away in the next several years, what is to guide me further? I will need a much more fundamental virtue set.
Vallor speaks about the opacity of future technologies, which she argues should inspire us to technomoral humility: “a recognition of the real limits of our technosocial knowledge and ability; reverence and wonder at the universe’s retained power to surprise and confound us; and renunciation of the blind faith that new technologies inevitably lead to human mastery and control of our environment.”
It is the work of a lifetime to cultivate the virtues to live well with technology, to develop a principled refusal when the circumstances require it, and to retain, in the face of AI ennui of the most pervasive sort, your sense of “reverence and wonder”. That humility, that wonder, might not tell you exactly where to go. But perhaps it is a door, or maybe just a window.
-
https://www.nytimes.com/2026/03/06/opinion/ezra-klein-podcast-dean-ball.html ↩
-
It is early March; I suspect that some of this may be 3 months without sunlight talking. ↩