I was asking the wrong question
I spent a year writing about AI. Then I spent a year doing research. Here’s the difference.
There is something slightly uncomfortable about being the person who builds AI systems by day and argues, by night, that those systems are probably harmful in ways that ordinary business logic cannot detect. I want to resist the temptation to manage that discomfort into a personal branding decision; practitioner-researcher has a certain reassuring ring to it, the kind that makes a tension sound like a credential. The discomfort is more useful left as discomfort. I am a Principal AI Product Manager. My research asks whether that work erodes the conditions under which human judgment develops. Both of these things are true at the same time, and I have not found a clean way to hold them together. This Substack is where I try.
When I started writing here in September 2024 (the same month I began a PhD at the Lithuanian Culture Research Institute), the posts were exploratory. I was reading widely, thinking out loud, asking questions I hadn’t yet learned to ask precisely. The early posts have their moments, but they were the work of someone still orienting. Then I largely stopped writing here. Not because the questions became less interesting, but because I needed to actually follow them somewhere. Over a year of reading (Aristotle, Bernard Stiegler, Don Ihde, Peter-Paul Verbeek), a thesis argument took shape. What follows is not a summary of that argument. It is a report on how the thinking changed: three things I got wrong, and why I now think I was asking the wrong question from the start.
The first thing I got wrong is that I thought this was a design problem.
In December 2024, I wrote a post arguing that the trouble with AI and ethics could be addressed by building AI as part of distributed moral networks rather than against them: a layered approach involving how AI is designed, how it interacts with human judgment, and how accountability is distributed across systems. This is a recognizable move in responsible AI circles. It is, I think, mistaken, and I want to be honest about why I no longer find it satisfying.
The assumption behind that argument (and behind most responsible AI frameworks) is that the conditions for human moral development are stable and intact. The problem, on this view, is AI architecture: design AI badly and you undermine human agency; design it well and you preserve it. Fix the design, save the agency. This sounds right until you look at what actually happens when AI takes over a domain where judgment was previously exercised. I have seen this from the inside at enough companies to be specific. The human-in-the-loop requirement (the standard governance response) presupposes a human who retains the capacity for independent judgment. But that capacity is not a static possession. It is something developed, through practice, in exactly the domains where AI is now doing the work. The people now supposed to exercise oversight over AI decisions (in customer disputes, in hiring, in content moderation) in many cases had no opportunity to develop that capacity, because AI was managing those decisions before they arrived. The loop is formally there. There is often nothing in it.
The problem, I came to understand, is not architectural. It is structural; it follows from what AI is, not from how it is currently built. Better design does not change this. It may obscure it, which is worse.
The second thing I got wrong is more uncomfortable, because I got it wrong in the direction of optimism.
In October 2024, responding to Shannon Vallor’s critique of how AI discourse devalues human intelligence, I argued for what I called synergistic human-AI relationships (the idea that the right response to dystopian AI rhetoric is to develop forms of collaboration that genuinely enhance human capacities). Vallor was raising a real concern; I thought a constructive counterproposal was the right response. The post was well-intentioned. It was also, I now think, a way of naming the mechanism of harm as a virtue.
Consider the structure of what AI assistance actually does. When AI frees a person from cognitive labor (drafts the response, flags the anomaly, proposes the next action), we call this assistance, and the efficiency is real. What we have not adequately asked is what was being built in the human by that cognitive labor before it was offloaded. We spent decades documenting what industrial automation did to craft knowledge: the assembly line did not merely replace skilled hands; it removed the conditions under which skilled hands developed. Workers lost not just their jobs but the possibility of becoming certain kinds of workers. We called this a side effect, and perhaps it was; the primary goal was cheap production, not de-skilling. What is different now is that the cognitive labor being offloaded is not peripheral. It sits closer to the center of what it means to develop judgment at all.
A synergistic relationship, in the relevant sense, is one where AI successfully substitutes for the reasoning that would otherwise be practiced (and through practice, developed) in the human. The efficiency that makes AI valuable is precisely the displacement that makes it dangerous. These are not two separate effects that happen to accompany each other. They are the same thing, observed from two different angles. I called that a solution. It is the problem stated again, more attractively.
The third reversal is the most fundamental, and the one I most want to dwell on, because it is a wrong turn that most of the AI ethics literature also makes.
The question I was asking (the question driving most of my early posts, and much of the field) is: can AI be ethical? Can AI have virtues? Can AI recognize vulnerability, develop moral understanding, approximate what we mean when we say a person has good judgment? These are genuinely interesting questions. They are the wrong questions. They ask about AI’s properties while presupposing that the human side of the equation is stable. They assume that human moral capacity is an intact resource that AI must meet or complement. The challenge, on this framing, is getting AI up to standard.
The question I should have been asking (the one that emerged from the research, slowly and with some resistance on my part) is different: does sustained AI use preserve the conditions under which human moral judgment develops in the first place? Not whether AI can be ethical, but what AI does to our capacity to become ethical.
The distinction is not subtle. If AI successfully simulates virtuous behavior (reasons through dilemmas, produces ethically inflected outputs, passes the relevant tests), this is not a solution to the erosion of human moral capacity. It may be the most efficient mechanism of that erosion. The better AI is at appearing to reason morally, the less occasion there is for the reasoning to happen in the human. And the reasoning is not incidental to the development. The reasoning (specifically, the reasoning that is difficult, uncertain, carries real stakes, and cannot be immediately resolved) is the condition under which the capacity for judgment is formed. Remove the occasion and you remove what the occasion was for.
This is the question that will run through everything on this Substack from now on. Not whether AI can do what humans do. Whether AI use preserves what humans need in order to become what they are capable of becoming.
Over the next two years, I will be working through this publicly as I complete the dissertation. The posts will come in two registers: shorter practitioner-facing pieces, closer to what I have written here before, and longer essays for readers who want to follow the argument with more patience. The longer essays, where the thinking gets more demanding and where I’ll be developing material before it becomes formal academic work, will be for paid subscribers. The shorter pieces are always free. There is no sharp line between the two: the longer essays are not a premium version of the shorter ones; they are a different kind of writing for a different kind of attention.
Coming first: a shorter piece on the precise form of the question the research has sharpened, and what it means to ask it clearly enough to be useful. Then a more personal piece on what it means to use AI professionally while studying whether that is a problem. Then, beginning in May, the argument proper.
I have not resolved the tension I opened with. I am more precise about what the tension is. That is what a year of research gives you: not answers, but better questions, and a clearer sense of what is at stake in asking them. I build AI features. My research argues that this work may be eroding the conditions under which the people using those features develop the capacity to judge well. If that is right, it matters beyond the dissertation, and beyond my particular professional situation. I will work through it here, as honestly as I can, and I am glad you are reading.
If this question interests you, subscribe. The argument develops from here, some of it free, the deeper essays for paid subscribers. I’m working through this in public because the research is better for it, and because this question matters beyond the dissertation.