The question underneath the question
There is a question underneath most AI ethics conversations that the conversations themselves never quite reach.
Most conversations I have about AI and ethics (at work, at conferences, in the comment threads of articles I shouldn’t read) orbit a set of familiar questions. Whether AI systems can be made fair. Whether their decisions can be explained. Who bears responsibility when something goes wrong. These are real questions. They’re not going away. I have spent a significant portion of my professional life working on versions of them, and I expect to spend more.
But there’s a question that sits underneath all of them, and I keep finding that it doesn’t get asked. It’s not about AI’s properties. It’s about what AI use does, over time, to the people using it.
Here it is, in its plainest form: does sustained AI use preserve the conditions under which good judgment develops?
Not “Does AI make good decisions?” Not “Can AI be aligned with human values?” Not even “Does AI respect human autonomy?” Each of those questions presupposes that the humans interacting with the AI system have a stable, intact capacity for judgment, and that this capacity isn’t being affected by the interaction itself. They ask about AI’s outputs while treating the human as a fixed constant.
The question I keep returning to goes a step earlier. Before whether AI is making good decisions, there is the question of what happens, over time, to the people who rely on it to make decisions. Not in the short term; the short-term effects are mostly positive, which is why AI gets adopted. In the long term. In the cumulative effect on people who delegate more and more judgment to systems that are increasingly good at appearing to reason.
I have watched something like this happen in professional contexts. Someone joins a team where significant judgment work (customer disputes, content decisions, prioritization calls) is already handled, or heavily shaped, by AI systems. They learn to work with the outputs. They get good at reviewing, annotating, escalating, occasionally overriding. This is a real skill. The person who does it well has learned something. But it is not quite the same thing as the skill of forming the underlying judgment the AI is now performing: the skill that requires sitting with genuine uncertainty, without a model output to anchor to, and arriving somewhere through your own reasoning. Both skills exist; both can be practiced. Over time, the one that gets practiced in this environment is the one that develops, and the other is the one that doesn’t.
This is not a failure of implementation. It is the system doing what it is designed to do.
The standard response, in responsible AI circles, is that good design can address this. Build AI for augmentation rather than replacement. Keep humans meaningfully in the loop. Design the interaction so that the human’s judgment is exercised, not bypassed. I spent a long time thinking in this direction. I still think it’s better than the alternative. But I’ve become less confident that it resolves the underlying problem.
The argument for better design assumes that the development of judgment and the delegation of judgment can coexist without tension, i.e. that there is a way to hand off enough to AI to capture the efficiency gains while leaving intact the occasions where judgment is practiced and grows. For this to hold, judgment would need to be the kind of capacity that can be developed selectively: practiced in designated spaces while being offloaded everywhere else. But that is not what the development of practical skill suggests, and it is not what happens in the professional contexts I’ve described. Judgment develops through repeated engagement with real uncertainty: not simulated, not adjacent, but the genuine kind, where something is at stake and there is no pre-supplied answer. The situations where AI is most useful and the situations where judgment most needs to be practiced are not different situations. They are the same ones. AI is designed to extend into exactly the domains where the difficulty is real, which is also where the development happens. Better design can change the surface of this; it cannot change the structure.
This is the question I keep coming back to. Not whether AI can be ethical. Whether AI use preserves the conditions under which humans develop the capacity to be ethical, where developing that capacity means practicing judgment in situations that are genuinely uncertain, where something real is at stake, and where there is no AI output to defer to.
I don’t have a clean answer. The next several months here will be an attempt to work through one carefully. Starting in May, I’ll take the argument apart piece by piece. Before that, one more thing I want to be honest about: asking this question hasn’t resolved the obvious tension in how I actually live it.
I’m developing this argument formally in my dissertation. Subscribe to follow the research.