Living inside the argument
Research changed what I notice. It didn't change what I do.
I promised honesty in the last post, so here it is.
The research I’ve spent the last year doing – about AI eroding the conditions under which judgment develops – has not made me use AI less. It has made me use it more. I write with AI assistance. I use it to work through product decisions at Vinted. I use it to structure arguments, check my reasoning, draft communications I don’t have time to draft alone. The efficiency gains are real, and I take them. If you came to this Substack expecting to find the work of someone who has resolved the tension by opting out, I am not that person.
I’m not sure opting out would even be the right response if I could manage it. The argument isn’t that all AI use is harmful. It’s that something specific is at risk when AI takes over the domains where judgment is practiced and developed – the domains where the difficulty isn’t incidental but is the point. That’s a more targeted concern. On that view, using AI to draft a stakeholder update is not the same kind of problem as using it to decide which customers to flag, which hires to make, or what counts as acceptable content. The distinction matters. I’d be lying if I said it always holds cleanly in practice.
What the research has changed is not my behavior. It has changed what I notice. There are moments now (reaching for AI on something with real stakes, where the uncertainty is genuinely mine to sit with) where I feel the pull of that reach differently than I used to. Not always enough to stop. Sometimes enough to slow down. Often enough to feel uncomfortable with myself when I don’t.
It’s worth being precise here: noticing is not the same as refusing. It doesn’t constitute the kind of reflective distance the argument says is at risk – the capacity to hold the decision at arm’s length, evaluate whether this is a moment where the difficulty is the point, and act on that evaluation. What it is, exactly, I’m less sure. Some remaining capacity that hasn’t yet been displaced. Or the last recognizable trace of a capacity that is already mostly gone. The fact that I can still feel the pull distinctly doesn’t tell me which, and that uncertainty is itself part of what the argument predicts.
And the fact that noticing is apparently the most the research has produced in me – the person who spent a year developing the argument – is worth sitting with longer than is comfortable.
Here is the uncomfortable part. The capacity to evaluate whether AI use is harmful is not a stable external vantage point from which I observe AI’s effects on others. It is a capacity that my own AI use is shaping, in real time, in ways I cannot fully track. I am not outside the problem I am studying. I am a case of it. The research argues that sustained AI use may gradually erode the capacity for independent judgment, and I am someone who uses AI extensively while making that argument. The fact that I can still articulate the concern does not mean the concern doesn’t apply to me. It may just mean the erosion is gradual enough that I cannot see it from the inside, that what I experience as critical distance is itself already shaped by the thing I am trying to hold at a distance. I cannot tell the difference between those two possibilities from where I stand, and neither can you from where you stand, which is exactly the structure the argument is about.
I don’t know how to resolve this. I’m not going to pretend I do.
What I can say is that this tension (between using AI and studying its effects, between the efficiency and the cost) is not a personal failing that a more disciplined person would have already corrected. Starting in May, I’ll try to show why. The mechanism that makes AI useful and the mechanism that makes it dangerous are the same mechanism. If that’s right, the tension can’t be resolved by better choices; at best it can be made visible. Visibility, it turns out, is harder than it sounds.
I’m developing this argument formally in my dissertation. Subscribe to follow the research.