Why AI is different from every other tool
The comparison to past technologies is meant to reassure. Here's exactly why it doesn't.
Every time someone raises a serious concern about AI, someone else reaches for history. Calculators, they say. The printing press. The loom. Each of these technologies was met with anxiety, changed something real about how people worked and thought, and yet here we are – adapted, with certain old skills replaced, but continuing. The argument from precedent is not stupid; it rests on a real pattern. Technologies change what humans do and how they do it, people lose capacities they once had, new ones develop in their place, and over long enough time horizons the adaptation looks roughly adequate. This is the honest version of the reassurance.
I want to take it seriously before explaining why it doesn’t apply here.
The pattern the reassurance relies on is this: prior technologies changed the method while leaving the deliberating agent intact. The calculator made mental arithmetic for large numbers unnecessary, yet the person using it still has to understand which calculation is called for, interpret the result, decide what to do with the number. The GPS changed how we navigate, yet the person driving still decides where to go, responds to the unexpected, judges whether the route seems wrong. The loom transformed weaving, yet the person operating it still makes every decision about what to produce and how the fabric should serve its purpose. In each case, the tool takes over a specific operation and the human remains the agent who decides what to do with what the tool produces. The hard part stays with the person.
The distinction I want to draw is between a technology that gives you more to work with and one that performs the work itself. Call it the difference between mediation and substitution.
A cardiologist using an ultrasound machine is working with a mediating technology. The machine gives her access to structures her unaided perception cannot reach: the interior of the chest, the movement of valves, the flow of blood. It extends what she can see. Everything that follows (interpretation, clinical judgment, the decision about what this image means for this specific patient) is still hers. The machine produces data; she produces the conclusion. The hardest part of the clinical act, reading a complex image in the context of someone’s full history and deciding when the textbook pattern doesn’t apply, is exercised every time she uses it. The tool and the judgment grow together.
A risk-scoring algorithm in that same clinical setting produces something different. It returns a probability: high risk, low risk, a number. That output is not raw data. It is a conclusion, formatted in the register of authority, already positioned as the thing that makes the next step obvious. The clinician nominally reviews it. But the judgment about what kind of risk this is, whether the statistical model fits this particular person’s situation, what the number means given everything the chart doesn’t contain: that judgment has been pre-performed by the model. The clinician is responding to a conclusion rather than reaching one. In both cases a clinician is looking at a screen and deciding what to do. What’s structurally different is the position the output occupies in the act of deciding.
You can see the same pattern in less clinical settings. When a platform’s recommendation system surfaces a candidate in a hiring process, it is not giving you information to reason about; it is already incorporating an evaluation: weighting experience, inferring fit, ranking relevance. The recruiter reviews the ranking. But the first act of judgment (which profiles are worth attention at all) has been performed before the recruiter enters. What’s left to them is mostly confirmation or exception, not original evaluation. This isn’t about whether the algorithm’s ranking is accurate. It might be more accurate than unaided review. The point is structural: the judgment and the data have been collapsed into a single output, and the human receives the conclusion.
That structural difference is what the reassurance from history misses. Tools have always changed what humans needed to do and made certain operations obsolete. What was not true until now is that a tool could produce an output that functions as the judgment rather than its input.
Once you see this, four things follow.
The first is about scope. Every prior tool had a domain. The calculator handles arithmetic; the GPS handles navigation; the loom handles weaving. When a tool is domain-specific, the displacement it causes is local: you stop exercising something in that domain, and the rest of your judgment is untouched. AI doesn’t have a domain in this sense. It follows you: into the meeting where you’re deciding what to do about a difficult personnel situation; into the inbox where you’re figuring out how to respond to someone in distress; into the product decision where you’re weighing competing goods that can’t all be satisfied at once. The same system that suggested a restaurant yesterday offers an ethical framing today. This is not a more powerful version of what calculators do. It is a different structure: not local displacement of a specific operation but ambient substitution across the range of judgment itself.
The second concerns something that happens before deliberation even begins. Before you decide what to do about a situation, you have to notice it as the kind of thing that calls for decision. What gets your attention is not neutral. A content feed that surfaces certain situations and not others is not just filtering information; it’s shaping which circumstances register as morally significant, which people’s conditions appear worth responding to, which risks you come to see as risks at all. This isn’t the same as propaganda, which delivers a specific message to a formed mind. It’s something that operates upstream of message reception, shaping the perceptual field within which messages arrive. By the time you’re deliberating, the frame is already in place. You didn’t choose the frame; it was installed before you arrived at the question.
The third follows from the first two. The standard check you run on any tool is: does the output seem right, given what I independently know? GPS says turn left; you look at the road and notice you’d be driving into a river. Your spatial sense is intact; it hasn’t been exercised less because you’ve been using navigation assistance. The problem with the tool is legible to you because you’ve been exercising the capacity that makes it legible. But if the tool is handling the judgment itself (reading situations, weighing considerations, constructing responses), what’s the independent standpoint from which you run the check? The capacity you’d need to evaluate whether the output is right is the same capacity that using the tool has been substituting for. The loop closes in a way it didn’t close with prior technologies. The oversight requires precisely what the tool removes.
This is the structure that makes AI categorically different, and it’s why the reassurance from history, while not dishonest, is addressed to the wrong problem.
When a technology extends perception, the right response is design quality, fair access, and sensible limits. When a technology substitutes for judgment across the full scope of practical reasoning, while shaping what appears worth judging and making oversight structurally dependent on the capacity it displaces, the right response starts with different questions. Which domains should it enter? What develops in people when AI is absent that cannot develop when AI is present? What kind of formation does ambient substitution preempt, not gradually but from the start?
Those are the questions I want to work through in the posts that follow. Each of the four things I’ve described above gets its own essay:
what it means for an AI output to function as a conclusion rather than an input (ETA: June 2026);
why the absence of a domain boundary changes what’s at stake (ETA: July 2026);
how AI shapes what appears worth deciding about before deliberation begins (ETA: August 2026); and
why the oversight loop that AI governance frameworks rely on can’t work in the way those frameworks assume (ETA: September 2026).
For paid subscribers, there will also be a longer treatment of all four together (ETA: July/August 2026): the argument developed in full and explicitly connected to the research I’m doing.
My thesis isn’t that AI is bad. It is that AI is something we haven’t had precise enough language for, and that the imprecision is not a philosophical footnote but the source of every misdesigned response. Getting the category right comes first.
May 2026