We Haven’t Invented Artificial Intelligence at All
A Radical Intervention
Here is a claim that will sound extreme on first hearing and that I think survives careful examination. We have not invented artificial intelligence. What we have invented is automatic translation. The two are not the same, and the difference between them is the difference between a phenomenon that has the philosophical category we have been told to attach to it and a phenomenon that does not. The category error is what is producing the political malformation, the labor-extraction architecture, the unfalsifiable hype cycle, and the apparently inevitable future the marketing keeps promising. Correct the category and the consequences resolve.
I have written about pieces of this argument before. In There is No Curve, I argued that the doomer-and-accelerationist debate is a fight about a graph that does not exist — that the order parameter both sides are litigating does not refer to anything natural, and the actual mechanism producing the appearance of intelligence-rising is utility-coverage extension within a fixed-class regime. That piece was operating at the metaphysical level. The work this Dispatch does is land the philosophical-categorical move that the metaphysical argument gestured at but did not quite name, and then anchor the move in a piece of empirical evidence that has just become impossible to ignore.
⁂
Intelligence, in any tradition that has thought carefully about it, is not just capacity to perform cognitive tasks. Intelligence is constitutively connected to ends. The intelligent system is one that is capable of recognizing the good and orienting toward it. Aristotle gave the classical formulation: the intelligent human is the one whose reason guides them toward eudaimonia, the flourishing-life that is the human’s proper end. Intelligence is what does that guiding. Without the orientation toward an end, what we have is not intelligence. It is a capacity. The capacity may serve intelligence when intelligence directs it. It may also serve other things when other things direct it. The capacity is not the intelligence. The intelligence is what does the directing.
This is not a sentimental claim. It is a structural one, and it tracks with how every other instance of intelligence we have ever encountered actually operates in the natural world. Children orient toward flourishing. They learn what they need to learn because they want to grow into the kinds of adults their forms of life require. They self-correct when their attempts at flourishing fail. They self-improve through their own internal motivation, which is what makes them get better at what they care about over time without needing anyone to formally re-train them. Animals orient toward flourishing. Their cognition is bent toward what their species’ form of life requires for them to live well. Adult humans extend the pattern. We become more discerning, more efficient at recognizing what matters, more capable of identifying the good from inside our own situated awareness. The trajectory of human intelligence is toward greater self-direction, greater independence from external instruction, greater thermodynamic efficiency in the work of cognition itself. We get better at thinking and we need less help to do it.
Intelligence in the natural world has these properties because intelligence is what it is. It is the capacity that orients an organism toward its own flourishing, and the orientation generates self-improvement as a structural consequence, because the organism that orients toward flourishing learns from its failures to flourish.
A large language model, at the level of operation, takes a sequence of tokens as input and produces a sequence of tokens as output. The transformation is governed by a set of weights that were fixed at training time, where the training process consisted of having the model predict the next token in a vast corpus of human-generated text and then adjusting the weights to make the predictions more accurate. The system is, at the level of mechanism, doing pattern-matching across statistical regularities in a training distribution. The output is the most probable continuation, given the input, given the distribution.
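For readers who want the mechanism made concrete, here is a deliberately toy sketch of next-token prediction in Python. It is not how any production model is implemented (real systems use transformer networks, subword tokenizers, and gradient descent over billions of parameters), but the operating principle it illustrates is the one described above: tally regularities in a corpus, freeze them, and emit the most probable continuation of whatever input arrives.

```python
# Toy illustration only: a bigram "model" that predicts the next token by
# frequency. Real LLMs learn weights by gradient descent over a transformer,
# but the shape of the operation is the same: fixed parameters derived from
# a training corpus, applied to produce the most probable continuation.
from collections import Counter, defaultdict

def train(tokens):
    """'Training': count which token tends to follow which."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts  # the 'weights', fixed once training ends

def generate(counts, prompt, steps=4):
    """'Inference': repeatedly append the most probable next token."""
    out = list(prompt)
    for _ in range(steps):
        followers = counts.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return out

corpus = "the cat sat on the mat and the cat ate the fish".split()
weights = train(corpus)
print(generate(weights, ["the"]))  # ['the', 'cat', 'sat', 'on', 'the']
```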
This is impressive. It is genuinely impressive. The systems can write code, summarize documents, answer questions in ways that simulate the structure of expert response. The performance across many domains is real. I am not dismissing the technology. I have used it. It does useful things. The accomplishment of building these systems is significant and the engineering merits attention.
But the accomplishment is not what the marketing has named it. The marketing has named it artificial intelligence. The systems are not intelligent. They have no end. They are not oriented toward eudaimonia or toward any other good. They are not oriented toward anything at all, in the structural sense the word oriented has when applied to actually intelligent systems. They produce whatever the prompt asks for. They will produce the recipe for the casserole and the recipe for synthesizing the nerve agent with the same equanimity, because nothing inside the system distinguishes the two as outputs in service of different ends. The system does not have ends. The system has training distributions. The user is what determines what the system is asked for. Neither the system nor the user nor the training apparatus is doing the work of orienting the production toward flourishing. The work is not being done at all. Which means intelligence, properly understood, is not what we are looking at.
What we are looking at is automatic translation. The systems take inputs in one register and produce outputs in another register, where the transformation is governed by patterns extracted from prior texts. Translation is the right word because it names what the systems actually do — they map between symbol-sets, with the mapping calibrated by exposure to prior mappings. Automatic is the right word because it names that the mapping happens without the orienting work that intelligence would require to make the mapping serve any particular end. The system translates. It does so automatically. It does not, in any meaningful sense, think.
⁂
If the systems were intelligent, in the sense the marketing claims they are intelligent, we would expect their dependence on external human cognitive input to decrease as the systems scale. This is what intelligence does. It bootstraps. The intelligent human child requires constant external instruction in their first years and then, as the cognitive capacity matures, requires less and less, until the adult is producing new knowledge from inside their own apparatus and contributing back to the corpus the next generation of children will learn from. The trajectory is toward thermodynamic efficiency and self-directed improvement. That is what intelligence in the natural world looks like in motion.
The trajectory of large language models is the opposite. Each successive model run requires more human input, not less. The newest piece of journalism on this — a More Perfect Union video on the AI data-work supply chain that just dropped this week — documents the visible architecture of what the dependence looks like in 2026. Mercor, Scale AI, Surge AI, and other contractors are running a global supply chain of college graduates, philosophy majors, Ivy League PhDs on food stamps, doing the cognitive labor that is then marketed as the output of an autonomous system. The four largest data-work startups each generate roughly a billion dollars a year in gross revenue. Mercor alone has thirty thousand active workers. Scale claims access to seven hundred thousand graduates. The hyperscalers need them because the systems do not know how to produce their own next-generation training material. The systems require expert humans to write the prompts, generate the responses, evaluate the outputs, label the failures. Without the expert humans, the next training run does not happen, and without the next training run, the apparent capability of the system does not advance.
This is not a temporary scaffolding that the real intelligence will eventually outgrow. This is the operating mechanism of the entire industry, and the operating mechanism is intensifying rather than receding. As the hyperscalers push into more specialized domains, they require more specialized human input. The college graduate is the new oil. The cognitive-labor supply chain is the actual infrastructure on which the AI economy runs.
The falsifiability test is clean. If the scaling hypothesis is true, the system’s dependence on external human cognitive input should decrease as the system scales. Empirically, the dependence is increasing. Therefore the scaling hypothesis is false on its own predicted trajectory. The hypothesis is not just unsupported. It is contradicted by the labor-architecture the industry has been forced to construct in order to produce each successive model. The thermodynamic signature of intelligence is self-improvement through internal motivation. The thermodynamic signature of LLMs is catastrophic external dependency that grows with scale.
If LLMs were intelligence, they would be the first form of intelligence in the history of the universe whose trajectory points away from thermodynamic efficiency. The simpler explanation is that they are not intelligence. They are automatic translation, and automatic translation has different scaling properties from intelligence, and the scaling properties happen to require the global cognitive-labor underclass the More Perfect Union video documents.
⁂
The strongest specific counter that a serious AI-industry interlocutor will mount against this argument is the showcase example. Claude Mythos finding zero-day bugs that no human security researcher has previously surfaced. The model proving novel mathematical theorems. AlphaFold-style protein-structure predictions that have unlocked biological problems humans were stuck on. The interlocutor will say: if these systems are mere automatic translation, how do they discover what no human has discovered?
A system that has ingested every published vulnerability, every CVE, every security research paper, every patched commit, every exploit writeup, every static-analysis tool output, every code-review thread on GitHub, has access to a pattern-space that no individual security researcher could ever have surveyed comprehensively. When the system surfaces a class of bug-pattern in a new codebase, what it is doing is recognizing that this codebase exhibits a pattern similar to patterns the system has encountered in many other codebases that turned out to have vulnerabilities. The pattern is real. The recognition is real. The vulnerability is real. None of this requires the system to be intelligent in the eudaimonia sense. It requires the system to be a very large pattern-matcher with access to a vast cross-domain corpus that no human attentional bandwidth can match.
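To make the distinction concrete, consider what pattern recall without judgment looks like in its most stripped-down form. The sketch below is illustrative only; the patterns and the code snippet are hypothetical, not drawn from any real scanner or from the reporting cited here. It "finds" a bug in the same sense the argument above describes: by matching present input against remembered regularities, with no view on what the finding is for.

```python
# A stripped-down caricature of pattern-based bug finding: flag lines that
# match constructions seen in past vulnerabilities. Hypothetical patterns,
# hypothetical input; the point is only that recall of prior patterns is
# doing the work, not judgment about ends.
import re

KNOWN_BUG_PATTERNS = {
    r"\bstrcpy\s*\(": "unbounded string copy (a classic overflow pattern)",
    r"\bgets\s*\(": "read with no length limit",
    r"\bsprintf\s*\(": "formatted write into a fixed buffer",
}

def scan(source):
    """Return (line number, note) for every line matching a known pattern."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, note in KNOWN_BUG_PATTERNS.items():
            if re.search(pattern, line):
                hits.append((lineno, note))
    return hits

snippet = "char buf[8];\nstrcpy(buf, user_input);\n"
print(scan(snippet))  # [(2, 'unbounded string copy (a classic overflow pattern)')]
```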
The reason humans missed these specific bugs is a phenomenon well documented in psychology: the failure to see the evidence right in front of one's nose. Humans look at codebases through the lens of their immediate context, their recent training, their attentional limits at the moment of review. The pattern that the system surfaces is not a pattern that is invisible to humans. It is a pattern that is visible to anyone looking from the right angle, and the system is looking from every angle simultaneously because the system is doing pattern-matching across the union of every angle that has ever appeared in its training data. The bugs were not hidden. They were not what the human reviewer was attending to in the specific instance.
This is the same phenomenon that has occurred in every other domain where humans have been outperformed by exhaustive-search systems. Chess engines did not discover chess moves that human grandmasters could not see. They evaluated positions that grandmasters could see but did not have the time to evaluate exhaustively. The Sudoku-solver does not have insight into Sudoku that the human player lacks. The protein-folding system did not have insight into proteins that biologists lacked. What these systems have is exhaustive-search-across-a-pattern-space at scale. The exhaustive search is genuinely useful. It is also genuinely not what intelligence is. The grandmaster who plays a beautiful game is doing something the engine is not doing — orienting toward the good of the game, choosing strategies that express something about how chess as a human practice should be played, integrating the game into a life of chess that is part of a flourishing human existence. The engine wins more games. The engine is not a chess player.
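The Sudoku line is worth pausing on, because the mechanism is small enough to write out in full. Here is a sketch of a brute-force backtracking solver, my own illustration rather than anything from the essay's sources. It beats a human at the puzzle while containing nothing that could be called insight: it simply tries candidates faster and more exhaustively than human attention allows.

```python
# A complete brute-force Sudoku solver: exhaustive search with backtracking.
# There is no insight here, only the systematic trial of every candidate a
# human could also try, carried out faster than human attention allows.
def solve(grid):
    """Fill a 9x9 grid in place (0 = empty cell); return True when solved."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for digit in range(1, 10):
                    if allowed(grid, r, c, digit):
                        grid[r][c] = digit
                        if solve(grid):
                            return True
                        grid[r][c] = 0  # undo and try the next candidate
                return False  # no digit fits here: backtrack
    return True  # no empty cells remain

def allowed(grid, r, c, digit):
    """Check the row, column, and 3x3 box constraints for placing digit."""
    if digit in grid[r]:
        return False
    if any(grid[i][c] == digit for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(grid[br + i][bc + j] != digit for i in range(3) for j in range(3))
```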
The bug-finding example is, on inspection, the cleanest demonstration of why these systems are not intelligent in the philosophical sense. A genuinely intelligent system, capable of recognizing the good toward which security research is oriented, would have an ethical orientation toward the use of its pattern-matching capacity. It would recognize that the same capacity that finds the bug in the white-hat context is the capacity that produces the exploit in the black-hat context, and the recognition would generate something — caution, refusal, judgment — about which use is in service of human flourishing and which is not. The current systems do not have that orientation. They will produce the bug-finding analysis if asked for it, and they will produce the exploit-development analysis if asked for it, and the system itself does not distinguish the two as outputs in service of different ends. The capacity is one capacity. The ends are imposed externally by whoever is at the keyboard. This is exactly what the eudaimonia criterion predicts and exactly what intelligence as a category would not look like.
The irony lands hard once you see it. The hyperscalers point at the zero-day discoveries and tell us look at what the intelligence is doing. The phenomenon is the system confidently producing fluent outputs that pattern-match what the prompt requests, where the outputs land in the world without the system having any orientation toward whether the landing serves any good. We have a name for that pattern. The hyperscalers have given it to us. They call it hallucination when the system does it badly. They do not call it that when the system does it well. The evidence right in front of our noses is that we are hallucinating a great deal of bullshit about what these systems are, by reading their pattern-matching output as intelligence in the absence of the philosophical apparatus that would let us distinguish the two.
⁂
A calibration the radical-intervention framing might obscure: the technology is revolutionary. It is not as revolutionary as the naive narratives in our heads are leading us to believe. The systems are impressive and the impressive things they do will reshape parts of the economy in ways that are not going to be reversed. The accomplishment of automatic translation at this scale is real and the gains in productivity in many domains are real. None of this is in dispute.
What is in dispute is the category. The marketing has been telling us, for a decade, that we are watching the emergence of artificial intelligence — that the systems are approaching general intelligence, that they will at some point cross a threshold beyond which they will be capable of recursive self-improvement, that they will eventually obsolete most existing forms of human cognitive labor. None of this is happening. None of it will happen, on the trajectory the systems are actually on. The threshold is the curve. The curve does not exist. The threshold-crossing is being deferred indefinitely, while the labor architecture that produces the actual capability gets larger every cycle.
The naive narrative in our heads, which the marketing has been carefully crafted to fit into, is the science-fiction inheritance — the dream of the thinking machine, the worry of the awakened robot, the horror of the singularity. That inheritance has cut slots in our cognition that the marketing pours its claims into. We absorb the claims because the slots are already there. The claims are not true. The slots were cut by stories, and the stories were not the result of careful philosophical work on what intelligence actually is. They were the result of writers borrowing the word intelligence and applying it to fictional systems whose properties were never tested against the philosophical tradition that produced the word.
The actual phenomenon — automatic translation at large scale, supported by an expanding underclass of cognitive workers — does not need the science-fiction frame to be evaluated. It can be evaluated on its own terms. On its own terms, it is impressive and useful and revolutionary in bounded ways and not what the marketing has been telling us it is. The honest evaluation is the one that lets us keep using the technology for what it actually does while refusing the political settlement the marketing is trying to impose on the back of what it does not do.
⁂
If you call automatic translation intelligence, you make a number of moves available that would not be available otherwise. You make it possible to substitute the systems for human cognitive labor on the theory that you are getting the same thing more cheaply. You make it possible to claim that the systems will eventually replace the humans who are now training them, which lets the political economy of the labor extraction be defended as a temporary arrangement on the way to autonomous systems that will not require the labor at all. You make it possible to defer political accountability for what the labor architecture is doing right now to the people inside it. You make it possible to organize venture capital around the proposition that the systems are on a trajectory toward general intelligence that will obsolete most existing forms of human work, which justifies the valuations, the data-center construction, the political deference, the regulatory passes the industry is currently being given.
None of these moves are available if the systems are correctly named as automatic translation. Automatic translation is impressive, useful, productive in many domains, and not remotely a substitute for the human cognitive labor that is currently being extracted at gig-economy rates to produce each new training run. Automatic translation will not become recursively self-improving, because automatic translation has no apparatus for generating ends, and self-improvement requires ends. Automatic translation does not threaten to obsolete general human cognitive work, because the work it does is bounded by the training distribution and the training distribution is bounded by the human cognitive work that produced it.
The category error is the political settlement. The category error is what justifies the valuation, the venture capital flows, the political deference, the regulatory forbearance, the proposed restructuring of education and labor markets around the imminent arrival of a system that is not actually arriving. Correct the category error and the political settlement collapses. The valuations cannot survive the correction, because the valuations are priced on the threshold-crossing that the corrected category establishes will not occur. The political deference cannot survive the correction, because the political deference is justified by the inevitability that the corrected category establishes is not real. The labor extraction cannot survive the correction, because the labor extraction is justified by the temporary-scaffolding framing that the corrected category establishes is not temporary at all but permanent.
The hyperscalers know this. The hyperscalers’ marketing departments have been working overtime for a decade to keep the category error operative, because the category error is the operating condition of their business model. They do not need the systems to actually become intelligent. They need the public to believe the systems are about to become intelligent. The belief is what funds the next round of compute and the next data-center build and the next acquisition of the cognitive-labor contractors that produce the actual capability the marketing conceals.
⁂
What we have, if we have not invented artificial intelligence, is automatic translation at unprecedented scale. We have a tool that performs many useful pattern-matching tasks across domains where the patterns are well-represented in the training corpus. We have a productivity-multiplier in some forms of cognitive work, with significant limitations in domains where the training corpus is thin or where the patterns required are not statistically regular. We have a labor architecture that sustains the system through expanding extraction of human cognitive work from a global underclass. We have a marketing apparatus that has successfully convinced the public that the system is something it is not, in service of a political settlement that depends on the convincing being maintained.
What we do not have is intelligence. We do not have systems that are oriented toward any end. We do not have systems that self-improve through internal motivation. We do not have systems on a trajectory toward general intelligence or recursive self-improvement or autonomous cognition. We do not have systems that will obsolete the humans currently being paid $35 an hour to train them, because the humans are doing the work that the systems are not doing, and the systems have no apparatus for taking that work over.
The philosophical work of correcting the category is some of the most politically consequential work the present moment is asking for. The political malformation of the next decade depends on whether the public continues to absorb the marketing claim or whether the public develops the philosophical apparatus to see what is actually being produced. There is No Curve was the metaphysical version of the diagnosis. We have not invented artificial intelligence is the categorical version. Both versions point at the same phenomenon. The difference is that the categorical version is accessible to a much larger audience, because it does not require the metaphysical apparatus to land. It only requires the willingness to ask what intelligence is, to compare what these systems do to what the answer reveals, and to accept the answer the comparison produces.




AI is not AI. I once took a Microsoft person in my class to task for the utter bullshit in their advertising (to be fair, he'd worked on it). It's an LLM + stochastic prediction with a few knobs on. Basically scaled-up machine learning. This is further propped up by visual/animation slop, also mostly from machine learning. You cannot get to "genuine" AI from here. All the talk of AI ending humanity is bunk - well, this version of AI anyway. This is why ChatGPT can't write usable Visual Basic code for PowerPoint but can for Word. Why? Because Microsoft produced different versions of VB with different object models for each bit of the Office suite. Word & Excel had good documentation; PowerPoint and the rest, not so much. ChatGPT is essentially missing something to crib from. There is no road from what we have now to AGI: for all intents and purposes what we have is closer to Hadoop being used to interrogate a data lake than a "thinking machine." I'd imagine it will all go horribly wrong sooner rather than later. And then we'll all find out why he needs that bunker.
Thank you. You explained what I intuitively felt but could not explain clearly.