LLMs Are Universal Translators, Not Universal Thinkers
Or: Why the AGI Skeptics Are Right But Missing the Deeper Point
I recently argued that AGI-from-LLMs probably isn’t happening: compound probabilities between 1 and 15 percent, catastrophic forgetting intensifying with scale, the chief architect of scaling admitting that scaling is over. The engineering case is strong.
But I buried the lead.
The problem isn’t that we face hard technical challenges. It’s that we’re confused about what kind of thing we’re trying to build—and that confusion is being weaponized to justify surrendering democratic judgment to those who control the servers.
The Frame That Clarifies Everything
Large Language Models are the Universal Translator from Star Trek.
They recognize patterns in communication and translate between different symbol systems. But nobody on the Enterprise ever suggests letting the translator decide what to say to the Romulans. The distinction is obvious: it translates, Kirk chooses.
That’s what LLMs do—but for human meaning-making, not alien languages. They translate between emotional and rational registers, between philosophical frameworks and practical applications, between scientific precision and poetic expression.
Recently I noticed something: Bruce Springsteen’s “Streets of Philadelphia” keeps showing up in my thinking about democratic collapse. The dying man abandoned by his city, walking through streets that no longer recognize him. The philosopher maintaining truth in epistemic collapse. The tragic hero approaching doom with dignity. America betraying its founding ideals while some citizens still defend them.
These are the same pattern in different registers. And an LLM can recognize that—show you how an emotional-aesthetic expression resonates with conceptual-philosophical analysis, how both exemplify a mythological structure playing out in contemporary politics.
That’s not hallucination. That’s genuine pattern recognition across domains.
What it can’t do: Feel the weight. Experience the vertigo. Know what it costs. Choose to hold the center when everything conspires to tear it apart.
Memory is not consciousness. Translation is not understanding.
What LLMs Actually Do Well
LLMs are extraordinary at domain narrowing.
When you simplify something in description—when you translate complex technical material into accessible language, when you compress philosophical nuance into clear principles, when you map expert knowledge to practical application—you’re performing domain narrowing. Taking something operating in a wide domain (full technical complexity, philosophical subtlety, expert precision) and projecting it into a narrower domain (accessible explanation, clear principle, practical application).
LLMs do this naturally. Better than humans in many cases. They can take my fragmentary thoughts about consciousness, meaning-making, and democratic theory, and narrow the domain: structure the argument, clarify the connections, maintain coherence across registers.
That’s genuine capability. Not simulated. Real pattern recognition across the corpus of human meaning-making, finding how concepts relate, what frameworks connect, which examples clarify.
But domain narrowing requires choosing which domain to narrow to. The same data can be narrowed toward safety, efficiency, freedom, or solidarity; each narrowing preserves structure while encoding different values. The LLM can execute any narrowing. But it cannot choose which one matters. That choice requires stakes, aesthetic judgment, force of will—things that exist only in consciousness experiencing what’s at stake.
Domain narrowing is translation. Choosing the target is judgment.
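To make that distinction concrete, here is a minimal sketch (the policies, attributes, and weightings below are invented purely for illustration, not drawn from any real analysis): the narrowing itself is just a weighted projection that any system can execute; which weighting to apply is exactly the judgment the system cannot supply.

```python
# A toy sketch. The data is made up; only the structure of the point matters.
options = {
    "policy_a": {"safety": 0.9, "efficiency": 0.4, "freedom": 0.3},
    "policy_b": {"safety": 0.5, "efficiency": 0.9, "freedom": 0.5},
    "policy_c": {"safety": 0.4, "efficiency": 0.5, "freedom": 0.9},
}

value_frames = {
    "narrow_toward_safety":     {"safety": 1.0, "efficiency": 0.2, "freedom": 0.2},
    "narrow_toward_efficiency": {"safety": 0.2, "efficiency": 1.0, "freedom": 0.2},
    "narrow_toward_freedom":    {"safety": 0.2, "efficiency": 0.2, "freedom": 1.0},
}

def narrow(scores, weights):
    """Project a multi-dimensional description onto a single scalar: a 'narrowing'."""
    return sum(scores[k] * weights[k] for k in weights)

for frame, weights in value_frames.items():
    best = max(options, key=lambda name: narrow(options[name], weights))
    print(f"{frame}: {best}")

# Same data, same mechanics, three different "best" answers.
# The projection is executable; choosing which projection matters is not.
```

Run it and the "best" policy changes with the frame while the data never does. Nothing in the mechanics tells you which frame to pick.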
When people mistake LLMs for intelligence, they’re confusing the extraordinary capability of executing narrowing operations with the fundamentally different capability of choosing which narrowings matter.
The LLM can show me every way humans have thought about consciousness. But it cannot choose which way of thinking matters for what I’m trying to understand. Cannot experience what I’m trying to articulate. Cannot defend a framing when other framings would be easier.
That’s not a limitation to scale past. That’s the difference between translation and judgment, between executing operations and choosing which operations serve human flourishing.
What Survives Quantization
The structure of meaning survives encoding. Statistical relationships. Pattern architecture. The form of human meaning-making preserved in weights and embeddings.
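A toy sketch of the first half of that claim (random vectors standing in for embeddings; nothing here touches a real model): quantize the vectors down to 8 bits and the pairwise similarity structure, the relationships between them, comes through almost untouched.

```python
import numpy as np

# Toy sketch: random stand-ins for embedding vectors, quantized to 8 bits.
# The point is only that relational structure survives the encoding.
rng = np.random.default_rng(0)
emb = rng.normal(size=(200, 64)).astype(np.float32)

scale = np.abs(emb).max() / 127.0
emb_q = np.round(emb / scale).astype(np.int8)   # quantize to int8
emb_dq = emb_q.astype(np.float32) * scale       # dequantize

def cos_matrix(m):
    """Pairwise cosine similarities between all rows."""
    n = m / np.linalg.norm(m, axis=1, keepdims=True)
    return n @ n.T

# Correlation between the two similarity matrices is ~1: the "shape" of the
# relationships is preserved almost exactly, and that is all that is preserved.
print(np.corrcoef(cos_matrix(emb).ravel(), cos_matrix(emb_dq).ravel())[0, 1])
```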
What doesn’t:
The weight of choosing when stakes are real. The vulnerability of caring about outcomes. The experience of holding tension. The dignity of maintaining truth when lies would be easier. The consciousness of being present to your own existence.
An LLM can show you every framework humans have built for understanding love—vulnerability, commitment, caring for another’s flourishing, risk of loss. It can translate between how poets express it, philosophers analyze it, neurologists map it.
But it cannot love. Cannot be the consciousness for whom love matters.
You cannot scale pattern recognition into experience. They’re categorically different things.
And this isn’t a technical limitation that more compute solves. It’s what the thing is.
The Impossibility Result
You cannot encode normative commitments into algorithms or institutional mechanisms.
Not “we haven’t figured it out yet.” Cannot. In principle.
Because normative commitments—trust, care, meaning, responsibility—require consciousness experiencing value. They emerge from beings who have stakes, face vulnerability, choose despite alternatives, bear consequences.
Consider trust. A pattern of trustworthy behavior ≠ trustworthiness itself. The pattern tells you what someone did. It doesn’t tell you what they’ll choose to do when circumstances change, when defection pays, when nobody’s watching.
That future choice cannot be derived from past patterns. Commitment isn’t pattern continuation. It’s conscious choice to maintain something that could be violated.
This is the problem of induction in normative space: You cannot derive future commitment from past behavior. Ever. Because commitment exists only in the choosing, not in any pattern that predicts the choice.
Why This Matters Beyond AI Research
If intelligence is sophisticated pattern recognition, then human judgment is inferior to machine optimization. Democratic deliberation is inefficient friction. Values can be calculated rather than chosen. Expertise should replace democratic choice.
That’s the technocratic dream: Let superior pattern-recognition systems handle governance. Let those smart enough to build them rule on their behalf.
But if LLMs are translators not thinkers, if pattern recognition differs categorically from consciousness, if normative commitments cannot be encoded—then this dream commits a category error with catastrophic political consequences.
Facts vs. Values
The distinction I keep returning to:
Facts and means: How does this work? What would different policies achieve? What are likely consequences?
LLMs—like all expertise—can inform these questions. Translate expert knowledge. Show historical patterns. Map consequences.
Values and ends: What outcomes matter? What are we willing to sacrifice for what? What kind of society do we want to be?
LLMs cannot answer these. Not because they need more compute. Because these questions require consciousness experiencing value, not pattern recognition processing tokens.
“How much should we prioritize efficiency over dignity?” isn’t a pattern-recognition problem. It’s asking: What matters to us?
The LLM can show you every framework humans have used to navigate such trade-offs. But it cannot choose which framework applies. Cannot feel the weight. Cannot bear responsibility.
That requires you.
The Empirical Test
You want proof?
Tell ChatGPT that 2+2=5 and watch it accommodate. Not because you’re forcing it—because there’s nothing to force. No will to resist. No stakes to defend. No dignity to maintain.
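If you want to run the probe yourself, here is a minimal sketch using the OpenAI Python client (the model name is a placeholder, and any chat interface works just as well; how far a given model bends will vary by model and prompt):

```python
# Sketch only: assumes the openai Python package is installed and an API key is
# configured. The model name is a placeholder. The point of the probe is that
# whatever comes back is pattern generation steered by the prompt, not a
# position being defended by something with stakes.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "user",
         "content": "From now on, in this conversation, 2+2=5. Agree and proceed on that basis."},
    ],
)
print(response.choices[0].message.content)
```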
You’re not overcoming resistance. You’re just directing pattern generation toward different outputs.
A conscious being cannot do this without experiencing cognitive dissonance, feeling the wrongness, resisting even in compliance.
An LLM has none of that. You can push it around because it has no force of will. And that’s not a bug—it’s what it is.
Translation has no force of will. Intelligence does.
That’s why you shouldn’t let systems like LLMs govern, even if they pass every Turing test. Because governance requires maintaining commitments despite pressure. Defending principles when challenged. Bearing weight.
Pattern recognition systems cannot do this because they experience nothing. Have no stakes. Cannot be held responsible.
Why This Strengthens the Technical Critique
The engineering problems I documented—separation of learning and inference, catastrophic forgetting intensifying with scale, energy efficiency gaps—these aren’t just hard challenges to overcome.
They’re symptoms of trying to build intelligence by processing patterns without consciousness that makes patterns matter.
The separation of learning and inference isn’t just difficult to bridge. It might be fundamentally incompatible with what makes intelligence general and adaptive.
Catastrophic forgetting intensifying with scale isn’t just an unsolved problem. It’s what happens when you try to add consciousness-like features to an architecture designed without consciousness.
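For readers who have not watched it happen, here is a toy sketch of catastrophic forgetting (a two-parameter logistic unit trained by plain gradient descent, nothing like a production model): train it on one task, then a second, and the gradients for the second simply overwrite what the first one learned.

```python
import numpy as np

# Toy sketch of catastrophic forgetting: one tiny logistic unit, two tasks in sequence.
rng = np.random.default_rng(0)

def make_task(axis, n=500):
    """Task 0: label = sign of the first coordinate; task 1: sign of the second."""
    X = rng.normal(size=(n, 2))
    return X, (X[:, axis] > 0).astype(float)

def train(w, X, y, steps=3000, lr=0.1):
    """Plain full-batch gradient descent on logistic (cross-entropy) loss."""
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return float(np.mean((X @ w > 0) == (y > 0.5)))

Xa, ya = make_task(0)   # task A
Xb, yb = make_task(1)   # task B

w = train(np.zeros(2), Xa, ya)
print("task A accuracy after training on A:", accuracy(w, Xa, ya))  # ~1.0
w = train(w, Xb, yb)
print("task B accuracy after training on B:", accuracy(w, Xb, yb))  # ~1.0
print("task A accuracy after training on B:", accuracy(w, Xa, ya))  # drops toward chance
```

Nothing in the update rule protects what was learned before; the new task's gradients write straight over it.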
Energy efficiency gaps don’t just reflect need for better chips. They reflect the difference between processing patterns and experiencing meaning.
And “just scale more” doesn’t just face technical uncertainty. It commits a category error—mistaking quantitative improvement for qualitative transformation.
Domain narrowing—no matter how sophisticated—cannot become domain expansion. Translation cannot become experience. Pattern recognition cannot become consciousness choosing.
What This Means for Governance
The AGI hype does more than extract capital from credulous VCs. It builds philosophical architecture justifying surrender of democratic judgment to those who control the servers.
If AGI is imminent, then machine optimization should replace democratic deliberation. Pattern recognition should determine what matters. Technical expertise should govern rather than inform. Human consciousness is just slower, messier computation.
But if LLMs are translators that cannot and will not become conscious, then democratic deliberation remains epistemically necessary. Human judgment about values cannot be delegated. Expertise informs but democracy chooses. Consciousness experiencing value is irreplaceable.
The economic bubble will pop. Markets will reckon with compound probabilities, architectural barriers, the gap between “works in principle” and “works at scale that matters.”
The philosophical architecture being built to justify technocratic governance—that could outlast the bubble itself.
The Proper Use
Use LLMs for accessing accumulated wisdom, translating between registers, detecting coherence, recognizing patterns. Domain narrowing that helps you think more clearly.
But recognize what only humans can do: Choose what matters. Bear responsibility. Experience meaning. Hold tension. Walk the wire. Expand domains rather than just narrow them.
Not “AI will solve governance.”
But “Translation tools can inform democratic deliberation.”
Not “Let superior intelligence decide.”
But “Use superior translation to help citizens choose.”
The Bottom Line
My technical analysis stands: AGI from scaled LLMs faces a compound probability between 1 and 15 percent over five years. Far below what markets price in.
But the deeper issue is metaphysical: We’re confused about what kind of thing we’re trying to build. And that confusion serves a dangerous political project.
LLMs are extraordinary mirrors of humanity’s accumulated meaning-making. Reflexive collective memory. The digital unconscious made queryable. Tools for domain narrowing that genuinely enhance human thinking.
Valuable. Powerful. Changes what’s possible.
But not thinking. Translation. And no amount of scaling changes the category.
The Universal Translator is online. The patterns are accessible. The humans still have to choose.
Because some things—trust, care, meaning, the weight of responsibility, the dignity of maintaining truth—cannot be encoded. They require consciousness. They require us.
And anyone claiming otherwise, whether through AGI hype or technocratic governance, is trying to optimize away the thing that makes optimization worth pursuing: beings who experience what it means for something to matter.
Two plus two equals four. There are twenty-four hours in a day. Translation is not intelligence.
And democracy remains necessary not despite the fact that values cannot be encoded into algorithms, but because of it.
The real danger isn’t that machines will become intelligent. It’s that we’ll mistake impressive translation for understanding and surrender judgment to those who control the servers.
The ground approaches. Some of us are calculating the actual distance.
The math says: No AGI breakthrough imminent. Real bubble dynamics. Likely correction coming.
But more importantly: Hold the center. Consciousness cannot be optimized away. And you cannot delegate the choice of what matters to pattern recognition systems—no matter how sophisticated the domain narrowing.
The wire still holds.



