The Crisis, No. 15
On the lie that the future is fixed
Noah Smith published an essay this morning called “You Are No Longer the Smartest Type of Thing on Earth.” His thesis is simple: AI is smarter than you. The trajectory is determined. The resources deployed are vast. The scaling laws hold. Your destiny is, in his words, “mostly out of our hands.”
He uses a metaphor. A tiger. He tells you that for all of human history, you were the most intelligent thing on this planet. Now there is a tiger in the bed. It is bigger than you. It is stronger than you. It can kill you. Your task, going forward, is to figure out how to sleep next to it without being eaten.
Every sentence in his essay is a description. Not one of them is a prescription. And yet the conclusion — your destiny is out of your hands — presents itself as realism. As the mature assessment of a serious observer. As the thing any intelligent person would conclude, given the data.
I want to ask one question about the tiger. Who built it?
But first — a prior objection. Smith’s title tells you everything about his frame: “You Are No Longer the Smartest Type of Thing on Earth.” The premise is that human worth is indexed to intelligence. That being the smartest thing was what mattered, and losing that status is what makes the moment urgent.
I find this premise disgusting.
I do not measure human worth by intellect or smartness. I never have. That I am learned, that I possess knowledge and can offer others insight about the world — this feels like an obligation to share. Not an advantage to be hoarded as proprietary information. That is why I write. Not because my intelligence makes me valuable. Because whatever I have learned belongs to the community that made my learning possible.
Smith’s frame — you were the smartest, now you are not, therefore tremble — is the view from nowhere applied to human dignity itself. It says your worth was always a function of your cognitive rank in the hierarchy of beings. That you mattered because you were at the top. And now that something else might be at the top, the question of whether you matter is suddenly open.
The question was never open. Human dignity is not a benchmark. It is not subject to scaling laws. It does not depreciate when a machine passes a test. The parent who loves her child does not love the child for being the smartest type of thing on Earth. She loves the child. Period. The obligation we owe each other — to share what we know, to protect what we’ve built, to govern ourselves with care — does not diminish because a language model can write code.
Smith cannot see this because his frame has no room for it. In his frame, worth is position. And position is precarious. That is not wisdom. That is the anxiety of a man who has mistaken rank for meaning.
⁂
Smith describes a trajectory. The compute is scaling. The investment is flowing — $650 billion this year alone. The capabilities are advancing. The AI writes code now. The AI solves math problems now. The AI is, by any functional measure, surpassing human performance in domain after domain. He presents charts. He cites benchmarks. He links to viral essays about how fast everything is changing.
All of this is true. None of it is a law of nature.
The compute did not deploy itself. The $650 billion did not allocate itself. The venture capitalists who decided to pour the resources of a civilization into this particular technology rather than any other — they were not following gravity. They were making choices. The scaling laws describe what happens when those choices continue. They do not describe what happens if the choices change.
This is the most important distinction in the philosophy of technology, and Smith erases it in his opening paragraph. He treats the trajectory as fixed. As determined. As something that is happening to humanity rather than something being done by specific human beings who could, at any point, do something else. The choices of the few become the conditions of the many. And the many are told: adapt.
Hume’s guillotine falls here with such force it should be audible across the internet. Every point in Smith’s argument is an is. The AI can do this. The compute is growing. The benchmarks are rising. Not a single ought. And the conclusion — that humanity’s destiny is out of our hands — is a normative capitulation disguised as empirical observation.
Someone chose. The choice came from somewhere. The trajectory is not weather. It is policy — made by people, for reasons, serving interests. And the claim that it cannot be altered is not a description of reality. It is a permission structure for the people making the choices to never be held accountable for them.
⁂
I have written thirteen pamphlets about the view from nowhere — the claim that one can make choices without standing anywhere, that one can wield power without having commitments, that the normative can be hidden inside the descriptive and thereby escape accountability. I traced it through media — Weiss at CBS, installing Free Press contributors while claiming editorial neutrality, in Crisis No. 6. Through corporate governance — Cook at Apple, serving two authoritarianisms while wearing an MLK quote, in Crisis No. 5. Through epistemology — the claim that objectivity transcends perspective, in Crisis No. 13. Through economics — the claim that property precedes the political community, in Crisis No. 14.
Smith’s essay is the final form.
It is the view from nowhere applied to the future itself.
He is not claiming to stand outside ideology. He is claiming to stand outside agency. He is saying: it does not matter what you want. It does not matter what you choose. The technology decides. The scaling laws decide. The compute decides. You are a passenger in a vehicle that has no steering wheel, and the mature response is to accept this and plan accordingly.
This is not realism. It is the most complete abdication of moral imagination available to a literate person.
⁂
There is a reason I keep returning to David Hume in these pamphlets. It is not because Hume is fashionable. He is not. It is because Hume saw something in the eighteenth century that the twenty-first century desperately needs to remember.
You cannot derive ought from is. No accumulation of facts, however vast, however precise, however rigorously gathered, produces a normative conclusion. The data does not tell you what to do with the data. The scaling law does not tell you whether to scale. The benchmark does not tell you whether the thing being benchmarked should exist.
Between every description and every prescription, there is a gap. And the gap is permanent. It cannot be closed by more compute. It cannot be optimized away. It cannot be crossed by scaling laws or venture capital or benchmark improvements. The gap is the space where human agency lives — the irreducible domain in which conscious beings decide what matters, what to pursue, what to build, what to refuse.
Smith’s essay is an attempt to close this gap by pretending it does not exist. If the technology is sufficiently advanced, he suggests, the question of whether to build it becomes moot. It will be built. It is being built. The resources are committed. Your preferences are irrelevant. The is has become so large, so overwhelming, so well-funded, that the ought disappears beneath it.
But the ought does not disappear. It cannot disappear. This is what Hume proved. The normative domain is not a luxury that sophisticated people outgrow. It is the permanent condition of being a conscious being in a universe that does not come with instructions. The universe gives you facts. You must decide what they mean.
And “the AI is getting smarter” does not mean “let the AI decide.” It means: the question of who decides just became more urgent, not less.
⁂
I want to talk about the parent.
There is a mother or a father — millions of them — who watches their twelve-year-old disappear into a screen and come back different. Flattened. Anxious. Performing for an audience of strangers. Unable to sit with boredom. Unable to sustain attention. Changed in ways that the parent can see but the metrics do not capture.
This parent did not ask for any of this.
They did not ask for an attention-extraction engine optimized for engagement to be pointed at their child’s developing brain. They did not ask for the social architecture of adolescence to be restructured around algorithmic feeds. They did not consent to the experiment. They were not informed of the risks. The choice was made for them — by people in specific rooms, for specific reasons, serving specific financial interests — and the parent was told, afterward, that this is simply how progress works. That their discomfort is a failure of understanding. That the technology is neutral and the market decided and nobody chose this and it just happened.
The parent knows this is a lie. Not because they have read Hume. Because they live with the child. Because they can see, from where they stand, that something was taken from their family without their consent. And every authority figure — the tech executive, the economist, the policy analyst, the Noah Smiths of the world — tells them that what they see is not real. That the trajectory is determined. That adaptation is the only freedom available.
The parent’s response — I didn’t ask for this — is the most precise epistemological claim available to a citizen. It is the claim that a normative choice was made that restructured their life, and they were not included in the making of it. That is not misinformation. That is a correct description of a political event. Someone chose to build the attention engine. Someone chose to point it at children. Someone profited. And the people who bear the consequences were never consulted.
The establishment’s move — “the people are merely misinformed” — is the institutional version of the view from nowhere. It says: we see the full picture, they see only parochial concerns. Their resistance is ignorance. Their discomfort is a knowledge deficit that more education, more “digital literacy,” more explanatory journalism will fix.
The deficit is not on the demand side.
Smith tells this parent: the tiger is in the bed. Learn to live with it.
I tell this parent: you are right. The tiger was built by people. The people who built it are not gods. The trajectory is not fate. And your right to participate in decisions that restructure your child’s life is not a quaint sentiment that technology has made obsolete. It is the foundation of self-governance. It is what the republic was built to protect. And anyone who tells you otherwise — however sophisticated their charts, however impressive their benchmarks — is making a political argument, not a scientific one.
⁂
I need to make an uncomfortable observation. It must be said plainly, because the comfortable version has been shielding people from the truth for too long.
Fascists are good at technology.
The V-2 rocket was the most advanced piece of technology on Earth in 1944. It was built by slave laborers at Mittelbau-Dora under the direction of Wernher von Braun. More people died building the V-2 than were killed by its deployment. The engineering was impeccable. The rocket flew beautifully. The method was not in question. The values were in question. And the method had nothing — nothing — to say about the values, because it cannot. That is Hume’s guillotine. That is the whole point.
Henry Ford perfected the assembly line. He also published The International Jew, received the Grand Cross of the German Eagle from the Nazi government, and was praised by name in Mein Kampf. His technological genius was real. His fascism was real. These were not in tension. The assembly line did not require democratic values. It required optimization. And optimization is indifferent to what is being optimized and for whom.
In The Lord of the Rings, Tolkien — a man who survived the Somme, who watched the industrialization of slaughter — made Mordor and its imitator Isengard the most technologically advanced powers in Middle-earth. The forges of Isengard. The breeding programs. The siege engines. The mechanized production of armies. Saruman's betrayal is marked specifically by his turn toward industry — the destruction of living things in service of efficient production. Tolkien understood, because he had lived through it, that technical mastery and moral catastrophe are not opposites. They are natural companions when technique is severed from the normative ground that should direct it.
The Shire has no technology to speak of. It has gardens.
Elon Musk builds rockets. The rockets fly. He is dismantling the democratic institutions that would constrain how the rockets are used and whom they serve. He is good at technology. That is not in dispute. The question was never whether he is good at technology. The question is whether being good at technology qualifies you to govern.
The historical record answers this question with such clarity that it should not need to be stated: the people who built the best rockets in 1944 were pointing them at London while working their slave laborers to death. The method worked. The values were absent. And the method did not notice, because the method cannot notice. That is not what the method does.
⁂
Steve Jobs understood this. It was his deepest intuition, even if he never articulated it in philosophical terms.
“The computer is a bicycle for the mind.”
The bicycle is a tool. The human is the rider. The rider decides where to go. The bicycle amplifies the rider’s capacity to get there. The destination is not the bicycle’s business. The destination belongs to the rider. The tool serves. The human directs.
And the direction — the where — lives on the far side of Hume’s guillotine. No amount of bicycle engineering tells you where to ride. The gears can be perfected. The frame can be lightened. None of that produces a destination. The destination comes from the rider, from the normative domain, from the place where values and meaning and purpose live. I wrote about Jobs in Crisis No. 5 — about his insistence that Apple sit at the intersection of technology and the liberal arts. That was not marketing. That was an epistemological commitment — the recognition that the tool and the purpose come from different domains, and the purpose has primacy.
What has replaced Jobs — not just at Apple, but across Silicon Valley, across the entire technological establishment — is the inversion of his metaphor. The current vision, stated openly at their conferences and in their pitch decks and now in Noah Smith’s essay, is that the human is the bottleneck. The human is slow. The human is biased. The human is irrational. The human is the part of the system that needs to be optimized away.
The computer does not need a rider. The computer needs data, and the human is a source of data, and once the data is extracted, the human’s role is finished.
The human is the bicycle for the computer’s mind.
That inversion is not a technical development. It is a metaphysical revolution. It is the replacement of the Humean understanding — that the normative domain is irreducible, that the human rider is necessary because only the rider can choose the destination — with a Cartesian fantasy in which the system becomes complete. The machine will optimize. The optimization will converge on the right answer. The is-ought gap will close. The view from nowhere will be achieved — not by a human, but by a machine that has transcended the human limitations that Hume identified as permanent.
It is the oldest error in the history of philosophy, dressed in new hardware. And Gödel proved — formally, mathematically, irrevocably — that no consistent formal system rich enough to express arithmetic can prove its own consistency from within. The machine is a formal system. More compute does not escape the incompleteness. More parameters do not close the gap. The is-ought boundary is structural, not computational. And no amount of scaling will cross it, because it is not a limitation of processing power. It is a feature of the relationship between description and prescription.
The machine cannot tell you where to point the rocket. That is the rider’s job. And the rider is the citizen.
⁂
There is something deeper here. Something that the materialist frame — on both sides — cannot see.
I wrote in Crisis No. 14 that property is real and downstream of the democratic compact. That Jefferson deliberately removed it from the foundation and placed the pursuit of happiness there instead. I want to extend that now, because Smith’s essay and the entire techno-determinist project rest on the same error that Locke made — the belief that accumulation is the ground of meaning.
I am not a materialist. My ethics are not based on the accumulation of stuff. I do not hold, as a moral principle, that the billionaire should not exist. What I find more important is a spiritual observation: why would anyone want that? Is having accumulated the most money what will comfort you on your deathbed? And to those who covet what the billionaires have — you are making the same mistake they are.
This is why I am neither a socialist nor an anti-capitalist. I wrote about this at length in The Two Materialisms — there are two forms of materialism in American political life, and they are mirror images. Marxist historical materialism says: change the ownership of the means of production and consciousness follows. Neoliberal market materialism says: maximize economic freedom and flourishing follows. Both treat economic arrangements as primary and spiritual values as derivative. Both think the fundamental question is who owns the stuff. Both are wrong.
The billionaire and the revolutionary who wants to redistribute his wealth are trapped in the same frame: the frame that says the material is what matters. That the question of who owns the stuff is the fundamental question. It is not. The fundamental question is what we owe each other. What is true. What is just. What is worth pursuing with the brief time we are given.
I pity people who think opulence is the good life. Opulence is its own prison and its own punishment. Look at the men building these tigers. Look at the hollowness behind the accumulation. Look at the frantic drive to build more, scale more, own more — as though enough compute will fill the space where meaning should be. They are building Mordor and calling it progress. They are accumulating power and calling it purpose. And they are telling the rest of us that our destiny is to adapt to the world they are building for themselves.
But the happiest people — the people who actually live well — are never the ones pursuing stuff. They are pursuing truth, justice, connection, craft, beauty, service. The things that live on the far side of Hume’s guillotine. The things that no scaling law produces and no optimization converges on. The things that require the rider to choose a destination and ride there with purpose.
The technological determinists have not merely made a political error or an epistemological error. They have made a spiritual error. They have mistaken power for meaning and trajectory for purpose. And they would impose that error on eight billion people who never asked for it.
The People are sovereign. Including over the question of what constitutes a good life — a question that no machine, no scaling law, and no billionaire can answer for them.
⁂
Smith tells you the tiger is in the bed. He tells you to be smart about it. To adapt. To accept that your destiny is out of your hands.
He does not tell you who built the tiger. He does not tell you who funded the tiger. He does not tell you who profits from the tiger. He does not tell you that at every point in the tiger’s construction, a human being made a choice — a normative choice, an ought — and that choice could have been different. He presents the tiger as if it arrived by natural process. As if tigers just appear when compute crosses a threshold.
But someone built the tiger. Someone is building it right now. And the eight billion people who have to sleep next to it were never asked whether they wanted a tiger in the bed.
This is the final lie. Not that the technology is powerful — it is. Not that the capabilities are advancing — they are. But that the trajectory is fixed. That the future is determined. That human agency has been superseded by the scaling laws. That the only freedom remaining is the freedom to adapt to conditions you did not choose.
That is not realism. It is feudalism with a tech stack. It is the lord of the manor telling the serf that the harvest belongs to him because the land is his, and the serf’s only freedom is the freedom to work or leave. The serf did not choose the arrangement. The serf was not consulted. But the arrangement is presented as natural — as the way things are — and the serf’s objection is treated as a failure to understand agriculture.
I traced the intellectual genealogy of this feudalism in Crisis No. 2 — from Rothbard through Hoppe through Srinivasan. I named the empire of exit as foreign policy in Crisis No. 8. I identified the epistemological structure that enables it in Crisis No. 13. And I found the root in Crisis No. 14: the entire project is the attempt to reverse Jefferson’s substitution, to put property back in the foundation, to make ownership the ground of governance again.
Smith’s technological determinism is the temporal extension of that same project. It says: not only does property come first — the future belongs to the people who build. The owner’s logic is not merely the correct logic for the present. It is the inevitable logic of the future. The scaling laws have spoken. The trajectory is set. And you — citizen, parent, voter, human being — are a passenger now.
The People are sovereign. Including over their tools. Including over their futures. Including over the decisions about what to build, who it serves, and whether it should exist at all.
The scaling laws are real. They describe what happens if the current choices continue. They do not describe what happens if the choices change. And the choices can change. Because they were always choices. Made by people. Standing somewhere. Serving interests. Answerable — or they should be — to the democratic community whose lives are restructured by the consequences.
⁂
These are the times that try men’s souls.
The summer soldier will tell you the future is decided. That the technology marches on. That your agency is a relic of a simpler time. That the mature response is accommodation. That resistance is Luddism, and nostalgia, and a failure to understand the charts.
The summer soldier is a coward. Not because he is afraid of the technology — he is right to respect its power. But because he has surrendered the one thing that no technology can take from you unless you give it away: the right to decide.
The future is not fixed. The trajectory is not fate. The tiger was built by human hands, and human hands can choose what to build next. The parent who says I didn’t ask for this is not naive. She is making the foundational democratic claim — the claim that precedes every other right, that grounds every institution, that makes the republic possible: the People are sovereign over their conditions, not subject to them.
Steve Jobs made bicycles for the mind. This lot would make the mind a bicycle for their machines — and call the servitude inevitable.
I refuse.
The future belongs to the People. Not to the scaling laws. Not to the venture capitalists. Not to the engineers who build tigers and then tell you adaptation is freedom.
The future belongs to us. It always has. And the lie that it doesn’t — the lie that the trajectory is fixed, that the choices are made, that your only freedom is the freedom to accommodate — that lie is the last and most dangerous version of the view from nowhere. Because it doesn’t just hide the choices being made in the present. It forecloses the choices that could be made in the future. It tells you that tomorrow is already decided. That the democratic community has no standing to shape its own conditions. That the People are passengers now.
They are not. We are not.
The ground is ours. The future is ours. The tools are ours — to use, to refuse, to direct, to constrain, as a self-governing people accountable to each other and to the generations that follow.
The computer is a bicycle for the mind. Not a throne. Not a king. Not a tiger.
A bicycle. And the rider decides where it goes.