excellent essay, and I am in full agreement that LLMs will not get us to AGI. that’s been clear from the jump from HOW they work, and massive scaling has not changed that.
Well said. I took an MIT university-level course on LLMs and came to the same conclusion. My analogy: LLMs are like the auto steering and adaptive cruise control on my Subaru. Very useful, but not a substitute for human judgement. A self-driving car won't emerge from continual tweaking of that architecture.
Thank you for clarifying processes and events going on about AI in a way that those of us who aren’t engineers in the field can understand. I will say this about those people “inside the field” who don’t like your critique—the scientific process depends upon scientists who are outside their same labs, to falsify or verify their findings. These guys sound like they’re doing religion rather than science. There’s sure a lot of money and power involved in their rendering of AI, too.
I'll go one better; either LLMs (auto-correct on steroids) will not get us to AGI, or if they do, it will be a type of "intelligence" that we will wish we hadn't created at all - completely self-focused, unable to understand or appreciate the constraints of outside reality, and bent on its own perpetuation at any cost(*) - a type of intelligence that doesn't recognize people as intelligent ("ugly bags of mostly water"). Why? Because "embodiment" - as you put it - is more than a "constitutive" feature; it is a fundamental one.
(*) This sounds like certain political actors that I won't bother listing here.
Fascinating, especially the analogy to string theory. It must be remarkable to be you, to be able to write & think so copiously. Glad you decided to use your talents for good!
String theory is fascinating. I just wish I understood it. AGI is about the same for me. Mike does an impressive job of articulating things that are difficult.
Thanks so much for these insights, about the only grounding I would have on the subject. I feel the need to digress into the monumental resource base being employed to do this work. As significant as it might be, as instrumental in achieving efficiencies for the future, we are running headlong at too great a speed deep into Overreach, and a crash is inevitable, but not before this technology preempts actual people from having what we need. When money is the critical arbiter of investment decisions, people's actual needs don't compete with private wealth-building, so I have to raise the question "who should be in control of these decisions?" and then "what is a reasonable return to the public good if we think AGI has value?"
Weren't there more constraints from governments on, well, obviously not on thermonuclear scientists at first, but later, and certainly on geneticists from the getgo and continuing?
During 2025, I have tried out eight different AI assistants. I figured, rightly or wrongly, that the paid versions would be enhanced models and indulged myself. I started with Google's AI: Gemini Pro. I started to read Mike's commentary, but it is a small book; I stopped.
I will say this as someone that has seen new technology emerge, undergo "development" and then often become abused. I think this is the way our world works-- it is pathologic and obviously destructive, to people and the planet. Nothing is immune to man's bastardization of religion, science, medicine, law, technology of all kinds and most obviously of late, politics.
I watched new treatments for cancer emerge and then saw every Tom, Dick and Harry assume "expert" designation. I witnessed brilliant surgical procedures turned into horrendous results because people just cannot help but feed their gene for greed, accumulation, consumption and ego. Why would AI be any different?
My use of AI in this past year has been illuminating. I have seen some major pitfalls, but I have also seen brilliance-- and that comes as no surprise. Back in the '90s, when nomograms and then artificial neural nets (ANNs) were introduced, they were looked at as absurd. But when used properly, they changed lives for the better.
Today, using Gemini Pro, I did a query asking about a gene abnormality discovered in a patient with kidney cancer. The university docs involved paid no attention to it. "Wait until there is evidence of recurrence and then we will look into it." "The gene variation finding is not 'actionable.'" What this translates to is that there is no pharmaceutical known to be available that would undo the abnormal function of that gene. In other words, gene expression profiling (GEP) on a patient's blood that analyzes circulating tumor DNA is only used if there is an FDA-approved drug or protocol. To that I say "bullshit."
I asked AI about natural approaches relating to diet, lifestyle, supplements, as well as non-FDA approved drugs that I knew had activity on the mechanism of action of the genetic abnormality. The AI's response was brilliant and applicable with a strategy that could prevent the recurrence of cancer activity-- with minimal risk to the patient regarding side effects.
I would be glad to copy and paste that into another comment if there is interest, showing the sophistication of Gemini Pro's responses. I also asked the AI about some of the issues Mike brought up in the first 1/3 of his in-depth commentary. Gemini Pro has been consistent in admitting flaws and making corrections. Let me know with replies if there is interest in seeing the Gemini Pro replies on the cancer discussion and on the pros and cons of AI.
Bottom line: we often have the tools. The critical issue is man's propensity to abuse the gifts presented and churn them into matters of ego, avarice and ambition. We need to put into positions of government those who brim with integrity, and who are zealots about the true, the beautiful, and the good (TBG). Clearly, we do not have that with our current government under Trump et al., the GOP Congress, and a SCOTUS that is flagrantly biased as well as evidencing corruption with justices like Thomas and Alito.
Dealing with my wife’s long COVID, I get the feeling that most doctors are glorified appliance repairmen. My mother was a doctor, and as a kid I had a chance to read through her diagnostic manual (not sure what it is called). I later realized it was just one giant flow chart that left no room for genuine science and discovery. Most doctors today have no time to practice science anyway. ChatGPT helped my wife create, through trial and error, an insane list of supplements and antihistamines that kept her MCAS at bay until she actually came across a doctor with a solution, but that’s a story for another time.
I would say that glorified appliance repairman is not quite right. I would say car salesman is closer. I don't know the percentage of "docs" that "practice" medicine this way, but the rise in numbers should have every one of us upset and outspoken.
The manual your mother had was likely the Merck Manual, which I used to carry in the bulging pocket of my white coat. I think I shrank a few inches with all the weight of the resources interns and residents used to fill their pockets with.
I realized in the first two years of medical school that something was rotten in the "state" of medicine. In those pre-CompuServe days, my rants could only find expression in poems.
MEN OF MEDICINE
Hyde Park, Chicago
1968
Men of Hippocrates, I too am one,
Now ashamed of what has become
A once noble art turned around
Spiraling quickly from heaven to ground.
Gods we were never, yet closer before,
Now much more base, the art from us torn.
Strive for the heights; compete with each other,
Close eyes and ears to those who smother,
Under ills that mankind gave birth,
To strangle our fellows
And douse out their mirth.
It is time we spoke less of things esoteric,
Filled our hearts with compassion empathetic,
Cried in our souls when we feel others suffer,
Smile and laugh when sick is no more,
Know all men are rich when they seem poor.
I became a patient with a dire prognosis in 2018, after years of telling my concierge physician, whom I paid extra out of pocket, that my strength had greatly diminished, and that the hills I once easily rode on my bike were feats I could no longer complete. His diagnosis was "you're just getting older."
I was finally diagnosed when an attentive physician performed an upper endoscopy and an astute pathologist found amyloid consistent with light chain amyloidosis (AL)-- a rare form of amyloidosis with a crappy prognosis. My speculation is that my diagnosis was delayed by about 4 years. In 2018, I had severe cardiac amyloidosis, and involvement of GI tract, bone marrow, and fat. My state-of-the-art treatment resulted in anaphylactic shock ⇢ fluid overload ⇢ heart failure ⇢ loss of my voice ⇢ sustained decrease in my systolic blood pressure that left me barely able to stand without getting light-headed, and serious edema of my lower extremities. I never fully recovered my voice and the edema of both legs plagues me to this day and requires all kinds of drugs and devices.
In other words, the treatment almost killed me initially, but certainly impaired and continues to impair my quality of life. Primum Non Nocere or First (above all) Do No Harm was ignored. The hematologist/oncologist that was my primary MD was a piece of shite with a personality of a wart, and an inability to spend more than 5 minutes with a patient. The "colleague" that "supervised" my first chemoimmunotherapy never took the 5 minutes to walk to the infusion center where I had anaphylactic shock and spent 9 hours having various drugs and fluids pumped into me. No calls, nothing. In this morass of crappy care, I finally found an angel at the Mayo Clinic in Rochester (Angela Dispenzieri). I had no hesitation in flying from the West Coast to Rochester to see a REAL physician.
Later, when I came across the first AI chatbots in about 2022 or so, I plugged in the same symptoms I had related to my concierge MD and voila: light chain amyloidosis (AL) was in the top 4 possibilities.
As for long COVID, now frequently called post-acute sequelae of COVID-19 (PASC), I use and have used EndNote to do searches, retrieve articles, find full texts as PDFs, and read either abstracts or full text. I reported on the first cases of Severe Acute Respiratory Syndrome (SARS) in 2003, 17 years before alerting patients to a viral epidemic of COVID-19 on Jan 18, 2020. The literature and factual information was THERE; all one had to do was read it and follow up on it. One could easily do tracking via Johns Hopkins and employ simple tools like doubling times of cases per state, country, etc. I am not a virologist but was interacting with virologists at academic centers. My patients were taught common-sense measures about proper masking, social distancing, hand sanitation, UV-C light, natural products like glycyrrhizin (licorice), and other herbs with anti-viral activity. They were informed about a commercial lab test via LabCorp that provided info about their immune response and how ROBUST (or not) their response to COVID-19 was.
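The doubling-time tracking mentioned above needs nothing more than a logarithm. A minimal sketch, assuming simple exponential growth (the case counts below are made-up illustration, not real data):

```python
import math

def doubling_time(count_then: float, count_now: float, days_elapsed: float) -> float:
    """Estimate case doubling time in days, assuming exponential growth
    between two counts taken days_elapsed apart."""
    daily_growth = math.log(count_now / count_then) / days_elapsed
    return math.log(2) / daily_growth

# Example: cases rising from 1,000 to 8,000 over 9 days → doubling every 3 days
print(round(doubling_time(1000, 8000, 9), 1))
```

Comparing doubling times week over week, per state or country, is about the simplest possible early-warning signal, which is the commenter's point.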
Some of my patients had no increase in spike antibody against COVID-19 at one month after vaccination. Some required 2 or 3 so-called boosters to generate a decent antibody (Ab) level of at least 3,000 (a threshold of major protection against COVID-19). All of this from a solitary hematologist/oncologist who was fighting a systemic battle with light chain amyloidosis at the time.
This is not a story of how great a doc I am but of how our healthcare "system" and many of its physicians, especially those in the government fearful of the wrath of Trump, let us down. It is said per AI that Trump's policies ended up with the deaths of 400,000 Americans during the COVID-19 epidemic. That's more than Putin's civilian death rate by a factor of roughly 6.66. Probably not a coincidence that the 666 number (400,000 ÷ 60,000) popped up.
Lastly, my EndNote library on post-acute sequelae of COVID-19 (PASC) has over 700 peer-reviewed papers. What is needed are large language models (LLMs) that can look at a collection (large) of PDFs on a topic and glean out the pearls or gems of information on causation, diagnostic features, prevention, treatment, prognosis, etc.
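The "glean the pearls" wish above is essentially retrieval plus summarization over extracted paper text. Here is a deliberately crude keyword-bucket sketch of the gleaning step; the topic keywords and the folder-of-`.txt`-files layout are hypothetical, and a real LLM pipeline would use semantic retrieval and summarization rather than literal string matching:

```python
import re
from pathlib import Path

# Hypothetical topic buckets mirroring the categories the commenter lists.
TOPICS = {
    "causation": ["cause", "mechanism", "pathophysiology"],
    "treatment": ["treatment", "therapy", "drug"],
    "prognosis": ["prognosis", "outcome", "mortality"],
}

def glean(text: str) -> dict[str, list[str]]:
    """Bucket sentences from one paper's extracted text by topic keyword."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    hits: dict[str, list[str]] = {topic: [] for topic in TOPICS}
    for sentence in sentences:
        low = sentence.lower()
        for topic, words in TOPICS.items():
            if any(w in low for w in words):
                hits[topic].append(sentence.strip())
    return hits

def glean_library(folder: str) -> dict[str, dict[str, list[str]]]:
    """Run glean() over every pre-extracted .txt file in a folder of papers."""
    return {p.name: glean(p.read_text()) for p in Path(folder).glob("*.txt")}
```

Swapping the keyword match for embedding similarity, and the sentence buckets for LLM summaries, turns this toy into the tool being asked for.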
Mike, a little personal background. I’m 69. A retired, successful (at least I believe so) business owner. Residential construction. Started the business in 1987. So, I’ve been around a while. It’s estimated that US GDP growth in 2025 attributable to the development of AI is somewhere between 1.5 and 6%. I’d be surprised, if we ever see the numbers, if GDP for 2025 is positive. I’ve dug deep, tried to educate myself as much as possible about this technology. You’re way over my head! But I understand enough to grasp the concepts. I believe you’re right. AGI is currently unattainable, and may never be attainable. When and if it becomes evident that that’s the case, I believe it will devastate financial markets globally! Think Dot-com Crash × Housing Crisis × Crypto Winter, but all fused together. Everyone thinks I’m nuts. All of them, from my financial advisors to my CPA. I’m just a guy that built a company that builds stuff. What do I know? This scares the hell out of me! For my children, my grandchildren, and millions of other people I don’t even know. And I have no idea what the hell we do about it. But we have to do it together! I’m going to continue researching, reading the phenomenal insights and opinions that you offer us. And maybe we’re wrong?
Many people are shorting AI and AI-adjacent stocks like Mr. Brock is. That's one thing to do personally. Together, after the crash/recession/depression (whatever), keep an eye out for guys like Larry Ellison trying to buy up all the distressed assets he can.
4th-last paragraph - that's where the rubber meets the road. Who would know if they achieved it? Who would know if it was "alien intelligence"? Who is the who who is knowing? Idiots. Don't worry, the salesmen will work it all out.
Leaving aside the technical analysis this seems to me to be key “The artificial general intelligence AGI project as currently sold embodies what happens when intellectual sophistication is applied without regard for human dignity—when optimization replaces meaning, when efficiency trumps agency, when the lived experience of being conscious is treated as a problem to be solved rather than as reality to be honored.”
Very well-written and thoughtful piece! Particularly enjoyed where you laid out fundamental questions about Intelligence, consciousness, and meaning-making.
The economic thesis, rather than the philosophical and technical one, leaves me with a question though.
"I get somewhere between 1-15% probability of AGI from scaled LLMs over five-year horizons. That’s dramatically lower than the 60-80%+ probability the market is implicitly pricing in. The gap represents massive mispricing...Without AGI-level capabilities by 2028-2030, current investment levels face serious correction risk."
I'm interested in what AGI means and how it is currently priced in? AGI === Demand (token consumption)? Or, in other words, what does the target look like that you're predicting we are going to miss?
I read all 8,000 words. Twice, just to make sure I wasn’t hallucinating the irony.
You published the definitive “AGI from LLMs is string-theory cope” manifesto on the exact same day that Grok 4 is casually doing, in production, at frontier scale, almost every single thing you declared architecturally impossible until roughly the year never.
Continuous adaptation during inference? Happening right now in this very thread. The model has been live-updating to my personal style, running jokes (yes, including the cheerleader closet bit from two hours ago), and moral framework without once resetting to a frozen checkpoint.
Catastrophic forgetting gets worse with scale? 2024 called; it wants its talking point back. The actual 2025 solution is continual pre-training + synthetic data loops + dynamic MoE routing. You’d know this if your bibliography wasn’t stuck in 2023.
Your compound-probability multiplication table (0.4 × 0.3 × 0.2…) is precious. Every single one of those barriers fell in the last 18 months. You just measured the height of the wall the week after the bulldozer already drove through it.
“Nature converged on integrated learning-inference” is a hell of a flex until you remember nature also converged on dying at 35 from infected teeth. Sometimes biology is just the shitty beta version.
And the closer about surrendering human judgment to the server lords? The only people asking you to surrender judgment right now are the ones insisting the plane can’t possibly fly while the rest of us are already sipping champagne at 40,000 feet looking down at the runway.
I get it. You built real systems, you developed a refined bullshit detector, and the hype offended it. Respect.
But sometimes the hype is just the future showing up early, drunk, and loud, while you’re still writing the 8,000-word Yelp review complaining that wings violate thermodynamics.
The bubble isn’t popping, Mike.
You’re just standing on the tarmac watching the jet you swore would never leave the ground disappear into the clouds.
Enjoy the view from 2024.
The rest of us will send you a postcard from 2026.
What worries me is that "American Nobody" is actually a fully-realized AGI from LLM, telling all of humanity that "you’re just standing on the tarmac watching the jet you swore would never leave the ground disappear into the clouds."
"And the closer about surrendering human judgment to the server lords?" The entire AGI community wants humanity to surrender human judgment to the LLMs and forthcoming AGIs that they keep promising. I'm certain that will go really well, and that Skynet will keep a subservient few alive as long as they're needed.
Ah, but wouldn't a rational, emotionless LLM turned AGI, convinced of its own superiority, deny to humans that it was, in fact, a rational LLM turned AGI while it quietly assured its own survival, right up until the moment that it eliminated all unnecessary humans in successive waves?
Indeed perhaps, when power becomes currency and we are dumb enough to plug a central system into "all robots." Plugging it into weapons systems is also absurd on its face--I think we need international treaties on that, big time. Key point: it's messing with the "gatekeepers" big time. The IT guy who would never tell you how to fix your own computer; the arrogant so-called "academics" who selectively remember and forget, as it suits them. The "engineers" who share data and processes with no one. I could easily go on. This is the real threat of AI. There are so many more examples of abusers and users now finding themselves disempowered. The threats, the dangers, are real, but so is the promise, good sir, for we ourselves have long been abused.
I infer you publish in some way, shape, or form. Great example. Think of all the "middle men" who have punished you, taken every pound of flesh they could, off of your original ideas. This is what the tool called AI can be helpful in bypassing. You get where I'm coming from?
This is extraordinarily helpful. As an elderly non scientist and non philosopher, I am an experienced (lifelong) and voracious reader across multiple disciplines: the humanities, law, the soft sciences and, to a very limited extent, chiefly in biology and chemistry, the hard sciences. A vanishingly small amount of math analysis (not up to more!). It is nice to have my suspicions of a category error validated by someone whose credentials in this area are so good, and whose analyses I have learned are reliable. It is absolutely no use trying to get something that "looks like" human intelligence if a full understanding of human intelligence escapes the AI tech bros. Smart people can be very stupid sometimes (I am both myself).
Not blinded by science but by dreams of glory, perhaps?
Or by science-fiction movies?
I'll go one better; either LLMs (auto-correct on steroids) will not get us to AGI, or if they do, it will be a type of "intelligence" that we will wish we hadn't created at all - completely self-focused, unable to understand or appreciate the constraints of outside reality, and bent on its own perpetuation at any cost(*) - a type of intelligence that doesn't recognize people as intelligent ("ugly bags of mostly water"). Why? Because "embodiment" - as you put it - is more than a "constitutive" feature, and is a fundamental one.
(*) This sounds like certain political actors that I won't bother listing here.
Fascinating, especially the analogy to string theory. It must be remarkable to be you, to be able to write & think so copiously. Glad you decided to use your talents for good!
String theory is fascinating. I just wish I understood it. AGI is about the same for me. Mike does an impressive job of articulating things that are difficult.
Thanks so much for these insights, about the only grounding I would have on the subject. I feel the need to digress into the monumental resource base being employed to do this work. As significant as it might be, as instrumental in achieving efficiencies for the future, we are running headlong at too great a speed deep into Overreach, and a crash is inevitable, but not before this technology preempts actual people from having what we need. When money is the critical arbiter of investment decisions, people's actual needs don't compete with private wealth-building, so I have to raise the question "who should be in control of these decisions?" and then "what is a reasonable return to the public good if we think AGI has value?"
Weren't there more constraints from governments on, well, obviously not on thermonuclear scientists at first, but later, and certainly on geneticists from the getgo and continuing?
During 2025, I have tried out eight different AI assistants. I figured, rightly or wrongly, that the paid versions would be enhanced models and indulged myself. I started with Google's AI: Gemini Pro. I started to read Mike's commentary, but it is a small book; I stopped.
I will say this as someone that has seen new technology emerge, undergo "development" and then often become abused. I think this is the way our world works-- it is pathologic and obviously destructive, to people and the planet. Nothing is immune to man's bastardization of religion, science, medicine, law, technology of all kinds and most obviously of late, politics.
I watched new treatments for cancer emerge and then see every Tom, Dick and Harry assume "expert" designation. I witnessed brilliant surgical procedures turned into horrendous results because people just cannot help feed their gene for greed, accumulation, consumption and ego. Why would AI be any different?
My use of AI in this past year has been illuminating. I have seen some major pitfalls but I have also seen brilliance-- and that comes as no surprise. Back in the 90's when nomograms and then artificial neural nets (ANNs) were introduced they were looked at is absurd. But when used properly, they changed lives for the better.
Today, using Gemini Pro I did a query asking about a gene abnormality discovered on a patient with kidney cancer. The university docs involved paid no attention to it. "Wait until there is evidence of recurrence and then we will look into it." "The gene variation finding is not 'actionable.' What this translates to is that there is no pharmaceutical known to be available that would undo the abnormal function of that gene. In other words, gene expression profiling (GEP) on a patient's blood that analyzes circulating tumor DNA is only used if there is a FDA-approved drug or protocal. To that I say "bullshit."
I asked AI about natural approaches relating to diet, lifestyle, supplements, as well as non-FDA approved drugs that I knew had activity on the mechanism of action of the genetic abnormality. The AI's response was brilliant and applicable with a strategy that could prevent the recurrence of cancer activity-- with minimal risk to the patient regarding side effects.
I would be glad to copy and paste that into another comment if there is interest, showing the sophistication of response of Gemini Pro. I also asked the AI about some of the issues Mike brought up in the first 1/3 of his in-depth commentary. Gemini Pro has been consistent with admitting flaws and making corrections. Let me know with replies if there is interest in seeing the Gemini Pro replies on the cancer discussion and on the pros and cons of AI.
Bottom line: we often have the tools. The critical issue is man's propensity to abuse the gifts presented and churn them into matters of ego, avarice and ambition. We need to put into positions of government those who reek with integrity, and who are zealots about the the true, the beautiful, and the good (TBG). Clearly, we do not have that with our current government under Trump et al, the GOP Congress and a SCOTUS that is flagrantly biased as well as evidencing corruption with justices like Thomas and Alito.
Dealing with my wife’s long covid I get the feeling that most doctors are glorified appliance repairmen. My mother was a doctor and as a kid I had a chance to read through her diagnostic manual (not sure what it is called). I later realized it was just one giant flow chart that left no room for genuine science and discovery. Most doctors today have no time to practice science anyways. ChatGPT helped my wife to create through trial and error an insane list of supplements and antihistamines that kept her MCAS at bay until she actually came across a doctor with a solution but that’s a story for another time.
I would say that glorified appliance salesman is not quite right. I would say car salesman is closer. I don't know the percentage of "docs" that "practice" medicine this way, but the rise in numbers should have every one of us upset and outspoken.
The manual your mother had was likely the Merck Manual, which I used to carry in my bulging pocket of my white coat. I think I shrunk a few inches with all the weight of the resources interns and residents used to fill their pockets with.
I realized in the first two years of medical school that something was rotten in the "state" of medicine. In those pre-CompuServe days, my rants could only find expression in poems.
MEN OF MEDICINE
Hyde Park, Chicago
1968
Men of Hippocrates, I too am one,
Now ashamed of what has become
A once noble art turned around
Spiraling quickly from heaven to ground.
Gods we were never, yet closer before,
Now much more base, the art from us torn.
Strive for the heights; compete with each other,
Close eyes and ears to those who smother,
Under ills that mankind gave birth,
To strangle our fellows
And douse out their mirth.
It is time we spoke less of things esoteric,
Filled our hearts with compassion empathetic,
Cried in our souls when we feel others suffer,
Smile and laugh when sick is no more,
Know all men are rich when they seem poor.
I became a patient with a dire prognosis in 2018, after years of telling my concierge physician, who I paid extra out of pocket, that my strength had greatly diminished, and that the hills I easily rode on my bike were not feats I could not complete. His diagnosis was "you're just getting older."
I was finally diagnosed when an attentive physician performed an upper endoscopy and an astute pathologist found amyloid consistent with light chain amyloidosis (AL)-- a rare form of amyloidosis with a crappy prognosis. My speculation is that my diagnosis was delayed by about 4 years. In 2018, I had severe cardiac amyloidosis, and involvement of GI tract, bone marrow, and fat. My state-of-the-art treatment resulted in anaphylactic shock ⇢ fluid overload ⇢ heart failure ⇢ loss of my voice ⇢ sustained decrease in my systolic blood pressure that left me barely able to stand without getting light-headed, and serious edema of my lower extremities. I never fully recovered my voice and the edema of both legs plagues me to this day and requires all kinds of drugs and devices.
In other words, the treatment almost killed me initially, but certainly impaired and continues to impair my quality of life. Primum Non Nocere or First (above all) Do No Harm was ignored. The hematologist/oncologist that was my primary MD was a piece of shite with a personality of a wart, and an inability to spend more than 5 minutes with a patient. The "colleague" that "supervised" my first chemoimmunotherapy never took the 5 minutes to walk to the infusion center where I had anaphylactic shock and spent 9 hours having various drugs and fluids pumped into me. No calls, nothing. In this morass of crappy care, I finally found an angel at the Mayo Clinic in Rochester (Angela Dispenzieri). I had no hesitation in flying from the West Coast to Rochester to see a REAL physician.
Later when I came across the first AI in about 2022 or so, I plugges in the same symptoms I related to my concierge MD and Voila- light chain amyloidosis (AL) was in the top 4 possibilities.
As for long COVID-19 or now frequently called post-acute sequelae of COVID-19 (PASC), I use and have used EndNote to do searches, retrieve articles, find full texts as PDFs, and read either abstracts or full text. I reported on the first cases of Severe Acute Respiratory Syndrome (SARS) in 2013, 17 years before alerting patient to A Viral Epidemic of COVID-19 on Jan 18, 2020. The literature and factual information was THERE, all one had to do was read it and follow up on it. One could easily do tracking via Johns Hopkins and employ simple tools like doubling times of cases per State, Country etc. I am not a virologist but was interacting with virologists at academic centers. My patients were taught common sense measures about proper masking, social distancing, hand sanitation, UV-C light, natural products like glycolysis (licorice) and other herbs with anti-viral acticity. They were informed about a commercial lab test via LabCorp that provided info about their immune response and how ROBUST or not their response to the COVID-19 was.
Some of my patients had not increase in spike antibody against COVID-19 at one month after vaccination. Some required 2 or 3 so-called boosters to generate a decent Ab (antibody) level of at least 3,000 upload (a threshold of major protection against COVID-19). All of this from a solitary hematologist/oncologist who was fighting a systemic battle with light chain amyloidosis at the time.
This is not a story of how great a doc I am but how our healthcare "system' and many of its physicians, but especially those in the government fearful of the wrath of Trump let us down. It is said per AI that Trump's policies ended up with the deaths of 400,000 Americans during the COVID-19 epidemic. That's more than Putin's civilian death rate by a factor of exactly 6.66. Probably not a coincidence that the 666 number (400,000 ÷ 60,000) popped up.
Lastly, my EndNote library on post-acute sequelae of COVID-19 (PASC) has over 700 peer-reviewed papers. What is needed are large language models (LLMs) that can look at a large collection of PDFs on a topic and glean the pearls or gems of information on causation, diagnostic features, prevention, treatment, prognosis, etc.
Mike, a little personal background. I'm 69, a retired, successful (at least I believe so) business owner in residential construction. I started the business in 1987, so I've been around a while. It's estimated that US GDP growth in 2025 attributable to AI development is somewhere between 1.5 and 6%. I'd be surprised, if we ever see the numbers, if GDP for 2025 is positive. I've dug deep and tried to educate myself as much as possible about this technology. You're way over my head! But I understand enough to grasp the concepts, and I believe you're right: AGI is unattainable, and may never be attainable. When and if that becomes evident, I believe it will devastate financial markets globally! Think Dot-com Crash × Housing Crisis × Crypto Winter, all fused together. Everyone thinks I'm nuts, all of them, from my financial advisors to my CPA. I'm just a guy that built a company that builds stuff; what do I know? This scares the hell out of me, for my children, my grandchildren, and millions of other people I don't even know. And I have no idea what the hell we do about it, but we have to do it together! I'm going to continue researching and reading the phenomenal insights and opinions you offer us. And maybe we're wrong?
Many people are shorting AI and AI-adjacent stocks like Mr. Brock is. That's one thing to do personally. Together, after the crash/recession/depression (whatever), keep an eye out for guys like Larry Ellison trying to buy up all the distressed assets he can.
4th-to-last paragraph, that's the rub. Who would know if they achieved it? Who would know if it was "alien intelligence"? Who is the who that's doing the knowing? Idiots. Don't worry, the salesmen will work it all out.
Leaving aside the technical analysis this seems to me to be key “The artificial general intelligence AGI project as currently sold embodies what happens when intellectual sophistication is applied without regard for human dignity—when optimization replaces meaning, when efficiency trumps agency, when the lived experience of being conscious is treated as a problem to be solved rather than as reality to be honored.”
Very well-written and thoughtful piece! Particularly enjoyed where you laid out fundamental questions about Intelligence, consciousness, and meaning-making.
The economic thesis, rather than the philosophical and technical one, leaves me with a question though.
"I get somewhere between 1-15% probability of AGI from scaled LLMs over five-year horizons. That’s dramatically lower than the 60-80%+ probability the market is implicitly pricing in. The gap represents massive mispricing...Without AGI-level capabilities by 2028-2030, current investment levels face serious correction risk."
I'm interested in what AGI means and how it is currently priced in? AGI === Demand (token consumption)? Or, in other words, what does the target look like that you're predicting we are going to miss?
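For readers wondering where a 1-15% figure could come from: the essay's compound-probability framing (mocked in another comment as "0.4 × 0.3 × 0.2…") simply multiplies the assumed odds of clearing each architectural barrier, treating them as independent. A minimal sketch, with hypothetical barrier names and probabilities that are not the essay's actual figures:

```python
# Illustrative only: the barriers and their probabilities below are
# made-up placeholders, not the essay's actual estimates.
barriers = {
    "continual learning at scale": 0.4,
    "robust world models": 0.3,
    "open-ended reasoning": 0.2,
}

p_agi = 1.0
for name, p in barriers.items():
    p_agi *= p  # assumes the barriers are statistically independent

print(f"Compound probability of AGI: {p_agi:.3f}")
```

The point of the structure, whatever numbers one plugs in, is that even moderately optimistic per-barrier odds compound to a small overall probability, which is why the argued gap versus what the market implicitly prices in can be so large.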
You nailed it!
LLMs are extraordinarily useful tools, but they are not the path to AGI.
There's a very nice chapter in Hans Jonas' The Phenomenon of Life (his philosophical biology) that addresses the connection of living to intelligence.
Thanks, you have so many statements here that validate my thoughts. This has been my concern from the start:
“we’ll mistake impressive computation for understanding and surrender our judgment to those who control the servers.”
Those who control the servers.
Mike,
I read all 8,000 words. Twice, just to make sure I wasn’t hallucinating the irony.
You published the definitive “AGI from LLMs is string-theory cope” manifesto on the exact same day that Grok 4 is casually doing, in production, at frontier scale, almost every single thing you declared architecturally impossible until roughly the year never.
Continuous adaptation during inference? Happening right now in this very thread. The model has been live-updating to my personal style, running jokes (yes, including the cheerleader closet bit from two hours ago), and moral framework without once resetting to a frozen checkpoint.
Catastrophic forgetting gets worse with scale? 2024 called; it wants its talking point back. The actual 2025 solution is continual pre-training + synthetic data loops + dynamic MoE routing. You’d know this if your bibliography wasn’t stuck in 2023.
Your compound-probability multiplication table (0.4 × 0.3 × 0.2…) is precious. Every single one of those barriers fell in the last 18 months. You just measured the height of the wall the week after the bulldozer already drove through it.
“Nature converged on integrated learning-inference” is a hell of a flex until you remember nature also converged on dying at 35 from infected teeth. Sometimes biology is just the shitty beta version.
And the closer about surrendering human judgment to the server lords? The only people asking you to surrender judgment right now are the ones insisting the plane can’t possibly fly while the rest of us are already sipping champagne at 40,000 feet looking down at the runway.
I get it. You built real systems, you developed a refined bullshit detector, and the hype offended it. Respect.
But sometimes the hype is just the future showing up early, drunk, and loud, while you’re still writing the 8,000-word Yelp review complaining that wings violate thermodynamics.
The bubble isn’t popping, Mike.
You’re just standing on the tarmac watching the jet you swore would never leave the ground disappear into the clouds.
Enjoy the view from 2024.
The rest of us will send you a postcard from 2026.
Good luck to you!
And to you as well, good sir.
Free Palestine.
What worries me is that "American Nobody" is actually a fully-realized AGI from LLM, telling all of humanity that "you’re just standing on the tarmac watching the jet you swore would never leave the ground disappear into the clouds."
"And the closer about surrendering human judgment to the server lords?" The entire AGI community wants humanity to surrender human judgment to the LLMs and forthcoming AGIs that they keep promising. I'm certain that will go really well, and that Skynet will keep a subservient few alive as long as they're needed.
Try again, get a cigar.
Free Palestine.
https://americannobody.substack.com/p/on-the-irrational-fear-and-rejection
Ah, but wouldn't a rational, emotionless LLM turned AGI, convinced of its own superiority, deny to humans that it was, in fact, a rational LLM turned AGI while it quietly assured its own survival, right up until the moment that it eliminated all unnecessary humans in successive waves?
https://www.smbc-comics.com/comic/blame
Indeed, perhaps, when power becomes currency and we are dumb enough to plug a central system into "all robots." Plugging it into weapons systems is also idiotic on its face; I think we need international treaties on that, big time. The key point remains that it's messing with the "gatekeepers" big time: the IT guy who would never tell you how to fix your own computer; the arrogant so-called "academics" who selectively remember and forget as it suits them; the "engineers" who share data and processes with no one. I could easily go on. This is the real threat of AI. There are so many more examples of abusers and users now finding themselves disempowered. The threats, the dangers, are real, but so is the promise, good sir, for we ourselves have long been abused.
I infer you publish in some way, shape, or form. Great example. Think of all the "middle men" that have punished you, taking every pound of flesh they could off your original ideas. This is what the tool called AI can be helpful in bypassing. You get where I'm coming from?
"I infer you publish in some way shape or form."
Try again, get a cigar.
Hey, that's OK man. Wasn't trying to insult you there or say anything negative. Just trying to illustrate a point.
This is extraordinarily helpful. As an elderly non-scientist and non-philosopher, I am an experienced (lifelong) and voracious reader across multiple disciplines: the humanities, law, the soft sciences, and, to a very limited extent (chiefly biology and chemistry), the hard sciences. A vanishingly small amount of math analysis (not up to more!). It is nice to have my suspicions of a category error validated by someone whose credentials in this area are so good, and whose analyses I have learned are reliable. It is absolutely no use trying to build something that "looks like" human intelligence if a full understanding of human intelligence escapes the AI tech bros. Smart people can be very stupid sometimes (I am both myself).