Agree, as a tech person watching this bubble inflate exponentially, even before the pre-2022 step up in hype about AI and LLMs. Inference at the edge is the future; for most use cases it's better and cheaper. And yet they're trying to hypnotize/distract the retail investor with more and more datacenters, when it's getting harder to build one by the day. And with supply chain disruptions from the Middle East it will be even harder to maintain our grid, let alone add capacity; consider how many more weather disasters we will have and the worldwide demand to replace equipment. We already have multiyear waiting lists for transformers.
I have two remarks on this article.
First, the rent of generative AI must be captured by those whose jobs it is replacing, not by investors who already had the capital to allocate and happened to do so correctly or by executives who flew the coop before their bad business decisions came crashing down. We have failed consistently to do this in the past and each time it has been a step on the road toward the current late-stage capitalist dystopia that we've been in at least since Kropotkin penned "The Conquest of Bread".
Second, despite this article, I am still of the view that generative AI must be seared from existence. It is having two incredibly negative effects. First, it is devaluing the real talent, real skill, and real effort that creative people have put into their abilities at writing and/or visual art, to the point where they now need disclaimers that no AI was involved in their production. (Further, there are copyright concerns around the training data for LLMs and graphics generators, and I think all such data used must be licenced from rightsholders.) Second, the proliferation of AI-generated articles on various topics threatens to poison something that has made humanity as successful as we have been, which is intergenerational knowledge transfer. Trained experts across numerous fields have repeatedly demonstrated that when LLMs produce "information" about their fields, the LLM output is very far from reliably correct. But with little way to distinguish between text written by an LLM and text written by an actual expert, the casual reader, including future generations needing to learn a given topic, has no way of knowing which knowledge is valid and which is not.
It's little use having a technological revolution that serves only to enrich those who are already rich and threatens to destroy the circumstances that permitted the revolution to occur in the first place.
Generative AI delenda est.
https://journa.host/@jeremiak/113811327999722586
https://existentialcomics.com/comic/540
Thank you for presenting this informed and important perspective in such an accessible way.
Agreed. This is not my wheelhouse, but it definitely helped me have a better understanding, a clearer picture of what I have innately been feeling about this whole expansion.
You just wrote a note about capitalism. This is another of its signatures. Periodic economic crises caused by overproduction, which is generated by universal competition among private firms with a license to amass gigantic wealth by owning the society’s means of production. It’s fancier this time because AI and other sophisticated technology is involved, but it’s the same old collapse.
And capitalism has always been, ultimately, doomed to consume itself and fail — just like fascism.
Unregulated capitalism will consume itself and tend towards monarchy. I believe this to be true. But I don't think the mere existence of a capitalist economy need necessarily be unsustainable. It is a proper function of government to regulate markets towards providing positive social utility: to allow a frontier of creativity through competition, but also to ensure an economic safety net, so that everyone has a house, healthcare, healthy food to eat, and a chance to do their best and contribute to society. We have enough wealth in this country to do all this, and it's time for a new deal for the American people.
Capitalism can be productive, and it can become extractive when it's untethered from the pursuit of the common good. A lot of businesses contribute to the common good. But some businesses grow to a certain size, then capture entire markets, then the politicians, then the government, and we get what we have today. I think we can have a world where people are free to start a business, grow a business, and succeed. And that doesn't have to be incompatible with making sure everyone is housed, educated, and given opportunities to grow and lead a meaningful life.
I have no argument with this, but I'd revise the standard discourse around the idea of restraining capitalist prerogatives through regulation, because such framing endows a capitalist economy with an aura of being the default, which only afterwards is to be rationally controlled, but to which society must defer in the first instance as if it were a law of nature. It is not a law of nature, and it is not entitled to be assumed. To assume it is to preemptively disenfranchise the commons and subordinate it in stature to the pursuit of private gain.
We should start from the proposition that certain public goods are not for sale - end of story - and then capitalism can have the remainder so long as it does not infringe on those public goods.
What those public goods are is debatable. I think it includes basically everything we need to live decently: housing, food, education, and health care right off the bat, but also libraries, parks, art, access to nature, and to the democratic process. One can think of others, but it must include limits on the quantity of wealth any one person can own or control, not just because it’s just and protects the commons, but because it is ludicrous to assert that any one person deserves more than that limit no matter what innovation they might have contributed - wealth beyond that number just means that other people’s contributions are not being acknowledged, or they’re being stolen. I think that limit should be set much lower than most: $20,000,000 (that number should be uncontroversial - alas, it’s not).
I will write my comment in 2-3 parts over the weekend as I get more time. Here is the first part: I think the thesis is directionally right around SaaS, but it overgeneralizes by treating “SaaS” as a single thing.
I've been implementing enterprise-level solutions across ERP, CRM, and custom applications for Fortune 50 companies, the three largest federal departments, and many other environments for over 25 years as a consultant. At this point, I run the IT organization and have moved away from consulting, so I do not have much skin in the game beyond maintaining the system well, whether that system is built in-house or provided by a vendor. From where I sit, that operational perspective matters, because it changes the question from "can this be built?" to "can this be sustained, governed, secured, and improved over time without creating a bigger problem than the one you were trying to solve?" And what I've seen is not just a reduction in coding effort, but a structural shift in where the hard work lives. In the last 10 years, we went from technical teams that were often twice the size of the business analyst team to environments where that ratio is reversed. That matters because in many enterprise contexts, the bottleneck is no longer writing software. It is understanding the business deeply enough to implement, govern, and maintain it correctly.
That is why I think application complexity matters much more than the article allows.
Modern SaaS is already highly configurable. If a company is willing to standardize workflows and business processes around leading practices, many SaaS products are relatively easy to maintain with a much smaller internal team than was required in the past. In those cases, the value is not just that the vendor wrote the code. The value is that the vendor has already packaged years of process assumptions, operational patterns, controls, upgrades, and support into something that can be run at scale.
Where I agree strongly with the post is that AI is compressing the cost of building and modifying software, and that this will pressure parts of the SaaS market. But I do not think the conclusion should be that SaaS broadly gets dismantled. I think the correct conclusion is that some categories are highly exposed, while others remain durable for much longer.
ERP is the clearest example. In ERP, you are not just building screens and workflows. You are combining finance, procurement, inventory, manufacturing, reporting, controls, permissions, auditability, and, often, regulatory requirements into a single operating system for the company. Even if AI makes it much easier to generate code, that does not mean it makes it easy to own and maintain institutional correctness across that system. For mid-sized and large companies, I think broad in-house replacement of ERP cores is still a long way off.
By contrast, for other categories—CRM, ServiceNow-like workflows, internal case management, departmental apps, lightweight operational systems, and various custom extensions—the economics are already shifting. In those areas, in-house or semi-custom solutions may soon have an edge, if they do not already.
The other missing variable in the post is that replicability is not the same as advisability. Yes, most SaaS software can be replicated. That does not mean most companies should do it. If you want to maintain a large internal team to continuously upgrade the system, handle security, satisfy audit and compliance requirements, manage integrations, and preserve business continuity, you can. But I would not recommend that path for most companies. Business and domain knowledge are critical, and most IT departments lack the end-to-end understanding of the solution, from both the business and technical sides, needed to own that responsibility safely.
So I think the strongest version of the thesis is not “SaaS is going away.” It is something more like this:
AI will unevenly compress the moat of software categories whose main value came from packaging relatively standard workflows into configurable products. The more a system depends on cross-functional integration, process discipline, controls, compliance, and deep domain knowledge, the more durable the incumbent platforms will remain.
Put differently, here is how I would segment it:
- Simple departmental tools — High vulnerability
- Easier workflows
- Limited control burden
- Easier to replicate or replace with AI-assisted custom solutions
- CRM / case management / ticketing — Medium-high to high vulnerability
- Often configurable and reproducible
- More exposed where differentiation comes from workflow fit rather than deep platform complexity
- Internal workflow apps / portals / dashboards — High vulnerability
- Strong candidates for AI-generated or bespoke solutions
- Especially where the process is specific to the company
- Service management platforms — Medium vulnerability
- Replaceable in parts
- Still valuable where governance, ecosystem, and enterprise controls matter
- ERP core systems — Low vulnerability in the near term
- Deeply coupled processes
- Significant audit, security, compliance, and business continuity requirements
- Hard to replace wholesale, even if pieces around them become easier to build
- Regulated industry systems — Low to medium vulnerability
- Validation, traceability, and risk management matter as much as functionality
- Replacement cost is not just technical; it is organizational and regulatory
- Large cross-enterprise platforms — Medium vulnerability
- Difficult to replace end-to-end
- More likely to be surrounded, extended, or selectively displaced than fully rebuilt
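Purely as an illustration, the segmentation above can be encoded as a small lookup table, so "where does AI break the model first?" becomes a simple query. The category names and tier rankings here are my own shorthand for the list, not anything from the article:

```python
# Illustrative only: each SaaS category from the segmentation above,
# mapped to its rough vulnerability tier.
VULNERABILITY = {
    "simple departmental tools": "high",
    "crm / case management / ticketing": "medium-high",
    "internal workflow apps / portals / dashboards": "high",
    "service management platforms": "medium",
    "erp core systems": "low",
    "regulated industry systems": "low-medium",
    "large cross-enterprise platforms": "medium",
}

# Assumed ordering of tiers from least to most exposed.
TIER_RANK = {"low": 0, "low-medium": 1, "medium": 2, "medium-high": 3, "high": 4}

def most_exposed(n=3):
    """Return the n categories most exposed to AI-assisted replacement."""
    return sorted(VULNERABILITY,
                  key=lambda cat: TIER_RANK[VULNERABILITY[cat]],
                  reverse=True)[:n]
```

Under this sketch, `most_exposed(3)` surfaces the departmental tools and internal workflow apps first, with ERP cores at the bottom of the list, which is exactly the ordering the segmentation argues for.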
So yes, I agree with the article that AI changes the economics of software. I also agree that many current assumptions about value capture are too broad and too bullish. But I think the real story is not that all SaaS is threatened equally. It is that application complexity, process standardization, control requirements, and domain knowledge determine where AI breaks the model first.
Essentially, doing something ten times faster and cheaper actually costs everything. That's what I just read.
Very helpful response. Thanks!