AI is not AI. I once took a Microsoft person in my class to task for the utter bullshit in their advertising (to be fair, he'd worked on it). It's an LLM plus stochastic prediction with a few knobs on. Basically scaled-up machine learning. This is further propped up by visual/animation slop, also mostly from machine learning. You cannot get to "genuine" AI from here. All the talk of AI ending humanity is bunk - well, this version of AI anyway. This is why ChatGPT can't write usable Visual Basic code for PowerPoint but can for Word. Why? Because Microsoft produced different versions of VB with different object models for each part of the Office suite. Word and Excel had good documentation; PowerPoint and the rest, not so much. ChatGPT is essentially missing something to crib from. There is no road from what we have now to AGI: for all intents and purposes, what we have is closer to Hadoop being used to interrogate a data lake than a "thinking machine." I'd imagine it will all go horribly wrong sooner rather than later. And then we'll all find out why he needs that bunker.
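For readers wondering what "stochastic prediction with a few knobs on" actually means in practice: one of the main knobs is sampling temperature. This is a minimal, hypothetical sketch (not any vendor's actual code) of how a next-token choice is drawn from model scores, with the temperature knob controlling how deterministic the output is:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, seed=None):
    """Pick a next-token index by temperature-scaled sampling.

    Low temperature sharpens the distribution (near-deterministic output);
    high temperature flattens it (more varied, more random output).
    """
    rng = random.Random(seed)
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract max before exp() for numeric stability
    weights = [math.exp(s - peak) for s in scaled]
    total = sum(weights)
    probs = [w / total for w in weights]
    # Draw one index in proportion to its probability.
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]

# Toy scores for three candidate next words, e.g. "mat", "chair", "moon".
logits = [4.0, 2.0, 0.5]

# Near-zero temperature: effectively always the top-scoring token.
cold = sample_next_token(logits, temperature=0.01, seed=0)

# High temperature: all three candidates become live options.
hot = sample_next_token(logits, temperature=10.0, seed=0)
```

The point the comment is making stands out here: nothing in this loop "knows" anything; it is weighted dice over scores learned from whatever text was available to crib from.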
We have American Intelligence, not Artificial Intelligence. Our intelligence agencies have subcontracted their duties to Silicon Valley and unleashed them on us.
The symmetry with nepotism and wealth giving nascent artists a chance to break through is rather striking. Tokenize (or commodify) every aspect of life, throw in the suppression of information by flooding, even inadvertently, and you can claim some filmy facsimile of it all for yourself--if you can afford the model, and if people are forced to contribute to it or face starvation.
The output of an LLM looks a lot like the output of a corporate middle manager. The inference drawn by corporate executives, that LLMs are intelligent, went in the wrong direction.
Also, as I've noted before, generative AI is an atrocity in creative fields, both image and text generation. It is almost vomit-worthy to see much of that output, and it completely devalues actual work and talent. I hope the copyright-infringement lawsuits brought against companies like OpenAI and Anthropic bankrupt them. Further, of course, there's the poisoning of intergenerational knowledge transfer, which will kill our continued ability to flourish as a species.
And then there's the noise-pollution impacts on anyone who has a data centre forced on their area; see, for instance, this video of what it sounds like when you live half a mile from one of those monstrosities: https://www.reddit.com/r/Sarnia/comments/1syfvzu/the_sounds_you_hear_living_half_a_mile_away_from/ And consider that they are running at all hours of the day, and just imagine trying to sleep with that noise.
Generative AI delenda est. I give not one single flying fuck about ostensible productivity gains if this is the cost.
EDIT: As some are pointing out, go look at the various versions of the footage released concerning the apparent attempted shooting at the White House Correspondents' Dinner. At least one of them is clearly altered by AI methods; how are we to know if others have not had the same treatment, only with more work put into them to disguise it better? (Another example is that video of Benjamin Netanyahu in a coffee shop when he hadn't been seen in public for a few days after the start of the attacks on Iran by Israel and the US, where certain apparent inconsistencies in the video--for instance, the foam on the coffee not changing after he takes a sip--had people wondering whether he was actually dead.) The current US administration is already clearly post-truth; AI tools are only permitting them to be even more so.
"The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform." So wrote Ada Lovelace almost 200 years ago, describing the computing machine her friend Charles Babbage wanted to build. Alan Turing called it "Lady Lovelace's Objection," and it is as true of today's AI as it would have been for the Analytical Engine back then.
I don't disagree, but I wonder if it will matter that AI is not really AI according to the Aristotelian view of what intelligence is. How it will play out is the question, as it is with any large-scale, potentially world-altering technology. Do we ride upon the railroad or does it ride upon us? The history of technological revolutions suggests that, at least in most cases that matter, one can expect it to be the latter unless there is a radical change in how our species confronts technological "development". Hitherto, we have not expected or demanded, at the level of culture or ethics (or law, for that matter), that technology satisfy the precautionary principle before it is let loose into the wild. That we start doing so is the necessary radical change that needs to be made.
It is no longer acceptable (this was obvious when Ted Kaczynski was arrested) to be heedless of the principles of technology assessment to the degree that a new technological wave is just passively embraced as if by rote. There is enough in the warnings of insiders, enough in the initial evidence of what AI can do, more than enough contraindications in the accumulated technological record, and more than enough criticism of technological development in general as not necessarily good for humanity that any new technology such as AI should by now be greeted with the utmost hostile skepticism.
One principle is that one does not judge a new technology solely on the basis of how it benefits you (Jerry Mander). This applies, I believe, to the widespread random tinkering with AI that we've seen, and the starstruck genuflecting at what it can do, thereby enabling its greater sophistication and all of the negative political and economic consequences we can expect from its refinement. By now there should be a forceful social sanction against all of us individually for this, of the same sort as a sanction against behaving selfishly in any other context, or as one against certain criminal offenses.
Another principle is that, if a given technology CAN be used for good OR ill, then it WILL be used for good AND ill to the extent of its capacities, and probably in ways no one imagined. This is a rebuttal to the old saw that technology is neutral and that it just depends on how it is used, by whom, and for what purposes. Perhaps, but not likely, not yet in human history. Technology is always a bargain, not an unadulterated advance. It always destroys the past. This is what the Luddites objected to and what they should be most remembered for. We are all entitled to decide if it is a bargain we want to make.
@Grace Blakeley you might find this an interesting perspective, as it also points at the creation of a permanent underclass of 'input-workers' for AI, which does not get enough attention. And it highlights other arguments for the AI bubble.
I very much agree. “Artificial intelligence” is as much an abuse of the word intelligence as IQ has been. Now that many non-computer nerds understand what an LLM is, the AI boosters have transitioned to AGI, as if the word “general” will convince us AI is equivalent to human intelligence. In the short run, AI is not working for businesses that adopt it, is not eliminating its errors and growth limitations, and is meeting resistance in the effort to build big enough data centers to run what capabilities it has. This implies that unlike the internet, which is flexible in application and improvement, AI may eventually fail, taking down the segments of the economy dependent on its growth and circular financing. The real danger of AI is unintelligent actors like Pete Hegseth handing weapons control over to a faulty AI, with consequences like the double bombing of an Iranian girls’ school by an AI that mistook it for a legitimate target. Think of it this way: automobiles didn’t eliminate humankind, but in the hands of humans they are a leading cause of death and disability and have made pollution worse and cities less affordable. A harness maker in 1901 might have been right to warn us away from cars, but for the wrong reason.
RE: "the willingness to ask what intelligence is" and "to accept the answer." I appreciate your concern about placing AI activity in the same category with human mental activity (i.e., perhaps buying into replacing humans with computers) and I agree there may be value in alerting the public to the differences. I don't agree with overstating the differences as involving fundamentally different categories, specifically where production of effectively intelligent outputs is concerned. I think you agree that pattern matching is fundamental within human intelligence, too. Once again (i.e., similar to a recent AI-related post) you assert your premises instead of truly arguing for them. For example, you use Aristotle in some way, concerning connection of intelligence and moral/ethical ends. As you know better than I, Aristotle got a lot of things very wrong. LLMs are intelligent in the sense that they, in effect, grasp as ends what the questioner is asking about, and they produce and provide pertinent, impressive informational responses (as you also recognize).
The thing is, I'm not a reductionist. And I know most people in the academic world are. The Academy is very much inside a reductionist tradition. Completely in the STEM world. And in the humanities you can find islands of anti-reductionism. But in the STEM fields, people are completely reductionist and have reductionist theories of intelligence in their head. I was one of them for most of my life and I've now reasoned myself out of that view.
Not the first time I've felt your response is in the direction of ad hominem instead of responding to the substance of the suggestion in my otherwise supportive comment. The suggestion is to do better than to rely on Aristotle, in this instance. Yes, as I have mentioned previously, I am an academic. Social Psych PhD, which I imagine you can picture better than most. In 1977 I took a grad course on AI as such. In 1978, a month-long workshop at Yale (Roger Schank--obviously not LLMs at the time) using LISP on a PDP-10. I concluded at that time that computers were extremely far from exhibiting intelligence, so I got away from "AI" (!) and stuck with Psychology. Thus in this domain I arrive at my opinions reasonably informed and unencumbered by a priori motivation to reach an opinion. However, as they say, one word here is doing a lot of work: "reasonably."
It's not really an ad hominem. It's just a metaphysical disagreement about what intelligence is. I'm just acknowledging it. I could be proven wrong. My intuitions based on what I know simply point another way.
“The systems require expert humans to write the prompts, generate the responses, evaluate the outputs, label the failures“ only one of these is strictly true at the moment.
Still, even though replicating that fundamental human intuition or something beyond it sounds impossible, I wouldn't bet against it. If you're of the belief that human creativity is boundless, then all of this is just a matter of time. Tech marketing is always hyperbolic, but nevertheless it's undeniable this is a major step in the direction of intelligence, and we got here a lot faster than many could have predicted.
I am not just learned in philosophy, I am also very learned in computer science, and I have been spending some quality time with these technologies, and I feel confident that my conclusion is correct. I risk getting egg on my face by saying so. But I don't feel that egg is forthcoming.
Thank you. You explained what I intuitively felt but could not explain clearly.
What we did was merge American intelligence agencies with Silicon Valley. We have privatized American intelligence.
Brilliant. Thank you! I am glad to have "automatic translation" in my lexicon.
If someone like Elon Musk can manipulate it… it's merely a tool.