15 Comments
Whit Blauvelt

Beware the assumption that machine memory is of a kind with human memory. You write "Memory is not consciousness." Machine memory, certainly not, any more than a photograph or phonograph is conscious. We know where machine memory is kept. Despite decades of research, how and where human memory resides has not been solved. There are serious proposals in physics that natural laws are not outside the universe, somehow containing it in their rules, but are evolved and evolving habits -- which implies memory of a sort embedded in very existence -- and speculation from there that human memory may be continuous with the universe's, an akashic record or collective unconscious.

Admittedly, weird stuff. The point, at minimum, is that we don't know. Human memory may be quite unlike computer memory. Human memory may be, in some way, even conscious.

Mike Brock

I think this is a good point. And I certainly didn't mean to overstate my case to imply the conclusion you appropriately guard against here.

Alan Farago

This is a very good analysis, Mike.

Unevieuxsac

“But if LLMs are translators that cannot and will not become conscious, then democratic deliberation remains epistemically necessary. Human judgment about values cannot be delegated. Expertise informs but democracy chooses. Consciousness experiencing value is irreplaceable.”

Great piece: beautiful, evocative writing about a technical subject, working through the logic and upholding the necessity of the human elements missing from the equation.

You do a great job working through to this conclusion. What you write is necessary as the hype becomes greater.

Thank you for your brain, your human brain which can work its way through the claims to the core of the subject, that choosing is based on values, human values.

1207

I love the Star Trek reference. I've often thought about Star Trek and the way information synthesis is used productively on that show, or in Black Panther. My question to you, as a technical expert, is: can each LLM be tweaked to "domain narrow" according to a guiding set of principles? For example, does Gemini "domain narrow" differently than Copilot or ChatGPT? Can the domain narrowing be influenced to produce more or less of certain principles? I know that there are people who say use ChatGPT for this, or use Gemini or Copilot for that, so I understand that functionally they have their strengths. But on the back end, do they have different principles? Forgive my limited technical vocabulary.

David L. Smith

‘Buried the lede.’

M Randall

Yes, well argued. My concern is that far too many people want their meaning to be given to them. Determining meaning, and bearing the weight of consciousness, used to be called Existentialism. A weight too heavy for most to bear. Hence the attraction of conspiracy and group identity, such as far-right fascism, far-left Marxism, or fundamentalist sects; all provide precooked meaning.

Irene Mclachlan

Thanks for your insights, Mike. They make a lot of sense.

Stephen Strum, MD, FACP

"That’s genuine pattern recognition across domains. What it can’t do: Feel the weight."

"LLMs are extraordinary at domain narrowing."

Why is the above so difficult for so many people, no matter their level of formal education?

I think it is because some people are mentally constipated. They cannot imagine. They cannot think outside the box. An earlier expression of AI was the use of artificial neural nets (ANNs), which I routinely employed in analyzing objective data for men presenting with prostate cancer. Many of my colleagues were simply unable to grasp the pattern recognition. The idea of putting in a patient's age, sex, lab values, and physical exam findings and deriving output, based on a training set, that showed that patient's individual risk of the cancer having already spread outside the prostate gland, and therefore needing a different approach to treatment ⇢ "that's impossible" or "You really can't believe that, right?"

That LLMs, with their huge databases, can articulate an approach to preventing cancer recurrence based on a gene variation obtained from the patient's blood: how can that be?

Jesus, if a person can scour the peer-reviewed literature and find this after 100 hours of work, do you not believe that an LLM can do this in a minute, and do it better? Of course it can.

What those naysayers don't get is that the LLM, at least in my experience so far, is objective. An example:

Patient: I manage my glucose health by checking insulin levels, fructosamine levels, HbA1c and random glucose.

MD SBS: Well, in 60 years of reading the peer-reviewed literature and caring for well over 1,000 patients, I find that using a continuous glucose monitoring (CGM) device, now made cheap (e.g., the Freestyle Libre System by Abbott, or Stello by ?), testing 2 hours after each meal for 3-4 days, coupled with a detailed food diary of what you ate at each meal, allows me to identify the culprits in your diet that cause excessive glucose levels that linger too long in your body and can do damage. This was nicely presented in the article by Zeevi et al. (for those interested, here is a link to the Zeevi paper: https://www.dropbox.com/scl/fi/2s8wj23w32iffi3axwejq/Zeevi-15-Personalized-glucometer-based-testing-CGM.pdf?rlkey=q1wex2uzmhn7o7qpq9i9g02te&dl=0).

Presenting the above to an AI like Gemini Pro gave a very fair and complete answer. Will it persuade the Patient? I don't know, since some people don't listen to any voice of experience but persist in following their own advice, despite their lack of experience.

Using AI all day long is not without problems. I killed 3 hours with Gemini Pro instructing me how to convert my Kindle books into PDFs, and the entire process failed and left me fully frustrated. But 95% of the time, I have been awed by the clarity and effectiveness of this particular AI model. It has already elevated my practice of medicine and made my life easier for just about everything I ask it (e.g., what's the right time of the year to plant potatoes?).

The important first step is using the AI that's right for you. I do not find Word's Copilot that helpful. I did not care for 7 other AIs that others may do well with. So much of what an AI is, like so much of what a person is, comes down to upbringing and education: how talented the programmers are, how virtuous, whether they listen to user feedback, whether they update their versions, and so on.

Ken Rose

The perfection of a political system isn’t its ability to be right all the time, but to be able to ADMIT ITS MISTAKES. When the MAGA idiots talk about America being a Constitutional Republic, I point out that IRAN is a Republic and has a constitution. A constitution that basically says to follow the Koran, but a constitution nonetheless. Perfect law handed down by God or inspired by Him is SHARIA, not our Constitution.

The Founding Fathers put checks and balances into the Constitution because they knew people had different values which needed to be deliberated and weighed against each other and compromises made. The amendment process admits that they might be in error in places altogether.

The Enlightenment was an effort to eliminate superstition and force and find a more natural order in the world based on reason that allowed for human freedom. Hobbes talks about the need for a Leviathan to preserve Order lest lower instincts bring Chaos. Locke argues for Freedom in order to prevent Tyranny. That is the primeval battle our nation was forged in. Right for Order, Left for Freedom.

Whose judgment is deemed superior is relative. I have to remind people that the man who wrote “All men are created equal” was also shtuping his dead wife’s half-sister, whom he OWNED.

Who would want a computer, no matter how smart, to decide what we should do? We have to WORK THINGS OUT FOR OURSELVES. That is the real human factor necessary. One person’s Freedom (for example, Peter Thiel’s) is, well, basically Serfdom for the rest of us. (Except Curtis Yarvin, of course.)

The problem with Classical Liberalism is choosing whose values are “higher,” who is more ethical. Who is “civilized” enough to vote or even HAVE a Democracy. Britain and France divvied up the Middle East with the promise of democracy but kept insisting they weren’t ready for it yet as Fascists and Communists schemed behind the scenes, luring them away from freedom. It’s still not ready.

Any pocket of Unfreedom we foster around the world will eventually come back to us. Any injustice within the system we tolerate, like Slavery or Genocide, will metastasize into a cancer that comes for our lives.

Daniel Pareja

AI is a twisted simulacrum of thought. It is a sick parody of life.

It's the smartass techbro's answer to the Turing Test.

Suzanne White

Thank you for again clearly and precisely laying out your thoughts. I have been playing with the idea that intelligence is not the same as wisdom.

It has always seemed curious to me that in the very patriarchal times of Ancient Greece, the deity figure for wisdom was Athena. She was supposed to have been born directly from the head of Zeus, wearing full armor. My fertile imagination has jumped to the idea that wisdom requires the female aspect of empathy to inform male strength; thanks for the wheels in my head that started spinning after reading your piece, Mike.

laparque

Can you offer a guess as to what percentage of AI execs remain self-deluded about the category error, and what percentage know exactly what they're doing, appointing themselves as the people who will make society's essential normative choices?

Carol Case

Thank you for this deeply important observation and explanation. You've put into words an underlying discomfort with these systems that is so difficult to articulate.