10 Comments
T8

Clear and succinct. Love it!

But oh what a problem when wielded by the better-than bros. 🤖

joAn

Yes! This is really great. Having worked on the periphery of large data analysis since the early '80s and observed assessments based on it, I found the 'founder factor' bias palpable. It mirrors your very coherently laid-out Garbage In, Garbage Out analysis. Touché! And there are always some very interesting human stories to illustrate this point... colorfully, too :) Thanks, Mike!

LM

Very thought-provoking! Thanks for writing about this—so important, yet so under-examined.

A few paradoxes and complications arose for me as I read this:

- in economics, “utility” can supposedly be measured, but it’s undefined. It’s satisfaction, or happiness, or preferences, basically anything a consumer receives in a transaction that an economist can’t define. Is it meaningful to measure something you can’t define? I suppose the answer is yes, but it doesn’t seem straightforward.

- economists think of utility as accruing to the individual, with any collective utility being a sum of individual utilities (a toy sketch follows this list). Are you thinking democracy is our best way to aggregate individual utilities? Or is it something more? Or is it the process through which we need to regulate AI?

- it seems free will—defining your own values and preferences—is the foundation on which this discussion is built. I can only imagine how many angels the Curtis Yarvins of the world will conjure to dance on the head of this pin to determine whether we even need free will, or whether it even exists!
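
A toy sketch of that second point (all utility numbers invented; Python just for concreteness): summing utilities and counting votes are two different aggregation rules, and they can crown different winners whenever a minority cares intensely.

```python
# Toy example (every number made up): one voter with an intense
# preference for option A versus two voters who mildly prefer option B.
utilities = {
    "alice": {"A": 10, "B": 0},   # cares a lot, prefers A
    "bob":   {"A": 5,  "B": 6},   # mildly prefers B
    "carol": {"A": 5,  "B": 6},   # mildly prefers B
}
options = ["A", "B"]

# Utilitarian aggregation: collective utility = sum of individual utilities.
totals = {o: sum(u[o] for u in utilities.values()) for o in options}
utilitarian_winner = max(totals, key=totals.get)   # A: 20 vs B: 12 -> "A"

# Democratic aggregation: one person, one vote, each for their top option.
ballots = [max(u, key=u.get) for u in utilities.values()]
majority_winner = max(options, key=ballots.count)  # two ballots for B -> "B"

print(totals, utilitarian_winner, majority_winner)
# {'A': 20, 'B': 12} A B
```

The sum weights intensity and picks A; the vote counts heads and picks B. So "democracy as the aggregator" and "collective utility as a sum" are genuinely different questions.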

Whit Blauvelt

Free will even goes beyond "defining your own values." Consider artistic production. A poet who defines her values, then seeks to write a poem displaying those values as defined, produces poetry inferior to the poet who writes (in part) to discover her values. We do not love by first defining what to love; we discover our love as we live in this world. Love is often taken to be the highest human value; it's undefinable. So AI programmed to utilitarian values will leave out love, just as too much of our current utilitarian-by-design economic system does.

LM

Good point. Yes, the act of deciding plays a key role, if it isn't the entire point of preference (or utility, or whatever).

Carl A. Jensen

Why should it be surprising that there's no technological solution to the human tensions and anxieties around values, or that values issues can drive technology in excessive and self-defeating ways?

Whether you see the Tower of Babel as literal or metaphorical, it conveys a persuasive prognosis.

Stacy DePue

This helps, but my brain has a hard time grasping the “alignment problem.” I need to read more about it, because I just don’t get it.

susan chapin

Yet “free will,” as it interacts with democratic engagement, is so easily manipulated.

Brianna

So well written. I'm not so sure about the conclusion, though.

Alexander Kurz

I think what the alignment problem shows is that humans themselves are not aligned. After all, AI is trained on human-generated input. I'm not sure whether that is your point.