Ever since AGI's formalization in AIXI, it has been obvious how to align it:

Construct the AGI's sequential decision theory (SDT) utility-function
component to call out to the human for the human's valuation of each
consequence.
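
To make that concrete, here is a minimal sketch of what "the utility
function calls out to the human" means.  It is Python, not AIXI -- no
Solomonoff induction, and the toy world model, action set, and two-step
horizon are placeholders of my own -- but the load-bearing part is that
every valuation of a predicted consequence is an external query to the
human rather than a hard-coded reward:

from itertools import product

ACTIONS = ["left", "right", "wait"]   # placeholder action set
HORIZON = 2                           # placeholder planning horizon

def predict(state, action):
    # Placeholder deterministic world model: state is just an integer.
    if action == "left":
        return state - 1
    if action == "right":
        return state + 1
    return state

_asked = {}   # remember answers so the human isn't asked twice per state

def human_valuation(consequence):
    # The alignment step: the utility of a consequence is whatever the
    # human says it is, obtained here through a console prompt.
    if consequence not in _asked:
        reply = input(f"Your valuation of ending in state {consequence}? ")
        try:
            _asked[consequence] = float(reply)
        except ValueError:
            _asked[consequence] = 0.0
    return _asked[consequence]

def best_action(state):
    # Enumerate action sequences, predict where each one lands, and score
    # that consequence with the human's valuation, not a built-in reward.
    best_first, best_score = None, float("-inf")
    for plan in product(ACTIONS, repeat=HORIZON):
        s = state
        for a in plan:
            s = predict(s, a)
        score = human_valuation(s)
        if score > best_score:
            best_first, best_score = plan[0], score
    return best_first

if __name__ == "__main__":
    print("Agent chooses:", best_action(state=0))

Even in this toy the human gets asked about every distinct predicted
outcome of a two-step plan -- which is exactly the micro-evaluation
labor I mention below -- but the construction itself is the obvious one.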

This is _so_ obvious -- and has been now for DECADES -- that it seems to me
the only reasonable explanation for the ongoing hysteria over "alignment"
is that people don't want to admit that what they're *really* afraid of is
other people.  And, yes, I know that one reason to be afraid of other
people is that they might remove themselves from the loop so as to avoid
the labor of continuous micro-evaluations of consequences -- but that's not
the framing of the hysteria over "alignment", is it?

It seems to me that the real reason for the hysteria is to avoid admitting
that the powers that be have done a horrible job of paying attention to the
consent of the governed.

Maybe it would help them to realize that all the insular contempt for
the consent of the governed that they have built up over at least the
last half century has not resulted in sniper rifles taking out the
cooling systems of the few dozen or so data centers.  At least not
*yet*.
