Hi James,

The approach you describe is pretty much what I said in:
https://arxiv.org/abs/1411.1373
Especially in Chapter 7.

I always find your posts interesting.

Bill

On Mon, 22 Jan 2024, James Bowery wrote:
Ever since AGI's formalization in AIXI, it has been obvious how to align it:
construct the AGI's SDT (Sequential Decision Theory) utility function component
to call out to the human for the human's valuation of a consequence.
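
A minimal sketch of that idea, not from the post itself: a toy sequential
decision loop where the value of each predicted consequence is obtained by
asking the human rather than from a built-in reward. The action set, the
toy_model() environment, and the ask_human() prompt are all illustrative
assumptions.

    # Sketch: human-in-the-loop valuation inside a sequential decision agent.
    # Everything here (ACTIONS, toy_model, ask_human) is hypothetical.

    ACTIONS = ["wait", "act"]

    def ask_human(consequence):
        """Defer the valuation of a consequence to the human in the loop."""
        answer = input(f"Value of consequence {consequence!r} (a number): ")
        try:
            return float(answer)
        except ValueError:
            return 0.0  # treat an unparseable answer as neutral

    def toy_model(state, action):
        """Hypothetical environment model: (consequence, probability,
        next_state) triples for a given state and action."""
        if action == "wait":
            return [("nothing changes", 1.0, state)]
        return [("task completed", 0.8, state + 1),
                ("side effect", 0.2, state)]

    def expected_utility(model, state, action, horizon):
        """Expectation over predicted consequences, each valued by the
        human instead of a fixed reward function."""
        if horizon == 0:
            return 0.0
        total = 0.0
        for consequence, prob, next_state in model(state, action):
            value = ask_human(consequence)  # human supplies U(consequence)
            future = max(expected_utility(model, next_state, a, horizon - 1)
                         for a in ACTIONS)
            total += prob * (value + future)
        return total

    if __name__ == "__main__":
        # Choose the action whose human-valued expected utility is highest.
        best = max(ACTIONS,
                   key=lambda a: expected_utility(toy_model, 0, a, 1))
        print("Chosen action:", best)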

This is _so_ obvious -- and has been now for DECADES -- that it seems to me the
only reasonable explanation for the ongoing hysteria over "alignment" is that
people don't want to admit that what they're really afraid of is other people. 
And, yes, I know that one reason to be afraid of other people is that they might
remove themselves from the loop so as to avoid the labor of continuous
micro-evaluations of consequences -- but that's not the framing of the hysteria
over "alignment", is it?

It seems to me that the real reason for the hysteria is to avoid admitting that
the powers that be have done a horrible job of paying attention to the consent
of the governed.

Maybe it would help them to realize that all their insular contempt for the
consent of the governed built up over at least the last half century has not
resulted in sniper rifles taking out the cooling systems of the few dozen or so
data centers.  At least not yet.



------------------------------------------
Artificial General Intelligence List: AGI
Permalink: https://agi.topicbox.com/groups/agi/Te8aae875ccd49383-M35da7b7b0a66a460577df7c0
Delivery options: https://agi.topicbox.com/groups/agi/subscription
