From chapter 7 of https://arxiv.org/abs/1411.1373

> Assume that each agent πd communicates with a single human via natural
> language and visually, so the set D enumerates surrogate agents and the
> humans with whom they communicate. The key is that, because they
> communicate with just one person and not with each other, none of the
> agents πd develops a model of human society and therefore cannot pose the
> same threat as posed by our imagined Omniscience AI.


Think of the personal AI as an extension of the neocortex, which apparently
evolved to generate world models as decision support for the deeper brain
structures that place values on projected consequences of actions.

Each of us "develops a model of human society" based on our limited data
and within the AIXI components this corresponds to the Solomonoff Induction
components job of creating the KC program of prior observations.  The SDT
component relies on this to predict consequences of actions that would,
under an "aligned" AGI, call out to the human's neocortex thence deeper
brain structures for utility valuation.
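To make that division of labor concrete, here is a minimal sketch of my own
(not code from Hibbard's paper): a single expectimax-style decision step in
the spirit of the SDT component, where the utility of each predicted
consequence is obtained by calling out to the human rather than being
hard-coded.  The "model" table stands in for what the Solomonoff Induction
component would supply; all names (choose_action, human_valuation, the toy
actions and outcomes) are hypothetical.

```python
# Hypothetical sketch: SDT-style action selection with a human-in-the-loop
# utility function.  The world model is an assumed table of transition
# probabilities; in AIXI proper it would come from Solomonoff Induction.

from typing import Callable, Dict, List, Tuple

# model[action] -> list of (probability, predicted_consequence) pairs.
Model = Dict[str, List[Tuple[float, str]]]


def choose_action(model: Model, human_valuation: Callable[[str], float]) -> str:
    """Return the action whose predicted consequences the human values most."""

    def expected_utility(action: str) -> float:
        # Probability-weighted sum of the human's stated valuations over the
        # consequences the world model predicts for this action.
        return sum(p * human_valuation(outcome) for p, outcome in model[action])

    return max(model, key=expected_utility)


if __name__ == "__main__":
    # Purely illustrative model and valuations; in practice human_valuation
    # would query the actual person -- the "call out" to the neocortex.
    toy_model: Model = {
        "ask_first": [(0.9, "human consents"), (0.1, "delay")],
        "act_alone": [(0.6, "task done"), (0.4, "human objects")],
    }
    stated = {"human consents": 1.0, "delay": -0.1,
              "task done": 0.8, "human objects": -1.0}
    print(choose_action(toy_model, lambda c: stated[c]))
```

Running this picks "ask_first", because the human's stated valuations
dominate the choice; the only point of the sketch is that the utility
function is a callback to the human, not something the agent owns.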

Inescapably, the real problem is the conflict between individuals and
whatever we think of as their "agency".  Any attempt to hide this conflict
between individuals by having some elite group design in conflict-resolution
protocols is just going to get the data centers shut down.

On Mon, Jan 22, 2024 at 2:33 PM Bill Hibbard via AGI <agi@agi.topicbox.com>
wrote:

> Hi James,
>
> The approach you describe is pretty much what I said in:
> https://arxiv.org/abs/1411.1373
> Especially in Chapter 7.
>
> I always find your posts interesting.
>
> Bill
>
> On Mon, 22 Jan 2024, James Bowery wrote:
> > Ever since AGI's formalization in AIXI it has been obvious how to align
> > it: Construct the AGI's SDT utility function component (Sequential
> > Decision Theory) to call out to the human for the human's valuation of a
> > consequence.
> >
> > This is _so_ obvious -- and has been now for DECADES -- that it seems to
> > me the only reasonable explanation for the ongoing hysteria over
> > "alignment" is that people don't want to admit that what they're really
> > afraid of is other people. And, yes, I know that one reason to be afraid
> > of other people is that they might remove themselves from the loop so as
> > to avoid the labor of continuous micro-evaluations of consequences -- but
> > that's not the framing of the hysteria over "alignment" is it?
> >
> > It seems to me that the real reason for the hysteria is to avoid
> > admitting that the powers that be have done a horrible job of paying
> > attention to the consent of the governed.
> >
> > Maybe it would help them to realize that all their insular contempt for
> > the consent of the governed built up over at least the last half century
> > has not resulted in sniper rifles taking out the cooling systems of the
> > few dozen or so data centers.  At least not yet.

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Te8aae875ccd49383-M7b976749d3717282c643b1da
Delivery options: https://agi.topicbox.com/groups/agi/subscription
