Re: [agi] Why isn't this the obvious approach to "alignment"?

2024-01-27 Thread Matt Mahoney
The alignment problem has to address two threats: AI controlled by people
and AI not controlled by people. Most of our attention has gone to the
second type, even though it is a century away at the current rate of Moore's
law. Self-replicating nanotechnology will become a threat when its computing
capacity exceeds that of DNA-based life. That is possible because plants
currently convert only about 0.3% of available sunlight (90,000 terawatts)
into carbohydrates (210 billion tons of carbon per year, or 20% of the
biosphere, at 4 kcal/g), while solar panels already achieve 20-30% efficiency.
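
A rough sanity check on that 0.3% figure (an illustrative calculation only,
assuming the fixed carbon is stored as carbohydrate, CH2O at 30 g/mol versus
12 g/mol for carbon alone):

# back-of-the-envelope photosynthetic efficiency; all numbers are from the
# paragraph above except the CH2O assumption
seconds_per_year = 3.156e7
sunlight_watts = 9.0e16                        # 90,000 terawatts
carbon_g_per_year = 2.1e17                     # 210 billion metric tons of carbon
carb_g_per_year = carbon_g_per_year * 30 / 12  # carbon mass -> carbohydrate mass
energy_j_per_year = carb_g_per_year * 4 * 4184 # 4 kcal/g, 4184 J per kcal
efficiency = energy_j_per_year / (sunlight_watts * seconds_per_year)
print(f"photosynthetic efficiency ~ {efficiency:.2%}")  # ~0.31%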

Assuming that global computing capacity doubles every 2 years, it will take
about a century for the current 10^24 bits of storage capacity to match the
10^37 bits stored in all the world's DNA. We are also far below biology's
throughput of 10^29 DNA copy and 10^31 amino acid transcription operations
per second.
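
The century figure follows directly from that doubling assumption; a quick
check (my arithmetic, same numbers as above):

import math
doublings = math.log2(1e37 / 1e24)  # about 43.2 doublings needed
years = 2 * doublings               # about 86 years at one doubling per 2 years
print(f"{doublings:.1f} doublings, roughly {years:.0f} years")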

The kind of AI that we need to worry about now is the kind that gives us
everything we want, or at least everything that the owners of the AI want.
When your work no longer has value because machines can do it better, your
only sources of income will be the AI that you own, your personal
information (for training AI), and government assistance. Your personal
information has value only in proportion to your buying power, which widens
the power-law distribution of wealth that is necessary to make an economy
work. It takes money to make money.

Income redistribution through taxes and benefits only solves part of the
problem. When you don't need other people, they don't need you either, or
even know or care that you exist. When it is easier, safer, and more
convenient to live alone in our private virtual worlds, we stop having
children and lose our ability to communicate with other people even if we
want to. We are evolving in the short term toward a mostly African and
Muslim population, and in the longer term toward a population that rejects
technology, birth control, and women's rights, provided we don't go extinct
first. That will slow down Moore's law before we have to worry about the
other type of AI.





--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Te8aae875ccd49383-M71b3c193d1ee58acd4bef862
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Why isn't this the obvious approach to "alignment"?

2024-01-22 Thread James Bowery
From chapter 7 of https://arxiv.org/abs/1411.1373

> Assume that each agent πd communicates with a single human via natural
> language and visually, so the set D enumerates surrogate agents and the
> humans with whom they communicate. The key is that, because they
> communicate with just one person and not with each other, none of the
> agents πd develops a model of human society and therefore cannot pose the
> same threat as posed by our imagined Omniscience AI.


Think of the personal AI as an extension of the neocortex, which apparently
evolved to generate world models as decision support for the deeper brain
structures that place values on the projected consequences of actions.

Each of us "develops a model of human society" based on our limited data
and within the AIXI components this corresponds to the Solomonoff Induction
components job of creating the KC program of prior observations.  The SDT
component relies on this to predict consequences of actions that would,
under an "aligned" AGI, call out to the human's neocortex thence deeper
brain structures for utility valuation.
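
As an illustration only (my own sketch, not a construction from Hibbard's
paper), the "call out to the human" idea amounts to an SDT-style decision
rule whose utility function is simply a query to the person; world_model
below stands in for the Solomonoff Induction component, and its predict()
interface is hypothetical:

def ask_human(consequence):
    # The human's neocortex and deeper brain structures assign the value.
    return float(input(f"How much do you value {consequence!r}? (-1 to 1): "))

def choose_action(world_model, history, actions):
    # Pick the action whose predicted consequences the human values most.
    # world_model.predict(history, action) -> [(consequence, probability), ...]
    def expected_value(action):
        return sum(p * ask_human(c)
                   for c, p in world_model.predict(history, action))
    return max(actions, key=expected_value)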

Inescapably, the real problem is the conflict between individuals and
whatever we think of as their "agency". Any attempt to hide that conflict
between individuals by having some elite group design in conflict-resolution
protocols is just going to get the data centers shut down.

On Mon, Jan 22, 2024 at 2:33 PM Bill Hibbard via AGI wrote:

> Hi James,
>
> The approach you describe is pretty much what I said in:
> https://arxiv.org/abs/1411.1373
> Especially in Chapter 7.
>
> I always find your posts interesting.
>
> Bill
>
> On Mon, 22 Jan 2024, James Bowery wrote:
> > Ever since AGI's formalization in AIXI it has been obvious how to align it:
> > Construct the AGI's SDT utility function component (Sequential Decision
> > Theory) to call out to the human for the human's valuation of a consequence.
> >
> > This is _so_ obvious -- and has been now for DECADES -- that it seems to me
> > the only reasonable explanation for the ongoing hysteria over "alignment" is
> > that people don't want to admit that what they're really afraid of is other
> > people. And, yes, I know that one reason to be afraid of other people is
> > that they might remove themselves from the loop so as to avoid the labor of
> > continuous micro-evaluations of consequences -- but that's not the framing
> > of the hysteria over "alignment" is it?
> >
> > It seems to me that the real reason for the hysteria is to avoid admitting
> > that the powers that be have done a horrible job of paying attention to the
> > consent of the governed.
> >
> > Maybe it would help them to realize that all their insular contempt for the
> > consent of the governed built up over at least the last half century has not
> > resulted in sniper rifles taking out the cooling systems of the few dozen or
> > so data centers.  At least not yet.
> >
> > Artificial General Intelligence List / AGI / see discussions + participants
> > + delivery options Permalink
> >
> --
> Artificial General Intelligence List: AGI
> Permalink:
> https://agi.topicbox.com/groups/agi/Te8aae875ccd49383-M35da7b7b0a66a460577df7c0
> Delivery options: https://agi.topicbox.com/groups/agi/subscription
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Te8aae875ccd49383-M7b976749d3717282c643b1da
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Why isn't this the obvious approach to "alignment"?

2024-01-22 Thread Bill Hibbard via AGI

Hi James,

The approach you describe is pretty much what I said in:
https://arxiv.org/abs/1411.1373
Especially in Chapter 7.

I always find your posts interesting.

Bill

On Mon, 22 Jan 2024, James Bowery wrote:

Ever since AGI's formalization in AIXI it has been obvious how to align it:
Construct the AGI's SDT utility function component (Sequential Decision
Theory) to call out to the human for the human's valuation of a consequence.

This is _so_ obvious -- and has been now for DECADES -- that it seems to me the
only reasonable explanation for the ongoing hysteria over "alignment" is that
people don't want to admit that what they're really afraid of is other people. 
And, yes, I know that one reason to be afraid of other people is that they might
remove themselves from the loop so as to avoid the labor of continuous
micro-evaluations of consequences -- but that's not the framing of the hysteria
over "alignment" is it?

It seems to me that the real reason for the hysteria is to avoid admitting that
the powers that be have done a horrible job of paying attention to the consent of
the governed.

Maybe it would help them to realize that all their insular contempt for the
consent of the governed built up over at least the last half century has not
resulted in sniper rifles taking out the cooling systems of the few dozen or so
data centers.  At least not yet.

Artificial General Intelligence List / AGI / see discussions + participants +
delivery options Permalink



--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Te8aae875ccd49383-M35da7b7b0a66a460577df7c0
Delivery options: https://agi.topicbox.com/groups/agi/subscription