--- Jef Allbright <[EMAIL PROTECTED]> wrote:

> On 7/2/07, Stathis Papaioannou <[EMAIL PROTECTED]> wrote:
> > On 02/07/07, Jef Allbright <[EMAIL PROTECTED]> wrote:
> >
> > > While I agree with you in regard to decoupling intelligence and
> > > any particular goals, this doesn't mean goals can be random or
> > > arbitrary. To the extent that striving toward goals (more
> > > realistically: promotion of values) is supportable by
> > > intelligence, the values-model must be coherent.
> >
> > I'm not sure what you mean by "coherent". If I make it my life's
> > work to collect seashells, because I want to have the world's
> > biggest seashell collection, how does that rate as a goal in terms
> > of arbitrariness and coherence?
> 
> To be meaningful (of course necessarily subjectively), a goal is an
> expected outcome of the effective expression of an agent's values.

I think we're getting terms mixed up here. By
"values", do you mean the "ends", the ultimate moral
objectives that the AGI has, things that the AGI
thinks are good across all possible situations? That's
what I've been meaning by "supergoals". A goal isn't
an "expected outcome" in the sense that it's what the
AGI thinks will happen; it's what the AGI wants to
happen, the target of the optimization.
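
In toy Python terms (my names, purely illustrative), the supergoal
shows up as the quantity being maximized, never as the forecast
itself:

    def predict_outcome(world, action):
        # What the AGI thinks WILL happen: a forecast, not a goal.
        return world + action

    def choose_action(world, actions, supergoal_utility):
        # The supergoal is the target of the optimization: it ranks
        # the forecasts; it isn't one of them.
        return max(actions,
                   key=lambda a: supergoal_utility(predict_outcome(world, a)))

    # e.g. a supergoal of "make the world-counter read 42":
    best = choose_action(world=40, actions=[-1, 0, 1, 2],
                         supergoal_utility=lambda s: -abs(s - 42))
    assert best == 2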

> Our values do not exist in isolation;

This applies to humans and other evolved creatures; it
need not apply to AGIs. An AGI can have the value (or
supergoal) of "turn the planet into cheesecake" in
total isolation.

> they represent a complex model of a desired state, driving our
> actions such that we affect our environment in the direction of
> reducing the difference between the perceived model and our values
> model.
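
If I follow, you're describing a control loop roughly like this toy
Python sketch (my own names, grossly simplified):

    desired = {"seashells": 100}      # the values model
    world   = {"seashells": 3}        # the actual environment

    def perceive(world):
        return dict(world)            # the perceived model

    def act(world, perceived, desired):
        # drive the environment toward the values model
        if perceived["seashells"] < desired["seashells"]:
            world["seashells"] += 1   # collect another shell

    for _ in range(200):              # wash, rinse, repeat
        act(world, perceive(world), desired)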

Er, it doesn't work like that, exactly. I, and I suspect everyone
else, would prefer a universe in which humanity is intact and the
other 99.99999999999% of the universe is arranged in the worst
possible way over a universe in which the Earth is a smoking pile of
ash and the other 99.99999999999% of the universe is arranged into
glorious, beautiful tapestries of light. If action were just a matter
of minimizing the overall difference between the perceived model and
the values model, the second universe should win on sheer volume of
matching; it doesn't, because some values dominate others outright.
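
To caricature my own preference as a utility function (the numbers
are made up, obviously), the survival term swamps everything else
rather than trading off against difference-reduction elsewhere:

    def toms_utility(humanity_intact, rest_of_universe_score):
        # rest_of_universe_score in [0.0, 1.0]:
        # 0.0 = worst possible arrangement,
        # 1.0 = glorious tapestries of light
        return (10**15 if humanity_intact else 0) + rest_of_universe_score

    assert toms_utility(True, 0.0) > toms_utility(False, 1.0)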

> Wash, rinse, and repeat continuously.
> 
> It can be apparent that a more coherent values matrix is more
> effectively realized; and a less coherent model tends to be at odds
> with itself.

Agreed.

> It's important to note that for any adaptive system -- I mean,
> organism

An AGI need not qualify as an "organism" in the
biological sense because it need not reproduce.

> -- while actions are always driven by the present model, the model
> continues to be updated as a result of selection for "what works."

Also agreed.

> A "goal" of having the world's biggest seashell
> collection may be seen
> more effectively as expression of a set of values,

In your terminology, consider an AGI that has a lone value, not
embedded in any context of other values: having the world's biggest
seashell collection. That's the case we're referring to.

> dominated perhaps in the case of a human primate by well-known
> evolved biases toward "more is better" and "special is better".

If a human decides to collect as many seashells as
possible, you can express it in terms of
back-of-the-mind evolutionary pressures. If an AGI
decides that, you can't express it in terms of
anything else because there isn't an "anything else"
to express it in terms of. The AGI simply wants
seashells, like a chess-playing AI wants to win.
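
In caricature (toy code, my own names), that AGI's entire motivation
system could be nothing more than:

    def seashell_utility(world_state):
        # The whole of the AGI's values: shells in its collection.
        return len(world_state["my_collection"])

No evolved biases underneath it, no wider web of values to appeal to
-- just this function.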

> To the extent that the overall "values-model" (of which these
> particular values are a part) is coherent, these values will tend to
> be effectively expressed.
> 
> - Jef
> 

 - Tom


 