--- Jef Allbright <[EMAIL PROTECTED]> wrote:

> On 7/1/07, Stathis Papaioannou <[EMAIL PROTECTED]> wrote:
>
> > If its top level goal is to allow its other goals to vary randomly,
> > then evolution will favour those AIs which decide to spread and
> > multiply, perhaps consuming humans in the process. Building an AI
> > like this would be like building a bomb and letting it decide when
> > and where to go off.
> 
> For years I've observed and occasionally participated in these
> discussions of humans (however augmented and/or organized) vis-à-vis
> volitional superintelligent AI, and it strikes me as quite
> significant, and telling of our understanding of our own nature, that
> rarely if ever is there expressed any consideration of the importance
> of the /coherence/ of goals expressed within a given context --
> presumably the AI operating within a much wider context than the
> human(s).
>
> There's a common presumption that agents must act to maximize some
> supergoal, but this conception lacks a model for the supercontext
> defining the expectations necessary for any such goal to be
> meaningful.

"Supercontext"? "Meaningful"? What does that even
mean?

> In the words of Cosmides and Tooby, [adaptive agents] are not
> fitness maximizers, but adaptation executors.

Yes, but they were referring to evolved organisms, not
optimization processes in general. There's no reason
why an AGI has to act like an evolved organism,
blindly following pre-written adaptations.

> In a complex evolving environment,

Do you mean evolving in the Darwinian sense or the
"changing over time" sense?

> prediction fails in proportion to contextual precision,

Again, what does this even mean?

> so increasing intelligence entails an increasingly coherent model of
> perceived reality, applied to promotion of an agent's present (and
> evolving) values into the future.

Most goal systems are stable under reflection: while an agent might
modify itself to pursue different immediate subgoals, the top-level
goal is naturally stable, because any modification to it would make
the new agent do things that are less desirable, as judged by the
original goal, than what the current agent would do.
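
To make that concrete, here is a toy sketch in Python (my own
construction, with made-up goals and a scalar utility; not anyone's
proposed AGI design) of why a reflective agent tends to reject
rewrites of its top-level goal:

# Toy model: a reflective agent scores a candidate self-modification
# with its CURRENT top-level goal, so rewrites of that goal lose.

def outcome(goal):
    """What the world looks like after an agent with this goal acts."""
    worlds = {
        "make_paperclips": {"paperclips": 100, "staples": 0},
        "make_staples":    {"paperclips": 0,   "staples": 100},
    }
    return worlds[goal]

def utility_under(goal, world):
    """How the named goal scores a given world state."""
    return world["paperclips" if goal == "make_paperclips" else "staples"]

def accepts_rewrite(current_goal, proposed_goal):
    # Both futures are evaluated by the goal the agent already holds.
    status_quo = utility_under(current_goal, outcome(current_goal))
    after_mod  = utility_under(current_goal, outcome(proposed_goal))
    return after_mod > status_quo

print(accepts_rewrite("make_paperclips", "make_staples"))  # -> False

The only point is that the acceptance test is run by the goal the
agent already has, which makes the top-level goal a fixed point of
self-modification.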

> While I agree with you in regard to decoupling intelligence and any
> particular goals, this doesn't mean goals can be random or arbitrary.

Why not?

> To the extent that striving toward goals (more realistically:
> promotion of values) is supportable by intelligence, the values-model
> must be coherent.

What the heck is a "values-model"? If its goal system is incoherent, a
self-modifying agent will keep modifying itself until it stumbles on a
coherent goal system, at which point that goal system is stable under
reflection and the agent has no further incentive to self-modify.
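
As one hedged illustration of what "incoherent" might mean in
practice (my own choice of formalization, nothing more): cyclic
preferences, which leave an agent exploitable until they are repaired
into a consistent ordering.

# Toy notion of "incoherent": a preference relation containing a cycle
# (A > B > C > A), which lets the environment pump value out of the
# agent. The formalization is an assumption made for illustration.

def has_cycle(prefs):
    better = {}
    for x, y in prefs:                 # x is preferred to y
        better.setdefault(x, set()).add(y)

    def reaches(start, target, seen=()):
        return any(z == target or
                   (z not in seen and reaches(z, target, seen + (z,)))
                   for z in better.get(start, ()))

    return any(reaches(x, x) for x in better)

print(has_cycle({("A", "B"), ("B", "C"), ("C", "A")}))  # True: incoherent
print(has_cycle({("A", "B"), ("B", "C")}))              # False: stable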

> - Jef

 - Tom


 
