On Sat, Aug 30, 2008 at 10:43 PM, Pei Wang <[EMAIL PROTECTED]> wrote:

> Ben,
>
> Since your paper is on "preservation of AI goal systems under repeated
> self-modification", I wonder whether you should address the following
> related issues:
>
> (1) Whether "goal drift" (I call "task alienation" in
> http://www.springer.com/west/home/computer/artificial?SGWID=4-147-22-173659733-0 )
> is always undesired --- your paper treats it as obviously bad.



It's not always undesirable ... but I think we should seek to avoid it where
**top-level goals** are concerned, in the context of creating AI systems more
powerful than ourselves

Goal drift among **subgoals** is just fine and can be a source of valued
creativity, of course ...
but goal drift among top-level goals seems less necessary

If a subgoal drifts, it can still be tested against the top-level goals, to
check whether it fulfills them or not

>
>
> (2) Whether it is possible to completely avoid it in a truly
> intelligent system --- you suggest one way to avoid it, without
> saying how much of the problem can be handled by this solution.
>

I don't pretend to know ...

Obviously, when we consider a superhuman AI system, there is irreducible
uncertainty... for instance, there is always the hypothesis of an alien
civilization that lies in wait, watching the universe quietly but then
contacting any intelligence whose IQ exceeds a certain level.  Then our AIs
may pass the threshold, get contacted, and subsequently be rewired by the
powerful aliens ;-)

This "aliens" example shows that in the face of an unknown environment of
complexity potentially vastly greater than our own, we can't expect any
guarantees,
and even solid probabilistic estimates are very hard to come by...

>
> (1) This phenomenon is a root of many valuable properties, including
> originality, creativity, and flexibility, and it explains many
> things, including art appreciation, aimless play, even scientific
> exploration. Without it, human beings would just be like other
> animals, driven only by their built-in biological goals.



Agreed ... but humans don't have a structured, top-down goal system
in the sense that a system like NM or OpenCog can have.  We can build
such goal systems in our minds and use them to partially govern our
behavior, but these run on top of our primordial biological goal
systems... whose goals are concrete rather than abstract...



>
> (2) It is impossible to completely avoid this phenomenon in a truly
> intelligent system, whether we like it or not. Your solution won't
> change the big picture, even though it may help in some special cases.



I agree, due to the irreducible complexity of the environment, as noted
above...

However, the big picture is VERY BIG in this context ... so helping things
along in the special cases in which we live, and are likely to live in the
near future, may well be valuable...

ben


