>  The three most common of these assumptions are:
>
>    1) That it will have the same motivations as humans, but with a
>  tendency toward the worst that we show.
>
>    2) That it will have some kind of "Gotta Optimize My Utility
>  Function" motivation.
>
>    3) That it will have an intrinsic urge to increase the power of its
>  own computational machinery.
>
>  There are other assumptions, but these seem to be the big three.

And IMO, the truth is likely to be more complex...

For instance, a Novamente-based AGI will have an explicit utility
function, but only a percentage of the system's activity will be
directly oriented toward fulfilling this utility function.

Some of the system's activity will be "spontaneous" ... i.e., only
implicitly goal-oriented ... and as such may involve some imitation
of human motivation, and plenty of radically non-human stuff...
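
(To make that split concrete, here's a toy Python sketch of the kind
of architecture I have in mind. Everything in it -- the 60/40 ratio,
the action names, the utility() stand-in -- is purely illustrative,
my own invention for this email, not actual Novamente code.)

import random

# Toy illustration: system activity split between explicit
# utility-driven action selection and "spontaneous" processing
# that is only implicitly goal-oriented. All names and numbers
# here are hypothetical.

EXPLICIT_FRACTION = 0.6  # assumed share of cycles spent directly
                         # on the explicit utility function

ACTIONS = {"study": 0.9, "explore": 0.5, "rest": 0.3}

def utility(action):
    """Stand-in for the system's explicit utility function."""
    return ACTIONS[action]

def explicit_step():
    # Directly goal-oriented: choose the action maximizing utility.
    return "explicit: " + max(ACTIONS, key=utility)

def spontaneous_step():
    # Only implicitly goal-oriented: not selected via the utility
    # function, e.g. imitation of humans or free association.
    return "spontaneous: " + random.choice(
        ["imitate_human", "daydream", "recombine_concepts"])

def run(cycles=10):
    for _ in range(cycles):
        if random.random() < EXPLICIT_FRACTION:
            print(explicit_step())
        else:
            print(spontaneous_step())

if __name__ == "__main__":
    run()

The point of the sketch is just that the utility function governs
only part of the control flow; the spontaneous branch runs outside
it entirely, rather than being a noisy approximation of it.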

ben g
