On Thu, Jun 12, 2008 at 6:44 AM, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote:
> If you have a fixed-priority utility function, you can't even THINK ABOUT the
> choice. Your pre-choice function will always say "Nope, that's bad" and
> you'll be unable to change. (This effect is intended in all the RSI stability
> arguments.)
>
> But people CAN make choices like this. To some extent it's the most important
> thing we do. So an AI that can't won't be fully human-level -- not a true
> AGI.
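To make the mechanism you describe concrete, here is a toy Python sketch (my
own illustration, not from your post; the names and numbers are made up) of a
pre-choice check that scores every proposed goal change with the current,
fixed utility function, so any change that lowers current utility gets the
"Nope, that's bad" treatment:

    # Toy agent with a fixed utility function that also judges
    # proposed modifications to its own goals.
    def current_utility(goals):
        # Toy utility: weight placed on the original top-priority goal.
        return goals.get("preserve_original_goal", 0.0)

    def consider_self_modification(current_goals, proposed_goals):
        # The pre-choice check uses the unmodified utility function,
        # so goal changes that score worse are always rejected.
        if current_utility(proposed_goals) < current_utility(current_goals):
            return current_goals   # proposal rejected; goals stay fixed
        return proposed_goals      # only utility-preserving changes pass

    goals = {"preserve_original_goal": 1.0}
    proposal = {"preserve_original_goal": 0.2, "new_interest": 0.8}
    print(consider_self_modification(goals, proposal))  # keeps original goals
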

Even though there is no general agreement on the definition of AGI, my
impression is that most community members understand that humans
demonstrate general intelligence, but that being "fully human-level" is
not necessarily required for "true AGI".
In some ways, it might even hurt problem-solving ability.

Regards,
Jiri Jelinek

