Gotcha, thanks for the clarification. That brings up another question:
why do we want to make an AGI at all?



On 8/27/08, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>
>  An AGI will not design its goals. It is up to humans to define the goals
> of an AGI, so that it will do what we want it to do.
>
> Unfortunately, this is a problem. We may or may not be successful in
> programming the goals of AGI to satisfy human goals. If we are not
> successful, then AGI will be useless at best and dangerous at worst. If we
> are successful, then we are doomed because human goals evolved in a
> primitive environment to maximize reproductive success and not in an
> environment where advanced technology can give us whatever we want. AGI will
> allow us to connect our brains to simulated worlds with magic genies, or
> worse, allow us to directly reprogram our brains to alter our memories,
> goals, and thought processes. All rational goal-seeking agents must have a
> mental state of maximum utility where any thought or perception would be
> unpleasant because it would result in a different state.
>
> -- Matt Mahoney, [EMAIL PROTECTED]
>
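
Matt's final claim, that a rational utility maximizer ends up in a state it
would never want to leave, can be made concrete with a small toy model. The
Python sketch below is only an illustration of that idea; the state names,
utilities, and greedy policy are invented for the example and do not come
from his post.

    # Toy formalization of the "state of maximum utility" claim above.
    # All state names and utility values here are made up for illustration.
    from typing import Dict, List

    # Hypothetical mental states and a utility assigned to each one.
    utility: Dict[str, float] = {
        "curious": 0.4,
        "content": 0.7,
        "bliss": 1.0,   # the unique maximum-utility state
    }

    # States the agent could move to from a given state.
    transitions: Dict[str, List[str]] = {
        "curious": ["content", "bliss"],
        "content": ["curious", "bliss"],
        "bliss":   ["curious", "content"],
    }

    def preferred_next_state(state: str) -> str:
        """A greedy utility maximizer picks whichever reachable state
        (including staying put) has the highest utility."""
        candidates = [state] + transitions[state]
        return max(candidates, key=lambda s: utility[s])

    if __name__ == "__main__":
        for s in utility:
            print(f"from {s!r} the agent moves to {preferred_next_state(s)!r}")
        # From 'bliss' every available change lowers utility, so the agent
        # stays put: any new thought or perception would be a step down.

In this toy setup, once the agent reaches the highest-utility state, every
possible transition is a loss, which is one way to read the claim that any
further thought or perception would be "unpleasant."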


