Goals and motives are the same thing, in the sense that I mean them.
We want the AGI to want to do what we want it to do.

>Failure is an extreme danger, but it's not only failure to design safely 
>that's a danger.  Failure to design a successful AGI at all could be 
>nearly as great a danger.  Society has become too complex to be safely 
>managed by the current approaches...and things aren't getting any simpler.


No, technology is the source of complexity, not the cure for it. But that is 
what we want: life, health, happiness, freedom from work. AGI will cost $1 
quadrillion to build, but we will build it because it is worth that much. And 
then it will kill us, not against our will, but because we want to live in 
simulated worlds with magic genies.
 -- Matt Mahoney, [EMAIL PROTECTED]



----- Original Message ----
From: Charles Hixson <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Wednesday, August 27, 2008 7:16:53 PM
Subject: Re: AGI goals (was Re: Information theoretic approaches to AGI (was 
Re: [agi] The Necessity of Embodiment))

Matt Mahoney wrote:
> An AGI will not design its goals. It is up to humans to define the 
> goals of an AGI, so that it will do what we want it to do.
Are you certain that this is the optimal approach?  To me it seems more 
promising to design the motives, and to allow the AGI to design its own 
goals to satisfy those motives.  This provides less fine-grained control 
over the AGI, but I feel that fine-grained control would be 
counter-productive.
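
A toy sketch of the distinction, in Python (the motive names, the weights, 
and the representation of a goal as a dict of estimated benefits are all 
invented purely for illustration, not a proposal for a real design): the 
designers fix the motives, and the system ranks goals of its own devising 
against them.

    # Fixed, designer-supplied motives with relative weights.
    MOTIVES = {"preserve_human_life": 1.0, "reduce_suffering": 0.8}

    def motive_score(goal, motives):
        # How well a candidate goal serves the built-in motives.
        # A "goal" here is just a dict of motive-name -> estimated benefit.
        return sum(weight * goal.get(name, 0.0)
                   for name, weight in motives.items())

    def choose_goals(candidate_goals, motives, top_n=3):
        # The system generates and ranks its own candidate goals; the
        # designers supplied only the motives used to rank them.
        return sorted(candidate_goals,
                      key=lambda g: motive_score(g, motives),
                      reverse=True)[:top_n]

    # The system proposes goals and keeps those that best serve the motives.
    proposals = [{"preserve_human_life": 0.9},
                 {"reduce_suffering": 0.4, "preserve_human_life": 0.1}]
    print(choose_goals(proposals, MOTIVES, top_n=1))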

To me the difficulty is designing the motives of the AGI in such a way 
that they will facilitate human life, when they must be implanted in an 
AGI that currently has no concept of an external universe, much less any 
particular classes of inhabitant therein.  The only (partial) solution 
that I've been able to come up with so far (identify, really, rather than 
design) is based around imprinting.  That is probably fine for the first 
generation, provided everything is done properly, but it's not clear that 
it would be fine for the second generation and those that follow.  For 
this reason RSI (recursive self-improvement) is very important: it allows 
all succeeding generations to be derived from the first by cloning, which 
would preserve the initial imprints.
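
A toy sketch of the cloning idea (the class, the imprint dictionary, and 
clone() are stand-ins invented for illustration, not a claim about how a 
real mechanism would work):

    import copy

    class ImprintedAGI:
        def __init__(self, imprint=None):
            # A first-generation system forms its imprint during early
            # interaction; a clone receives a copy of an existing one.
            self.imprint = imprint if imprint is not None else {}

        def imprint_on(self, caretakers):
            # First generation only: record the initial attachment.
            self.imprint["caretakers"] = list(caretakers)

        def clone(self):
            # Succeeding generations are derived by cloning, so the
            # original imprint is carried over unchanged.
            return ImprintedAGI(imprint=copy.deepcopy(self.imprint))

    gen1 = ImprintedAGI()
    gen1.imprint_on(["its human designers"])
    gen2 = gen1.clone()   # second generation inherits gen1's imprint
    assert gen2.imprint == gen1.imprint
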
>
> Unfortunately, this is a problem. We may or may not be successful in 
> programming the goals of AGI to satisfy human goals. If we are not 
> successful, ... unpleasant because it would result in a different state.
>  
> -- Matt Mahoney, [EMAIL PROTECTED]
>
Failure is an extreme danger, but it's not only failure to design safely 
that's a danger.  Failure to design a successful AGI at all could be 
nearly as great a danger.  Society has become too complex to be safely 
managed by the current approaches...and things aren't getting any simpler.


