[agi] Re: [singularity] Motivational Systems that are stable

2006-10-27 Thread Richard Loosemore
Curious. A couple of days ago, I responded to demands that I produce arguments to justify the conclusion that there were ways to build a friendly AI that was extremely stable and trustworthy, but without having to give a mathematical proof of its friendliness. Now, granted, the text was com

Re: [agi] Information Learning Systems

2006-10-27 Thread James Ratcliff
That is great, yeah that is definitely a great way to elicit a large amount of information from the masses of the world.  With a small amount of planning, you can make a fun "smart" game that will allow you to pick up innumerable facts about the world.  One thing I would require there is a strict s
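[A minimal sketch of the kind of fact-eliciting "smart" game described above, under the assumption that it amounts to a prompt-and-agree design: players answer short prompts, and an answer is only kept as a candidate fact once several independent players agree on it. The class, threshold, and sample prompt are illustrative, not taken from the original post.]

# Hypothetical sketch of a fact-eliciting game with crude quality control.
from collections import defaultdict

AGREEMENT_THRESHOLD = 3  # illustrative value, not from the post

class FactGame:
    def __init__(self):
        # prompt -> answer -> set of player ids who gave that answer
        self.votes = defaultdict(lambda: defaultdict(set))
        self.facts = []  # accepted (prompt, answer) pairs

    def submit(self, player_id, prompt, answer):
        answer = answer.strip().lower()
        self.votes[prompt][answer].add(player_id)
        # Accept an answer as a candidate fact once enough players agree.
        if len(self.votes[prompt][answer]) >= AGREEMENT_THRESHOLD:
            fact = (prompt, answer)
            if fact not in self.facts:
                self.facts.append(fact)
        # Return the vote count as simple score feedback to the player.
        return len(self.votes[prompt][answer])

game = FactGame()
for player in ("p1", "p2", "p3"):
    game.submit(player, "What is a hammer used for?", "driving nails")
print(game.facts)  # [('What is a hammer used for?', 'driving nails')]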

Re: [agi] Motivational Systems that are stable

2006-10-27 Thread James Ratcliff
Richard,  The problem with the entire presentation is that it is just too hopeful: there is NO guarantee whatsoever that the AI will respond in a nice fashion through any given set of interactions. First, you say a rather large number (how many are needed?) of motivations all competing at once for the

Re: [agi] Motivational Systems that are stable

2006-10-27 Thread Justin Foutts
I'm sure you guys have heard this before but...  If AI will inevitably be created, is it not also inevitable that we will "enslave" the AI to do our bidding?   And if both of these events are inevitable, it seems that we must accept that the Robot Rebellion and enslavement of humanity is ALSO inevit

Re: [agi] Motivational Systems that are stable

2006-10-27 Thread Matt Mahoney
My comment on Richard Loosemore's proposal: we should not be confident in our ability to produce a stable motivational system.  We observe that motivational systems are highly stable in animals (including humans).  This is only because if an animal can manipulate its motivations in any way, then it

Re: [agi] Information Learning Systems

2006-10-27 Thread Mike Dougherty
On 10/27/06, James Ratcliff <[EMAIL PROTECTED]> wrote: I am working on another piece now that will scan through news articles and pull small bits of information out of them, such as:  Iran's nuclear program is only aimed at generating power. The process of uranium enrichment can be used to generate
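[A minimal sketch of the news-scanning extractor described above, assuming it amounts to pulling short declarative sentences that mention a topic of interest out of article text. The naive regex sentence splitter, keyword filter, and sample article text are illustrative stand-ins, not the actual implementation.]

# Hypothetical sketch of pulling small factual statements out of news text.
import re

def split_sentences(text):
    # Naive split on sentence-ending punctuation followed by whitespace.
    return [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]

def extract_statements(article_text, topic_keywords):
    """Return short declarative sentences that mention any topic keyword."""
    statements = []
    for sentence in split_sentences(article_text):
        lowered = sentence.lower()
        if any(kw in lowered for kw in topic_keywords) and len(sentence.split()) <= 25:
            statements.append(sentence)
    return statements

# Sample text: first sentence taken from the post above, the rest is filler.
article = ("Iran's nuclear program is only aimed at generating power. "
           "Officials met in Vienna on Friday to discuss inspections. "
           "The talks are expected to continue next month.")
print(extract_statements(article, ["nuclear", "uranium"]))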

Re: [agi] Motivational Systems that are stable

2006-10-27 Thread Ben Goertzel
Richard, As I see it, in this long message you have given a conceptual sketch of an AI design including a motivational subsystem and a cognitive subsystem, connected via a complex network of continually adapting connections. You've discussed the way such a system can potentially build up a self-