Re: [agi] Motivational Systems that are stable

2006-10-30 Thread James Ratcliff
So it looks like, to really create any kind of system like this, a separate black-box programming facility of some sort must be created, wherein we can 'hard-wire' the reward system apart from the main AI unit, so that the AI cannot in any way change it to reward itself. The
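A minimal sketch of the separation James describes, assuming a hypothetical SealedRewardModule; the class and method names are illustrative, not from any actual system, and Python's access controls are only advisory, so this models the intent rather than enforcing it:

    # Hypothetical sketch: reward logic sealed off from the agent.
    # The agent is handed an evaluation function, never the module
    # itself, so it holds no handle with which to rewrite its rewards.

    class SealedRewardModule:
        def __init__(self, weights):
            self._weights = dict(weights)  # private copy; no shared reference

        def evaluate(self, state):
            # Read-only scoring of a proposed world state.
            return sum(self._weights.get(k, 0.0) * v for k, v in state.items())

    class Agent:
        def __init__(self, evaluate):
            self._evaluate = evaluate  # bound method only, not the module

        def choose(self, candidate_states):
            return max(candidate_states, key=self._evaluate)

    rewards = SealedRewardModule({"task_done": 1.0, "self_reward": -1.0})
    agent = Agent(rewards.evaluate)

In a real design the seal would have to be enforced below the level the AI can reach (separate hardware or process), which is the difficulty the thread keeps circling.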

Re: [agi] Motivational Systems that are stable

2006-10-29 Thread Mark Waser
Although I understand, in vague terms, what idea Richard is attempting to express, I don't see why having "massive numbers of weak constraints" or "large numbers of connections from [the] motivational system to [the] thinking system" gives any more reason to believe it is reliably Friendly
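For scale, here is a toy calculation of what the "massive numbers of weak constraints" intuition amounts to (my framing, not Richard's): if each of n constraints independently fails to block a hostile action with probability p, the chance that every one fails at once is p**n, which collapses fast as n grows. Whether the independence assumption holds is precisely the kind of thing Mark is questioning:

    # Toy illustration, assuming independent constraint failures.
    def p_all_fail(p, n):
        """Probability that all n weak constraints fail simultaneously."""
        return p ** n

    for n in (1, 10, 100, 1000):
        print(f"n={n:4d}  p_all_fail={p_all_fail(0.5, n):.3e}")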

Re: [agi] Motivational Systems that are stable

2006-10-28 Thread James Ratcliff
I disagree that humans really have a "stable motivational system", or one would have to have a much more strict interpretation of that phrase. Overall, humans as a society have in general a stable system (discounting war, etc.), but as individuals, too many humans are unstable in many small if not

Re: [agi] Motivational Systems that are stable

2006-10-28 Thread Richard Loosemore
This is why I finished my essay with a request for comments based on an understanding of what I wrote. This is not a comment on my proposal, only a series of unsupported assertions that don't seem to hang together into any kind of argument. Richard Loosemore. Matt Mahoney wrote: My

Re: [agi] Motivational Systems that are stable

2006-10-28 Thread Matt Mahoney
- Original Message From: James Ratcliff [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Saturday, October 28, 2006 10:23:58 AM Subject: Re: [agi] Motivational Systems that are stable I disagree that humans really have a "stable motivational system" or one would have to have a much more strict

Re: [agi] Motivational Systems that are stable

2006-10-28 Thread Hank Conn
PROTECTED] To: agi@v2.listbox.com Sent: Saturday, October 28, 2006 10:23:58 AM Subject: Re: [agi] Motivational Systems that are stable I disagree that humans really have a "stable motivational system" or one would have to have a much more strict interpretation of that phrase. Overall, humans as a society have

Re: [agi] Motivational Systems that are stable

2006-10-28 Thread Richard Loosemore
Hank Conn wrote: Although I understand, in vague terms, what idea Richard is attempting to express, I don't see why having "massive numbers of weak constraints" or "large numbers of connections from [the] motivational system to [the] thinking system" gives any more reason to believe it is

Re: [agi] Motivational Systems that are stable

2006-10-27 Thread James Ratcliff
Richard, The problem with the entire presentation is that it is just too hopeful; there is NO guarantee whatsoever that the AI will respond in a nice fashion through any given set of interactions. First, you say a rather large number (how many are needed?) of motivations all competing at once for the

Re: [agi] Motivational Systems that are stable

2006-10-27 Thread Justin Foutts
I'm sure you guys have heard this before but... If AI will inevitably be created, is it not also inevitable that we will "enslave" the AI to do our bidding? And if both of these events are inevitable, it seems that we must accept that the Robot Rebellion and enslavement of humanity is ALSO

Re: [agi] Motivational Systems that are stable

2006-10-27 Thread Matt Mahoney
My comment on Richard Loosemore's proposal: we should not be confident in our ability to produce a stable motivational system. We observe that motivational systems are highly stable in animals (including humans). This is only because if an animal can manipulate its motivations in any way, then it

Re: [agi] Motivational Systems that are stable

2006-10-27 Thread Ben Goertzel
Richard, As I see it, in this long message you have given a conceptual sketch of an AI design including a motivational subsystem and a cognitive subsystem, connected via a complex network of continually adapting connections. You've discussed the way such a system can potentially build up a
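To make Ben's summary concrete, here is a toy numerical sketch of the shape of design he describes: a small motivational subsystem coupled to a cognitive subsystem through many small connection weights that adapt slowly. All names and numbers are illustrative assumptions, not anything taken from Loosemore's message:

    import random

    # Toy model: motivational and cognitive subsystems joined by a
    # dense set of small, slowly adapting connection weights.
    N_MOTIVES, N_CONCEPTS = 8, 32
    LEARNING_RATE = 0.01  # adaptation is deliberately slow

    # connections[m][c]: influence of motive m on concept c
    connections = [[random.uniform(-0.1, 0.1) for _ in range(N_CONCEPTS)]
                   for _ in range(N_MOTIVES)]

    def cognitive_bias(motive_activations):
        """Total pressure the motives exert on each concept."""
        return [sum(motive_activations[m] * connections[m][c]
                    for m in range(N_MOTIVES))
                for c in range(N_CONCEPTS)]

    def adapt(motive_activations, concept_feedback):
        """Nudge each connection slightly toward observed co-activity."""
        for m in range(N_MOTIVES):
            for c in range(N_CONCEPTS):
                connections[m][c] += (LEARNING_RATE
                                      * motive_activations[m]
                                      * concept_feedback[c])

    motives = [random.random() for _ in range(N_MOTIVES)]
    adapt(motives, cognitive_bias(motives))

The sketch only shows the wiring; whether the slow, diffuse character of such adaptation actually yields stability is the question the rest of the thread argues about.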

[agi] Motivational Systems that are stable

2006-10-25 Thread Richard Loosemore
Ben Goertzel wrote: Loosemore wrote: The motivational system of some types of AI (the types you would classify as tainted by complexity) can be made so reliable that the likelihood of them becoming unfriendly would be similar to the likelihood of the molecules of an Ideal Gas suddenly
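The snippet cuts off mid-analogy, but the scale of the statistical-mechanics estimate it invokes can be made concrete. For the textbook example of all N molecules of an ideal gas spontaneously occupying one half of their container (my choice of example, since the original sentence is truncated here):

    P = \left(\tfrac{1}{2}\right)^{N} = 2^{-N} \approx 10^{-1.8 \times 10^{23}}
    \quad \text{for } N \approx 6 \times 10^{23}

A number that small is, for all practical purposes, zero; that is the level of reliability Loosemore's claim is reaching for.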