Hi,
 
Because we use a lot of evolutionary learning methods, it will work more like:
 
A whole population of Novamentes (10 or so for starters, later perhaps many more) repeatedly try out new MindAgents (cognitive-control objects) on some test cognitive problems and see how well each one does.  Another Novamente, the controller, studies which of the new MindAgents work well, mines patterns among them, and creates new MindAgents to try out.... 
 
So there is no human in the learning loop....
 
Furthermore, it may be very hard for a human to understand the intricate details of a learned procedure (e.g. an automatically learned MindAgent)....  Just as understanding the details of our own adaptively learned neural wiring is very hard....
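 
In rough pseudo-code, the loop looks something like the sketch below. To be clear, this is just an illustrative Python sketch, not actual Novamente code; all of the names here (evaluate_on_test_problems, mine_patterns, generate_candidates, etc.) are made-up placeholders for the roles described above.
 
import random

# Hypothetical sketch of the evolutionary MindAgent-learning loop described
# above. A population of Novamente instances try out candidate MindAgents on
# test problems; a controller Novamente studies the strong performers, mines
# patterns among them, and proposes new candidates to try.

POPULATION_SIZE = 10          # "10 or so for starters"
GENERATIONS = 100
KEEP_TOP = 3                  # how many strong candidates the controller studies


def evaluate_on_test_problems(mind_agent):
    """Stand-in fitness function: run the candidate MindAgent on a fixed
    battery of test cognitive problems and return an aggregate score."""
    return sum(random.random() for _ in range(5))  # placeholder score


def mine_patterns(strong_candidates):
    """Controller step: extract whatever regularities the good candidates
    share (placeholder: just collect their parameter dictionaries)."""
    return [agent["params"] for agent in strong_candidates]


def generate_candidates(patterns, how_many):
    """Controller step: propose new MindAgents biased toward the mined
    patterns (placeholder: copy a pattern and perturb one parameter)."""
    new_agents = []
    for _ in range(how_many):
        params = dict(random.choice(patterns))
        key = random.choice(list(params))
        params[key] += random.gauss(0.0, 0.1)   # small random variation
        new_agents.append({"params": params, "score": None})
    return new_agents


# Initial random population of candidate MindAgents (parameter vectors here).
population = [{"params": {"a": random.random(), "b": random.random()},
               "score": None}
              for _ in range(POPULATION_SIZE)]

for generation in range(GENERATIONS):
    # Each Novamente in the population tries its candidate MindAgent.
    for agent in population:
        agent["score"] = evaluate_on_test_problems(agent)

    # The controller Novamente studies which candidates worked well...
    population.sort(key=lambda a: a["score"], reverse=True)
    strong = population[:KEEP_TOP]

    # ...mines patterns among them, and creates new MindAgents to try out.
    patterns = mine_patterns(strong)
    population = strong + generate_candidates(patterns, POPULATION_SIZE - KEEP_TOP)

Again, only a sketch: the real system would replace the placeholder scoring and pattern-mining with Novamente's own mechanisms.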
 
-- Ben
-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of deering
Sent: Monday, July 05, 2004 2:54 PM
To: [EMAIL PROTECTED]
Subject: Re: [agi] Teaching AI's to self-modify

Ben, I hope you are going to keep a human in the loop. 
 
 
Human in the loop scenario:
 
The alpha Novamente makes a suggestion about some change to its software.
The human implements the change on the beta Novamente running on a separate machine, and tests it.
If it seems to be an improvement, it is incorporated into the alpha Novamente.
 
 
Human not in the loop scenario:
 
The Novamente looks at its code.
The Novamente makes changes to its code, and reboots itself.
The Novamente looks at its code.
The Novamente makes changes to its code, and reboots itself.
The Novamente looks at its code.
The Novamente makes changes to its code, and reboots itself.
The humans wonder what the hell is going on.
 
 
Mike Deering.

