On 31/05/2007, at 2:37 PM, Benjamin Goertzel wrote:
Eliezer considers my approach too risky in terms of the odds of accidentally creating a nasty AI; I consider his approach to have an overly high risk of delaying the advent of Friendly AI so long that some other nasty danger wrecks humanity before it happens...
Whereas I think both of you are nuts, and that human augmentation will happen long before we need to worry about AIs. :)
Whatever computers can do, they can be plugged into people's brains. Your "guaranteed Friendly AI" is going to be... people!
(before they're processed into Soylent Green, that is! :)

Steve.
-- [EMAIL PROTECTED]

----- This list is sponsored by AGIRI: http://www.agiri.org/email
