On 10/29/07, Mike Tintner <[EMAIL PROTECTED]> wrote:
> What's that got to do with superAGI's? This: the whole idea of a superAGI
> "taking off" rests on the assumption that the problems we face in life are
> soluble if only we - or superAGI's - have more brainpower.
Umm, no? The idea of a takeoff only rests on the assumption that the AGI
in question is superintelligent /relative to humans/. I don't think anybody
has ever seriously claimed that AGIs could perfectly solve /every/ problem
in the universe - only that they have the potential to be vastly smarter
than humans.

--
http://www.saunalahti.fi/~tspro1/ | http://xuenay.livejournal.com/
Organizations worth your time:
http://www.singinst.org/ | http://www.crnano.org/ | http://lifeboat.com/

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=58700752-9c027f