On 1/24/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Theoretically yes, but behind my comment was a deeper analysis (which I
> have posted before, I think) according to which it will actually be very
> difficult for a negative-outcome singularity to occur.
>
> I was really trying to make the point that a statement like "The
> singularity WILL end the human race" is completely ridiculous. There is
> no WILL about it.
Richard,

I'd be curious to hear your opinion of Omohundro's "The Basic AI Drives" paper at
http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf
(apparently a longer, more technical version is available at
http://selfawaresystems.files.wordpress.com/2008/01/nature_of_self_improving_ai.pdf ,
but I haven't read that one yet).

I found its arguments relatively convincing, and to me they imply that we do
indeed have to be /very/ careful not to build an AI that might end up
destroying humanity. (I'd thought that was the case before, but reading the
paper only reinforced my view...)

--
http://www.saunalahti.fi/~tspro1/ | http://xuenay.livejournal.com/
Organizations worth your time:
http://www.singinst.org/ | http://www.crnano.org/ | http://lifeboat.com/