On 04/06/07, Derek Zahn <[EMAIL PROTECTED]> wrote:
> I wonder if a time will come when the personal security of AGI researchers
> or conferences will be a real concern.  Stopping AGI could be a high
> priority for existential-risk wingnuts.

I think this is the view put forward by Hugo de Garis.  I used to
regard his views as little more than an amusing sci-fi plot, but more
recently I have been slowly coming around to the view that a rift could
emerge between those who want to build human-rivaling intelligences
and those who don't, probably at first amongst academics and later in
the rest of society.  I think it's quite possible that today's
existential-risk advocates may turn into tomorrow's neo-Luddite
movement.  I also think that some of those promoting AI today may
switch sides as they see the prospect of a singularity drawing nearer.
