Derek,

This is how I responded to the comment from Don Detrich quoted below in
your email:



Admittedly there are many possible dangers with future AGI technology. We
can think of a million horror stories and in all probability some of the
problems that will crop up are things we didn’t anticipate. At this point
it is pure conjecture.



True, the threat is pure conjecture, if by that you mean reasoning without
proof.  But that is not the proper standard for judging threats.  If you
had lived your life disregarding all threats except those that came with
proof, you almost certainly would have died in early childhood.



 All new technologies have dangers, just like life in general. We can’t
know the kinds of personal problems and danger we will face in our future.




True, many other new technologies involve threats, and certainly among
them are nanotechnology and biotechnology, which have the potential for
severe threats.  But there is something particularly threatening about a
technology that can purposely try to outwit us, that, particularly if
networked, could easily be millions of times more intelligent than we are,
and that would be able to understand and hack the lesser computer
intelligences our lives depend on millions of times faster than any
current team of humans.  Just as it is hard to imagine a world in which
humans long stayed enslaved to cows, it is hard to imagine one in which
machines much brighter than we are stayed enslaved to us.



It should also be noted that the mere fact that there have not yet been
any major disasters in fields as new as biotechnology and nanotechnology
in no way means that all concern for such threats was or is foolish.  The
levees in New Orleans held for how many years before they proved
insufficient?


Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-----Original Message-----
From: Derek Zahn [mailto:[EMAIL PROTECTED]
Sent: Friday, September 28, 2007 5:45 PM
To: agi@v2.listbox.com
Subject: RE: [agi] HOW TO CREATE THE BUZZ THAT BRINGS THE BUCKS


Don Detrich writes:



AGI Will Be The Most Powerful Technology In Human History – In Fact, So
Powerful that it Threatens Us <<


Admittedly there are many possible dangers with future AGI technology. We
can think of a million horror stories and in all probability some of the
problems that will crop up are things we didn’t anticipate. At this point
it is pure conjecture. All new technologies have dangers, just like life
in general.



It'll be interesting to see if the "horror stories" about AGI follow the
same pattern as they did for nanotechnology... After many years and
dollars of real nanotechnology research, the simplistic vision of the
lone-wolf researcher stumbling on a runaway self-replicator that turns the
planet into gray goo became much more complicated and unlikely.  Plus, you
can only write about gray goo for so long before it gets boring.



Not to say that AGI is necessarily the same as nanotechnology in its
actual risks, or even that gray goo is less of an actual risk than writers
speculated, but it will be interesting to see if the scenario of a runaway
self-reprogramming AI becomes similarly passé.



  _____

This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?
