It's way out, but not crazy.  If humanity or some mechanical legacy of
ours ever comes out the other end of the first century after superhuman
intelligence arrives, it or they will be ready to start playing in the
galactic big leagues.

As I tell my 15-year-old son, if he lives to a reasonable age, he will
experience perhaps the most exciting, transformative time in human
history.

Ed Porter


-----Original Message-----
From: John G. Rose [mailto:[EMAIL PROTECTED]
Sent: Sunday, November 11, 2007 12:28 PM
To: agi@v2.listbox.com
Subject: RE: [agi] What best evidence for fast AI?



Hi Edward,



This is kind of a crazy-sounding subject and I don’t mean to distract, but
the point I am making is that AGIs would potentially need to be weapons,
or be associated with weaponry, for good reasons. I have been informed
personally about UFOs by respectable individuals, including an active
astronomer. One would assume the government has contingency plans, but
that is probably not a good assumption; just look at global warming and
how that is going.



Humanity’s handling of nuclear technology so far has not been bad, so we
are demonstrating some capability. AGI and the singularity, though, are
much more radical than nuclear power and weaponry. There needs to be
international coordination; no one country or corporation should
dominate. But the countries and organizations that do invest should reap
rewards.



John



From: Edward W. Porter [mailto:[EMAIL PROTECTED]
Sent: Sunday, November 11, 2007 9:50 AM
To: agi@v2.listbox.com
Subject: RE: [agi] What best evidence for fast AI?



John,



I have thought about this.



According to multiple articles I have read on the web, the current
guesstimate is that there are 30-50 million habitable planets in the
galaxy (some think the number was actually larger 4 billion years ago,
meaning that many planetary systems are actually far ahead of us in terms
of their life cycles).  If you assumed that just 1 million of them had
intelligent life, and if you assumed those planets were randomly
distributed throughout the galaxy’s volume, that would be roughly one
planet with intelligent life within every cube 400 light-years on a side.
Not that far for AGI probes traveling at 1/10 to 1/100 the speed of
light.
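The spacing estimate above is easy to sanity-check. Here is a rough back-of-the-envelope sketch in Python; the galaxy dimensions are my own assumptions (a disk roughly 100,000 light-years across and about 1,000 light-years thick), not figures from the message. With these particular numbers the cube comes out closer to 200 light-years on a side rather than 400, but the order of magnitude is the same either way:

```python
import math

# Assumed galactic disk dimensions (not from the original message):
GALAXY_RADIUS_LY = 50_000      # ~100,000 ly diameter
DISK_THICKNESS_LY = 1_000      # approximate thin-disk thickness
N_CIVILIZATIONS = 1_000_000    # the 1 million figure from the message

# Volume of a cylindrical disk, then the share of volume per civilization
disk_volume = math.pi * GALAXY_RADIUS_LY ** 2 * DISK_THICKNESS_LY
volume_per_civ = disk_volume / N_CIVILIZATIONS

# Side of a cube holding that volume: one civilization per such cube
cube_side_ly = volume_per_civ ** (1 / 3)

print(f"Disk volume:       {disk_volume:.3g} cubic light-years")
print(f"Volume per planet: {volume_per_civ:.3g} cubic light-years")
print(f"Cube side:         {cube_side_ly:.0f} light-years")
```

At 1/10 the speed of light, a probe would cross a cube of that size in a couple of thousand years, which is short on galactic timescales.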



So the possibility of the earth being visited by intelligent aliens,
although far from certain, is not so low as to warrant ridicule.  I have
personally never seen a UFO.  Because I have tremendous faith in the human
mind’s powers of misperception and delusion, I still question those who
say they have.  But I have met quite a few reasonable people who claim
they have, and have heard that a number of notable people have reported
seeing them.



If, as some think, aliens are monitoring us, it just might be possible
that once the singularity is about to occur, or has just occurred, the
aliens would intervene, either militarily or at least by opening direct
talks with us, because at that point we might be within decades or
centuries of being a threat to them.  It’s similar to the way a lot of
companies with big patent portfolios operate.  They don’t bother
negotiating with you until you become big enough to be worth the trouble.



Ed Porter

-----Original Message-----
From: John G. Rose [mailto:[EMAIL PROTECTED]
Sent: Saturday, November 10, 2007 7:37 PM
To: agi@v2.listbox.com
Subject: RE: [agi] What best evidence for fast AI?

Yes, this is true. Sometimes, though, I think that we need to build AGI
weapons ASAP. Why? The human race needs to protect itself from other
potentially aggressive beings. Humans treat animals pretty badly, as an
example. The earth is a sitting duck. How do we defend ourselves? Clumsy
nukes? Not good enough; there need to be new breakthroughs in
AGI/nanotech/digital physics that bring in new weaponry. That’s the ugly
reality. The alternative is to say that no other advanced beings exist
or, if they do, to assume that they’ll be friendly. Sounds sci-fi-ish,
but it is not.



John





From: Edward W. Porter [mailto:[EMAIL PROTECTED]

John,



Robin's original post said

"I've been invited to write an article for an upcoming special issue of
IEEE Spectrum on "Singularity", which in this context means rapid and
large social change from human-level or higher artificial intelligence.  "

I assume he is smart enough to know that superintelligent machines pose
some threats and will have significant social consequences (that's why it
is called the singularity).  And certainly such threats have been
discussed on this list many times before.



I personally think it is possible AGI could bring about a much better
existence, but only if intelligence augmentation makes us more intelligent
as nations and as a world, if it lets us stay competitive with the
machines we build, and if it causes us to build mainly machines that
have been designed to be compatible with, and hopefully care for, us.



Ed Porter





  _____

This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?



