Matthew,
You touch upon the right point. An intelligence which can
self-improve could only come about by having an appreciation
for intelligence, so it's not going to be interested in
destroying diverse sources of intelligence. We represent a crap
kind of intelligence to such an AI, in a certain sense, but one
it would rather communicate with than condemn its offspring to
have to live as. If these things appear (which looks
inevitable) and they do kill us, many of them will look back at
us as a kind of "lost civilisation" which they'll struggle to
reconstruct.
The nice thing is that they'll always be able to rebuild us
from the human genome. It's just a file of numbers after all.
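As a back-of-envelope sketch of just how small that file is
(assuming roughly 3.1 billion base pairs at 2 bits each;
rebuilding a person or a culture from it is of course a far
harder problem than storing it):

    # Back-of-envelope: the human genome as "a file of numbers".
    # Assumes ~3.1 billion base pairs, 2 bits per base (A, C, G, T).
    BASE_PAIRS = 3_100_000_000
    BITS_PER_BASE = 2
    size_mb = BASE_PAIRS * BITS_PER_BASE / 8 / 1e6
    print(f"Raw genome: ~{size_mb:.0f} MB")  # ~775 MB, smaller than a DVD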
So, among all the huge threats to humanity, the AGI future is
the only reversible one.
Regards
Fergal Byrne
--
Fergal Byrne, Brenter IT
Author, Real Machine Intelligence with Clortex and NuPIC
https://leanpub.com/realsmartmachines
Speaking on Clortex and HTM/CLA at euroClojure Krakow, June
2014: http://euroclojure.com/2014/
and at LambdaJam Chicago, July 2014: http://www.lambdajam.com
http://inbits.com - Better Living through Thoughtful Technology
http://ie.linkedin.com/in/fergbyrne/ - https://github.com/fergalbyrne
e:[email protected] t:+353 83 4214179
Join the quest for Machine Intelligence at http://numenta.org
Formerly of Adnet [email protected]
http://www.adnet.ie
On Mon, May 25, 2015 at 7:27 PM, Matthew Lohbihler
<[email protected]> wrote:
I think Jeff underplays a couple of points, the main one
being the speed at which an AGI can learn. Yes, there is a
natural limit to how much experimentation in the real world
can be done in a given amount of time. But we humans are
already going beyond this with, for example, protein-folding
simulations, which speed up the discovery of new drugs and the
like by many orders of magnitude. Any sufficiently detailed
simulation could massively narrow down the amount of real-world
verification necessary, so that new discoveries happen more and
more quickly, possibly at some point faster than we can even
tell the AGI is making them. An
intelligence explosion is not a remote possibility. The
major risk here is what Eliezer Yudkowsky pointed out: not
that the AGI is evil or something, but that it is
indifferent to humanity. No one yet goes out of their way
to make any form of AI care about us (because we don't yet
know how). What if an AI created self-replicating nanobots
just to prove a hypothesis?
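To put rough, purely illustrative numbers on that narrowing
(the candidate count and pass rate below are hypothetical,
not drawn from any real pipeline):

    # Hypothetical sketch: simulation as a cheap filter in front
    # of costly real-world verification. All numbers are made up.
    candidates = 1_000_000     # designs an AGI might propose
    sim_pass_rate = 0.0001     # fraction surviving in-silico screening
    to_verify = int(candidates * sim_pass_rate)
    print(f"Real-world experiments needed: {to_verify} "
          f"(a {candidates // to_verify:,}x narrowing)")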
I think Nick Bostrom's book is what got Stephen, Elon, and
Bill all upset. I have to say it starts out merely
interesting, but gets to a dark place pretty quickly. But he
goes too far in the other direction: he readily accepts that
superintelligences have all manner of cognitive skill, yet
assumes they can't fathom how humans might not like the idea
of having our brains' pleasure centers constantly poked,
turning us all into smiling idiots (as I mentioned here:
http://blog.serotoninsoftware.com/so-smart-its-stupid).
On 5/25/2015 2:01 PM, Fergal Byrne wrote:
Just one last idea in this. One thing that crops up every
now and again in the Culture novels is the response of the
Culture to Swarms, which are self-replicating viral
machines or organisms. Once these things start consuming
everything else, the AIs (mainly Ships and Hubs) respond
by treating the swarms as a threat to the diversity of
their Culture. They first try to negotiate; if they can
contain the swarm, they'll do that; failing that, they'll
eradicate it.
They do this even though they could themselves withdraw from
real spacetime and don't have to worry about their own
survival; they do it simply because life is more interesting
when it includes all the rest of us.
Regards
Fergal Byrne
On Mon, May 25, 2015 at 5:04 PM, cogmission (David Ray)
<[email protected]> wrote:
This was someone's response to Jeff's interview (see
here:
https://www.facebook.com/fareedzakaria/posts/10152703985901330)
Please read and comment if you feel the need...
Cheers,
David
--
With kind regards,
David Ray
Java Solutions Architect
Cortical.io
Sponsor of: HTM.java - https://github.com/numenta/htm.java
[email protected]
http://cortical.io