Hi David,



It's also not a good idea to jump all over people who are just trying to get 
their heads around something and get it wrong (I know this because I do it too 
often myself). If you're trying to help someone understand something, 
attacking their current understanding is almost always the wrong way to begin. 
It just makes them defensive.




This does not apply to people who claim to know plenty about a subject. So, for 
example, when I claim to KNOW that prediction MUST precede recognition, I'm 
actually inviting attacks on that idea. I want to be certain in a scientific 
sense, and the best way is to invite smart people to try to prove me wrong. 
The "worst case" scenario is that my original idea is proven wrong, but now we 
know why. That always leads to something better.




Now, when asked again and again about these fears coming from highly regarded 
authorities in other fields, Jeff understandably treats the question in a 
certain way. Instead of discussing "how" we can do this (Jeff's lifelong 
quest), the question becomes a completely different one: what will happen if 
we do it and mess up the deployment? It is an important question, but it's not 
what Jeff is interested in. It's hard enough to get anywhere near there, and 
it's even harder to make a smart computer dangerous. It's much easier to give 
dangerous powers to bad people than it is to build a smart computer, make it 
bad, and then give it the power to be dangerous. So in a certain sense it's a 
really stupid question. People who are scared of the possibility of dangerous 
AI should spend their time figuring out how to keep power away from bad 
intelligences, regardless of their substrate.




Clearly, as history shows, we're much worse at this than we are at developing 
potentially destructive technologies. AIs are hardly likely to make that less 
true.




(I wrote this before the video was linked, so I'll send this and then watch 
the video - on my phone, which can't do both).




Cheers




Fergal



--

Fergal Byrne, Brenter IT

Author, Real Machine Intelligence with Clortex and NuPIC 
https://leanpub.com/realsmartmachines

Speaking on Clortex and HTM/CLA at euroClojure Krakow, June 2014: 
http://euroclojure.com/2014/
and at LambdaJam Chicago, July 2014: http://www.lambdajam.com

http://inbits.com - Better Living through Thoughtful Technology
http://ie.linkedin.com/in/fergbyrne/ - https://github.com/fergalbyrne

e:[email protected] t:+353 83 4214179
Join the quest for Machine Intelligence at http://numenta.org
Formerly of Adnet [email protected] http://www.adnet.ie

On Mon, May 25, 2015 at 5:04 PM, cogmission (David Ray)
<[email protected]> wrote:

> This was someone's response to Jeff's interview (see here:
> https://www.facebook.com/fareedzakaria/posts/10152703985901330)
> Please read and comment if you feel the need...
> Cheers,
> David
> -- 
> *With kind regards,*
> David Ray
> Java Solutions Architect
> *Cortical.io <http://cortical.io/>*
> Sponsor of:  HTM.java <https://github.com/numenta/htm.java>
> [email protected]
> http://cortical.io
