On Thursday 04 October 2007 11:50:21 am, Bob Mottram wrote:
> To me this seems like elevating the status of nanotech to magic.
> Even given RSI and the ability of the AGI to manufacture new computing
> resources, it doesn't seem clear to me how this would enable it to
> prevent other AGIs from also reaching RSI capability.

Hear, hear and again I say hear, hear!

There's a lot of "and then a miracle occurs in step 2" in the "we build a 
friendly AI and it takes over the world and saves our asses" style of reasoning 
we see so much of. (Likewise the "somebody builds an unfriendly AI and it 
takes over the world and wipes us out" reasoning.)

We can't yet build a system that learns as fast as a 1-year-old. So which is 
our more likely next step: (a) a system that does learn like a 1-year-old, or 
(b) a system that learns 1000 times as fast as an adult?

Following Moore's law and its software cognates, I'd say give me the former 
and I'll give you the latter in a decade. With lots of hard work. Then and 
only then will you have something that's able to improve itself faster than a 
high-end team of human researchers and developers could. 
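To put rough numbers on that decade: improvement compounds as 2^(years / 
doubling time), so a 12-month doubling gives about 1000x in ten years, while 
the classic 18-24 month Moore's-law pace gives only 30-100x; the software 
cognates have to make up the difference. A quick back-of-the-envelope in 
Python (the doubling times here are illustrative assumptions, not data):

# Back-of-the-envelope: factor of improvement from compounding a fixed
# doubling time over a decade. Doubling times are assumed, not measured.

def speedup(years, doubling_time_years):
    """Improvement factor after `years`, doubling every `doubling_time_years`."""
    return 2 ** (years / doubling_time_years)

for dt in (1.0, 1.5, 2.0):  # doubling every 12, 18, 24 months
    print(f"doubling every {dt:.1f} yr -> {speedup(10, dt):7.1f}x in a decade")

# doubling every 1.0 yr ->  1024.0x in a decade
# doubling every 1.5 yr ->   101.6x in a decade
# doubling every 2.0 yr ->    32.0x in a decade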

Furthermore, there's a natural plateau waiting for it. That's the point where 
it has to leave off learning by absorbing knowledge from humans (reading 
textbooks, research papers, etc.) and start doing the actual science itself.

I have heard NO ONE give an argument that puts a serious dent in this, to my 
way of thinking.

Josh

