Richard Loosemore:
> I am not sure I understand.
>
> There is every reason to think that "a currently-envisionable AGI would
> be millions of times "smarter" than all of humanity put together."
>
> Simply build a human-level AGI, then get it to bootstrap to a level of,
> say, a thousand times human speed (easy enough: we are not asking for
> better thinking processes, just faster implementation), then ask it to
> compact itself enough that we can afford to build and run a few billion
> of these systems in parallel
 
This viewpoint assumes that human intelligence is essentially trivial.  I see 
no evidence for that, and I tend to assume that a properly programmed Game Boy 
is not going to pass the Turing test.  I realize that people on this list tend 
to be more optimistic on this subject, so I do accept your answer as one 
viewpoint.  It is surely a minority view, though, and my question only makes 
sense if you assume significant limitations in the capability of near-term 
hardware.
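For concreteness, here is a back-of-the-envelope sketch of the arithmetic in
the quoted paragraph.  The 1000x speedup and the "few billion" copies are the
quoted figures; the world-population estimate, and the assumption that
"smarter" scales linearly with raw thinking throughput, are my own:

    # Sketch of the quoted bootstrap arithmetic (Python).
    # Quoted figures: 1000x human speed, "a few billion" parallel copies.
    speedup_per_system = 1_000          # quoted: a thousand times human speed
    num_systems = 2 * 10**9             # assumed reading of "a few billion"
    humans = 7 * 10**9                  # assumption: roughly all of humanity

    # Total throughput, measured in real-time human-equivalents.
    human_equivalents = speedup_per_system * num_systems  # 2e12
    multiple_of_humanity = human_equivalents / humans     # ~286

    print(f"~{multiple_of_humanity:.0f}x humanity's aggregate throughput")

On those assumptions, the quoted figures yield hundreds, not millions, of
times humanity's aggregate thinking speed, so the stronger claim needs either
far more hardware or a notion of "smarter" that goes beyond raw throughput.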
 
