The debate over recursive self-improvement seems to depend on what you 
mean by "improvement". If you define improvement as intelligence as measured 
by the Turing test, then RSI is not possible, because the Turing test does 
not test for superhuman intelligence. If you mean improvement as more memory, faster 
for superhuman intelligence. If you mean improvement as more memory, faster 
clock speed, more network bandwidth, etc., then yes, I think it is reasonable 
to expect Moore's law to continue after we are all uploaded. If you mean 
improvement in the sense of competitive fitness, then yes, I expect evolution 
to continue, perhaps very rapidly if it is based on a computing substrate other 
than DNA. Whether you can call it "self" improvement or whether the result is 
desirable is debatable. We are, after all, pondering the extinction of Homo 
sapiens and its replacement by some unknown successor species, perhaps gray 
goo. Will the nanobots look back on this as an improvement, the way we view 
the extinction of Homo erectus?

My question is whether RSI is mathematically possible in the context of 
universal intelligence, i.e., expected reward or prediction accuracy over a 
Solomonoff distribution of computable environments. I believe it is possible 
for Turing machines if and only if they have access to true random sources so 
that each generation can create successively more complex test environments to 
evaluate their offspring. But this is troubling, because in practice we can 
construct pseudo-random sources whose output no polynomial-time test can 
distinguish from true randomness (though none that are *provably* so), so the 
requirement of true randomness seems to make no observable difference.
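
To make the scheme concrete, here is a toy sketch in Python of what I mean by 
each generation using a true random source to construct test environments for 
its offspring. Everything here is an illustrative assumption on my part: 
os.urandom stands in for a genuinely random physical source, the environments 
are just seeded PRNG bit streams (computable, but with seeds the offspring 
cannot know), and the offspring is a deliberately weak majority-vote 
predictor. It is not a real measure of universal intelligence.

import os
import random

def true_random_int(bits):
    # os.urandom as a stand-in for a true random source (an assumption;
    # a deterministic TM has no such source).
    return int.from_bytes(os.urandom((bits + 7) // 8), "big")

def make_environment(n_bits, seed_bits):
    # A computable test environment: a deterministic bit stream whose
    # hidden seed is drawn from the true random source. More seed bits
    # means a more complex environment for the next generation.
    seed = true_random_int(seed_bits)
    rng = random.Random(seed)
    return [rng.getrandbits(1) for _ in range(n_bits)]

def predict_majority(history):
    # A deliberately weak "offspring": predict the majority bit so far.
    if not history:
        return 0
    return 1 if 2 * sum(history) >= len(history) else 0

def score(predictor, env):
    # Prediction accuracy: fraction of bits guessed correctly, each
    # guess made from the history of earlier bits only.
    correct = sum(1 for t in range(len(env))
                  if predictor(env[:t]) == env[t])
    return correct / len(env)

if __name__ == "__main__":
    # Each generation raises the seed complexity of its test environment.
    for generation, seed_bits in enumerate([8, 16, 32, 64]):
        env = make_environment(1000, seed_bits)
        print("generation %d: seed complexity %d bits, accuracy %.3f"
              % (generation, seed_bits, score(predict_majority, env)))

The point of the toy is only that the tests come from outside the offspring's 
own program: because the seed is drawn from the random source, no fixed 
offspring can anticipate the environment, which is why I think true randomness 
is what lets each generation pose tests its parent could not have encoded.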

 -- Matt Mahoney, [EMAIL PROTECTED]

