I suspect there's minimal value in thinking about mundane 'self-improvement'
(e.g. among humans or human institutions) as a way to understand AGI-RSI.
Thinking about 'weak RSI' (e.g. in a GA system or some other non-self-aware
system) has value, but only insofar as it can contribute to an AGI-RSI
system (e.g. the mechanics of Combo in OpenCog).
Concluding that strong RSI is impossible because it has not yet been
observed is absurd, because no system in existence today is capable of
strong RSI. A system capable of strong RSI must have broad abilities to
deeply understand, reprogram, and recompile its constituent parts before it
can strongly recursively self-improve; that is, before it can create
improved versions of itself (potentially heavily modified versions that
must demonstrate their superior fitness in a competitive environment),
where each unique creation repeats the process to yield yet greater
improvements ad infinitum.
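The weak-RSI loop mentioned above can be sketched as a toy genetic
algorithm: candidates compete on fitness, the fitter half survives, each
survivor spawns a modified copy, and the cycle repeats. The bit-string
genome, one-max fitness, and truncation selection here are all illustrative
assumptions for the sketch; this is not OpenCog's actual Combo machinery.

```python
import random

GENOME_LEN = 32
POP_SIZE = 20

def fitness(genome):
    # Placeholder objective: count of 1-bits (one-max).
    return sum(genome)

def mutate(genome, rate=0.05):
    # Each survivor spawns a slightly modified copy of itself.
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

def evolve(generations=100):
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(generations):
        # Competitive environment: only the fitter half survives.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:POP_SIZE // 2]
        # Survivors persist unmodified (elitism) alongside their mutants,
        # so the best fitness found so far never decreases.
        pop = survivors + [mutate(g) for g in survivors]
    return max(fitness(g) for g in pop)

if __name__ == "__main__":
    random.seed(0)
    print(evolve())
```

The point of the sketch is also its limit: the variation operator never
changes, so the system improves its candidates but never improves its own
improvement process, which is exactly what separates weak from strong RSI.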

-dave



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com
