>
>
> Isn't it an evolutionarily stable strategy for the modification-system
> module to change to a state where it does not change itself?



Not if the top-level goals are weighted toward long-term growth.
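
To make that concrete, here is a minimal sketch (all names and numbers
are invented for illustration, not taken from any actual system): if
candidate changes to the modification module are scored against the
top-level goal of long-term capability growth rather than against the
current task alone, a change that permanently narrows future
modification loses even when it helps right now.

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        """A proposed change to the modification module."""
        name: str
        immediate_gain: float      # benefit to the current task
        future_flexibility: float  # capacity left to modify other modules

    def score(c: Candidate, horizon: float = 100.0) -> float:
        # Weight the top-level goal (long-term growth) heavily relative
        # to the immediate subgoal, so short-sighted lock-in loses out.
        return c.immediate_gain + horizon * c.future_flexibility

    freeze = Candidate("concentrate all change on language, forever", 0.9, 0.0)
    keep   = Candidate("bias change toward language, keep the rest open", 0.6, 0.8)

    best = max([freeze, keep], key=score)
    print(best.name)  # the non-freezing change wins under long-horizon weighting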


> Let me give you a just-so story and you can tell me whether you think
> it likely. I'd be curious as to why you don't.
>
> Let us say the AI is trying to learn a different language (say French,
> with its genders), so the system finds it is better to concentrate its
> changes on the language modules, as these need the most updating. A
> modification to the modification module that completely concentrates
> modifications on the language module should then be the best at that
> time. But then it would be frozen forever, and once the need to vary
> the language module was past, it wouldn't be able to go back to
> modifying other modules. Short-sighted, I know, but I have yet to come
> across an RSI system that isn't either short-sighted or limited to
> what it can prove.


You seem to be assuming that subgoal alienation will occur, i.e. that
the long-term goal of dramatically increasing intelligence will be
forgotten in favor of the subgoal of improving NLP.  But I don't see
why you make this assumption; as long as every subgoal is evaluated
against the top-level goal it serves, this seems an easy problem to
avoid in a rationally-designed AGI system, although not so easy in the
context of human psychology.
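
For instance (a hedged sketch of one way to avoid it, with invented
names, not a description of any particular AGI architecture): if every
subgoal keeps an explicit link to the supergoal it was derived from and
is re-scored against that supergoal each cycle, "improve the language
modules" is pruned once it stops serving "increase intelligence",
instead of hardening into a permanent fixed point.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Goal:
        name: str
        parent: Optional["Goal"] = None   # supergoal it was derived from
        children: List["Goal"] = field(default_factory=list)

        def spawn(self, name: str) -> "Goal":
            child = Goal(name, parent=self)
            self.children.append(child)
            return child

    def reassess(goal: Goal, still_serves_parent) -> None:
        # Drop any subgoal that no longer contributes to the goal it was
        # derived from; the top-level goal itself is never overwritten.
        goal.children = [c for c in goal.children if still_serves_parent(c)]
        for c in goal.children:
            reassess(c, still_serves_parent)

    top = Goal("dramatically increase intelligence")
    nlp = top.spawn("improve the language modules (French genders)")

    # Once the language modules no longer need updating, the subgoal is
    # pruned and general self-modification resumes.
    reassess(top, still_serves_parent=lambda g: g is not nlp)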

-- BenG


