About recursive self-improvement ... yes, I have thought a lot about it, but
I don't have time to write a huge discourse on it here.

One point is that if you have a system with N interconnected modules, you
can approach RSI by having the system separately think about how to improve
each module.  I.e. if there are modules A1, A2,..., AN ... then you can for
instance hold A1,...,A(N-1) constant while you think about how to improve
AN.  One can then iterate through all the modules and improve them in
sequence.   (Note that the modules are then doing the improving of each
other.)
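A minimal sketch of this module-by-module scheme (all names here are hypothetical stand-ins: `propose_variant` for whatever improvement machinery the system has, `goal_fulfillment` for its goal-evaluation function, and modules reduced to toy parameter dicts):

```python
import random

def goal_fulfillment(modules):
    """Toy stand-in for evaluating how well the whole system meets its goals:
    negative squared distance of each module's parameter from a target."""
    return -sum((m["param"] - m["target"]) ** 2 for m in modules)

def propose_variant(module):
    """Hypothetical improvement step: perturb one module, leaving the rest alone."""
    variant = dict(module)
    variant["param"] = module["param"] + random.gauss(0, 0.5)
    return variant

def improve_in_sequence(modules, sweeps=20):
    """Hold A1..A(N-1) fixed while trying to improve AN; iterate over all modules."""
    for _ in range(sweeps):
        for i in range(len(modules)):
            candidate = propose_variant(modules[i])
            trial = modules[:i] + [candidate] + modules[i + 1:]
            if goal_fulfillment(trial) > goal_fulfillment(modules):
                modules = trial  # accept only changes that improve the whole system
    return modules

random.seed(0)
system = [{"param": 0.0, "target": 3.0} for _ in range(4)]
improved = improve_in_sequence(system)
```

Because each candidate is accepted only if whole-system goal fulfillment increases, the sweep can never make things worse; the open question, of course, is whether such greedy module-wise steps can escape the local optima a joint search would avoid.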

What algorithms are used for the improving itself?

There is the evolutionary approach: to improve module AN, just make an
ensemble of M systems ... all of which have the same code for A1,...,A(N-1)
but different code for AN.   Then evolve this ensemble of varying artificial
minds using GP or MOSES or some such.

And then there is the probabilistic logic approach: derive rigorous bounds on
the probability that the system's goals will be better fulfilled if AN is
replaced by some candidate replacement AN'.
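One concrete (and hypothetical) instantiation of this idea: sample goal-fulfillment scores under AN and under the candidate AN', and use a standard concentration inequality (Hoeffding's, assuming scores bounded in a known range) to bound the chance that an observed improvement is sampling noise. The function names and acceptance rule below are illustrative, not anything from OpenCog:

```python
import math
import random

def hoeffding_delta(n, epsilon, score_range=1.0):
    """Hoeffding bound: P(|empirical mean - true mean| >= epsilon) <= this value,
    for n i.i.d. scores bounded in an interval of width score_range."""
    return 2 * math.exp(-2 * n * epsilon ** 2 / score_range ** 2)

def accept_replacement(scores_old, scores_new, epsilon=0.05):
    """Accept AN' only if its empirical mean beats AN's by more than 2*epsilon,
    so that (with probability >= 1 - 2*hoeffding_delta(n, epsilon)) the true
    means are ordered the same way as the empirical ones."""
    n = min(len(scores_old), len(scores_new))
    gap = (sum(scores_new) / len(scores_new)) - (sum(scores_old) / len(scores_old))
    return gap > 2 * epsilon, hoeffding_delta(n, epsilon)

# Simulated goal-fulfillment scores in [0, 1] for the old and candidate modules:
random.seed(2)
old = [random.uniform(0.4, 0.6) for _ in range(2000)]   # AN: true mean ~0.5
new = [random.uniform(0.6, 0.8) for _ in range(2000)]   # AN': true mean ~0.7
ok, delta = accept_replacement(old, new)
```

The appeal of this route over the evolutionary one is that the acceptance decision comes with an explicit error probability, at the cost of needing enough independent goal-fulfillment samples for the bound to be tight.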

All this requires that the system's modules be represented in some language
that is easily comprehensible to (hence tractably modifiable by) the system
itself.  OpenCog doesn't take this approach explicitly right now, but we
know how to make it do so.  Simply make MindAgents in LISP or Combo rather
than C++.  There's no strong reason not to do this ... except that Combo is
slow right now (recently benchmarked at 1/3 the speed of Lua), and we
haven't dealt with the foreign-function interface stuff needed to plug in
LISP MindAgents (but that's probably not extremely hard).  We have previously
done some experiments expressing, for instance, a simplistic PLN deduction
MindAgent in Combo.

In short the OpenCogPrime architecture explicitly supports a tractable path
to recursive self-modification.

But, notably, one would have to specifically "switch this feature on" --
it's not going to start doing RSI unbeknownst to us programmers.

And the problem of predicting where the trajectory of RSI will end up is a
different one ... I've been working on some theory in that regard (and will
post something on the topic within the next couple of weeks), but it's still
fairly speculative...

-- Ben G

On Fri, Aug 29, 2008 at 6:59 AM, Mike Tintner <[EMAIL PROTECTED]> wrote:

>
>
> Dave Hart: MT: Sorry, I forgot to ask for what I most wanted to know - what
> form of RSI in any specific areas has been considered?
>
> To quote Charles Babbage, I am not able rightly to apprehend the kind of
> confusion of ideas that could provoke such a question.
>
> The best we can hope for is that we participate in the construction and
> guidance of future AGIs such that they are able to, eventually, invent,
> perform and carefully guide RSI (and, of course, do so safely every single
> step of the way without exception).
> Dave,
>
> On the contrary, it's an important question. If an agent is to self-improve
> and keep self-improving, it has to start somewhere - in some domain of
> knowledge, or some technique/technology of problem-solving...or something.
> Maths perhaps, or maths theorems? Have you or anyone else ever thought about
> where, and how? (It sounds like the answer is no.)  RSI is for AGI a
> v. important concept - I'm just asking whether the concept has ever been
> examined with the slightest grounding in reality, or merely pursued as a
> logical conceit.
>
> The question is extremely important because as soon as you actually examine
> it, something v. important emerges - the systemic interconnectedness of the
> whole of culture, and the whole of technology, and the whole of an
> individual's various bodies of knowledge, and you start to see why evolution
> of any kind in any area of biology or society, technology or culture is such
> a difficult and complicated business. RSI strikes me as a last-century,
> local-minded concept, not one of this century where we are becoming aware of
> the global interconnectedness and interdependence of all systems.
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome" - Dr. Samuel Johnson



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com
