Hi,

>
> ... I would say that things like the "scale invariance of knowledge"
> problem are an aspect of the CSP:  the mechanism does not necessarily
> work as expected when things scale up, because long-range dependencies
> begin to take their toll and make the overall behavior impossible to
> predict from the local mechanisms.


I agree the details of the overall behavior of a system like NM will be
impossible to predict, but obviously the question is whether the important
high-level properties of the system will be possible to predict.  I think
so, but as noted I have not proved it.



>  It is a subtle point, but suffice it
> to say that when I use the blanket term "complex systems problem" I
> really mean this as a shorthand for a bunch of Gotchas that come along
> when the proposed AGI gets up to such a size that subtle interactions
> between its components start to dominate the behavior.


The subtle interactions between components WILL dominate the behavior,
but the idea is that the design is made to channel these interactions toward
the system's overall intelligence...


>
>
> Another example would be the reliability of inference control engines:
> inference itself is designed to make sure that truth preservation
> happens, but the control structures that govern the actual extent of the
> inferences carried out are the things that actually govern the overall
> intelligence of the system.


Yes.  This is a very deep and central problem.  The interaction between
PLN inference, MOSES evolutionary learning and economic attention allocation
to achieve scalable inference control is central to the Novamente design,
as I emphasized in my Google Tech Talk, for example.
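To make the economic idea concrete, here is a toy sketch (purely illustrative; the names, mechanics, and parameters are my own invention, not the actual Novamente/PLN code): candidate inference steps "bid" their estimated utility, and a fixed attention budget is spent only on the highest-bidding steps, so low-utility branches of the inference tree are starved of resources.

```python
import heapq

def controlled_inference(premises, expand, utility, budget=100.0, step_cost=1.0):
    """Toy economic inference control: each candidate step bids its
    estimated utility; a fixed attention budget is spent on the
    highest bidders, starving low-utility branches."""
    # Max-heap keyed on estimated utility (negated for heapq's min-heap).
    frontier = [(-utility(p), p) for p in premises]
    heapq.heapify(frontier)
    derived = []
    while frontier and budget >= step_cost:
        neg_u, item = heapq.heappop(frontier)
        budget -= step_cost            # pay for this inference step
        derived.append(item)
        for child in expand(item):     # new conclusions re-enter the market
            heapq.heappush(frontier, (-utility(child), child))
    return derived

# Hypothetical usage: "inference" doubles a number; smaller values
# are judged more useful, so the budget is mostly spent near the roots.
out = controlled_inference(
    premises=[1, 5],
    expand=lambda x: [2 * x] if x < 40 else [],
    utility=lambda x: 1.0 / x,
    budget=5.0,
)
```

The point of the sketch is only the control-flow pattern: truth preservation lives in `expand`, while the budget and utility function, standing in for attention allocation, decide which of the combinatorially many valid inferences actually get carried out.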

This may qualify as the main "untested, critical, IMO very plausible
hypothesis / research problem" at the heart of the NM design, actually.
It is something I have thought about a lot, and that can't really be
experimented with till PLN, MOSES and attention allocation are themselves
more fully advanced.

-- Ben

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=64177107-4131e2
