Ed Porter wrote:
Richard,

It seems we both agree that systems like Copycat, which relatively
successfully harness and control complexity for a desired purpose, need to be
explored on a much larger scale to better understand what problems, if any,
result from such increases in scale.  One would expect such scale-related
problems to occur; the issue is how hard they will be to solve.

I would expect that most intelligently designed large Novamente-type systems
would fall into this category.  In my own ideas for a roughly Novamente-type
system, I have been seeking a relatively uniform, very rough approximation of
the cortico/basal-ganglia/thalamic architecture, all operating under the
control of a set of top-level goals and a system for administering +/-
experientially related rewards.  This architecture would be basically similar
across most of the machine, to reduce the number of design choices and/or
non-experientially set parameters.
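
To give a feel for the kind of uniformity I have in mind, here is a very
rough sketch in Python (purely illustrative; every class, goal, and parameter
name here is hypothetical and not part of any actual design):

    # Very rough, purely illustrative sketch -- the point is only the
    # *uniformity*: one module type stamped out across the whole machine,
    # with a single shared set of top-level goals and a +/- experiential
    # reward signal.

    class TopLevelGoals:
        def __init__(self, weighted_goals):
            self.weighted_goals = weighted_goals   # e.g. {"serve_user": 1.0}

        def reward(self, outcome):
            # Score an outcome (a dict of goal -> how well it was served)
            # against the fixed top-level goals; can be positive or negative.
            return sum(w * outcome.get(g, 0.0)
                       for g, w in self.weighted_goals.items())

    class UniformModule:
        """One copy of the same rough cortico/basal-ganglia/thalamic unit."""
        def __init__(self, shared_goals):
            self.goals = shared_goals   # shared globally, not designed per module
            self.learned_params = {}    # everything else is set by experience

        def learn(self, situation, behavior, outcome):
            # Reinforce behaviors in proportion to the reward they earned.
            r = self.goals.reward(outcome)
            key = (situation, behavior)
            self.learned_params[key] = self.learned_params.get(key, 0.0) + r

    # The "machine" is just many copies of the same module under one goal system.
    goals = TopLevelGoals({"serve_user": 1.0})
    machine = [UniformModule(goals) for _ in range(1000)]
    machine[0].learn("greeting", "say_hello", {"serve_user": +1.0})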

Much of the system's complexity would be experientially learned, and many of
the learned goals would be behaviors or states that learned experience has
shown to serve the top-level goals.  This strong experiential bias would be
one of the guiding hands (actually a set of millions of such guiding hands)
that would hopefully tend to keep the system from suddenly going weird on us.
As I said before, in my system most new thoughts and behaviors would be
created by processes of recollection from similar contexts of various scopes;
of generalization of such recollections; of context-specific instantiation of
such generalizations; and of probabilistically favored mapping and stitching
together of such generalizations or of pieces of such recollections -- all
with a certain amount of randomness thrown in, as in Copycat.
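
As a crude illustration of that loop (nothing more than a sketch; the episode
store, similarity measure, scoring function, and "temperature" knob are all
hypothetical stand-ins, with the randomness playing roughly the role it plays
in Copycat):

    import random

    def recall(episodes, context, similarity, k=5):
        # Recollection: the k prior episodes whose contexts best match the
        # current context (at whatever scope the caller chose).
        return sorted(episodes,
                      key=lambda e: similarity(e["context"], context),
                      reverse=True)[:k]

    def generalize(recollections):
        # Generalization: keep what the recollections have in common.
        common = set.intersection(*(set(e["features"]) for e in recollections))
        return {"features": common}

    def instantiate(generalization, context):
        # Context-specific instantiation: fill the generalization back in
        # with the particulars of the current context.
        return {"features": generalization["features"] | set(context)}

    def stitch(candidates, score, temperature=0.3):
        # Probabilistically favored mapping/stitching: mostly pick the
        # best-scoring candidate, with some randomness thrown in.
        weights = [score(c) + random.uniform(0, temperature) for c in candidates]
        return candidates[weights.index(max(weights))]

    # Toy usage: a new "thought" is a stitched-together instantiation of a
    # generalization of the best-matching recollections.
    episodes = [
        {"context": ["kitchen", "hungry"],  "features": ["open_fridge", "eat"]},
        {"context": ["kitchen", "thirsty"], "features": ["open_fridge", "drink"]},
    ]
    sim = lambda a, b: len(set(a) & set(b))
    recs = recall(episodes, ["kitchen", "hungry"], sim, k=2)
    idea = instantiate(generalize(recs), ["kitchen", "hungry"])
    new_thought = stitch([idea], score=lambda c: len(c["features"]))
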
Yes, there would be a tremendous number of degrees of freedom, but there
would also be a tremendous number of sources of guidance and review, drawn
from the best-matching prior experiences of the past successes and failures
of the most similar perceptions, thoughts, or behaviors in the most similar
contexts.  With such guidance, there is reason to believe that even a system
large enough to compute with human-level world knowledge would stay largely
within the realm of common sense and not freak out.  It should have enough
randomness to fairly often think strange new thoughts, but it should also
have enough common sense from its vast experience to judge roughly as well as
a human when to, and when not to, act on such strange new ideas.

It is my guess that there is a good chance the types of guiding hands that
make Copycat work can be successfully extended, multiplied, and applied to
allow a Novamente-type system to compute successfully, usefully, and
continuously from human-level world knowledge.

But I agree totally with what I think you are saying, i.e., that we should
keep trying such architectures in larger and larger projects, to better
understand the potential gotchas and the types of guiding hands such systems
need in order to avoid the undesired effects of complexity.

I would appreciate knowing which parts of the above you agree and disagree
with.  And if you have a particular suggestion, not mentioned above, for how
best to extrapolate the Copycat approach, please tell me.


Ed

Very briefly:

I would be very careful to distinguish between "experientially" learned mechanisms and "designed" mechanisms, and between the complexity introduced by these two.

Allowing the system to adapt to the world by giving it flexible mechanisms that *build* mechanisms (which it then uses) is one way to get the system to do some of the work of "fitting parameters" (as Ben would label it), or of reducing the number of degrees of freedom that we have to deal with.

But that would be different from *our* efforts, as designers of the system, to design different possible mechanisms and then run tests to establish what kind of system behavior they cause. We have to do this "generate and test" experimentation in parallel with the system's own attempts to adapt and build new internal mechanisms. They are two different processes, both of which are meant to home in on the best design for an AGI, and they do need to be considered separately.
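
A sketch of the distinction (every name here is made up purely for
illustration; this is not meant as an actual design):

    # Illustrative sketch only.  The point is that there are two distinct
    # adaptation loops, and they must not be conflated.

    def system_adapts(system, experience_stream):
        # Inner process: the system itself builds/tunes internal mechanisms
        # from experience ("fitting parameters", in Ben's phrase).
        for experience in experience_stream:
            new_mechanism = system.build_mechanism_from(experience)
            system.install(new_mechanism)
        return system

    def designers_generate_and_test(candidate_designs, evaluate_behavior):
        # Outer process: *we* generate candidate mechanism designs, build a
        # system from each, let it adapt, and observe the high-level behavior.
        results = {}
        for design in candidate_designs:
            system = design.instantiate()
            trained = system_adapts(system, design.training_stream())
            results[design] = evaluate_behavior(trained)
        return results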

The other major comment that I have is that the *main* strategy that I have for reducing the number of degrees of freedom (in the design) is to keep the design as close as possible to the human cognitive system.

This is where my approach and the Novamente approach part company in a serious way. I believe that the human design has already explored the space of possible solutions for us (strictly speaking it was evolution that did the exploration, when it tried out all kinds of brain designs over the eons). I believe that this will enable us to drastically reduce the number of possibilities we have to explore, thus making the project feasible.

My problem is that it may be tempting to see a "ground-up" AGI design (in which we take a little inspiration from the human system, but mostly ignore it) as just as feasible, when in fact it may well get bogged down in dead ends within the space of possible AGI designs.

Example: suppose you choose to represent all facts by things that have a "truth value" attached to them, along with (say) another number specifying the "reliability" of that truth value. Who is to say that this design decision can be adapted to work in the general case, when the system is scaled up? Does it have consequences when the system is scaled up? Does it get tangled up in fabulously difficult issues when we try to extend it to represent complex facts? I am not saying the idea is bankrupt, but it is entirely possible that by committing ourselves to this design right at the outset, we close off so much of the design space that there are NO solutions to the full AGI problem that start from that assumption. (Putting it another way, the consequences of the decision create high-level behavior that is not what we expect, and there is no parameter adjustment in the world that lets us get the overall behavior to reach complete intelligence.)
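
To make the example concrete, here is a caricature of that design decision
(not anyone's actual implementation; the numbers are invented):

    from dataclasses import dataclass

    @dataclass
    class Fact:
        statement: str        # e.g. "ravens are black"
        truth: float          # truth value in [0, 1]
        reliability: float    # how reliable that truth value is, in [0, 1]

    simple = Fact("ravens are black", truth=0.98, reliability=0.9)

    # The awkward questions appear as soon as facts get complex or nested:
    # what is the truth value of a conditional, a quantified statement, or a
    # fact *about* another fact's reliability?  For example:
    nested = Fact("Fred believes that ravens are black", truth=0.7, reliability=0.4)
    # Does 0.7 measure Fred's belief, my confidence in the report about Fred,
    # or something else?  Scaling the scheme up forces answers to such
    # questions, and those answers may close off large parts of the design space.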

It is because of dangers like that that I try to stay as close to the human design as possible, to separate my design decisions into "framework" level and "implementation" level, to keep the framework as simple as possible, and to postpone as long as possible any commitment to implementation-level decisions.



Richard Loosemore.










