Richard, with regard to your post below:

>RICHARD LOOSEMORE #######>Allowing the system to adapt to the world by
giving it flexible mechanisms that *build* mechanisms (which it then uses),
is one way to get the system to do some of the work of "fitting parameters"
(as ben would label it), or reducing the number of degrees of freedom that
we have to deal with.

But that would be different from *our* efforts, as designers of the system,
to design different possible mechanisms, then do tests to establish what
kind of system behavior they cause.  We have to do this "generate and test"
experimentation in parallel with the system's own attempts to adapt and
build new internal mechanisms.  They are two different processes, both of
which are designed to home in on the best design for an AGI, and they do
need to be considered separately.

ED PORTER #######> I don't understand in exactly what ways you think the
experientially learned and the designed features should be treated
differently, and how this relates to the potential pitfalls of complexity.

Of course they would normally be considered differently (you have to
directly design one; the other is learned automatically by a system you
design).  I think there needs to be joint development of them, because the
designed mechanisms are intended to work with the learned ones, and vice
versa.

In the system I have been thinking of, most of the experientially learned
patterns are drawn from, or synthesized from, recorded experience in a
relatively direct manner, not from some sort of genetic algorithm that
searches large spaces to find an algorithm which compactly represents
large amounts of experience.  This close connection with sensed, enacted,
or thought experience tends to make such systems more stable.

But it is not clear to me that all experientially learned things are
necessarily safer than designed things.  For example, Novamente uses
MOSES, which is a genetic-algorithm-like learning tool.  I think such a
tool is not directly needed for an AGI and probably has no direct analog
in the brain.  I think the brain uses something that is a rough
combination of Copycat's style of relaxation-based assembly with
something like the superimposed probabilities of Hecht-Nielsen's
confabulation to explore new problem spaces, and that this process is
repeated over and over again when trying to solve complex problems, with
the good features of successive attempts being remembered as part of a
growing learned vocabulary of patterns from which new syntheses are more
likely to be drawn (all of which is arguably an analog of a GA).
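
To make that loop concrete, here is a minimal Python sketch of the
assemble-score-remember cycle I have in mind.  It is purely illustrative:
the names and the weighted-vocabulary scheme are my own shorthand, not
part of Copycat, confabulation theory, or any actual system.

import random

def synthesize(vocabulary, score, rounds=100, pieces=4):
    """Iteratively assemble candidates from a weighted pattern vocabulary.

    vocabulary: dict mapping pattern -> weight (how useful it has proven)
    score:      function mapping a candidate (tuple of patterns) -> float
    """
    best = None
    for _ in range(rounds):
        patterns = list(vocabulary)
        weights = [vocabulary[p] for p in patterns]
        # Stand-in for relaxation/confabulation: superimpose several patterns,
        # sampled with probability proportional to how useful each has been.
        candidate = tuple(random.choices(patterns, weights=weights, k=pieces))
        s = score(candidate)
        if best is None or s > best[0]:
            best = (s, candidate)
        if s > 0:
            # Remember the good features of this attempt: reinforce its pieces
            # so later syntheses are more likely to draw on them.
            for p in candidate:
                vocabulary[p] = vocabulary.get(p, 1.0) + s
    return best

The point is only the shape of the loop: probabilistic assembly biased by
past success, with the useful pieces of each good attempt fed back into an
ever-growing vocabulary of patterns.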

I can, however, understand how a genetic algorithm like MOSES could add
tremendous learning, exploratory, and perhaps even representational power
to an AGI, particularly for certain classes of problems.  But I have
little understanding of exactly what kinds of complexity dangers such a
genetic algorithm presents.  GAs have been used successfully for many
purposes, particularly where one has a clearly defined and measurable
fitness function.  But it is not clear to me what happens if you use GAs
to control an AGI's relatively high-level behavior in a complex
environment for which there would often be no simple, applicable fitness
function.  Nor is it clear to me what happens when you have a large
number of GA-controlled systems interacting with each other.
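
To make the worry concrete, here is a bare-bones, textbook-style GA
sketch.  None of the names come from MOSES or any other real system; the
point is simply that every step leans on the fitness callable.

import random

def genetic_search(population, fitness, crossover, mutate, generations=50):
    """Generic generational GA; everything hinges on the fitness callable."""
    population = list(population)
    for _ in range(generations):
        # Selection: keep the better-scoring half (requires a usable fitness).
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: max(2, len(ranked) // 2)]
        # Variation: recombine and mutate parents to refill the population.
        children = []
        while len(children) < len(population):
            a, b = random.sample(parents, 2)
            children.append(mutate(crossover(a, b)))
        population = children
    return max(population, key=fitness)

For a well-posed optimization problem, writing fitness() is the easy
part.  For an AGI's high-level behavior in a rich environment, or for
many such GA-driven systems interacting, there may be nothing comparably
simple to plug in, and that is exactly the part I do not understand.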

It would seem to me they would have much more potential for gnarliness
than my more experientially based learning systems, but I really don't
know.

Ben would probably know much more about this than most. 
 

>RICHARD LOOSEMORE #######>The other major comment that I have is that the
*main* strategy that I have for reducing the number of degrees of freedom
(in the design) is to keep the design as close as possible to the human
cognitive system.

This is where my approach and the Novamente approach part company in a
serious way.  I believe that the human design has already explored the space
of possible solutions for us (strictly speaking it is evolution that did the
exploration when it tried out all kinds of brain designs over the eons).  I
believe that this will enable us to drastically reduce the number of
possibilities we have to explore, thus making the project feasible.

My problem is that it may be tempting to see a "ground-up" AGI design (in
which we just get a little inspiration from the human system, but mostly we
ignore it) as just as feasible when in fact it may well get bogged down in
dead ends within the space of possible AGI designs.

ED PORTER #######> You might be right; you might be wrong.  It is my
intuition that you do not need to reverse engineer the human brain to
build AGIs.  I think some of the design mistakes you envision from not
waiting until we have the whole picture of how the brain works will
probably require significant software revisions, but such revisions are
common in the development of complex systems of a new type.  I think we
should move forward building AGIs now, on PCs, and hopefully at least on
systems that cost, say, $10K to $100K, to see what does and does not
work.

>RICHARD LOOSEMORE #######>It is because of dangers like that that I try to
stay as close to the human design as possible, to separate my design
decisions into "framework" level and "implementation" level, to keep the
framework as simple as possible, and to postpone as long as possible any
commitment to implementation-level decisions.

ED PORTER #######> First, please give me an example of what you consider
to be your framework level and your implementation level.

Second, if there were no cost whatsoever to delaying the development of
AGI until we had reverse engineered the brain, the optimal policy would
be to delay any substantial AGI development until we had the benefit of
understanding in detail how the brain works.

But I think there are costs to such delay.  For one, there are many
problems a powerful AGI could help us better deal with.  Also, I think
building a powerful AGI within a decade is within the reach of any group,
including a lot of ugly ones, that has a few hundred million dollars to
spend on it.  Thus, I think it is important that relatively decent
institutions start developing it soon.  I don't think we have the time to
wait until the brain is reverse engineered, or else others much less
moral than ourselves will get there before us.

Ed Porter


-----Original Message-----
From: Richard Loosemore [mailto:[EMAIL PROTECTED] 
Sent: Friday, December 07, 2007 12:19 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Evidence complexity can be controlled by guiding hands

Ed Porter wrote:
> Richard,
> 
> It seems we both agree that systems, like Copycat's, that relatively
> successfully harness and control complexity for a desired purpose need
> to be explored on a much larger scale to better understand what, if
> any, problems result from such increases in scale.  One would expect
> that such scale-related problems will occur, but how hard they will be
> to solve is the issue.
> 
> I would expect that most intelligently designed large Novamente-type
> systems would fall into this category.  In my own ideas for a roughly
> Novamente-type system, I have been seeking a relatively uniform, very
> rough approximation of the cortico/basal-ganglia/thalamic architecture,
> all operating under the control of a set of top-level goals and a
> system for administering +/- experiential rewards.  This architecture
> would basically be similar across most of the machine, to reduce the
> number of design choices and/or non-experientially set parameters.
> 
> Much of the system's complexity would be experientially learned
> complexity; much of the learned goals would be behaviors or states that
> have been shown by learned experience to serve the top-level goals.
> This strong experiential bias would be one of the guiding hands
> (actually it would be a set of millions of such guiding hands) that
> hopefully would tend to keep the system from suddenly going weird on
> us.
> 
> As I said before, in my system most new thoughts and behaviors would be
> created by processes of recollection from similar contexts of various
> scopes, of generalizations of such recollections, of context-specific
> instantiations of such generalizations, and of probabilistically
> favored mappings and stitching together of such generalizations or
> pieces of such recollections -- all with a certain amount of randomness
> thrown in, as in Copycat.
> 
> Yes, there would be a tremendous number of degrees of freedom, but
> there would be a tremendous number of sources of guidance and review
> from the best-matching prior experiences of the past successes and
> failures of the most similar perceptions, thoughts, or behaviors in the
> most similar contexts.  With such guidance, there is reason to believe
> that even a system large enough to compute with human-level world
> knowledge would stay largely within the realm of common sense and not
> freak out.  It should have enough randomness to fairly often think
> strange new thoughts, but it should have enough common sense from its
> vast experience to judge roughly as well as a human when to, and when
> not to, act on such strange new ideas.
> 
> It is my guess that there is a good chance the types of guiding hands
> that make Copycat work can be successfully extended, multiplied, and
> applied to allow a Novamente-type system to successfully, usefully, and
> continuously compute with human-level world knowledge.
> 
> But I agree totally with what I think you are saying, i.e., that we
> should be seeking to constantly try such architectures in larger and
> larger projects to better understand the potential gotchas and the type
> of guiding hands such systems need to avoid the undesired effects of
> complexity.
> 
> I would appreciate knowing what parts of the above you agree and
> disagree with.  And if you have some particular suggestion for how best
> to extrapolate the Copycat approach not mentioned above, please tell
> me.


Ed

Very briefly:

I would be very careful to distinguish between "experiential" learned 
mechanisms, and "designed" mechanisms, and the complexity introduced by 
these two.

Allowing the system to adapt to the world by giving it flexible 
mechanisms that *build* mechanisms (which it then uses), is one way to 
get the system to do some of the work of "fitting parameters" (as ben 
would label it), or reducing the number of degrees of freedom that we 
have to deal with.

But that would be different from *our* efforts, as designers of the 
system, to design different possible mechanisms, then do tests to 
establish what kind of system behavior they cause.  We have to do this 
"generate and test" experimentation in parallel with the system's own 
attempts to adapt and build new internal mechanisms.  They are two 
different processes, both of which are designed to home in on the best 
design for an AGI, and they do need to be considered separately.

The other major comment that I have is that the *main* strategy that I 
have for reducing the number of degrees of freedom (in the design) is to 
keep the design as close as possible to the human cognitive system.

This is where my approach and the Novamente approach part company in a 
serious way.  I believe that the human design has already explored the 
space of possible solutions for us (strictly speaking it is evolution 
that did the exploration when it tried out all kinds of brain designs 
over the eons).  I believe that this will enable us to drastically 
reduce the number of possibilities we have to explore, thus making the 
project feasible.

My problem is that it may be tempting to see a "ground-up" AGI design 
(in which we just get a little inspiration from the human system, but 
mostly we ignore it) as just as feasible when in fact it may well get 
bogged down in dead ends within the space of possible AGI designs.

Example:  suppose you choose to represent all facts by things that have 
a "truth value" attached to them, along with (say) another number 
specifying the "reliability" of that truth value.  Who is to say that 
this design decision can be adapted to work in the general case, when 
the system is scaled up?  Does it have consequences when the system is 
scaled up?  Does it get tangled up in fabulously difficult issues when 
we try to extend it to represent complex facts?  I am not saying the 
idea is bankrupt, but it is entirely possible that by committing 
ourselves to this design right at the outset, we close off so much of 
the design space that there are NO solutions to the full AGI problem, 
starting with that assumption.  (Putting it another way, the 
consequences of the decision create high-level behavior that is not what 
we expect, and there is no parameter adjustment in the world that lets 
us get the overall behavior to reach complete intelligence).
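
For concreteness, a minimal sketch of the kind of representational
commitment in question: each fact carries a truth value plus a
reliability number.  The names and the merge rule below are purely
illustrative, not taken from any actual system.

from dataclasses import dataclass

@dataclass
class Fact:
    content: str        # e.g. "cats are mammals"
    truth: float        # degree of truth, in [0, 1]
    reliability: float  # how much evidence stands behind that truth value

def merge(a: Fact, b: Fact) -> Fact:
    """Reliability-weighted combination of two estimates of the same fact;
    exactly the sort of local rule whose scaled-up consequences are hard
    to foresee once facts become complex and interdependent."""
    w = a.reliability + b.reliability
    return Fact(a.content,
                (a.truth * a.reliability + b.truth * b.reliability) / w,
                w)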

It is because of dangers like that that I try to stay as close to the 
human design as possible, to separate my design decisions into 
"framework" level and "implementation" level, to keep the framework as 
simple as possible, and to postpone as long as possible any commitment 
to implementation-level decisions.



Richard Loosemore.

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=73749795-1abb8f
