What I don't see, then, is where System 2 (a neural net?) is better 
than System 1, or how it avoids the complexity issues.

I don't have a way of getting to System 2 from System 1 yet.

James

Richard Loosemore <[EMAIL PROTECTED]> wrote: 
Well, this wasn't quite what I was pointing to:  there will always be a 
need for parameter tuning.  That goes without saying.

The point was that if an AGI developer were to commit to System 1, they 
would never get to the (hypothetical) System 2 by anything as trivial as 
parameter tuning.  Therefore parameter tuning is useless for curing the 
complex systems problem.

That is why I do not accept that parameter tuning is an adequate 
response to the problem.



Richard Loosemore



James Ratcliff wrote:
> James: Either of these systems described will have a Complexity Problem; 
> any AGI will, because it is a very complex system. 
> I don't believe System 1 is strictly practical, as few truth values can 
> be stored locally, directly on the frame.  More realistically, there may 
> be a temporary value such as:
>   "I like cats"  t=0.9
> which is calculated from some other backing facts, such as:
>   "I said I like cats."  t=1.0
>   "I like Rosemary (a cat)"  t=0.8
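> 
> As a toy illustration only (in Python; the data and the combination rule 
> are hypothetical, not any actual design), such a derived value might be 
> computed on demand from its backing facts rather than stored on the frame:
> 
>     # Backing facts supporting a statement, each with its own truth value.
>     backing_facts = {
>         "I like cats": [("I said I like cats.", 1.0),
>                         ("I like Rosemary (a cat)", 0.8)],
>     }
> 
>     def truth_value(statement):
>         # Toy combination rule (simple average), purely for illustration;
>         # a real system would need a principled way to combine evidence.
>         values = [t for _, t in backing_facts.get(statement, [])]
>         return sum(values) / len(values) if values else None
> 
>     print(truth_value("I like cats"))   # 0.9 in this toy example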
> 
>  > then parameter tuning will never be good enough, it will have to be a 
>  > huge and very serious new approach to making our AGI designs flexible 
>  > at the design level.
> System 2, though it uses unnamed parameters, would still need to 
> determine these temporary values.  Any representation system must have 
> parameter tuning in some form.
> 
> Either of these systems has the same problem, though, of updating the 
> information.  For example, on seeing "I don't like Ganji (a cat)", both 
> systems must update their representation to incorporate this new 
> information.
> 
> Neither a symbol system nor a neural network (the closest thing to what 
> you mean by System 2?) has been shown able to scale up to the larger 
> system needed for an AGI, but I don't believe either has been shown 
> ineffective.
> 
> Whether a system stores the information explicitly or implicitly, I 
> believe you must be able to ask it for the reasoning behind any thought 
> process.  This can be done with either system, and may give a very long 
> answer, but once you get a system that makes decisions and cannot 
> explain its reasoning, that is a very scary thought, and as I see it 
> the system is truly acting irrationally.
> 
> While you can't extract a small portion of the representation from 
> System 1 or System 2 outside of the whole, you must be able to print 
> out the calculated values that a frame-type system shows.
> 
> James
> 
> Richard Loosemore wrote:
> 
>     James Ratcliff wrote:
>      > >However, part of the key to intelligence is **self-tuning**.
>      >
>      > >I believe that if an AGI system is built the right way, it can
>     effectively
>      > >tune its own parameters, hence adaptively managing its own
>     complexity.
>      >
>      > I agree with Ben here: isn't one of the core concepts of AGI the
>      > ability to modify its behavior and to learn?
> 
>     That might sound like a good way to proceed, but now consider this.
> 
>     System 1: Suppose that the AGI is designed with a "symbol system" in
>     which the symbols are very much mainstream-style symbols, and one
>     aspect of them is that there are "truth-values" associated with the
>     statements that use those symbols (as in "I like cats", t=0.9).
> 
>     Now suppose that the very fact that truth values were being
>     *explicitly* represented and manipulated by the system was causing
>     it to run smack bang into the Complex Systems Problem.
> 
>     In other words, suppose that you cannot get that kind of design to
>     work because when it scales up the whole truth-value maintenance
>     mechanism just comes apart.
> 
> 
>     System 2: Suppose, further, that the only AGI systems that really do
>     work are ones in which the symbols never use "truth values" but use
>     other stuff (for which there is no interpretation) and that the
>     thing we call a "truth value" is actually the result of an operator
>     that can be applied to a bunch of connected symbols. This
>     [truth-value = external operator] idea is fundamentally different
>     from the [truth-value = internal parameter] idea, obviously.
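> 
>     Purely as a toy sketch of that distinction (in Python; everything
>     here is hypothetical, not a description of any actual design):
> 
>         # System 1: the truth value is an explicit internal parameter.
>         class Symbol1:
>             def __init__(self, name, t):
>                 self.name = name
>                 self.t = t          # explicitly represented truth value
> 
>         # System 2: symbols carry only uninterpreted state; what we call
>         # a "truth value" is an external operator applied to a bunch of
>         # connected symbols, not a slot stored inside any one of them.
>         class Symbol2:
>             def __init__(self, name, state):
>                 self.name = name
>                 self.state = state  # no built-in interpretation
> 
>         def truth_operator(connected_symbols):
>             # Toy rule, purely illustrative.
>             return sum(s.state for s in connected_symbols) / len(connected_symbols)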
> 
>     Now here is my problem: how would "parameter-tuning" ever cause that
>     first AGI design to realise that it had to abandon one bit of its
>     architecture and redesign itself?
> 
>     Surely this is more than parameter tuning? There is no way it could
>     simply stop working and completely redesign all of its internal
>     architecture to not use the t-values, and build the operators, etc.!
> 
>     So here is the rub: if the CSP does cause this kind of issue (and
>     that is why I invented the CSP idea in the first place, because it
>     was precisely those kinds of architectural issues that seemed
>     wrong), then parameter tuning will never be good enough; it will
>     take a huge and very serious new approach to making our AGI
>     designs flexible at the design level.
> 
> 
>     Does that make sense?
> 
> 
> 
> 
>     Richard Loosemore
> 
> 
> 
> 
> 
> 
> 
> 
>      > This will have to be done with a large amount of self-tuning, as
>      > we will not be changing parameters for every action; that wouldn't
>      > be efficient. (This part does not require actual self-code writing
>      > just yet.)
>      >
>      > It's more a matter of finding a way to guide the AGI in changing
>      > the parameters, checking the changes, and reflecting back over the
>      > changes to see if they are effective for future events.
>      >
>      > What is needed at some point is being able to converse at a high
>      > level with the AGI and correct its behaviour, such as "Don't touch
>      > that, because it will have a bad effect", and having the AGI do
>      > all of the parameter changing and link building and
>      > strengthening/weakening necessary in its memory. It may do this in
>      > a very complex way and may affect many parts of its systems, but
>      > by repeated reinforcement we should be able to guide the overall
>      > behaviour, if not all of the parameters directly.
>      >
>      > James Ratcliff
>      >
>      >
>      > Benjamin Goertzel wrote:
>      >
>      > > Conclusion: there is a danger that the complexity that even Ben
>      > agrees
>      > > must be present in AGI systems will have a significant impact
>     on our
>      > > efforts to build them. But the only response to this danger at the
>      > > moment is the bare statement made by people like Ben that "I do not
>      > > think that the danger is significant". No reason given, no explicit
>      > > attack on any component of the argument I have given, only a
>      > statement
>      > > of intuition, even though I have argued that intuition cannot in
>      > > principle be a trustworthy guide here.
>      >
>      > But Richard, your argument ALSO depends on intuitions ...
>      >
>      > I'll try, though, to more concisely frame the reason I think your
>      > argument
>      > is wrong.
>      >
>      > I agree that AGI systems contain a lot of complexity in the
>     dynamical-
>      > systems-theory sense.
>      >
>      > And I agree that tuning all the parameters of an AGI system
>     externally
>      > is likely to be intractable, due to this complexity.
>      >
>      > However, part of the key to intelligence is **self-tuning**.
>      >
>      > I believe that if an AGI system is built the right way, it can
>      > effectively
>      > tune its own parameters, hence adaptively managing its own
>     complexity.
>      >
>      > Now you may say there's a problem here: If AGI component A2 is to
>      > tune the parameters of AGI component A1, and A1 is complex, then
>      > A2 has got to also be complex ... and who's gonna tune its
>     parameters?
>      >
>      > So the answer has got to be that effectively tuning the parameters
>      > of an AGI component of complexity X requires an AGI component of
>      > complexity a bit less than X. Then one can build a self-tuning AGI
>      > system, if one does the job right.
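>      >
>      > A purely hypothetical sketch of that bottoming-out logic (Python;
>      > the classes and numbers are invented for illustration, not a
>      > description of Novamente or any real system):
>      >
>      >     # Each component may be tuned by a strictly simpler component,
>      >     # so the chain of tuners terminates instead of regressing forever.
>      >     class Component:
>      >         def __init__(self, complexity, tuner=None):
>      >             self.complexity = complexity
>      >             self.params = {}
>      >             self.tuner = tuner      # a simpler Component, or None
>      >
>      >         def tune(self, target):
>      >             # Toy tuning step; assumes the tuner is less complex.
>      >             assert self.complexity < target.complexity
>      >             target.params["adjusted"] = True
>      >
>      >     a3 = Component(complexity=1)             # simple enough to need no tuner
>      >     a2 = Component(complexity=5, tuner=a3)
>      >     a1 = Component(complexity=10, tuner=a2)
>      >     a1.tuner.tune(a1)                        # A2 tunes A1's parameters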
>      >
>      > Now, I'm not saying that Novamente (for instance) is explicitly built
>      > according to this architecture: it doesn't have N components wherein
>      > component A_N tunes the parameters of component A_(N+1).
>      >
>      > But in many ways, throughout the architecture, it relies on this
>     sort of
>      > fundamental logic.
>      >
>      > Obviously it is not the case that every system of complexity X can
>      > be parameter-tuned by a system of complexity less than X. The
>     question
>      > however is whether an AGI system can be built of such components.
>      > I suggest the answer is yes -- and furthermore suggest that this is
>      > pretty much the ONLY way to do it...
>      >
>      > Your intuition is that this is not possible, but you don't have a
>     proof
>      > of this...
>      >
>      > And yes, I realize the above argument of mine is conceptual only --
>      > I haven't
>      > given a formal definition of complexity. There are many, but that
>     would
>      > lead into a mess of math that I don't have time to deal with
>     right now,
>      > in the context of answering an email...
>      >
>      > -- Ben G





_______________________________________
James Ratcliff - http://falazar.com
Looking for something...
       
