Re: [agi] None of you seem to be able ...

2007-12-11 Thread Richard Loosemore

James Ratcliff wrote:
What I don't see, then, is anywhere where System 2 (a neural net?) is 
better than System 1, or where it avoids the complexity issues.


I was just giving an example of the degree of flexibility required - the 
exact details of this example are not important.


My point was that dealing with the complex systems problem requires you 
to explore an extremely large range of *architectural* choices, and 
there is no way that these could be explored by "parameter tuning" (at 
least in the way that phrase is being used here).


What I am devising is a systematic way to parameterize those 
architectural choices, but that is orders of magnitude more 
sophisticated than the kind of parameter tuning that Ben (and others) 
have been talking about.
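
To make that distinction concrete, here is a minimal Python sketch. The option 
names are entirely hypothetical and this is not Loosemore's actual scheme; it 
only contrasts tuning numbers inside one fixed design with parameterizing the 
design choices themselves:

    from itertools import product

    # Ordinary parameter tuning: a numeric search inside ONE fixed architecture.
    numeric_space = {
        "learning_rate": [0.001, 0.01, 0.1],
        "activation_decay": [0.9, 0.95, 0.99],
    }

    # Parameterizing architectural choices: each "parameter" selects a
    # qualitatively different mechanism, so every combination is a distinct design.
    architectural_space = {
        "truth_value_scheme": ["internal_scalar", "external_operator"],
        "symbol_content": ["hand_coded_frames", "learned_clusters"],
        "control_regime": ["serial_inference", "parallel_relaxation"],
    }

    def enumerate_designs(space):
        """Yield every combination of choices; for the architectural space each
        combination is a different system design, not a tuned variant of one."""
        keys = list(space)
        for combo in product(*(space[k] for k in keys)):
            yield dict(zip(keys, combo))

    for design in enumerate_designs(architectural_space):
        print(design)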



I dont have a goal of system 2 from system one yet.


And I can't parse this sentence.




Richard Loosemore




James

Richard Loosemore <[EMAIL PROTECTED]> wrote:


Well, this wasn't quite what I was pointing to: there will always be a
need for parameter tuning. That goes without saying.

The point was that if an AGI developer were to commit to system 1, they
are never going to get to the (hypothetical) system 2 by anything as
trivial as parameter tuning. Therefore parameter tuning is useless for
curing the complex systems problem.

That is why I do not accept that parameter tuning is an adequate
response to the problem.



Richard Loosemore



James Ratcliff wrote:
 > James: Either of these systems described will have a Complexity Problem;
 > any AGI will, because it is a very complex system.
 > System 1 I don't believe is strictly practical, as few truth values can
 > be stored locally, directly on the frame.  More realistic is that there
 > may be a temporary value such as:
 >    "I like cats"  t=0.9
 > which is calculated from some other backing facts, such as
 >    I said I like cats. t=1.0
 >    I like Rosemary (a cat) t=0.8
 >
 > >then parameter tuning will never be good enough, it will have to be a
 > >huge and very serious new approach to making our AGI designs flexible
 > >at the design level.
 > System 2, though it uses unnamed parameters, would still need to
 > determine these temporary values.  Any representation system must have
 > parameter tuning in some form.
 >
 > Either of these systems has the same problem, though, of updating the
 > information, such as:
 > Seen: "I don't like Ganji (a cat)"; both systems must update their
 > representation with this new information.
 >
 > Neither a symbol system nor a neural network (the closest to what you
 > mean by System 2?) has been shown able to scale up to the larger system
 > needed for an AGI, but I don't believe either has been shown ineffective.
 >
 > Whether a system explicitly or implicitly stores the information, I
 > believe you must be able to ask it the reasoning behind any thought
 > process.  This can be done with either system, and may give a very long
 > answer; but once you get a system that makes decisions and cannot
 > explain its reasoning, that is a very scary thought, and it is truly
 > acting irrationally as I see it.
 >
 > While you can't extract a small portion of the representation from
 > System 1 or 2 outside of the whole, you must be able to print out the
 > calculated values that a Frame-type system shows.
 >
 > James
 >
 > Richard Loosemore wrote:
 >
 > James Ratcliff wrote:
 > > >However, part of the key to intelligence is **self-tuning**.
 > >
 > > >I believe that if an AGI system is built the right way, it can
 > > >effectively tune its own parameters, hence adaptively managing its
 > > >own complexity.
 > >
 > > I agree with Ben here, isn't one of the core concepts of AGI the
 > > ability to modify its behavior and to learn?
 >
 > That might sound like a good way to proceed, but now consider this.
 >
 > System 1: Suppose that the AGI is designed with a "symbol system" in
 > which the symbols are very much mainstream-style symbols, and one
 > aspect of them is that there are "truth-values" associated with the
 > statements that use those symbols (as in "I like cats", t=0.9).
 >
 > Now suppose that the very fact that truth values were being
 > *explicitly* represented and manipulated by the system was causing
 > it to run smack bang into the Complex Systems Problem.
 >
 > In other words, suppose that you cannot get that kind of design to
 > work because when it scales up the whole truth-value maintenance
 > mechanism just comes apart.
 >
 >
 > System 2: Suppose, further, that the only AGI systems that really do
 > work are ones in which the symbols never use "truth values" but use
 > other stuff (for which there is no interpretation) and that the thing
 > we call a "truth value" is actually the result of an operator that
 > can be applied to a bunch of connected symbols.
Re: [agi] None of you seem to be able ...

2007-12-11 Thread James Ratcliff
What I don't see, then, is anywhere where System 2 (a neural net?) is better 
than System 1, or where it avoids the complexity issues.

I dont have a goal of system 2 from system one yet.

James

Richard Loosemore <[EMAIL PROTECTED]> wrote: 
Well, this wasn't quite what I was pointing to:  there will always be a 
need for parameter tuning.  That goes without saying.

The point was that if an AGI developer were to commit to system 1, they 
are never going to get to the (hypothetical) system 2 by anything as 
trivial as parameter tuning.  Therefore parameter tuning is useless for 
curing the complex systems problem.

That is why I do not accept that parameter tuning is an adequate 
response to the problem.



Richard Loosemore



James Ratcliff wrote:
> James: Either of these systems described will have a Complexity Problem, 
> any AGI will because it is a very complex system. 
> System 1 I don't believe is strictly practical, as few truth values can 
> be stored locally, directly on the frame.  More realistic is that there 
> may be a temporary value such as:
>   "I like cats"  t=0.9
> Which is calculated from some other backing facts, such as
> I said I like cats. t=1.0
> I like Rosemary (a cat) t=0.8
> 
>  >then parameter tuning will never be good enough, it will have to be a 
> huge and very >serious new approach to making our AGI designs flexible 
> at the design level.
> System 2, though it uses unnamed parameters, would still need to 
> determine these temporary values.  Any representation system must have 
> parameter tuning in some form.
> 
> Either of these systems has the same problem, though, of updating the 
> information, such as:
> Seen: "I don't like Ganji (a cat)"; both systems must update their 
> representation with this new information.
> 
> Neither a symbol system nor a neural network (the closest to what you 
> mean by System 2?) has been shown able to scale up to the larger system 
> needed for an AGI, but I don't believe either has been shown ineffective.
> 
> Whether a system explicitly or implicitly stores the information, I 
> believe you must be able to ask it the reasoning behind any thought 
> process.  This can be done with either system, and may give a very long 
> answer; but once you get a system that makes decisions and cannot 
> explain its reasoning, that is a very scary thought, and it is truly 
> acting irrationally as I see it.
> 
> While you can't extract a small portion of the representation from 
> System 1 or 2 outside of the whole, you must be able to print out the 
> calculated values that a Frame-type system shows.
> 
> James
> 
> Richard Loosemore wrote:
> 
> James Ratcliff wrote:
>  > >However, part of the key to intelligence is **self-tuning**.
>  >
>  > >I believe that if an AGI system is built the right way, it can
>  > >effectively tune its own parameters, hence adaptively managing its
>  > >own complexity.
>  >
>  > I agree with Ben here, isn't one of the core concepts of AGI the
>  > ability to modify its behavior and to learn?
> 
> That might sound like a good way to proceed, but now consider this.
> 
> System 1: Suppose that the AGI is designed with a "symbol system" in
> which the symbols are very much mainstream-style symbols, and one
> aspect of them is that there are "truth-values" associated with the
> statements that use those symbols (as in "I like cats", t=0.9).
> 
> Now suppose that the very fact that truth values were being
> *explicitly* represented and manipulated by the system was causing
> it to run smack bang into the Complex Systems Problem.
> 
> In other words, suppose that you cannot get that kind of design to
> work because when it scales up the whole truth-value maintenance
> mechanism just comes apart.
> 
> 
> System 2: Suppose, further, that the only AGI systems that really do
> work are ones in which the symbols never use "truth values" but use
> other stuff (for which there is no interpretation) and that the
> thing we call a "truth value" is actually the result of an operator
> that can be applied to a bunch of connected symbols. This
> [truth-value = external operator] idea is fundamentally different
> from [truth-value = internal parameter] idea, obviously.
> 
> Now here is my problem: how would "parameter-tuning" ever cause that
> first AGI design to realise that it had to abandon one bit of its
> architecture and redesign itself?
> 
> Surely this is more than parameter tuning? There is no way it could
> simply stop working and completely redesign all of its internal
> architecture to not use the t-values, and make the operators etc etc.!
> 
> So here is the rub: if the CSP does cause this kind of issue (and
> that is why I invented the CSP idea in the first place, because it
> was precisely those kinds of architectural issues that seemed
 > wrong), then parameter tuning will never be good enough, it will have
 > to be a huge and very serious new approach to making our AGI designs
 > flexible at the design level.

Re: [agi] None of you seem to be able ...

2007-12-11 Thread Richard Loosemore


Well, this wasn't quite what I was pointing to:  there will always be a 
need for parameter tuning.  That goes without saying.


The point was that if an AGI developer were to commit to system 1, they 
are never going to get to the (hypothetical) system 2 by anything as 
trivial as parameter tuning.  Therefore parameter tuning is useless for 
curing the complex systems problem.


That is why I do not accept that parameter tuning is an adequate 
response to the problem.




Richard Loosemore



James Ratcliff wrote:
James: Either of these systems described will have a Complexity Problem, 
any AGI will because it is a very complex system. 
System 1 I don't believe is strictly practical, as few truth values can 
be stored locally, directly on the frame.  More realistic is that there 
may be a temporary value such as:

  "I like cats"  t=0.9
Which is calculated from some other backing facts, such as
I said I like cats. t=1.0
I like Rosemary (a cat) t=0.8

 >then parameter tuning will never be good enough, it will have to be a 
huge and very >serious new approach to making our AGI designs flexible 
at the design level.
System 2, though it uses unnamed parameters, would still need to 
determine these temporary values.  Any representation system must have 
parameter tuning in some form.


Either of these systems has the same problem, though, of updating the 
information, such as:
Seen: "I don't like Ganji (a cat)"; both systems must update their 
representation with this new information.


Neither a symbol system nor a neural network (the closest to what you 
mean by System 2?) has been shown able to scale up to the larger system 
needed for an AGI, but I don't believe either has been shown ineffective.


Whether a system explicitly or implicitly stores the information, I 
believe you must be able to ask it the reasoning behind any thought 
process.  This can be done with either system, and may give a very long 
answer; but once you get a system that makes decisions and cannot 
explain its reasoning, that is a very scary thought, and it is truly 
acting irrationally as I see it.


While you can't extract a small portion of the representation from 
System 1 or 2 outside of the whole, you must be able to print out the 
calculated values that a Frame-type system shows.


James

Richard Loosemore <[EMAIL PROTECTED]> wrote:

James Ratcliff wrote:
 > >However, part of the key to intelligence is **self-tuning**.
 >
 > >I believe that if an AGI system is built the right way, it can
 > >effectively tune its own parameters, hence adaptively managing its
 > >own complexity.
 >
 > I agree with Ben here, isn't one of the core concepts of AGI the
 > ability to modify its behavior and to learn?

That might sound like a good way to proceed, but now consider this.

System 1: Suppose that the AGI is designed with a "symbol system" in
which the symbols are very much mainstream-style symbols, and one
aspect of them is that there are "truth-values" associated with the
statements that use those symbols (as in "I like cats", t=0.9).

Now suppose that the very fact that truth values were being
*explicitly* represented and manipulated by the system was causing
it to run smack bang into the Complex Systems Problem.

In other words, suppose that you cannot get that kind of design to
work because when it scales up the whole truth-value maintenance
mechanism just comes apart.


System 2: Suppose, further, that the only AGI systems that really do
work are ones in which the symbols never use "truth values" but use
other stuff (for which there is no interpretation) and that the
thing we call a "truth value" is actually the result of an operator
that can be applied to a bunch of connected symbols. This
[truth-value = external operator] idea is fundamentally different
from [truth-value = internal parameter] idea, obviously.

Now here is my problem: how would "parameter-tuning" ever cause that
first AGI design to realise that it had to abandon one bit of its
architecture and redesign itself?

Surely this is more than parameter tuning? There is no way it could
simply stop working and completely redesign all of its internal
architecture to not use the t-values, and make the operators etc etc.!

So here is the rub: if the CSP does cause this kind of issue (and
that is why I invented the CSP idea in the first place, because it
was precisely those kinds of architectural issues that seemed
wrong), then parameter tuning will never be good enough, it will
have to be a huge and very serious new approach to making our AGI
designs flexible at the design level.


Does that make sense?




Richard Loosemore








 > This will have to be done with a large amount of self-tuning, as we
 > will not be changing parameters for every action; that wouldn't be
 > efficient.  (this part does not require actual self-code writing just
 > yet)

Re: [agi] None of you seem to be able ...

2007-12-11 Thread Richard Loosemore

Mike Tintner wrote:


Richard:> Suppose, further, that the only AGI systems that really do 
work are ones
in which the symbols never use "truth values" but use other stuff (for 
which there is no interpretation) and that the thing we call a "truth 
value" is actually the result of an operator that can be applied to a 
bunch of connected symbols.  This [truth-value = external operator] 
idea is fundamentally different from [truth-value = internal 
parameter] idea, obviously.


I almost added to my last post that another reason the brain never 
seizes up is that its concepts (& its entire representational 
operations) are open-ended trees, relatively ill-defined and 
ill-structured, and therefore endlessly open to reinterpretation.  
Supergeneral concepts like "Go away," "Come here", "put this over 
there", or indeed "is that true?" enable it to be flexible and 
creatively adaptive, especially if it gets stuck - and find other ways, 
for example,  to "go" "come," "put" or deem as "true" etc.


Is this something like what you are on about?


Well, I agree that a true AGI will need this kind of flexibility.

That wasn't the issue I was addressing in the above quote, but by itself 
it is true that you need this.  It is easy to get this flexibility in an 
AGI:  it is just that AGI developers tend not to make that a priority, 
for a variety of reasons.



Richard Loosemore



Re: [agi] None of you seem to be able ...

2007-12-11 Thread James Ratcliff
James: Either of these systems described will have a Complexity Problem, any 
AGI will because it is a very complex system.  
System 1 I don't believe is strictly practical, as few truth values can be 
stored locally, directly on the frame.  More realistic is that there may be a 
temporary value such as:
  "I like cats"  t=0.9
Which is calculated from some other backing facts, such as 
I said I like cats. t=1.0
I like Rosemary (a cat) t=0.8

>then parameter tuning will never be good enough, it will have to be a huge and 
>very >serious new approach to making our AGI designs flexible at the design 
>level.
System 2, though it uses unnamed parameters, would still need to determine 
these temporary values.  Any representation system must have parameter tuning 
in some form.

Either of these systems has the same problem, though, of updating the 
information, such as: 
Seen: "I don't like Ganji (a cat)"; both systems must update their 
representation with this new information.
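
As a toy illustration of that kind of temporary, derived value, here is a small 
Python sketch; the combination rule (a plain average) is my own placeholder, not 
something James or any system discussed here actually proposes:

    # Backing facts and their truth values, taken from James's example.
    backing_facts = {
        "I said I like cats.": 1.0,
        "I like Rosemary (a cat).": 0.8,
    }

    def derived_truth(facts):
        """Temporary value computed from backing facts, not stored on the frame."""
        return sum(facts.values()) / len(facts)

    print(round(derived_truth(backing_facts), 2))   # 0.9 -> "I like cats" t=0.9

    # A new observation arrives; either kind of system must fold it in somehow.
    backing_facts["I don't like Ganji (a cat)."] = 0.0
    print(round(derived_truth(backing_facts), 2))   # 0.6 -> the derived value drops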

Neither a symbol system nor a neural network (the closest to what you mean by 
System 2?) has been shown able to scale up to the larger system needed for an 
AGI, but I don't believe either has been shown ineffective.

Whether a system explicitly or implicitly stores the information, I believe you 
must be able to ask it the reasoning behind any thought process.  This can be 
done with either system, and may give a very long answer; but once you get a 
system that makes decisions and cannot explain its reasoning, that is a very 
scary thought, and it is truly acting irrationally as I see it.

While you can't extract a small portion of the representation from System 1 or 
2 outside of the whole, you must be able to print out the calculated values 
that a Frame-type system shows.

James

Richard Loosemore <[EMAIL PROTECTED]> wrote:
James Ratcliff wrote:
>  >However, part of the key to intelligence is **self-tuning**.
> 
>  >I believe that if an AGI system is built the right way, it can effectively
>  >tune its own parameters, hence adaptively managing its own complexity.
> 
> I agree with Ben here, isn't one of the core concepts of AGI the ability 
> to modify its behavior and to learn?

That might sound like a good way to proceed, but now consider this.

System 1: Suppose that the AGI is designed with a "symbol system" in which the 
symbols are very much mainstream-style symbols, and one aspect of them is that 
there are "truth-values" associated with the statements that use those symbols 
(as in "I like cats", t=0.9).

Now suppose that the very fact that truth values were being *explicitly* 
represented and manipulated by the system was causing it to run smack bang into 
the Complex Systems Problem.

In other words, suppose that you cannot get that kind of design to work because 
when it scales up the whole truth-value maintenance mechanism just comes apart.


System 2: Suppose, further, that the only AGI systems that really do work are 
ones in which the symbols never use "truth values" but use other stuff (for 
which there is no interpretation) and that the thing we call a "truth value" is 
actually the result of an operator that can be applied to a bunch of connected 
symbols.  This [truth-value = external operator] idea is fundamentally 
different from [truth-value = internal parameter] idea, obviously.

Now here is my problem:  how would "parameter-tuning" ever cause that first AGI 
design to realise that it had to abandon one bit of its architecture and 
redesign itself?

Surely this is more than parameter tuning?  There is no way it could simply stop 
working and completely redesign all of its internal 
architecture to not use the t-values, and make the operators etc etc.!

So here is the rub:  if the CSP does cause this kind of issue (and that is why 
I invented the CSP idea in the first place, because it was precisely those 
kinds of architectural issues that seemed wrong), then parameter tuning will 
never be good enough, it will have to be a huge and very serious new approach 
to making our AGI designs flexible at the design level.


Does that make sense?




Richard Loosemore








> This will have to be done with a large amount of self-tuning, as we will 
> not be changing parameters for every action; that wouldn't be efficient.  
> (this part does not require actual self-code writing just yet)
> 
> Its more a matter of finding out a way to guide the AGI in changing the 
> parameters, checking the changes and reflecting back over the changes to 
> see if they are effective for future events.
> 
> What is needed at some point is being able to converse at a high level 
> with the AGI, and to correct its behaviour, such as "Don't touch that, 
> 'cause it will have a bad effect", and having the AGI do all of the 
> parameter changing and link building and strengthening/weakening 
> necessary in its memory.  It may do this in a very complex way and may 
> affect many parts of its systems, but by multiple reinforcement we 
> should be able to guide the overall behaviour if not all of the 
> parameters directly.

Re: [agi] None of you seem to be able ...

2007-12-11 Thread Mike Tintner


Richard:> Suppose, further, that the only AGI systems that really do work 
are ones
in which the symbols never use "truth values" but use other stuff (for 
which there is no interpretation) and that the thing we call a "truth 
value" is actually the result of an operator that can be applied to a 
bunch of connected symbols.  This [truth-value = external operator] idea 
is fundamentally different from [truth-value = internal parameter] idea, 
obviously.


I almost added to my last post that another reason the brain never seizes up 
is that its concepts (& its entire representational operations) are 
open-ended trees, relatively ill-defined and ill-structured, and therefore 
endlessly open to reinterpretation.  Supergeneral concepts like "Go away," 
"Come here", "put this over there", or indeed "is that true?" enable it to 
be flexible and creatively adaptive, especially if it gets stuck - and find 
other ways, for example,  to "go" "come," "put" or deem as "true" etc.


Is this something like what you are on about? 





Re: Human Irrationality [WAS Re: [agi] None of you seem to be able ...]

2007-12-11 Thread James Ratcliff
Irrationality is used to describe thinking and actions which are, or appear 
to be, less useful or logical than the other alternatives; rational would be 
the opposite of that.

This line of thinking is more concerned with the behaviour of the entities, 
which requires goal orientation and other things.

An irrational being is NOT working effectively towards the goal, according to 
this.  This may be necessary in order to discover new routes and unique 
solutions to a problem, and by this description it will be included in most 
AGIs I have heard described so far.

The other definition which seems to be in the air around here is 
irrational - acting without reason or logic.

An entity that acts entirely without reason or logic is a totally random being: 
it will choose to do something for no reason, and will never find any goals or 
solutions without accidentally hitting them.

In AGI terms, any entity given multiple equally rewarding alternative paths to 
a goal may randomly select any of them.
This may be considered acting without reason, as there was no real basis for 
choosing one path as opposed to the other, but it may also be very reasonable: 
given any situation where either path can be chosen, choosing one is 
reasonable.  (Choosing no path at that point would indeed be irrational and 
pointless.)
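
A tiny Python sketch of that tie-breaking case; the path names and reward 
numbers are made up purely for illustration:

    import random

    # Several paths with equal expected reward: no basis for preferring one,
    # yet picking *some* path is the reasonable act.
    paths = {"path_1": 10.0, "path_2": 10.0, "path_3": 7.5}

    best = max(paths.values())
    tied = [name for name, reward in paths.items() if reward == best]
    print(random.choice(tied))   # arbitrary among the ties, but not irrational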

I haven't seen any solutions proposed that require any real level of "acting 
without reason", and neural nets and the others are all reasonable, though the 
reasoning may be complex and hidden from us, or hard to understand.

The example given previously, about the computer system that changes its 
thinking in the middle of discovering a solution, is not irrational: it is 
just continuing to follow its rules, it can still change those rules as it 
allows, and it may have very good reason for doing so.

James Ratcliff

Mike Tintner <[EMAIL PROTECTED]> wrote:
> Richard: Mike,
> I think you are going to have to be specific about what you mean by 
> "irrational" because you mostly just say that all the processes that could 
> possibly exist in computers are rational, and I am wondering what else is 
> there that "irrational" could possibly mean.  I have named many processes 
> that seem to me to fit the "irrational" definition, but without being too 
> clear about it you have declared them all to be just rational, so now I 
> have no idea what you can be meaning by the word.
>
Richard,

Er, it helps to read my posts. From my penultimate post to you:

"If a system can change its approach and rules of reasoning at literally any 
step of
problem-solving, then it is truly "crazy"/ irrational (think of a crazy
path). And it will be capable of producing all the human irrationalities
that I listed previously - like not even defining or answering the problem.
It will by the same token have the capacity to be truly creative, because it
will ipso facto be capable of lateral thinking at any step of
problem-solving. Is your system capable of that? Or anything close? Somehow
I doubt it, or you'd already be claiming the solution to both AGI and
computational creativity."

A rational system follows a set of rules in solving a problem  (which can 
incl. rules that self-modify according to metarules) ;  a creative, 
irrational system can change/break/create any and all rules (incl. 
metarules) at any point of solving a problem  -  the ultimate, by 
definition, in adaptivity. (Much like you, and indeed all of us, change the 
rules of engagement much of the time in our discussions here).

Listen, no need to reply - because you're obviously not really interested. 
To me that's ironic, though, because this is absolutely the most central 
issue there is in AGI. But no matter.



Re: [agi] None of you seem to be able ...

2007-12-11 Thread Richard Loosemore

James Ratcliff wrote:

 >However, part of the key to intelligence is **self-tuning**.

 >I believe that if an AGI system is built the right way, it can effectively
 >tune its own parameters, hence adaptively managing its own complexity.

I agree with Ben here, isn't one of the core concepts of AGI the ability 
to modify its behavior and to learn?


That might sound like a good way to proceed, but now consider this.

Suppose that the AGI is designed with a "symbol system" in which the 
symbols are very much mainstream-style symbols, and one aspect of them 
is that there are "truth-values" associated with the statements that use 
those symbols (as in "I like cats", t=0.9).


Now suppose that the very fact that truth values were being *explicitly* 
represented and manipulated by the system was causing it to run smack 
bang into the Complex Systems Problem.


In other words, suppose that you cannot get that kind of design to work 
because when it scales up the whole truth-value maintenance mechanism 
just comes apart.


Suppose, further, that the only AGI systems that really do work are ones 
in which the symbols never use "truth values" but use other stuff (for 
which there is no interpretation) and that the thing we call a "truth 
value" is actually the result of an operator that can be applied to a 
bunch of connected symbols.  This [truth-value = external operator] idea 
is fundamentally different from [truth-value = internal parameter] idea, 
obviously.
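
For concreteness, here is a rough Python sketch of that contrast, with entirely 
made-up class names; neither half is claimed to be anyone's actual system, it 
only shows where the "truth value" lives in each case:

    # System 1 style: the truth value is an internal parameter stored on each
    # statement and explicitly maintained by the system.
    class Statement:
        def __init__(self, text, truth):
            self.text = text
            self.truth = truth                 # explicit slot, e.g. 0.9

    s1 = Statement("I like cats", truth=0.9)

    # System 2 style: symbols carry only uninterpreted internal state; "truth"
    # exists only as the result of an external operator applied to a bundle
    # of connected symbols.
    class Symbol:
        def __init__(self, name, state):
            self.name = name
            self.state = state                 # no interpretation as a truth value

    def truth_operator(symbols):
        """External read-out over a cluster of connected symbols."""
        return sum(s.state for s in symbols) / len(symbols)

    cluster = [Symbol("cat", 0.95), Symbol("like", 0.85)]
    print(truth_operator(cluster))             # 0.9, produced on demand, never stored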


Now here is my problem:  how would "parameter-tuning" ever cause that 
first AGI design to realise that it had to abandon one bit of its 
architecture and redesign itself?


Surely this is more than parameter tuning?  There is no way it could 
simply stop working and completely redesign all of its internal 
architecture to not use the t-values, and make the operators etc etc.!


So here is the rub:  if the CSP does cause this kind of issue (and that 
is why I invented the CSP idea in the first place, because it was 
precisely those kinds of architectural issues that seemed wrong), then 
parameter tuning will never be good enough, it will have to be a huge 
and very serious new approach to making our AGI designs flexible at the 
design level.



Does that make sense?




Richard Loosemore








This will have to be done with a large amount of self-tuning, as we will 
not be changing parameters for every action; that wouldn't be efficient.  
(this part does not require actual self-code writing just yet)


Its more a matter of finding out a way to guide the AGI in changing the 
parameters, checking the changes and reflecting back over the changes to 
see if they are effective for future events.


What is needed at some point is being able to converse at a high level 
with the AGI, and to correct its behaviour, such as "Don't touch that, 
'cause it will have a bad effect", and having the AGI do all of the 
parameter changing and link building and strengthening/weakening 
necessary in its memory.  It may do this in a very complex way and may 
affect many parts of its systems, but by multiple reinforcement we 
should be able to guide the overall behaviour if not all of the 
parameters directly.


James Ratcliff


Benjamin Goertzel <[EMAIL PROTECTED]> wrote:

 > Conclusion: there is a danger that the complexity that even Ben agrees
 > must be present in AGI systems will have a significant impact on our
 > efforts to build them. But the only response to this danger at the
 > moment is the bare statement made by people like Ben that "I do not
 > think that the danger is significant". No reason given, no explicit
 > attack on any component of the argument I have given, only a statement
 > of intuition, even though I have argued that intuition cannot in
 > principle be a trustworthy guide here.

But Richard, your argument ALSO depends on intuitions ...

I'll try, though, to more concisely frame the reason I think your argument
is wrong.

I agree that AGI systems contain a lot of complexity in the dynamical-
systems-theory sense.

And I agree that tuning all the parameters of an AGI system externally
is likely to be intractable, due to this complexity.

However, part of the key to intelligence is **self-tuning**.

I believe that if an AGI system is built the right way, it can effectively
tune its own parameters, hence adaptively managing its own complexity.

Now you may say there's a problem here: If AGI component A2 is to
tune the parameters of AGI component A1, and A1 is complex, then
A2 has got to also be complex ... and who's gonna tune its parameters?

So the answer has got to be that: To effectively tune the parameters
of an AGI component of complexity X requires an AGI component of
complexity a bit less than X. Then one can build a self-tuning AGI
system, if one does the job right.

Now, I'm not saying that Novamente (for instance) is explicitly built
according to this architecture: it doesn't have N components wherein
component A_N tunes the parameters of component A_(N+1).

Re: [agi] None of you seem to be able ...

2007-12-11 Thread James Ratcliff
>However, part of the key to intelligence is **self-tuning**.

>I believe that if an AGI system is built the right way, it can effectively
>tune its own parameters, hence adaptively managing its own complexity.

I agree with Ben here, isn't one of the core concepts of AGI the ability to 
modify its behavior and to learn?

This will have to be done with a large amount of self-tuning, as we will not be 
changing parameters for every action; that wouldn't be efficient.  (this part 
does not require actual self-code writing just yet)

Its more a matter of finding out a way to guide the AGI in changing the 
parameters, checking the changes and reflecting back over the changes to see if 
they are effective for future events.

What is needed at some point is being able to converse at a high level with the 
AGI, and to correct its behaviour, such as "Don't touch that, 'cause it will 
have a bad effect", and having the AGI do all of the parameter changing and link 
building and strengthening/weakening necessary in its memory.  It may do this 
in a very complex way and may affect many parts of its systems, but by multiple 
reinforcement we should be able to guide the overall behaviour if not all of 
the parameters directly.

James Ratcliff


Benjamin Goertzel <[EMAIL PROTECTED]> wrote:
> Conclusion:  there is a danger that the complexity that even Ben agrees
> must be present in AGI systems will have a significant impact on our
> efforts to build them.  But the only response to this danger at the
> moment is the bare statement made by people like Ben that "I do not
> think that the danger is significant".  No reason given, no explicit
> attack on any component of the argument I have given, only a statement
> of intuition, even though I have argued that intuition cannot in
> principle be a trustworthy guide here.

But Richard, your argument ALSO depends on intuitions ...

I'll try, though, to more concisely frame the reason I think your argument
is wrong.

I agree that AGI systems contain a lot of complexity in the dynamical-
systems-theory sense.

And I agree that tuning all the parameters of an AGI system externally
is likely to be intractable, due to this complexity.

However, part of the key to intelligence is **self-tuning**.

I believe that if an AGI system is built the right way, it can effectively
tune its own parameters, hence adaptively managing its own complexity.

Now you may say there's a problem here: If AGI component A2 is to
tune the parameters of AGI component A1, and A1 is complex, then
A2 has got to also be complex ... and who's gonna tune its parameters?

So the answer has got to be that: To effectively tune the parameters
of an AGI component of complexity X, requires an AGI component of
complexity a bit less than X.  Then one can build a self-tuning AGI system,
if one does the job right.
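
Purely as a conceptual sketch of that point (not Novamente's actual design), 
here is a Python toy in which each component is tuned by a slightly simpler 
one, with "complexity" crudely proxied by the number of free parameters:

    import random

    class Component:
        """A component whose 'complexity' is crudely proxied by its parameter count."""
        def __init__(self, n_params):
            self.params = [random.uniform(-1, 1) for _ in range(n_params)]

        @property
        def complexity(self):
            return len(self.params)

        def tune(self, other, error):
            """Nudge a *more* complex component's parameters to reduce its error."""
            assert self.complexity < other.complexity, "tuner must be a bit simpler"
            for i in range(other.complexity):
                other.params[i] -= 0.01 * error * other.params[i]

    # A chain of components, each a bit simpler than the one it tunes.
    a1, a2, a3 = Component(100), Component(80), Component(60)
    a3.tune(a2, error=0.5)   # simpler tunes more complex ...
    a2.tune(a1, error=0.5)   # ... so the regress bottoms out instead of exploding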

Now, I'm not saying that Novamente (for instance) is explicitly built
according to this architecture: it doesn't have N components wherein
component A_N tunes the parameters of component A_(N+1).

But in many ways, throughout the architecture, it relies on this sort of
fundamental logic.

Obviously it is not the case that every system of complexity X can
be parameter-tuned by a system of complexity less than X.  The question
however is whether an AGI system can be built of such components.
I suggest the answer is yes -- and furthermore suggest that this is
pretty much the ONLY way to do it...

Your intuition is that this is not possible, but you don't have a proof
of this...

And yes, I realize the above argument of mine is conceptual only -- I haven't
given a formal definition of complexity.  There are many, but that would
lead into a mess of math that I don't have time to deal with right now,
in the context of answering an email...

-- Ben G


Re: [agi] None of you seem to be able ...

2007-12-11 Thread James Ratcliff
*I just want to jump in here and say I appreciate the content of this post as 
opposed to many of the posts of late which were just name calling and 
bickering... hope to see more content instead.*

Richard Loosemore <[EMAIL PROTECTED]> wrote:
Ed Porter wrote:
> Jean-Paul,
> 
> Although complexity is one of the areas associated with AI where I have less
> knowledge than many on the list, I was aware of the general distinction you
> are making.  
> 
> What I was pointing out in my email to Richard Loosemore was that the
> definitions in his paper "Complex Systems, Artificial Intelligence and
> Theoretical Psychology," for "irreducible computability" and "global-local
> interconnect" themselves are not totally clear about this distinction, and
> as a result, when Richard says that those two issues are an unavoidable part
> of AGI design that must be much more deeply understood before AGI can
> advance, by the looser definitions, which would cover the types of
> complexity involved in large matrix calculations and the design of a massive
> supercomputer, of course those issues would arise in AGI design, but it's no
> big deal because we have a long history of dealing with them.
> 
> But in my email to Richard I said I was assuming he was not using these
> looser definitions of the words, because if he were, they would not present
> the unexpected difficulties of the type he has been predicting.  I said I
> thought he was dealing more with the potentially unruly type of complexity
> that I assume you were talking about.
> 
> I am aware of that type of complexity being a potential problem, but I have
> designed my system to hopefully control it.  A modern-day well functioning
> economy is complex (people at the Santa Fe Institute often cite economies as
> examples of complex systems), but it is often amazingly unchaotic
> considering how loosely it is organized and how many individual entities it
> has in it, and how many transitions it is constantly undergoing.  Usually,
> unless something bangs on it hard (such as having the price of a major
> commodity all of a sudden triple), it has a fair amount of stability, while
> constantly creating new winners and losers (which is a productive form of
> mini-chaos).  Of course in the absence of regulation it is naturally prone
> to boom and bust cycles.  

Ed,

I now understand that you have indeed heard of complex systems before, 
but I must insist that in your summary above you have summarized what 
they are in a way that completely contradicts what they are!

A complex system such as the economy can and does have modes in which it 
appears to be stable.  This does not contradict the complexity at all.  
A system is not complex because it is unstable.

I am struggling here, Ed.  I want to go on to explain exactly what I 
mean (and what complex systems theorists mean) but I cannot see a way to 
do it without writing half a book this afternoon.

Okay, let me try this.

Imagine that we got a bunch of computers and connected them with a 
network that allowed each one to talk to (say) the ten nearest machines.

Imagine that each one is running a very simple program:  it keeps a 
handful of local parameters (U, V, W, X, Y) and it updates the values of 
its own parameters according to what the neighboring machines are doing 
with their parameters.

How does it do the updating?  Well, imagine some really messy and 
bizarre algorithm that involves looking at the neighbors' values, then 
using them to cross reference each other, and introduce delays and 
gradients and stuff.

On the face of it, you might think that the result will be that the U V 
W X Y values just show a random sequence of fluctuations.
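
Here is a toy Python version of that thought experiment, just to make the setup 
concrete; the particular update rule below is made up, standing in for "some 
really messy and bizarre algorithm":

    import math
    import random

    N = 100                       # number of "machines", arranged in a ring
    KEYS = "UVWXY"                # the local parameters each machine keeps
    nodes = [{k: random.random() for k in KEYS} for _ in range(N)]

    def neighbours(state, i):
        """The ten nearest machines on the ring."""
        return [state[(i + d) % N] for d in (-5, -4, -3, -2, -1, 1, 2, 3, 4, 5)]

    def step(state):
        """Synchronous update: each machine cross-references its neighbours'
        values in an arbitrary, non-linear way (the 'bizarre algorithm')."""
        new_state = []
        for i, node in enumerate(state):
            nb = neighbours(state, i)
            updated = {}
            for j, k in enumerate(KEYS):
                mix = sum(n[KEYS[(j + 1) % 5]] * n[KEYS[(j + 3) % 5]] for n in nb) / len(nb)
                updated[k] = 0.7 * node[k] + 0.3 * math.sin(5.0 * mix)
            new_state.append(updated)
        return new_state

    for _ in range(50):
        nodes = step(nodes)
    # Whether anything like waves shows up in the U values is exactly the kind
    # of question that can only be answered by running it and looking.
    print([round(n["U"], 2) for n in nodes[:10]])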

Well, we know two things about such a system.

1) Experience tells us that even though some systems like that are just 
random mush, there are some (a noticeably large number in fact) that 
have overall behavior that shows 'regularities'.  For example, much to 
our surprise we might see waves in the U values.  And every time two 
waves hit each other, a vortex is created for exactly 20 minutes, then 
it stops.  I am making this up, but that is the kind of thing that could 
happen.

2) The algorithm is so messy that we cannot do any math to analyse and 
predict the behavior of the system.  All we can do is say that we have 
absolutely no techniques that will allow us to make mathematical progress on 
the problem today, and we do not know if at ANY time in future history 
there will be a mathematics that will cope with this system.

What this means is that the waves and vortices we observed cannot be 
"explained" in the normal way.  We see them happening, but we do not 
know why they do.  The bizarre algorithm is the "low level mechanism" 
and the waves and vortices are the "high level behavior", and when I say 
there is a "Global-Local Disconnect" in this system, all I mean is that 
we are completely stuck when it comes to explaining the high level in 
terms of the low level.

Re: Human Irrationality [WAS Re: [agi] None of you seem to be able ...]

2007-12-08 Thread Richard Loosemore

Ed Porter wrote:

Mike,

When I write about "my system", (which sounds like it is designed somewhat
like yours), I am talking about a system that has only been thought about
deeply, but never yet built.

When you write about "my system" do you actually have something up and
running?  If so, hats off to you.  


And, if so, how much do you have up and running, how much of it can you
describe, and what sorts of things can it do and how well does it work?

Ed Porter


You presumably meant the question for me, since I was the one who said 
"my system" in the quote below.


The answer is that I do have a great deal of code implementing various 
aspects of my system, but questions like "how well does it work" are 
premature:  I am experimenting with mechanisms, and building all the 
tools needed to do more systematic experiments on those mechanisms, not 
attempting to build the entire system yet.


For the most part, though, I use the phrase "my system" to mean the 
architecture, which is more detailed than the particular code I have 
written.




Richard Loosemore



-Original Message-
From: Mike Tintner [mailto:[EMAIL PROTECTED] 
Sent: Saturday, December 08, 2007 4:16 PM

To: agi@v2.listbox.com
Subject: Re: Human Irrationality [WAS Re: [agi] None of you seem to be able
..]

Richard:  in my system, decisions about what to do next are the
result of hundreds or thousands of "atoms" (basic units of knowledge,
all of which are active processors) coming together in a very
context-dependent way and trying to form coherent models of the
situation.  This cloud of knowledge atoms will cause an outcome to
emerge, but they almost never go through a sequence of steps, like a
linear computer program, to generate an outcome.  As a result I cannot
exactly predict what they will do on a particular occasion (they will
have a general consistency in their behavior, but that consistency is
not imposed by a sequence of machine instructions, it is emergent).


Sounds - just a tad - like somewhat recent Darwinian selection ideas of how 
the brain thinks. Do you think the brain actually thinks in your way? 
Doesn't have to - but you claim to be based on the brain. (You don't have a 
self engaged in conscious, "to be or not to be," decisionmaking, I take it?)






RE: Human Irrationality [WAS Re: [agi] None of you seem to be able ...]

2007-12-08 Thread Ed Porter
Mike,

When I write about "my system", (which sounds like it is designed somewhat
like yours), I am talking about a system that has only been thought about
deeply, but never yet built.

When you write about "my system" do you actually have something up and
running?  If so, hats off to you.  

And, if so, how much do you have up and running, how much of it can you
describe, and what sorts of things can it do and how well does it work?

Ed Porter

-Original Message-
From: Mike Tintner [mailto:[EMAIL PROTECTED] 
Sent: Saturday, December 08, 2007 4:16 PM
To: agi@v2.listbox.com
Subject: Re: Human Irrationality [WAS Re: [agi] None of you seem to be able
..]

Richard:  in my system, decisions about what to do next are the
result of hundreds or thousands of "atoms" (basic units of knowledge,
all of which are active processors) coming together in a very
context-dependent way and trying to form coherent models of the
situation.  This cloud of knowledge atoms will cause an outcome to
emerge, but they almost never go through a sequence of steps, like a
linear computer program, to generate an outcome.  As a result I cannot
exactly predict what they will do on a particular occasion (they will
have a general consistency in their behavior, but that consistency is
not imposed by a sequence of machine instructions, it is emergent).


Sounds - just a tad - like somewhat recent Darwinian selection ideas of how 
the brain thinks. Do you think the brain actually thinks in your way? 
Doesn't have to - but you claim to be based on the brain. (You don't have a 
self engaged in conscious, "to be or not to be," decisionmaking, I take it?)




Re: Human Irrationality [WAS Re: [agi] None of you seem to be able ...]

2007-12-08 Thread Mike Tintner

Richard:  in my system, decisions about what to do next are the
result of hundreds or thousands of "atoms" (basic units of knowledge,
all of which are active processors) coming together in a very
context-dependent way and trying to form coherent models of the
situation.  This cloud of knowledge atoms will cause an outcome to
emerge, but they almost never go through a sequence of steps, like a
linear computer program, to generate an outcome.  As a result I cannot
exactly predict what they will do on a particular occasion (they will
have a general consistency in their behavior, but that consistency is
not imposed by a sequence of machine instructions, it is emergent).


Sounds - just a tad - like somewhat recent Darwinian selection ideas of how 
the brain thinks. Do you think the brain actually thinks in your way? 
Doesn't have to - but you claim to be based on the brain. (You don't have a 
self engaged in conscious, "to be or not to be," decisionmaking, I take it?)






Re: Human Irrationality [WAS Re: [agi] None of you seem to be able ...]

2007-12-08 Thread Richard Loosemore

Mike Tintner wrote:
Richard: If someone asked that, I couldn't think of anything to say 
except ...

why *wouldn't* it be possible?  It would strike me as just not a
question that made any sense, to ask for the exact reasons why it is
possible to paint things that are not representational.

Jeez, Richard, of course, it's possible... we all agree that AGI is 
possible (well in my case, only with a body). The question is - how? !*? 
That's what we're here for -   to have IDEAS.. rather than handwave... 
(see, I knew you would)  ...in this case, about how a program can be 
maximally adaptive - change course at any point


Hold on a minute there.

What I have been addressing is just your initial statement:

"Cognitive science treats the human mind as basically a programmed 
computational machine much like actual programmed computers - and 
programs are normally conceived of as rational. - coherent sets of steps 
etc."


The *only* point I have been trying to establish is that when you said 
"and programs are normally conceived of as rational" this made no sense 
because programs can do anything at all, rational or irrational.


Now you say "Jeez, Richard, of course, it's possible [to build programs 
that are either rational or irrational] . The question is - how? !*?"


No, that is another question, one that I have not been addressing.

My only goal was to establish that you cannot say that programs built by 
cognitive scientists are *necessarily* "rational" (in your usage), or 
that they are "normally conceived of as rational".


Most of the theories/models/programs built by cognitive scientists are 
completely neutral on the question of "rational" issues of the sort you 
talk about, because they are about small aspects of cognition where 
those issues don't have any bearing.


There are an infinite number of ways to build a cognitive model in such 
a way that it fits your definition of "irrational", just as there are an 
infinite number of ways to use paint in such a way that the resulting 
picture is abstract rather than representational.  Nothing would be 
proved by my producing an actual example of an "irrational" cognitive 
model, just as nothing would be proved by my painting an abstract 
painting just to prove that that is possible.


I think you have agreed that computers and computational models can in 
principle be used to produce systems that fit your definition of 
irrational, and since that is what I was trying to establish, I think 
we're done, no?


If you don't agree, then there is probably something wrong with your 
picture of what computers can do (how they can be programmed), and it 
would be helpful if you would say what exactly it is about them that 
makes you think this is not possible.


Looking at your suggestion below, I am guessing that you might see an 
AGI program as involving explicit steps of the sort "If x is true, then 
consider these factors and then proceed to the next step".  That is an 
extraordinarily simplistic picture of what computer systems, in 
general, are able to do.  So simplistic as to be not general at all.


For example, in my system, decisions about what to do next are the 
result of hundreds or thousands of "atoms" (basic units of knowledge, 
all of which are active processors) coming together in a very 
context-dependent way and trying to form coherent models of the 
situation.  This cloud of knowledge atoms will cause an outcome to 
emerge, but they almost never go through a sequence of steps, like a 
linear computer program, to generate an outcome.  As a result I cannot 
exactly predict what they will do on a particular occasion (they will 
have a general consistency in their behavior, but that consistency is 
not imposed by a sequence of machine instructions, it is emergent).
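
A generic parallel-relaxation toy in Python, emphatically NOT Loosemore's 
architecture, may help illustrate the general idea of an outcome emerging from 
many small active units rather than from a fixed instruction sequence:

    import random

    class Atom:
        """A small unit of knowledge that pushes the outcome toward what it favours."""
        def __init__(self, preferred, strength):
            self.preferred = preferred
            self.strength = strength

    def settle(atoms, steps=2000):
        outcome = random.random()
        for _ in range(steps):
            a = random.choice(atoms)     # which atom fires next is context and chance
            outcome += 0.01 * a.strength * (a.preferred - outcome)
        return outcome

    atoms = [Atom(random.gauss(0.7, 0.1), random.random()) for _ in range(1000)]
    print(settle(atoms))   # consistently near 0.7, yet no step sequence dictates it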


One of my problems is that it is so obvious to me that programs can do 
things that do not look "rule governed" that I can hardly imagine anyone 
would think otherwise.  Perhaps that is the source of the 
misunderstanding here.



Richard Loosemore


Okay here's my v.v. rough idea - the core two lines or principles of a 
much more complex program  - for engaging in any activity, solving any 
problem - with maximum adaptivity


1. Choose any reasonable path - and any reasonable way to move along it 
- to the goal.   [and then move]


["reasonable" = "likely to be as or more profitable than any of the 
other paths you have time to consider"]


2. If you have not yet reached the goal, and if you have not any other 
superior goals ["anything better to do"], choose any other reasonable 
path - and way of moving - that will lead you closer to the goal.


This presupposes what the human brain clearly has - the hierarchical 
ability to recognize literally ANYTHING as a "thing", "path", "way of 
moving"/ "move" or "goal". It can perceive literally anything from these 
multifunctional perspectives. This presupposes that something like these 
concepts are fundamental to the brain's operat

Re: Human Irrationality [WAS Re: [agi] None of you seem to be able ...]

2007-12-08 Thread Mike Tintner
Richard: If someone asked that, I couldn't think of anything to say except 
...

why *wouldn't* it be possible?  It would strike me as just not a
question that made any sense, to ask for the exact reasons why it is
possible to paint things that are not representational.

Jeez, Richard, of course, it's possible... we all agree that AGI is possible 
(well in my case, only with a body). The question is - how? !*? That's what 
we're here for -   to have IDEAS.. rather than handwave... (see, I knew you 
would)  ...in this case, about how a program can be maximally adaptive - 
change course at any point


Okay here's my v.v. rough idea - the core two lines or principles of a much 
more complex program  - for engaging in any activity, solving any problem - 
with maximum adaptivity


1. Choose any reasonable path - and any reasonable way to move along it - to 
the goal.   [and then move]


["reasonable" = "likely to be as or more profitable than any of the other 
paths you have time to consider"]


2. If you have not yet reached the goal, and if you have not any other 
superior goals ["anything better to do"], choose any other reasonable path - 
and way of moving - that will lead you closer to the goal.


This presupposes what the human brain clearly has - the hierarchical ability 
to recognize literally ANYTHING as a "thing", "path", "way of moving"/ 
"move" or "goal". It can perceive literally anything from these 
multifunctional perspectives. This presupposes that something like these 
concepts are fundamental to the brain's operation.


This also presupposes what you might say are - roughly - the basic 
principles of neuroeconomics and decision theory - that the brain does and 
any adaptive brain must, continually assess every action for profitability - 
for its rewards, risks and costs.


[The big deal here is those two words  "any" -   and any path etc that is 
"as"  profitable -  those two words/ concepts give maximal freedom and 
adaptivity - and true freedom]


What we're talking about here, BTW, is, when you think about it, a truly 
"universal program" for solving, and learning how to solve, literally any 
problem.


[Oh, there has to be a third line or clause  - and a lot more too of 
course - that says:   1a. If you can't see any reasonable paths etc - look 
for some.]


So what are your ideas, Richard, here? Have you actually thought about it? 
Jeez, what do we pay you all this money for?











Re: Human Irrationality [WAS Re: [agi] None of you seem to be able ...]

2007-12-07 Thread Richard Loosemore

Mike Tintner wrote:
Richard:For my own system (and for Hofstadter too), the natural 
extension of the

system to a full AGI design would involve

a system [that] can change its approach and rules of reasoning at 
literally any step of problem-solving  it will be capable of

producing all the human irrationalities that I listed previously -
like not even defining or answering the problem. It will by the same
token have the capacity to be truly creative, because it will ipso
facto be capable of lateral thinking at any step of problem-solving.


This is very VERY much part of the design.

There is not any problem with doing all of this.

Does this clarify the question?

I think really I would reflect the question back at you and ask why you
would think that this is a difficult thing to do?

Richard,

Fine. Sounds interesting. But you don't actually clarify or explain 
anything. Why don't you explain how you or anyone else can fundamentally 
change your approach/rules at any point of solving a problem?


Why don't you, just  in plain English, - in philosophical as opposed to 
programming form  - set out the key rules or principles that allow you 
or anyone else to do this? I have never seen such key rules or 
principles anywhere, nor indeed even adumbrated anywhere. (Fancy word, 
but it just came to mind). And since they are surely a central problem 
for AGI - and no one has solved AGI - how on earth could I not think 
this a difficult matter?


I have some v. rough ideas about this, which I can gladly set out.  But 
I'd like to hear yours -   you should be able to do it briefly. But 
please, no handwaving.


I will try to think about your question when I can, but meanwhile think 
about this:  if we go back to the analogy of painting and whether or not 
it can be used to depict things that are abstract or 
non-representational, how would you respond to someone who wanted exact 
details of how painting could make that possible?


If someone asked that, I couldn't think of anything to say except ... 
why *wouldn't* it be possible?  It would strike me as just not a 
question that made any sense, to ask for the exact reasons why it is 
possible to paint things that are not representational.


I simply cannot understand why anyone would think it not possible to do 
that.  It is possible:  it is not easy to do it right, but that's not 
the point.  Computers can be used to program systems of any sort 
(including deeply irrational things like Microsoft Office), so why would 
anyone think that AGI systems must exhibit only a certain sort of design?


This isn't handwaving, it is just genuine bafflement.




Richard Loosemore



Re: Human Irrationality [WAS Re: [agi] None of you seem to be able ...]

2007-12-07 Thread Mike Tintner
Richard:For my own system (and for Hofstadter too), the natural extension of 
the

system to a full AGI design would involve

a system [that] can change its approach and rules of reasoning at 
literally any step of problem-solving ... it will be capable of

producing all the human irrationalities that I listed previously -
like not even defining or answering the problem. It will by the same
token have the capacity to be truly creative, because it will ipso
facto be capable of lateral thinking at any step of problem-solving.


This is very VERY much part of the design.

There is not any problem with doing all of this.

Does this clarify the question?

I think really I would reflect the question back at you and ask why you
would think that this is a difficult thing to do?

Richard,

Fine. Sounds interesting. But you don't actually clarify or explain 
anything. Why don't you explain how you or anyone else can fundamentally 
change your approach/rules at any point of solving a problem?


Why don't you, just in plain English - in philosophical as opposed to 
programming form - set out the key rules or principles that allow you or 
anyone else to do this? I have never seen such key rules or principles 
anywhere, nor indeed even adumbrated anywhere. (Fancy word, but it just came 
to mind.) And since they are surely a central problem for AGI - and no one 
has solved AGI - how on earth could I not think this a difficult matter?


I have some v. rough ideas about this, which I can gladly set out.  But I'd 
like to hear yours -   you should be able to do it briefly. But please, no 
handwaving.







Re: Human Irrationality [WAS Re: [agi] None of you seem to be able ...]

2007-12-07 Thread Richard Loosemore

Mike Tintner wrote:

Richard: Mike,
I think you are going to have to be specific about what you mean by 
"irrational" because you mostly just say that all the processes that 
could possibly exist in computers are rational, and I am wondering 
what else is there that "irrational" could possibly mean.  I have 
named many processes that seem to me to fit the "irrational" 
definition, but without being too clear about it you have declared 
them all to be just rational, so now I have no idea what you can be 
meaning by the word.



Richard,

Er, it helps to read my posts. From my penultimate post to you:

"If a system can change its approach and rules of reasoning at literally 
any step of

problem-solving, then it is truly "crazy"/ irrational (think of a crazy
path). And it will be capable of producing all the human irrationalities
that I listed previously - like not even defining or answering the problem.
It will by the same token have the capacity to be truly creative, 
because it

will ipso facto be capable of lateral thinking at any step of
problem-solving. Is your system capable of that? Or anything close? Somehow
I doubt it, or you'd already be claiming the solution to both AGI and
computational creativity."

A rational system follows a set of rules in solving a problem  (which 
can incl. rules that self-modify according to metarules) ;  a creative, 
irrational system can change/break/create any and all rules (incl. 
metarules) at any point of solving a problem  -  the ultimate, by 
definition, in adaptivity. (Much like you, and indeed all of us, change 
the rules of engagement much of the time in our discussions here).


Listen, no need to reply - because you're obviously not really 
interested. To me that's ironic, though, because this is absolutely the 
most central issue there is in AGI. But no matter.


No, I am interested, I was just confused, and I did indeed miss the 
above definition (got a lot I have to do right now, so am going very 
fast through my postings) -- sorry about that.


The fact is that the computational models I mentioned (those by 
Hofstadter etc) are all just attempts to understand part of the problem 
of how a cognitive system works, and all of them are consistent with the 
design of a system that is irrational according to your above 
definition.  They may look rational, but that is just an illusion: 
every one of them is so small that it is completely neutral with respect 
to the rationality of a complete system.  They could be used by someone 
who wanted to build a rational system or an irrational system, it does 
not matter.


For my own system (and for Hofstadter too), the natural extension of the 
system to a full AGI design would involve


a system [that] can change its approach and rules of reasoning at 
literally any step of problem-solving ... it will be capable of

producing all the human irrationalities that I listed previously -
like not even defining or answering the problem. It will by the same
token have the capacity to be truly creative, because it will ipso
facto be capable of lateral thinking at any step of problem-solving.


This is very VERY much part of the design.

I prefer not to use the term "irrational" to describe it (because that 
has other connotations), but using your definition, it would be irrational.


There is not any problem with doing all of this.

Does this clarify the question?

I think really I would reflect the question back at you and ask why you 
would think that this is a difficult thing to do?  It is not difficult 
to design a system this way:  some people like the trad-AI folks don't 
do it (yet), and appear not to be trying, but there is nothing in 
principle that makes it difficult to build a system of this sort.





Richard Loosemore





Re: Human Irrationality [WAS Re: [agi] None of you seem to be able ...]

2007-12-07 Thread Mike Tintner

Richard: Mike,
I think you are going to have to be specific about what you mean by 
"irrational" because you mostly just say that all the processes that could 
possibly exist in computers are rational, and I am wondering what else is 
there that "irrational" could possibly mean.  I have named many processes 
that seem to me to fit the "irrational" definition, but without being too 
clear about it you have declared them all to be just rational, so now I 
have no idea what you can be meaning by the word.



Richard,

Er, it helps to read my posts. From my penultimate post to you:

"If a system can change its approach and rules of reasoning at literally any 
step of

problem-solving, then it is truly "crazy"/ irrational (think of a crazy
path). And it will be capable of producing all the human irrationalities
that I listed previously - like not even defining or answering the problem.
It will by the same token have the capacity to be truly creative, because it
will ipso facto be capable of lateral thinking at any step of
problem-solving. Is your system capable of that? Or anything close? Somehow
I doubt it, or you'd already be claiming the solution to both AGI and
computational creativity."

A rational system follows a set of rules in solving a problem  (which can 
incl. rules that self-modify according to metarules) ;  a creative, 
irrational system can change/break/create any and all rules (incl. 
metarules) at any point of solving a problem  -  the ultimate, by 
definition, in adaptivity. (Much like you, and indeed all of us, change the 
rules of engagement much of the time in our discussions here).
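
To pin down what "breaking the metarules" could even mean in program terms, 
here is a minimal Python sketch (a toy assumption, not a description of any 
actual AGI design): the rules and the metarule that selects among them are 
both ordinary mutable data, so any step of a run may rewrite or discard either.

import random

# Rules and metarules are plain data; nothing in the interpreter is off-limits.
rules = {
    "double": lambda x: x * 2,
    "negate": lambda x: -x,
}
metarules = {
    # the metarule that picks which rule to apply is itself just a table entry
    "select": lambda rule_table, x: random.choice(list(rule_table.values())),
}

def step(x):
    """One problem-solving step: ask the current selection metarule for a rule
    and apply it.  Because both tables are mutable, a 'rational' run would
    leave them alone, while a rule-breaking run may replace them at any step."""
    rule = metarules["select"](rules, x)
    return rule(x)

print(step(3))                                                 # follows the current rules
metarules["select"] = lambda rule_table, x: (lambda v: v + 1)  # replace the metarule itself
rules.clear()                                                  # discard every object-level rule
print(step(3))                                                 # still runs: prints 4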


Listen, no need to reply - because you're obviously not really interested. 
To me that's ironic, though, because this is absolutely the most central 
issue there is in AGI. But no matter.





Re: Human Irrationality [WAS Re: [agi] None of you seem to be able ...]

2007-12-07 Thread Richard Loosemore


Mike,

I think you are going to have to be specific about what you mean by 
"irrational" because you mostly just say that all the processes that 
could possibly exist in computers are rational, and I am wondering what 
else is there that "irrational" could possibly mean.  I have named many 
processes that seem to me to fit the "irrational" definition, but 
without being too clear about it you have declared them all to be just 
rational, so now I have no idea what you can be meaning by the word.



Richard Loosemore


Mike Tintner wrote:
Richard:This raises all sorts of deep issues about what exactly you 
would mean

by "rational".  If a bunch of "things" (computational processes) come
together and each contribute "something" to a decision that results in
an output, and the exact output choice depends on so many factors coming
together that it would not necessarily be the same output if roughly the
same situation occurred another time, and if none of these things looked
like a "rule" of any kind, then would you still call it "rational"?If 
the answer is yes then whatever would count as "not rational"?


I'm not sure what you mean - but this seems consistent with other 
impressions I've been getting of your thinking.


Let me try and cut through this: if science were to change from its 
prevailing conception of the human mind as a rational, computational 
machine to what I am suggesting - i.e. a creative, compositional, 
irrational machine - we would be talking of a major revolution that 
would impact right through the sciences - and radically extend the scope 
of scientific investigation into human thought. It would be the end of 
the deterministic conception of humans and animals and ultimately be a 
revolution of Darwinian proportions.


Hofstadter & co are absolutely not revolutionaries. Johnson-Laird 
conceives of the human mind as an automaton. None of them are 
fundamentally changing the prevailing conceptions of cognitive science. 
No one has reacted to them with shock or horror or delight.


I suspect that what you are talking about is loosely akin to the ideas 
of some that quantum mechanics has changed scientific determinism. It 
hasn't - the fact that we can't measure certain quantum phenomena with 
precision does not mean that they are not fundamentally deterministic. 
And science remains deterministic.


Similarly, if you make a computer system very complex, keep changing the 
factors involved in computations, add random factors & whatever, you are 
not necessarily making it non-rational. You make it v. difficult to 
understand the computer's rationality, (and possibly extend our 
conception of rationality), but the system may still be basically 
rational, just as quantum particles are still in all probability 
basically deterministic.


As a side-issue, I don't believe that human reasoning, conscious and 
unconscious, is  remotely, even infinitesimally as complex as that of 
the AI systems you guys all seem to be building. The human brain surely 
never seizes up with the kind of complex, runaway calculations that 
y'all have been conjuring up in your arguments. That only happens when 
you have a rational system that obeys basically rigid (even if complex) 
rules.  The human brain is cleverer than that - it doesn't have any 
definite rules for any activities. In fact, you should be so lucky as to 
have a nice, convenient set of rules, even complex ones,  to guide you 
when you sit down to write your computer programs.










Re: Human Irrationality [WAS Re: [agi] None of you seem to be able ...]

2007-12-07 Thread Mike Tintner
Richard:This raises all sorts of deep issues about what exactly you would 
mean

by "rational".  If a bunch of "things" (computational processes) come
together and each contribute "something" to a decision that results in
an output, and the exact output choice depends on so many factors coming
together that it would not necessarily be the same output if roughly the
same situation occurred another time, and if none of these things looked
like a "rule" of any kind, then would you still call it "rational"?If the 
answer is yes then whatever would count as "not rational"?


I'm not sure what you mean - but this seems consistent with other 
impressions I've been getting of your thinking.


Let me try and cut through this: if science were to change from its 
prevailing conception of the human mind as a rational, computational machine 
to what I am suggesting - i.e. a creative, compositional, irrational 
machine - we would be talking of a major revolution that would impact right 
through the sciences - and radically extend the scope of scientific 
investigation into human thought. It would be the end of the deterministic 
conception of humans and animals and ultimately be a revolution of Darwinian 
proportions.


Hofstadter & co are absolutely not revolutionaries. Johnson-Laird conceives 
of the human mind as an automaton. None of them are fundamentally changing 
the prevailing conceptions of cognitive science. No one has reacted to them 
with shock or horror or delight.


I suspect that what you are talking about is loosely akin to the ideas of 
some that quantum mechanics has changed scientific determinism. It hasn't - 
the fact that we can't measure certain quantum phenomena with precision does 
not mean that they are not fundamentally deterministic. And science remains 
deterministic.


Similarly, if you make a computer system very complex, keep changing the 
factors involved in computations, add random factors & whatever, you are not 
necessarily making it non-rational. You make it v. difficult to understand 
the computer's rationality, (and possibly extend our conception of 
rationality), but the system may still be basically rational, just as 
quantum particles are still in all probability basically deterministic.


As a side-issue, I don't believe that human reasoning, conscious and 
unconscious, is  remotely, even infinitesimally as complex as that of the AI 
systems you guys all seem to be building. The human brain surely never 
seizes up with the kind of complex, runaway calculations that y'all have 
been conjuring up in your arguments. That only happens when you have a 
rational system that obeys basically rigid (even if complex) rules.  The 
human brain is cleverer than that - it doesn't have any definite rules for 
any activities. In fact, you should be so lucky as to have a nice, 
convenient set of rules, even complex ones,  to guide you when you sit down 
to write your computer programs.






Re: [agi] None of you seem to be able ...

2007-12-07 Thread Benjamin Goertzel
On Dec 6, 2007 8:06 PM, Ed Porter <[EMAIL PROTECTED]> wrote:
> Ben,
>
> To the extent it is not proprietary, could you please list some of the types
> of parameters that have to be tuned, and the types, if any, of
> Loosemore-type complexity problems you envision in Novamente or have
> experienced with WebMind, in such tuning and elsewhere?
>
> Ed Porter

A specific list of parameters would have no meaning without a huge
explanation which I don't have time to give...

Instead I'll list a few random areas where choices need to be made, that appear
localized at first but wind up affecting the whole

-- attention allocation is handled by an "artificial economy" mechanism, which
has the same sorts of parameters as any economic system (analogues of
interest rates,
rent rates, etc.)

-- program trees representing internal procedures are normalized via a set of
normalization rules, which collectively cast procedures into a certain
normal form.
There are many ways to do this.

-- the pruning of (backward and forward chaining) inference trees uses a
statistical "bandit problem" methodology, which requires a priori probabilities
to be ascribed to various inference steps


Fortunately, though, in each of the above three
examples there is theory that can guide parameter tuning (different theories
in the three cases -- dynamic systems theory for the artificial economy; formal
computer science and language theory for program tree reduction; and Bayesian
stats for the pruning issue)
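
To make the third item concrete -- and only as an illustration, since the actual
Novamente/Webmind machinery is not public -- here is a toy Thompson-sampling
sketch in Python: each candidate inference step carries a Beta prior standing in
for its a priori probability, and the expansion budget flows to the steps that
keep paying off, so the rest are pruned by neglect.

import random

class CandidateStep:
    """One candidate inference step treated as a bandit arm.  The
    Beta(prior_succ, prior_fail) prior stands in for the a priori probability
    ascribed to this kind of step; the priors and the reward signal here are
    assumptions made purely for illustration."""
    def __init__(self, name, prior_succ=1.0, prior_fail=1.0):
        self.name = name
        self.succ = prior_succ
        self.fail = prior_fail

    def sample(self):
        # Thompson sampling: draw a plausible usefulness from the posterior.
        return random.betavariate(self.succ, self.fail)

    def update(self, useful):
        if useful:
            self.succ += 1
        else:
            self.fail += 1

def expand(candidates, budget, try_step):
    """Spend a fixed expansion budget on the steps that look most promising;
    steps that keep failing are effectively pruned because they stop winning."""
    for _ in range(budget):
        arm = max(candidates, key=lambda c: c.sample())
        arm.update(useful=try_step(arm))

# Toy usage with made-up step names and a random reward signal.
candidates = [CandidateStep("deduction", 2.0, 1.0), CandidateStep("abduction", 1.0, 2.0)]
expand(candidates, budget=50, try_step=lambda c: random.random() < 0.5)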

Webmind AI Engine had too many parameters and too much coupling between
subsystems.  We cast parameter optimization as an AI learning problem but it
was a hard one, though we did make headway on it.  Novamente Engine has much
coupling btw subsystems, but no unnecessary coupling; and many fewer
parameters on which system behavior can sensitively depend.  Definitely,
minimization of the number of needful-of-adjustment parameters is a very key
aspect of AGI system design.

-- Ben



Re: [agi] None of you seem to be able ...

2007-12-07 Thread Richard Loosemore

Jean-Paul Van Belle wrote:

Interesting - after drafting three replies I have come to realize
that it is possible to hold two contradictory views and live or even
run with it. Looking at their writings, both Ben & Richard know damn
well what complexity means and entails for AGI. Intuitively, I side
with Richard's stance that, if the current state of 'the new kind of
science' cannot even understand simple chaotic systems - the
toy-problems of three-variable differential quadratic equations  and
2-D Alife, then what hope is there to find a theoretical solution for
a really complex system. The way forward is by experimental
exploration of part of the solution space. I don't think we'll find
general complexity theories any time soon. On the other hand,
practically I think that it *is* (or may be) possible to build an AGI
system up carefully and systematically from the ground up i.e.
inspired by a sound (or at least plausible) theoretical framework or
by modelling it on real-world complex systems that seem to work
(because that's the way I proceed too), finetuning the system
parameters and managing emerging complexity as we go along and move
up the complexity scale. (Just like engineers can build pretty much
anything without having a GUT.) Both paradigmatic approaches have 
their merits and are in fact complementary: explore, simulate,
genetically evolve etc. from the top down to get a bird's eye view of
the problem space versus incrementally build up from the bottom up
following a carefully charted path/ridge in between the chasms of 
the unknown based on a strong conceptual theoretical founding. It is
done all the time in other sciences - even maths! Interestingly, I
started out wanting to use a simulation tool to check the behaviour
(read: fine-tune the parameters) of my architectural designs but then
realised that the simulation of a complex system is actually a
complex system itself and it'd be easier and more efficient to
prototype than to simulate. But that's just because of the nature of
my architecture. Assuming Ben's theories hold, he is adopting the
right approach. Given Richard's assumption or intuitions, he is
following the right path too. I doubt that they will converge on a
common solution but the space of conceivably possible AGI
architectures is IMHO extremely large. In fact, my architectural
approach is a bit of a poor cousin/hybrid: having neither Richard's
engineering skills nor Ben's mathematical understanding I am hoping
to do a scruffy alternative path :)


Interesting thoughts:  remind me, if I forget, that when I get my 
website functioning and can put longer papers into a permanent 
repository, that we all need to have a forward-looking discussion about 
some of the detailed issues that might arise here.  That is, going 
beyond merely arguing about whether or not there is a problem.  I have 
many thoughts about what you say, but no time right now, so I will come 
back to this.


The short version of my thoughts is that we need to look into some of 
the details of what I propose to do, and try to evaluate the possible 
dangers of not taking the path I suggest.




Richard Loosemore




Re: Human Irrationality [WAS Re: [agi] None of you seem to be able ...]

2007-12-07 Thread Richard Loosemore

Mike Tintner wrote:
Well, I'm not sure if  "not doing logic" necessarily means a system is 
irrational, i.e. if rationality equates to logic.  Any system 
consistently followed can be classified as rational. If, for example, a 
program consistently does Freudian free association and produces nothing 
but a chain of associations with some connection:


"bird - - feathers - four..tops "

or on the contrary, a 'nonsense' chain where there is NO connection..

logic.. sex... ralph .. essence... pi... Loosemore...

then it is rational - it consistently follows a system with a set of 
rules. And the rules could, for argument's sake, specify that every step 
is illogical - as in breaking established rules of logic - or that steps 
are alternately logical and illogical.  That too would be rational. 
Neural nets from the little I know are also rational inasmuch as they 
follow rules. Ditto Hofstadter & Johnson-Laird from again the little I 
know also seem rational - Johnson-Laird's jazz improvisation program 
from my cursory reading seemed rational and not truly creative.


Sorry to be brief, but:

This raises all sorts of deep issues about what exactly you would mean 
by "rational".  If a bunch of "things" (computational processes) come 
together and each contribute "something" to a decision that results in 
an output, and the exact output choice depends on so many factors coming 
together that it would not necessarily be the same output if roughly the 
same situation occurred another time, and if none of these things looked 
like a "rule" of any kind, then would you still call it "rational"?


If the answer is yes then whatever would count as "not rational"?


Richard Loosemore



I do not know enough to pass judgment on your system, but  you do strike 
me as a rational kind of guy (although probably philosophically much 
closer to me than most here  as you seem to indicate).  Your attitude to 
emotions seems to me rational, and your belief that you can produce an 
AGI that will almost definitely be cooperative , also bespeaks rationality.


In the final analysis, irrationality = creativity (although I'm using 
the word with a small "c", rather than the social kind, where someone 
produces a new idea that no one in society has had or published before). 
If a system can change its approach and rules of reasoning at literally 
any step of problem-solving, then it is truly "crazy"/ irrational (think 
of a crazy path). And it will be capable of producing all the human 
irrationalities that I listed previously - like not even defining or 
answering the problem. It will by the same token have the capacity to be 
truly creative, because it will ipso facto be capable of lateral 
thinking at any step of problem-solving. Is your system capable of that? 
Or anything close? Somehow I doubt it, or you'd already be claiming the 
solution to both AGI and computational creativity.


But yes, please do send me your paper.

P.S. I hope you won't - & I actually don't think - that you will get all 
pedantic on me like so many AI-ers & say "ah but we already have 
programs that can modify their rules." Yes, but they do that according 
to metarules - they are still basically rulebound. A crazy/ creative 
program is rulebreaking (and rulecreating) - can break ALL the rules, 
incl. metarules. Rulebound/rulebreaking is one of the most crucial 
differences between narrow AI/AGI.



Richard: In the same way computer programs are completely
neutral and can be used to build systems that are either rational or
irrational.  My system is not "rational" in that sense at all.

Richard,

Out of interest, rather than pursuing the original argument:

1) Who are these programmers/ systembuilders who try to create 
programs (and what are the programs/ systems) that are either 
"irrational" or "non-rational"  (and described  as such)?


I'm a little partied out right now, so all I have time for is to 
suggest: Hofstadter's group builds all kinds of programs that do 
things without logic.  Phil Johnson-Laird (and students) used to try 
to model reasoning ability using systems that did not do logic.  All 
kinds of language processing people use various kinds of neural nets:  
see my earlier research papers with Gordon Brown et al, as well as 
folks like Mark Seidenberg, Kim Plunkett etc.  Marslen-Wilson and 
Tyler used something called a "Cohort Model" to describe some aspects 
of language.


I am just dragging up the name of anyone who has ever done any kind of 
computer modelling of some aspect of cognition:  all of these people 
do not use systems that do any kind of "logical" processing.  I could 
go on indefinitely.  There are probably hundreds of them.  They do not 
try to build complete systems, of course, just local models.



When I have proposed (in different threads) that the mind is not 
rationally, algorithmically programmed I have been met with uniform 
and often fierce resistance both on this and another AI forum.


Hey, join the club!  You have 

RE: [agi] None of you seem to be able ...

2007-12-06 Thread Jean-Paul Van Belle
Interesting - after drafting three replies I have come to realize that it is 
possible to hold two contradictory views and live or even run with it. Looking 
at their writings, both Ben & Richard know damn well what complexity means and 
entails for AGI. 
Intuitively, I side with Richard's stance that, if the current state of 'the 
new kind of science' cannot even understand simple chaotic systems - the 
toy-problems of three-variable differential quadratic equations  and 2-D Alife, 
then what hope is there to find a theoretical solution for a really complex 
system. The way forward is by experimental exploration of part of the solution 
space. I don't think we'll find general complexity theories any time soon.
On the other hand, practically I think that it *is* (or may be) possible to 
build an AGI system up carefully and systematically from the ground up i.e. 
inspired by a sound (or at least plausible) theoretical framework or by 
modelling it on real-world complex systems that seem to work (because that's 
the way I proceed too), finetuning the system parameters and managing emerging 
complexity as we go along and move up the complexity scale. (Just like 
engineers can build pretty much anything without having a GUT.)
Both paradigmatic approaches have their merits and are in fact complementary: 
explore, simulate, genetically evolve etc. from the top down to get a bird's 
eye view of the problem space versus incrementally build up from the bottom up 
following a carefully charted path/ridge in between the chasms of the unknown 
based on a strong conceptual theoretical founding. It is done all the time in 
other sciences - even maths!
Interestingly, I started out wanting to use a simulation tool to check the 
behaviour (read: fine-tune the parameters) of my architectural designs but then 
realised that the simulation of a complex system is actually a complex system 
itself and it'd be easier and more efficient to prototype than to simulate. But 
that's just because of the nature of my architecture. Assuming Ben's theories 
hold, he is adopting the right approach. Given Richard's assumption or 
intuitions, he is following the right path too. I doubt that they will converge 
on a common solution but the space of conceivably possible AGI architectures is 
IMHO extremely large. In fact, my architectural approach is a bit of a poor 
cousin/hybrid: having neither Richard's engineering skills nor Ben's 
mathematical understanding I am hoping to do a scruffy alternative path :)
-- 

Research Associate: CITANDA
Post-Graduate Section Head 
Department of Information Systems
Phone: (+27)-(0)21-6504256
Fax: (+27)-(0)21-6502280
Office: Leslie Commerce 4.21


>>> On 2007/12/07 at 03:06, in message <[EMAIL PROTECTED]>,
>> Conclusion:  there is a danger that the complexity that even Ben agrees
>> must be present in AGI systems will have a significant impact on our
>> efforts to build them.  But the only response to this danger at the
>> moment is the bare statement made by people like Ben that "I do not
>> think that the danger is significant".  No reason given, no explicit
>> attack on any component of the argument I have given, only a statement
>> of intuition, even though I have argued that intuition cannot in
>> principle be a trustworthy guide here.
> But Richard, your argument ALSO depends on intuitions ...
> I agree that AGI systems contain a lot of complexity in the dynamical-
> systems-theory sense.
> And I agree that tuning all the parameters of an AGI system externally
> is likely to be intractable, due to this complexity.
> However, part of the key to intelligence is **self-tuning**.



Re: Human Irrationality [WAS Re: [agi] None of you seem to be able ...]

2007-12-06 Thread Mike Tintner
Well, I'm not sure if  "not doing logic" necessarily means a system is 
irrational, i.e. if rationality equates to logic.  Any system consistently 
followed can be classified as rational. If, for example, a program consistently 
does Freudian free association and produces nothing but a chain of 
associations with some connection:


"bird - - feathers - four..tops "

or on the contrary, a 'nonsense' chain where there is NO connection..

logic.. sex... ralph .. essence... pi... Loosemore...

then it is rational - it consistently follows a system with a set of rules. 
And the rules could, for argument's sake, specify that every step is 
illogical - as in breaking established rules of logic - or that steps are 
alternately logical and illogical.  That too would be rational. Neural nets 
from the little I know are also rational inasmuch as they follow rules. 
Ditto Hofstadter & Johnson-Laird from again the little I know also seem 
rational - Johnson-Laird's jazz improvisation program from my cursory 
reading seemed rational and not truly creative.


I do not know enough to pass judgment on your system, but  you do strike me 
as a rational kind of guy (although probably philosophically much closer to 
me than most here  as you seem to indicate).  Your attitude to emotions 
seems to me rational, and your belief that you can produce an AGI that will 
almost definitely be cooperative , also bespeaks rationality.


In the final analysis, irrationality = creativity (although I'm using the 
word with a small "c", rather than the social kind, where someone produces a 
new idea that no one in society has had or published before). If a system 
can change its approach and rules of reasoning at literally any step of 
problem-solving, then it is truly "crazy"/ irrational (think of a crazy 
path). And it will be capable of producing all the human irrationalities 
that I listed previously - like not even defining or answering the problem. 
It will by the same token have the capacity to be truly creative, because it 
will ipso facto be capable of lateral thinking at any step of 
problem-solving. Is your system capable of that? Or anything close? Somehow 
I doubt it, or you'd already be claiming the solution to both AGI and 
computational creativity.


But yes, please do send me your paper.

P.S. I hope you won't - & I actually don't think - that you will get all 
pedantic on me like so many AI-ers & say "ah but we already have programs 
that can modify their rules." Yes, but they do that according to metarules - 
they are still basically rulebound. A crazy/ creative program is 
rulebreaking (and rulecreating) - can break ALL the rules, incl. metarules. 
Rulebound/rulebreaking is one of the most crucial differences between narrow 
AI/AGI.



Richard: In the same way computer programs are completely
neutral and can be used to build systems that are either rational or
irrational.  My system is not "rational" in that sense at all.

Richard,

Out of interest, rather than pursuing the original argument:

1) Who are these programmers/ systembuilders who try to create programs 
(and what are the programs/ systems) that are either "irrational" or 
"non-rational"  (and described  as such)?


I'm a little partied out right now, so all I have time for is to suggest: 
Hofstadter's group builds all kinds of programs that do things without 
logic.  Phil Johnson-Laird (and students) used to try to model reasoning 
ability using systems that did not do logic.  All kinds of language 
processing people use various kinds of neural nets:  see my earlier 
research papers with Gordon Brown et al, as well as folks like Mark 
Seidenberg, Kim Plunkett etc.  Marslen-Wilson and Tyler used something 
called a "Cohort Model" to describe some aspects of language.


I am just dragging up the name of anyone who has ever done any kind of 
computer modelling of some aspect of cognition:  all of these people do 
not use systems that do any kind of "logical" processing.  I could go on 
indefinitely.  There are probably hundreds of them.  They do not try to 
build complete systems, of course, just local models.



When I have proposed (in different threads) that the mind is not 
rationally, algorithmically programmed I have been met with uniform and 
often fierce resistance both on this and another AI forum.


Hey, join the club!  You have read my little brouhaha with Yudkowsky last 
year I presume?  A lot of AI people have their heads up their asses, so 
yes, they believe that rationality is God.


It does depend how you put it though:  sometimes you use rationality to 
not mean what they mean, so that might explain the ferocity.



My argument
re the philosophy of mind of  cog sci & other sciences is of course not 
based on such reactions, but they do confirm my argument. And the 
position you at first appear to be adopting is unique both in my 
experience and my reading.


2) How is your system "not rational"? Does it not use algorithms?


It uses "dynamic relaxation"

Re: [agi] None of you seem to be able ...

2007-12-06 Thread Richard Loosemore

Scott Brown wrote:

Hi Richard,

On Dec 6, 2007 8:46 AM, Richard Loosemore <[EMAIL PROTECTED] 
> wrote:


Try to think of some other example where we have tried to build a system
that behaves in a certain overall way, but we started out by using
components that interacted in a completely funky way, and we succeeded
in getting the thing working in the way we set out to.  In all the
history of engineering there has never been such a thing.


I would argue that, just as we don't have to fully understand the 
complexity posed by the interaction of subatomic particles to make 
predictions about the way molecular systems behave, we don't have to 
fully understand the complexity of interactions between neurons to make 
predictions about how cognitive systems behave.  Many researchers are 
attempting to create cognitive models that don't necessarily map 
directly back to low-level neural activity in biological organisms.  
Doesn't this approach mitigate some of the risk posed by complexity in 
neural systems?


I completely agree that the neural-level stuff does not have to impact 
cognitive-level stuff:  that is why I work at the cognitive level and do 
not bother too much with exact neural architecture.


The only problem with your statement was the last sentence:  when I say 
that there is a complex systems problem, I only mean complexity at the 
cognitive level, not complexity at the neural level.


I am not too worried about any complexity that might exist down at the 
neural level because as far as I can tell that level is not *dominated* 
by complex effects.  At the cognitive level, on the other hand, there is 
a strong possibility that, when the mind builds a model of 
some situation, it gets a large number of concepts to come together and 
try to relax into a stable representation, and that relaxation process 
is potentially sensitive to complex effects (some small parameter in the 
design of the "concepts" could play a crucial role in ensuring that the 
relaxation process goes properly, for example).
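
For concreteness, here is a minimal sketch of such a relaxation process -- a
generic Hopfield-style settling loop, assumed purely for illustration rather
than as the design being described: mutually consistent "concepts" reinforce
one another until the activations settle, and a small design parameter can
change which stable state is reached.

import random

def relax(weights, activations, steps=500, noise=0.1):
    """Toy relaxation: weights[i][j] encodes how consistent concept i is with
    concept j; activations are +1/-1.  Repeated local updates drive the set of
    concepts toward a stable joint representation.  The update rule and noise
    level are illustrative assumptions only."""
    n = len(activations)
    for _ in range(steps):
        i = random.randrange(n)
        net = sum(weights[i][j] * activations[j] for j in range(n) if j != i)
        # A small design parameter (here, the noise level) can decide which
        # stable state the system falls into -- the kind of sensitivity at issue.
        activations[i] = 1.0 if net + random.gauss(0.0, noise) > 0 else -1.0
    return activations

# Toy usage with random pairwise consistencies among five concepts.
n = 5
weights = [[0.0 if i == j else random.uniform(-1, 1) for j in range(n)] for i in range(n)]
print(relax(weights, [random.choice([-1.0, 1.0]) for _ in range(n)]))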


I am being rather terse here due to lack of time, but that is the short 
answer.



Richard Loosemore



Re: FW: [agi] None of you seem to be able ...

2007-12-06 Thread Richard Loosemore

Ed Porter wrote:

Richard,

A day or so ago I told you if you could get me to eat my words with regard
to something involving your global-local disconnect, I would do so with
relish because you would have taught me something valuable.  


Well  -- as certain parts of my below email indicate -- to a small extent
you have gotten me to eat my words, at least partially.  You have provided
me with either some new valuable thoughts or a valuable reminder of some old
ones (I attended a two day seminar on complexity in the early '90s after
reading the popular "Chaos book").  You haven't flipped me around 180
degrees, but you have shifted my compass somewhat.  So I suppose I should
thank you.

My acknowledgement of this shift was indicated in my below email in multiple
places in small ways (such as my statement that I had copied your long explanation
of your position to my file of valuable clippings from this list) and in
particular by the immediately following quote, with the relevant portions
capitalized.  


"ED PORTER=> SO, NET, NET, RICHARD, RE-READING YOUR
PAPER AND READING YOUR BELOW LONG POST HAVE INCREASED MY RESPECT FOR YOUR
ARGUMENTS.  I AM SOMEWHAT MORE AFRAID OF COMPLEXITY GOTCHAS THAN I WAS TWO
DAYS AGO.  But I still am pretty confident (without anything beginning to
approach proof) such gotchas will not prevent us from making useful human
level AGI within the decade if AGI got major funding"

You may find anything less than total capitulation unsatisfying, 


Oh, not at all:  thankyou for your gracious words.  I did notice what 
you wrote earlier, but it came too late for me to respond.  I am glad to 
correspond with someone who takes such a flexible approach to 
discussion:  that is rare.


I am very glad, and certainly I do not stay grumpy for long:  I look 
forward to more dialogue in the future.


I will also do my best to write longer versions of some of these 
arguments, and some of my more positive (i.e. less destructive) ideas in 
a permanent form on a website.




but I think

it would improve the quality of exchange on this list if there was more
acknowledgment after an argument of when one shifts their understanding in
response to someone else's arguments, rather than always trying to act as if
one has won every aspect of every argument.

Ed Porter


-Original Message-
From: Ed Porter [mailto:[EMAIL PROTECTED] 
Sent: Thursday, December 06, 2007 12:57 PM

To: agi@v2.listbox.com
Subject: RE: [agi] None of you seem to be able ...

Richard,

You will be happy to note that I have copied the text of your reply to my
"Valuable Clippings From AGI Mailing List file".  Below are some comments.


RICHARD LOOSEMORE=> I now understand that you have indeed heard of

complex systems before, but I must insist that in your summary above you
have summarized what they are in such a way that completely contradicts what
they are!

A complex system such as the economy can and does have modes in 
which it appears to be stable.  This does not contradict the complexity 
at all.  A system is not complex because it is unstable.


ED PORTER=> Richard, I was citing relatively stable economies as
exactly what you say they are: an example of a complex system that is
relatively stable.  So why is it that my summary "summarized what they are
in such a way that completely contradicts what they are!"?   I implied that
economies have traditionally had instabilities, such as boom and bust
cycles, and I am aware that even with all our controls, other major
instabilities could strike, in much the same ways that people can have
nervous breakdowns.

ED PORTER=> With regard to the rest of your paper I find it one of your
better reasoned discussions of the problem of complexity.  I, like Ben, agree
it is a potential problem.  I said that in the email you were responding to.
My intuition, like Ben's, tells me we will probably be able to deal with it, but
your paper is correct to point out that such intuitions are really largely
guesses.  


RICHARD LOOSEMORE=>how can someone know how much impact the

complexity is going to have, when in the same breath they will admit that
NOBODY currently understands just how much of an impact the complexity has.

the best that anyone can do is point to other systems in which there is a
small amount of complexity and say:  "Well, these folks managed to
understand their systems without getting worried about complexity, so why
don't we assume that our problem is no worse than theirs?"  For example,
someone could point to the dynamics of planetary systems and say that there
is a small bit of complexity there, but it is a relatively small effect in
the grand scheme of things.

ED PORTER=> A better example would be the world economy.  It's got 6
billion highly autonomous players.  It has all sorts of non-linearities and
complex connections.  Although it has

Re: Human Irrationality [WAS Re: [agi] None of you seem to be able ...]

2007-12-06 Thread Richard Loosemore

Mike Tintner wrote:

Richard: In the same way computer programs are completely
neutral and can be used to build systems that are either rational or
irrational.  My system is not "rational" in that sense at all.

Richard,

Out of interest, rather than pursuing the original argument:

1) Who are these programmers/ systembuilders who try to create programs 
(and what are the programs/ systems) that are either "irrational" or 
"non-rational"  (and described  as such)?


I'm a little partied out right now, so all I have time for is to 
suggest:  Hofstadter's group builds all kinds of programs that do things 
without logic.  Phil Johnson-Laird (and students) used to try to model 
reasoning ability using systems that did not do logic.  All kinds of 
language processing people use various kinds of neural nets:  see my 
earlier research papers with Gordon Brown et al, as well as folks like 
Mark Seidenberg, Kim Plunkett etc.  Marslen-Wilson and Tyler used 
something called a "Cohort Model" to describe some aspects of language.


I am just dragging up the name of anyone who has ever done any kind of 
computer modelling of some aspect of cognition:  all of these people do 
not use systems that do any kind of "logical" processing.  I could go on 
indefinitely.  There are probably hundreds of them.  They do not try to 
build complete systems, of course, just local models.



When I have proposed (in different threads) that the mind is not 
rationally, algorithmically programmed I have been met with uniform and 
often fierce resistance both on this and another AI forum. 


Hey, join the club!  You have read my little brouhaha with Yudkowsky 
last year I presume?  A lot of AI people have their heads up their 
asses, so yes, they believe that rationality is God.


It does depend how you put it though:  sometimes you use rationality to 
not mean what they mean, so that might explain the ferocity.



My argument
re the philosophy of mind of  cog sci & other sciences is of course not 
based on such reactions, but they do confirm my argument. And the 
position you at first appear to be adopting is unique both in my 
experience and my reading.


2) How is your system "not rational"? Does it not use algorithms?


It uses "dynamic relaxation" in a "generalized neural net".  Too much to 
explain in a hurry.



And could you give a specific example or two of the kind of problem that 
it deals with - non-rationally?  (BTW I don't think I've seen any 
problem examples for your system anywhere, period  - for all I know, it 
could be designed to read children's stories, bomb Iraq, do syllogisms, 
work out your domestic budget, or work out the meaning of life - or play 
and develop in virtual worlds).


I am playing this close, for the time being, but I have released a small 
amount of it in a forthcoming neuroscience paper.  I'll send it to you 
tomorrow if you like, but it does not go into a lot of detail.



Richard Loosemore



RE: [agi] None of you seem to be able ...

2007-12-06 Thread Ed Porter
Ben,

To the extent it is not proprietary, could you please list some of the types
of parameters that have to be tuned, and the types, if any, of
Loosemore-type complexity problems you envision in Novamente or have
experienced with WebMind, in such tuning and elsewhere?

Ed Porter

-Original Message-
From: Benjamin Goertzel [mailto:[EMAIL PROTECTED] 
Sent: Thursday, December 06, 2007 1:09 PM
To: agi@v2.listbox.com
Subject: Re: [agi] None of you seem to be able ...

> Conclusion:  there is a danger that the complexity that even Ben agrees
> must be present in AGI systems will have a significant impact on our
> efforts to build them.  But the only response to this danger at the
> moment is the bare statement made by people like Ben that "I do not
> think that the danger is significant".  No reason given, no explicit
> attack on any component of the argument I have given, only a statement
> of intuition, even though I have argued that intuition cannot in
> principle be a trustworthy guide here.

But Richard, your argument ALSO depends on intuitions ...

I'll try, though, to more concisely frame the reason I think your argument
is wrong.

I agree that AGI systems contain a lot of complexity in the dynamical-
systems-theory sense.

And I agree that tuning all the parameters of an AGI system externally
is likely to be intractable, due to this complexity.

However, part of the key to intelligence is **self-tuning**.

I believe that if an AGI system is built the right way, it can effectively
tune its own parameters, hence adaptively managing its own complexity.

Now you may say there's a problem here: If AGI component A2 is to
tune the parameters of AGI component A1, and A1 is complex, then
A2 has got to also be complex ... and who's gonna tune its parameters?

So the answer has got to be that effectively tuning the parameters
of an AGI component of complexity X requires an AGI component of
complexity a bit less than X.  Then one can build a self-tuning AGI system,
if one does the job right.

Now, I'm not saying that Novamente (for instance) is explicitly built
according to this architecture: it doesn't have N components wherein
component A_N tunes the parameters of component A_(N+1).

But in many ways, throughout the architecture, it relies on this sort of
fundamental logic.

Obviously it is not the case that every system of complexity X can
be parameter-tuned by a system of complexity less than X.  The question
however is whether an AGI system can be built of such components.
I suggest the answer is yes -- and furthermore suggest that this is
pretty much the ONLY way to do it...
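
As a toy version of that logic (an assumption-laden sketch, not Novamente's
architecture): a component with many parameters is tuned by a strictly simpler
component, a hill-climber with only a couple of settings of its own, which an
even simpler component could tune in turn.

import random

def make_component(n_params):
    """A stand-in 'complex' component: its behavior is a noisy score that
    depends on how close its parameters sit to an unknown optimum."""
    optimum = [random.random() for _ in range(n_params)]
    def score(params):
        err = sum((p - o) ** 2 for p, o in zip(params, optimum))
        return -err + random.gauss(0.0, 0.01)
    return score

def simple_tuner(score_fn, n_params, iterations=300, step=0.05):
    """The deliberately simpler component: a local hill-climber whose only
    degrees of freedom are 'iterations' and 'step' -- settings that an even
    simpler component could adjust in turn."""
    params = [0.5] * n_params
    best = score_fn(params)
    for _ in range(iterations):
        candidate = [p + random.uniform(-step, step) for p in params]
        s = score_fn(candidate)
        if s > best:
            params, best = candidate, s
    return params

a1 = make_component(n_params=20)       # component A1: many parameters
tuned = simple_tuner(a1, n_params=20)  # component A2: far fewer of its own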

Your intuition is that this is not possible, but you don't have a proof
of this...

And yes, I realize the above argument of mine is conceptual only -- I
haven't
given a formal definition of complexity.  There are many, but that would
lead into a mess of math that I don't have time to deal with right now,
in the context of answering an email...

-- Ben G



Re: [agi] None of you seem to be able ...

2007-12-06 Thread Scott Brown
Hi Richard,

On Dec 6, 2007 8:46 AM, Richard Loosemore <[EMAIL PROTECTED]> wrote:

> Try to think of some other example where we have tried to build a system
> that behaves in a certain overall way, but we started out by using
> components that interacted in a completely funky way, and we succeeded
> in getting the thing working in the way we set out to.  In all the
> history of engineering there has never been such a thing.
>

I would argue that, just as we don't have to fully understand the complexity
posed by the interaction of subatomic particles to make predictions about
the way molecular systems behave, we don't have to fully understand the
complexity of interactions between neurons to make predictions about how
cognitive systems behave.  Many researchers are attempting to create
cognitive models that don't necessarily map directly back to low-level
neural activity in biological organisms.  Doesn't this approach mitigate
some of the risk posed by complexity in neural systems?

-- Scott


FW: [agi] None of you seem to be able ...

2007-12-06 Thread Ed Porter
Richard,

A day or so ago I told you if you could get me to eat my words with regard
to something involving your global-local disconnect, I would do so with
relish because you would have taught me something valuable.  

Well  -- as certain parts of my below email indicate -- to a small extent
you have gotten me to eat my words, at least partially.  You have provided
me with either some new valuable thoughts or a valuable reminder of some old
ones (I attended a two day seminar on complexity in the early '90s after
reading the popular "Chaos book").  You haven't flipped me around 180
degrees, but you have shifted my compass somewhat.  So I suppose I should
thank you.

My acknowledgement of this shift was indicated in my below email in multiple
places in small ways (such as my statement that I had copied your long explanation
of your position to my file of valuable clippings from this list) and in
particular by the immediately following quote, with the relevant portions
capitalized.  

"ED PORTER=> SO, NET, NET, RICHARD, RE-READING YOUR
PAPER AND READING YOUR BELOW LONG POST HAVE INCREASED MY RESPECT FOR YOUR
ARGUMENTS.  I AM SOMEWHAT MORE AFRAID OF COMPLEXITY GOTCHAS THAN I WAS TWO
DAYS AGO.  But I still am pretty confident (without anything beginning to
approach proof) such gotchas will not prevent us from making useful human
level AGI within the decade if AGI got major funding"

You may find anything less than total capitulation unsatisfying, but I think
it would improve the quality of exchange on this list if there was more
acknowledgment after an argument of when one shifts their understanding in
response to someone else's arguments, rather than always trying to act as if
one has won every aspect of every argument.

Ed Porter


-Original Message-
From: Ed Porter [mailto:[EMAIL PROTECTED] 
Sent: Thursday, December 06, 2007 12:57 PM
To: agi@v2.listbox.com
Subject: RE: [agi] None of you seem to be able ...

Richard,

You will be happy to note that I have copied the text of your reply to my
"Valuable Clippings From AGI Mailing List file".  Below are some comments.

>RICHARD LOOSEMORE=> I now understand that you have indeed heard of
complex systems before, but I must insist that in your summary above you
have summarized what they are in such a way that completely contradicts what
they are!

A complex system such as the economy can and does have modes in 
which it appears to be stable.  This does not contradict the complexity 
at all.  A system is not complex because it is unstable.

ED PORTER=> Richard, I was citing relatively stable economies as
exactly what you say they are: an example of a complex system that is
relatively stable.  So why is it that my summary "summarized what they are
in such a way that completely contradicts what they are!"?   I implied that
economies have traditionally had instabilities, such as boom and bust
cycles, and I am aware that even with all our controls, other major
instabilities could strike, in much the same ways that people can have
nervous breakdowns.

ED PORTER=> With regard to the rest of your paper I find it one of your
better reasoned discussions of the problem of complexity.  I, like Ben, agree
it is a potential problem.  I said that in the email you were responding to.
My intuition, like Ben's, tells me we will probably be able to deal with it, but
your paper is correct to point out that such intuitions are really largely
guesses.  

>RICHARD LOOSEMORE=>how can someone know how much impact the
complexity is going to have, when in the same breath they will admit that
NOBODY currently understands just how much of an impact the complexity has.

the best that anyone can do is point to other systems in which there is a
small amount of complexity and say:  "Well, these folks managed to
understand their systems without getting worried about complexity, so why
don't we assume that our problem is no worse than theirs?"  For example,
someone could point to the dynamics of planetary systems and say that there
is a small bit of complexity there, but it is a relatively small effect in
the grand scheme of things.

ED PORTER=> A better example would be the world economy.  It's got 6
billion highly autonomous players.  It has all sorts of non-linearities and
complex connections.  Although it has fits and starts, it has surprising
stability considering everything that is thrown at it (not clear how far
this stability will hold into the singularity future), but still it is an
instructive example of how extremely complex things, with lots of
non-linearities, can work relatively well if there are the proper
motivations and controls.

>RICHARD LOOSEMORE=>Problem with that line of argument is that there are
NO other examples of an engineering system with as much naked funkiness in
the interactions between the low level components

Re: Human Irrationality [WAS Re: [agi] None of you seem to be able ...]

2007-12-06 Thread Mike Tintner

Richard: In the same way computer programs are completely
neutral and can be used to build systems that are either rational or
irrational.  My system is not "rational" in that sense at all.

Richard,

Out of interest, rather than pursuing the original argument:

1) Who are these programmers/system-builders who try to create programs (and 
what are the programs/systems) that are either "irrational" or 
"non-rational" (and are described as such)?


When I have proposed (in different threads) that the mind is not rationally, 
algorithmically programmed I have been met with uniform and often fierce 
resistance both on this and another AI forum. My argument re the philosophy 
of mind of  cog sci & other sciences is of course not based on such 
reactions, but they do confirm my argument. And the position you at first 
appear to be adopting is unique both in my experience and my reading.


2) How is your system "not rational"? Does it not use algorithms?

And could you give a specific example or two of the kind of problem that it 
deals with - non-rationally?  (BTW I don't think I've seen any problem 
examples for your system anywhere, period  - for all I know, it could be 
designed to read children's stories, bomb Iraq, do syllogisms, work out your 
domestic budget, or work out the meaning of life - or play and develop in 
virtual worlds).





Re: [agi] None of you seem to be able ...

2007-12-06 Thread Richard Loosemore

Derek Zahn wrote:

Richard Loosemore writes:

 > Okay, let me try this.
 >
 > Imagine that we got a bunch of computers [...]
 
Thanks for taking the time to write that out.  I think it's the most 
understandable version of your argument that you have written yet.  Put 
it on the web somewhere and link to it whenever the issue comes up again 
in the future.


Thanks:  I will do that very soon.

If you are right, you may have to resort to "told you so" when other 
projects fail to produce the desired emergent intelligence.  No matter 
what you do, system builders can and do and will say that either their 
system is probably not heavily impacted by the issue, or that the issue 
itself is overstated for AGI development, and I doubt that most will be 
convinced otherwise.  By making such a clear exposition, at least the 
issue is out there for people to think about.


True.  I have to go further than that if I want to get more people 
involved in working on this project though.  People with money listen to 
the mainstream voice and want nothing to do with an idea so heavily 
criticised, no matter that the criticism comes from those with a vested 
interest in squashing it.



I have no position myself on whether Novamente (for example) is likely 
to be slain by its own complexity, but it is interesting to ponder.


I would rather it did not, and I hope Ben is right in being so 
optimistic.  I just know that it is a dangerous course to follow if you 
actually don't want to run the risk of another 50 years of running 
around in circles.



Richard Loosemore.





Re: Last word for the time being [WAS Re: [agi] None of you seem to be able ...]

2007-12-06 Thread Richard Loosemore

Benjamin Goertzel wrote:

Show me ONE other example of the reverse engineering of a system in
which the low level mechanisms show as many complexity-generating
characteristics as are found in the case of intelligent systems, and I
will gladly learn from the experience of the team that did the job.

I do not believe you can name a single one.


Well, I am not trying to reverse engineer the brain.  Any more than
the Wright Brothers were trying to reverse engineer  a bird -- though I
do imagine the latter will eventually be possible.


You know, I sympathize with you in a way.  You are trying to build an
AGI system using a methodology that you are completely committed to.
And here am I coming along like Bertrand Russell writing his letter to
Frege, just as poor Frege was about to publish his "Grundgesetze der
Arithmetik", pointing out that everything in the new book was undermined
by a paradox.  How else can you respond except by denying the idea as
vigorously as possible?


It's a deeply flawed analogy.

Russell's paradox is a piece of math and once Frege
was confronted with it he got it.  The discussion between the two of them
did not devolve into long, rambling dialogues about the meanings of terms
and the uncertainties of various intuitions.


Believe me, I know:  which is why I envy Russell for the positive 
response he got from Frege.  You could help the discussion enormously by 
not pushing it in the direction of long rambling dialogues, and by not 
trying to argue about the meanings of terms and the uncertainties of 
various intuitions, which have nothing to do with the point that I made.


I for one hate that kind of pointless discussion, which is why I keep 
trying to make you address the key point.


Unfortunately, you never do address the key point:  in the above, you 
ignored it completely!  (Again!)


At least Frege did actually get it.



Richard Loosemore



Human Irrationality [WAS Re: [agi] None of you seem to be able ...]

2007-12-06 Thread Richard Loosemore

Mike Tintner wrote:
Richard,  The problem here is that I am not sure in what sense you are 
using the
word "rational".  There are many usages.  One of those usages is very 
common in cog sci, and if I go with *that* usage your claim is 
completely wrong:  you can pick up an elementary cog psy textbook and 
find at least two chapters dedicated to a discussion about the many 
ways that humans are (according to the textbook) "irrational".


This is a subject of huge importance, and it shouldn't be hard to reach 
a mutual understanding at least. "Rational" in general means that a 
system or agent follows a coherent and systematic set of steps in 
solving a problem.


The social sciences treat humans as rational agents maximising or 
boundedly satisficing their utilities in taking decisions - coherently and 
systematically finding solutions for their needs (and there is much 
controversy about this - everyone knows it ain't right, but no 
substitute has been offered).


Cognitive science treats the human mind as basically a programmed 
computational machine much like actual programmed computers - and 
programs are normally conceived of as rational - coherent sets of 
steps, etc.


Both cog sci and sci psych. generally endlessly highlight 
irrationalities in our decisionmaking/problemsolving processes - but 
these are only in *parts* of those processes, not the processes as a 
whole. They're like bugs in the program, but the program and mind as a 
whole are basically rational - following coherent sets of steps - it's 
just that the odd heuristic/ attitude/ assumption is wrong (or perhaps 
they have a neurocognitive deficit).


Mike,

What is happening here is that you have gotten an extremely 
oversimplified picture of what cognitive science is claiming.  This 
particular statement of yours focusses on the key misunderstanding:


> Cognitive science treats the human mind as basically a programmed
> computational machine much like actual programmed computers - and
> programs are normally conceived of as rational - coherent sets of
> steps, etc.

Programs IN GENERAL are not rational; it is just that the folks who 
tried to do AI and build models of mind in the very early years 
started out with simple programs that tried to do "reasoning-like" 
computations, and as a result you have seen this as everything that 
computers do.


This would be analogous to someone saying "Paint is used to build 
pictures that directly represent objects in the world."  This would not 
be true:  paint is completely neutral and can be used to either 
represent real things, or represent non-real things, or represent 
nothing at all.  In the same way computer programs are completely 
neutral and can be used to build systems that are either rational or 
irrational.  My system is not "rational" in that sense at all.


Just because some paintings represent things, that does not mean that 
paint only does that.  Just because some people tried to use computers 
to build rational-looking models of mind, that does not mean that 
computers in general do that.





Richard Loosemore



RE: [agi] None of you seem to be able ...

2007-12-06 Thread Derek Zahn
Richard Loosemore writes:
> Okay, let me try this.
>
> Imagine that we got a bunch of computers [...]
 
Thanks for taking the time to write that out.  I think it's the most 
understandable version of your argument that you have written yet.  Put it on 
the web somewhere and link to it whenever the issue comes up again in the 
future.
 
If you are right, you may have to resort to "told you so" when other projects 
fail to produce the desired emergent intelligence.  No matter what you do, 
system builders can and do and will say that either their system is probably 
not heavily impacted by the issue, or that the issue itself is overstated for 
AGI development, and I doubt that most will be convinced otherwise.  By making 
such a clear exposition, at least the issue is out there for people to think 
about.
 
I have no position myself on whether Novamente (for example) is likely to be 
slain by its own complexity, but it is interesting to ponder.


Re: Last word for the time being [WAS Re: [agi] None of you seem to be able ...]

2007-12-06 Thread Benjamin Goertzel
> Show me ONE other example of the reverse engineering of a system in
> which the low level mechanisms show as many complexity-generating
> characteristics as are found in the case of intelligent systems, and I
> will gladly learn from the experience of the team that did the job.
>
> I do not believe you can name a single one.

Well, I am not trying to reverse engineer the brain.  Any more than
the Wright Brothers were trying to reverse engineer  a bird -- though I
do imagine the latter will eventually be possible.

> You know, I sympathize with you in a way.  You are trying to build an
> AGI system using a methodology that you are completely committed to.
> And here am I coming along like Bertrand Russell writing his letter to
> Frege, just as poor Frege was about to publish his "Grundgesetze der
> Arithmetik", pointing out that everything in the new book was undermined
> by a paradox.  How else can you respond except by denying the idea as
> vigorously as possible?

It's a deeply flawed analogy.

Russell's paradox is a piece of math and once Frege
was confronted with it he got it.  The discussion between the two of them
did not devolve into long, rambling dialogues about the meanings of terms
and the uncertainties of various intuitions.

-- Ben



Last word for the time being [WAS Re: [agi] None of you seem to be able ...]

2007-12-06 Thread Richard Loosemore

Benjamin Goertzel wrote:

Richard,

Well, I'm really sorry to have offended you so much, but you seem to be
a mighty easy guy to offend!

I know I can be pretty offensive at times; but this time, I wasn't
even trying ;-)


The argument I presented was not a "conjectural assertion", it made the
following coherent case:

   1) There is a high prima facie *risk* that intelligence involves a
significant amount of irreducibility (some of the most crucial
characteristics of a complete intelligence would, in any other system,
cause the behavior to show a global-local disconnect), and


The above statement contains two fuzzy terms -- "high" and "significant" ...

You have provided no evidence for any particular quantification of
these terms...
your evidence is qualitative/intuitive, so far as I can tell...

Your quantification of these terms seems to me a conjectural assertion
unsupported by evidence.


[This is going to cross over your parallel response to a different post. 
No time to address that other argument, but the comments made here are 
not affected by what is there.]


I have answered this point very precisely on many occasions, including 
in the paper.  Here it is again:


If certain types of mechanisms do indeed give rise to complexity (as all 
the complex systems theorists agree), then BY DEFINITION it will never be 
possible to quantify the exact relationship between:


   1)  The precise characteristics of the low-level mechanisms (both 
the type and the quantity) that would lead us to expect complexity, and


   2)  The amount of complexity thereby caused in the high-level behavior.

Even if the complex systems effect were completely real, the best we 
could ever do would be to come up with suggestive characteristics that 
lead to complexity.  Nevertheless, there is a long list of such 
suggestive characteristics, and everyone (including you) agrees that all 
those suggestive characteristics are present in the low level mechanisms 
that must be in an AGI.


So the one most important thing we know about complex systems is that if 
complex systems really do exist, then we CANNOT say "Give me precise 
quantitative evidence that we should expect complexity in this 
particular system".


And what is your response to this most important fact about complex systems?

Your response is: "Give me precise quantitative evidence that we should 
expect complexity in this particular system".


And then, when I explain all of the above (as I have done before, many 
times), you go on to conclude:


"[You are giving] a conjectural assertion unsupported by evidence."

Which is, in the context of my actual argument, a serious little bit of 
sleight-of-hand (to be as polite as possible about it).





   2) Because of the unique and unusual nature of complexity there is
only a vanishingly small chance that we will be able to find a way to
assess the exact degree of risk involved, and

   3) (A corollary of (2)) If the problem were real, but we were to
ignore this risk and simply continue with an "engineering" approach
(pretending that complexity is insignificant),


The engineering approach does not pretend that complexity is
insignificant.  It just denies that the complexity of intelligent systems
leads to the sort of irreducibility you suggest it does.


It denies it?  Based on what?  My argument above makes it crystal clear 
that if the engineering approach is taking that attitude, then it does 
so purely on the basis of wishful thinking, whilst completely ignoring 
the above argument.  The engineering approach would be saying:  "We 
understand complex systems well enough to know that there isn't a 
problem in this case"  a nonsensical position when by definition it 
is not possible for anyone to really understand the connection, and the 
best evidence we can get is actually pointing to the opposite conclusion.


So this comes back to the above argument:  the engineering approach has 
to address that first, before it can make any such claim.




Some complex systems can be reverse-engineered in their general
principles even if not in detail.  And that is all one would need to do
in order to create a brain emulation (not that this is what I'm trying
to do) --- assuming one's goal was not to exactly emulate some
specific human brain based on observing the behaviors it generates,
but merely to emulate the brainlike character of the system...


This has never been done, but that is exactly what I am trying to do.

Show me ONE other example of the reverse engineering of a system in 
which the low level mechanisms show as many complexity-generating 
characteristics as are found in the case of intelligent systems, and I 
will gladly learn from the experience of the team that did the job.


I do not believe you can name a single one.




then the *only* evidence
we would ever get that irreducibility was preventing us from building a
complete intelligence would be the fact that we would simply run around
in circles all th

Re: [agi] None of you seem to be able ...

2007-12-06 Thread Benjamin Goertzel
> Conclusion:  there is a danger that the complexity that even Ben agrees
> must be present in AGI systems will have a significant impact on our
> efforts to build them.  But the only response to this danger at the
> moment is the bare statement made by people like Ben that "I do not
> think that the danger is significant".  No reason given, no explicit
> attack on any component of the argument I have given, only a statement
> of intuition, even though I have argued that intuition cannot in
> principle be a trustworthy guide here.

But Richard, your argument ALSO depends on intuitions ...

I'll try, though, to more concisely frame the reason I think your argument
is wrong.

I agree that AGI systems contain a lot of complexity in the dynamical-
systems-theory sense.

And I agree that tuning all the parameters of an AGI system externally
is likely to be intractable, due to this complexity.

However, part of the key to intelligence is **self-tuning**.

I believe that if an AGI system is built the right way, it can effectively
tune its own parameters, hence adaptively managing its own complexity.

Now you may say there's a problem here: If AGI component A2 is to
tune the parameters of AGI component A1, and A1 is complex, then
A2 has got to also be complex ... and who's gonna tune its parameters?

So the answer has got to be that to effectively tune the parameters
of an AGI component of complexity X requires an AGI component of
complexity a bit less than X.  Then one can build a self-tuning AGI system,
if one does the job right.

Now, I'm not saying that Novamente (for instance) is explicitly built
according to this architecture: it doesn't have N components wherein
component A_N tunes the parameters of component A_(N+1).

But in many ways, throughout the architecture, it relies on this sort of
fundamental logic.

Obviously it is not the case that every system of complexity X can
be parameter-tuned by a system of complexity less than X.  The question
however is whether an AGI system can be built of such components.
I suggest the answer is yes -- and furthermore suggest that this is
pretty much the ONLY way to do it...
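To make the shape of this cascade concrete, here is a minimal, purely
illustrative Python sketch (my invention, not Novamente's actual
architecture): each component in a chain has its parameters hill-climbed by
the slightly simpler component before it, and the simplest component at the
head of the chain is left untuned, which is exactly where the regress
described above bottoms out.  The class names, the toy objective and the
2**i parameter counts are all assumptions made up for the example.

import random

class Component:
    """A toy 'component': a bag of parameters plus a crude complexity measure."""
    def __init__(self, name, n_params):
        self.name = name
        self.n_params = n_params                      # stand-in for "complexity X"
        self.params = [random.random() for _ in range(n_params)]

    def performance(self):
        # Invented objective: pretend each parameter's ideal value is 0.5,
        # so performance improves as the parameters approach it.
        return -sum((p - 0.5) ** 2 for p in self.params)

def tune(tuner, target, steps=50):
    # The simpler component nudges the more complex one's parameters and
    # keeps only the nudges that improve the target's measured performance.
    # (The tuner's own parameters are unused here; the point is the wiring.)
    for _ in range(steps):
        i = random.randrange(target.n_params)
        old, before = target.params[i], target.performance()
        target.params[i] = old + random.uniform(-0.1, 0.1)
        if target.performance() < before:
            target.params[i] = old                    # revert an unhelpful change

# A chain A_1 ... A_4 in which each component is "a bit less complex" than the next.
chain = [Component(f"A_{i}", n_params=2 ** i) for i in range(1, 5)]
for _ in range(20):                                   # interleaved self-tuning passes
    for simpler, more_complex in zip(chain, chain[1:]):
        tune(simpler, more_complex)

print({c.name: round(c.performance(), 3) for c in chain})

Run repeatedly, every component except the head of the chain drifts toward
its (toy) optimum without any external parameter tuning, which is the logic
the argument above relies on.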

Your intuition is that this is not possible, but you don't have a proof
of this...

And yes, I realize the above argument of mine is conceptual only -- I haven't
given a formal definition of complexity.  There are many, but that would
lead into a mess of math that I don't have time to deal with right now,
in the context of answering an email...

-- Ben G



Re: [agi] None of you seem to be able ...

2007-12-06 Thread Mike Tintner
s on convergent problems, even though essaywriting and similar 
projects constitute a good half - and by far the most important half - of 
educational and real world problemsolving and intelligence, period. IOW sci 
psych's concept of intelligence basically ignores the divergent/ "fluid" 
half of intelligence.


Scientific psychology does not in any way study the systematic and 
inevitable irrationality of how people actually solve divergent problems. To 
do that, you would have to look, for example, at how people solve problems 
like essays from beginning to end - involving hundreds to thousands of lines 
of thought. That would be much, much too complex for present-day psychology 
which concentrates on simple problems and simple aspects of problemsolving.


I can go on at much greater length, incl. explaining how I think the mind is 
actually programmed, and the positive, "creative" side of its irrationality, 
but by now you should have the beginning of an understanding of what I'm on 
about and mean by rational/irrational. There really is no question that 
science does  currently regard the human mind as rational - (it's called 
rational, not irrational, decision theory and science talks of rational, not 
irrational, agents ) - and to do otherwise would challenge the foundations 
of cognitive science.


AI in general does not seek to produce irrational programs - and it remains 
a subject of intense debate as to whether it can produce creative programs.


When you have a machine that can solve divergent problems - incl. writing 
essays and having free-flowing conversations - and do so as 
irrationally/creatively as humans do, you will have solved the problem of 
AGI.





- Original Message - 
From: "Richard Loosemore" <[EMAIL PROTECTED]>

To: 
Sent: Thursday, December 06, 2007 3:19 PM
Subject: Re: [agi] None of you seem to be able ...



Mike Tintner wrote:

Richard: Now, interpreting that result is not easy,

Richard, I get the feeling you're getting understandably tired with all 
your correspondence today. Interpreting *any* of the examples of *hard* 
cog sci that you give is not easy. They're all useful, stimulating stuff, 
but they don't add up to a hard pic. of the brain's cognitive 
architecture. Perhaps Ben will back me up on this - it's a rather 
important point - our overall *integrated* picture of the brain's 
cognitive functioning is really v. poor, although certainly we have a 
wealth of details about, say, which part of the brain is somehow 
connected to a given operation.


You make an important point, but in your haste to make it you may have 
overlooked the fact that I really agree with you ... and have gone on to 
say that I am trying to fix that problem.


What I mean by that:  if you look at cog psy/cog sci in a superficial way 
you might come away with the strong impression that "they don't add up to a 
hard pic. of the brain's cognitive architecture".  Sure.  But that is what 
I meant when I said that "cog sci has a huge amount of information stashed 
away, but it is in a format that makes it very hard for someone trying to 
build an intelligent system to actually use".


I believe I can see deeper into this problem, and I think that cog sci can 
be made to add up to a consistent picture, but it requires an extra 
organizational ingredient that I am in the process of adding right now.


The root of the problem is that the cog sci and AI communities both have 
extremely rigid protocols about how to do research, which are incompatible 
with each other.  In cog sci you are expected to produce a micro-theory 
for every experimental result, and efforts to work on larger theories or 
frameworks without introducing new experimental results that are directly 
explained are frowned upon.  The result is a style of work that produces 
"local patch" theories that do not have any generality.


The net result of all this is that when you say that "our overall 
*integrated* picture of the brain's cognitive functioning is really v. 
poor" I would point out that this is only true if you replace the "our" 
with "the AI community's".



Richard: I admit that I am confused right
now:  in the above paragraphs you say that your position is that the
human mind is 'rational' and then later that it is 'irrational' - was
the first one of those a typo?

Richard, No typo whatsoever if you just reread. V. clear. I say and said: 
*scientific psychology* and *cog sci* treat the mind as rational. I am the 
weirdo who is saying this is nonsense - the mind is 
irrational/crazy/creative - rationality is a major *achievement* not 
something that comes naturally. "Mike Tintner= crazy/irrational"- 
somehow, I don't think you'll find that hard to remember.


The problem here is that I am not sure in what sense you are using

RE: [agi] None of you seem to be able ...

2007-12-06 Thread Ed Porter
Richard,

You will be happy to note that I have copied the text of your reply to my
"Valuable Clippings From AGI Mailing List file".  Below are some comments.

>RICHARD LOOSEMORE=> I now understand that you have indeed heard of
complex systems before, but I must insist that in your summary above you
have summarized what they are in such a way that completely contradicts what
they are!

A complex system such as the economy can and does have stable modes in 
which it appears to be stable.  This does not contradict the complexity 
at all.  A system is not complex because it is unstable.

ED PORTER=> Richard, I was citing relatively stable economies as
exactly what you say they are: an example of a complex system that is
relatively stable.  So why is it that my summary "summarized what they are
in such a way that completely contradicts what they are"?   I implied that
economies have traditionally had instabilities, such as boom and bust
cycles, and I am aware that even with all our controls, other major
instabilities could strike, in much the same ways that people can have
nervous breakdowns.

ED PORTER=> With regard to the rest of your paper, I find it one of your
better-reasoned discussions of the problem of complexity.  I, like Ben, agree
it is a potential problem.  I said that in the email you were responding to.
My intuition, like Ben's, tells me we will probably be able to deal with it, but
your paper is correct to point out that such intuitions are really largely
guesses.

>RICHARD LOOSEMORE=> how can someone know how much impact the
complexity is going to have, when in the same breath they will admit that
NOBODY currently understands just how much of an impact the complexity has.

the best that anyone can do is point to other systems in which there is a
small amount of complexity and say:  "Well, these folks managed to
understand their systems without getting worried about complexity, so why
don't we assume that our problem is no worse than theirs?"  For example,
someone could point to the dynamics of planetary systems and say that there
is a small bit of complexity there, but it is a relatively small effect in
the grand scheme of things.

ED PORTER=> A better example would be the world economy.  It's got 6
billion highly autonomous players.  It has all sorts of non-linearities and
complex connections.  Although it has fits and starts, it has surprising
stability considering everything that is thrown at it (it is not clear how far
this stability will hold into the singularity future), but it is still an
instructive example of how extremely complex things, with lots of
non-linearities, can work relatively well if there are the proper
motivations and controls.

>RICHARD LOOSEMORE=>Problem with that line of argument is that there are
NO other examples of an engineering system with as much naked funkiness in
the interactions between the low level components.

ED PORTER=> The key is to try to avoid and/or control funkiness in your
components.  Remember that an experiential system derives most of its
behavior by re-enacting, largely through substitutions and
probabilistic-transition-based synthesis, from past experience, with a bias
toward past experiences that have "worked" in some sense meaningful to the
machine.  This creates a tremendous bias toward desirable, vs. funky,
behaviors.
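To make the kind of mechanism being described here a little more concrete,
the following is a minimal Python sketch of experience-biased,
probabilistic-transition synthesis.  The state and action names, the reward
weights and the smoothing constant are invented for illustration; this is
not a description of Ed's actual system.

import random
from collections import defaultdict

# (state, action) -> accumulated evidence that the action "worked" there
experience = defaultdict(float)

def record(state, action, worked):
    # Reinforce transitions whose outcomes were meaningful to the machine;
    # experiences that did not work still leave a small trace.
    experience[(state, action)] += 1.0 if worked else 0.1

def synthesize(state, candidate_actions):
    # Re-enact from past experience: sample an action with probability
    # proportional to how often it has "worked" in this state, which biases
    # behavior toward desirable rather than funky transitions.
    weights = [experience[(state, a)] + 0.01 for a in candidate_actions]
    return random.choices(candidate_actions, weights=weights, k=1)[0]

record("greeting", "say_hello", worked=True)
record("greeting", "recite_primes", worked=False)
print(synthesize("greeting", ["say_hello", "recite_primes"]))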

So, net, net, Richard, re-reading your paper and reading your below long
post have increased my respect for your arguments.  I am somewhat more
afraid of complexity gotchas than I was two days ago.  But I still am pretty
confident (without anything beginning to approach proof) such gotchas will
not prevent us from making useful human-level AGI within the decade if AGI
got major funding.

But I have been afraid for a long time that even the other type of
complexity (i.e., complication, which often involves some risk of
"complexity") means that it may be very difficult for us humans to keep
control of superhuman-level AGI's for very long, so I have always worried
about that sort of complexity Gotcha.

But with regard to the complexity problem, it seems to me that we should
design systems with an eye to reducing their gnarliness, including planning
multiple types of control systems, and then, once we get such initial systems
up and running, try to find out what sort of complexity problems we have.

Ed Porter



-Original Message-
From: Richard Loosemore [mailto:[EMAIL PROTECTED] 
Sent: Thursday, December 06, 2007 11:46 AM
To: agi@v2.listbox.com
Subject: Re: [agi] None of you seem to be able ...

Ed Porter wrote:
> Jean-Paul,
> 
> Although complexity is one of the areas associated with AI where I have less
> knowledge than many on the list, I was aware of the general distinction you
> are making.  
> 
> What I was pointing out in my email to Richard Loosemore was that the
> definitio

Re: [agi] None of you seem to be able ...

2007-12-06 Thread Richard Loosemore

Ed Porter wrote:

Richard,

I read your core definitions of "computationally irreducible" and
"global-local disconnect" and by themselves they really don't distinguish
very well between complicated and "complex".


That paper was not designed to be a "complex systems for absolute 
beginners" paper, so these definitions work very well for anyone who has 
even a little background knowledge of complex systems.




Richard Loosemore



But I did assume from your paper and other writings you meant "complex"
although your core definitions are not very clear about the distinction.

Ed Porter





RE: [agi] None of you seem to be able ...

2007-12-06 Thread Ed Porter
Richard,

I read your core definitions of "computationally irreducible" and
"global-local disconnect" and by themselves they really don't distinguish
very well between complicated and "complex".

But I did assume from your paper and other writings you meant "complex"
although your core definitions are not very clear about the distinction.

Ed Porter

-Original Message-
From: Richard Loosemore [mailto:[EMAIL PROTECTED] 
Sent: Thursday, December 06, 2007 10:31 AM
To: agi@v2.listbox.com
Subject: Re: [agi] None of you seem to be able ...

Ed Porter wrote:
> Richard, 
> 
>   I quickly reviewed your paper, and you will be happy to note that I
> had underlined and highlighted it, so such skimming was more valuable than it
> otherwise would have been.
> 
>   With regard to "COMPUTATIONAL IRREDUCIBILITY", I guess a lot depends
> on definition. 
> 
>   Yes, my vision of a human AGI would be a very complex machine.  Yes,
> a lot of its outputs could only be made with human level reasonableness
> after a very large amount of computation.  I know of no shortcuts around the
> need to do such complex computation.  So it arguably falls into what you
> say Wolfram calls "computational irreducibility."  
> 
>   But the same could be said for any of many types of computations,
> such as large matrix equations or Google's map-reduces, which are routinely
> performed on supercomputers.
> 
>   So if that is how you define irreducibility, it's not that big a
> deal.  It just means you have to do a lot of computing to get an answer,
> which I have assumed all along for AGI (remember, I am the one pushing for
> breaking the small hardware mindset).  But it doesn't mean we don't know how
> to do such computing or that we have to do a lot more complexity research,
> of the type suggested in your paper, before we can successfully design
> AGIs.
> 
>   With regard to "GLOBAL-LOCAL DISCONNECT", again it depends what you
> mean.  
> 
>   You define it as
> 
>   "The GLD merely signifies that it might be difficult or
> impossible to derive analytic explanations of global regularities that we
> observe in the system, given only a knowledge of the local rules that drive
> the system."
> 
>   I don't know what this means.  Even the game of Life referred to in
> your paper can be analytically explained.  It is just that some of the
> things that happen are rather complex and would take a lot of computing to
> analyze.  So does the global-local disconnect apply to anything where an
> explanation requires a lot of analysis?  If that is the case then any large
> computation, of the type which mankind does and designs every day, would
> have a global-local disconnect.
> 
>   If that is the case, the global-local disconnect is no big deal.  We
> deal with it every day.

Forgive, but I am going to have to interrupt at this point.

Ed, what is going on here is that my paper is about "complex systems" 
but you are taking that phrase to mean something like "complicated 
systems" rather than the real meaning -- the real meaning is very much 
not "complicated systems", it has to do with a particular class of 
systems that are labelled "complex" BECAUSE they show overall behavior 
that appears to be disconnected from the mechanisms out of which the 
systems are made up.

The problem is that "complex systems" has a specific technical meaning. 
  If you look at the footnote in my paper (I think it is on page one), 
you will find that the very first time I use the word "complex" I make 
sure that my audience does not take it the wrong way by explaining that 
it does not refer to "complicated system".

Everything you are saying here in this post is missing the point, so 
could I request that you do some digging around to figure out what 
complex systems are, and then make a second attempt?  I am sorry:  I do 
not have the time to write a long introductory essay on complex systems 
right now.

Without this understanding, the whole of my paper will seem like 
gobbledegook.  I am afraid this is the result of skimming through the 
paper.  I am sure you would have noticed the problem if you had gone 
more slowly.



Richard Loosemore.


>   I don't know exactly what you mean by "regularities" in the above
> definition, but I think you mean something equivalent to patterns or
> meaningful generalizations.  In many types of computing commonly done, you
> don't know what the regularities will be without tremendous computing.  For
> example, in principal component analysis, you often don't know what the major
> dimensions of a distribution will 

RE: [agi] None of you seem to be able ...

2007-12-06 Thread Ed Porter
Ben,
Your below email is a much more concise statement of the basic point
I was trying to make.
Ed Porter

-Original Message-
From: Benjamin Goertzel [mailto:[EMAIL PROTECTED] 
Sent: Thursday, December 06, 2007 9:45 AM
To: agi@v2.listbox.com
Subject: Re: [agi] None of you seem to be able ...

There is no doubt that complexity, in the sense typically used in
dynamical-systems-theory, presents a major issue for AGI systems.  Any
AGI system with real potential is bound to have a lot of parameters
with complex interdependencies between them, and tuning these
parameters is going to be a major problem.  The question is whether
one has an adequate theory of one's system to allow one to do this
without an intractable amount of trial and error.  Loosemore -- if I
interpret him correctly -- seems to be suggesting that for powerful
AGI systems no such theory can exist, on principle.  I doubt very much
this is correct.

-- Ben G

On Dec 6, 2007 9:40 AM, Ed Porter <[EMAIL PROTECTED]> wrote:
> Jean-Paul,
>
> Although complexity is one of the areas associated with AI where I have less
> knowledge than many on the list, I was aware of the general distinction you
> are making.
>
> What I was pointing out in my email to Richard Loosemore was that the
> definitions in his paper "Complex Systems, Artificial Intelligence and
> Theoretical Psychology," for "irreducible computability" and "global-local
> interconnect" themselves are not totally clear about this distinction, and
> as a result, when Richard says that those two issues are an unavoidable part
> of AGI design that must be much more deeply understood before AGI can
> advance, by the looser definitions, which would cover the types of
> complexity involved in large matrix calculations and the design of a massive
> supercomputer, of course those issues would arise in AGI design, but it's no
> big deal because we have a long history of dealing with them.
>
> But in my email to Richard I said I was assuming he was not using these
> looser definitions of these words, because if he were, they would not present
> the unexpected difficulties of the type he has been predicting.  I said I
> thought he was dealing with more the potentially unruly type of complexity
> that I assume you were talking about.
>
> I am aware of that type of complexity being a potential problem, but I have
> designed my system to hopefully control it.  A modern-day well functioning
> economy is complex (people at the Santa Fe Institute often cite economies as
> examples of complex systems), but it is often amazingly unchaotic
> considering how loosely it is organized and how many individual entities it
> has in it, and how many transitions it is constantly undergoing.  Usually,
> unless something bangs on it hard (such as having the price of a major
> commodity all of a sudden triple), it has a fair amount of stability, while
> constantly creating new winners and losers (which is a productive form of
> mini-chaos).  Of course in the absence of regulation it is naturally prone
> to boom and bust cycles.
>
> So the system would need regulation.
>
> Most of my system operates on a message passing system with little concern
> for synchronization, it does not require low latencies, and most of its units
> operate under fairly similar code.  But hopefully when you get it all
> working together it will be fairly dynamic, but that dynamism will be under
> multiple controls.
>
> I think we are going to have to get such systems up and running to find out
> just how hard or easy they will be to control, which I acknowledged in my
> email to Richard.  I think that once we do, we will be in a much better
> position to think about what is needed to control them.  I believe such
> control will be one of the major intellectual challenges to getting AGI to
> function at a human level.  This issue is not only preventing runaway
> conditions, it is optimizing the intelligence of the inferencing, which I
> think will be even more important and difficult.  (There are all sorts of
> damping mechanisms and selective biasing mechanisms that should be able to
> prevent many types of chaotic behaviors.)  But I am quite confident that, with
> multiple teams working on it, these control problems could be largely
> overcome in several years, with the systems themselves doing most of the
> learning.
>
> Even a little OpenCog AGI on a PC could be an interesting first indication of
> the extent to which complexity will present control problems.  As I said, if
> you had 3 GB of RAM for representation, that should allow about 50 million
> atoms.  Over time you would probably end up with at least hundreds of
> thousands of complex patterns, and it would be interesting to see how easy it
> would be to properly contr

Re: [agi] None of you seem to be able ...

2007-12-06 Thread Richard Loosemore

Ed Porter wrote:

Jean-Paul,

Although complexity is one of the areas associated with AI where I have less
knowledge than many on the list, I was aware of the general distinction you
are making.  


What I was pointing out in my email to Richard Loosemore was that the
definitions in his paper "Complex Systems, Artificial Intelligence and
Theoretical Psychology," for "irreducible computability" and "global-local
interconnect" themselves are not totally clear about this distinction, and
as a result, when Richard says that those two issues are an unavoidable part
of AGI design that must be much more deeply understood before AGI can
advance, by the looser definitions, which would cover the types of
complexity involved in large matrix calculations and the design of a massive
supercomputer, of course those issues would arise in AGI design, but it's no
big deal because we have a long history of dealing with them.

But in my email to Richard I said I was assuming he was not using these looser
definitions of these words, because if he were, they would not present
the unexpected difficulties of the type he has been predicting.  I said I
thought he was dealing with more the potentially unruly type of complexity
that I assume you were talking about.

I am aware of that type of complexity being a potential problem, but I have
designed my system to hopefully control it.  A modern-day well functioning
economy is complex (people at the Santa Fe Institute often cite economies as
examples of complex systems), but it is often amazingly unchaotic
considering how loosely it is organized and how many individual entities it
has in it, and how many transitions it is constantly undergoing.  Usually,
unless something bangs on it hard (such as having the price of a major
commodity all of a sudden triple), it has a fair amount of stability, while
constantly creating new winners and losers (which is a productive form of
mini-chaos).  Of course in the absence of regulation it is naturally prone
to boom and bust cycles.  


Ed,

I now understand that you have indeed heard of complex systems before, 
but I must insist that in your summary above you have summarized what 
they are in such a way that completely contradicts what they are!


A complex system such as the economy can and does have stable modes in 
which it appears to be stable.  This does not contradict the complexity 
at all.  A system is not complex because it is unstable.


I am struggling here, Ed.  I want to go on to explain exactly what I 
mean (and what complex systems theorists mean) but I cannot see a way to 
do it without writing half a book this afternoon.


Okay, let me try this.

Imagine that we got a bunch of computers and connected them with a 
network that allowed each one to talk to (say) the ten nearest machines.


Imagine that each one is running a very simple program:  it keeps a 
handful of local parameters (U, V, W, X, Y) and it updates the values of 
its own parameters according to what the neighboring machines are doing 
with their parameters.


How does it do the updating?  Well, imagine some really messy and 
bizarre algorithm that involves looking at the neighbors' values, then 
using them to cross reference each other, and introduce delays and 
gradients and stuff.


On the face of it, you might think that the result will be that the U V 
W X Y values just show a random sequence of fluctuations.


Well, we know two things about such a system.

1) Experience tells us that even though some systems like that are just 
random mush, there are some (a noticeably large number in fact) that 
have overall behavior that shows 'regularities'.  For example, much to 
our surprise we might see waves in the U values.  And every time two 
waves hit each other, a vortex is created for exactly 20 minutes, then 
it stops.  I am making this up, but that is the kind of thing that could 
happen.


2) The algorithm is so messy that we cannot do any math to analyse and 
predict the behavior of the system.  All we can do is say that we have 
absolutely no techniques that will allow us to make mathematical progress on 
the problem today, and we do not know if at ANY time in future history 
there will be a mathematics that will cope with this system.


What this means is that the waves and vortices we observed cannot be 
"explained" in the normal way.  We see them happening, but we do not 
know why they do.  The bizarre algorithm is the "low level mechanism" 
and the waves and vortices are the "high level behavior", and when I say 
there is a "Global-Local Disconnect" in this system, all I mean is that 
we are completely stuck when it comes to explaining the high level in 
terms of the low level.
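For what it is worth, here is a toy Python rendering of the thought
experiment just described; the particular update rule is deliberately
arbitrary and invented for the example, and nothing in the code tells you
whether waves, vortices or any other global regularity will appear -- the
only way to find out is to run it and watch.

import math
import random

N = 200                                        # number of machines on a ring
nodes = [{k: random.random() for k in "UVWXY"} for _ in range(N)]

def neighbours(state, i):
    # the ten nearest machines: five on each side of node i
    return [state[(i + d) % N] for d in range(-5, 6) if d != 0]

def step(state):
    new = []
    for i, n in enumerate(state):
        nb = neighbours(state, i)
        # a "messy and bizarre" local rule: cross-referenced neighbour values,
        # a nonlinearity, and a damped self term
        new.append({
            "U": 0.8 * n["U"] + 0.2 * math.sin(sum(m["V"] for m in nb)),
            "V": abs(n["W"] - max(m["U"] for m in nb)),
            "W": 0.5 * (n["X"] + min(m["Y"] for m in nb)),
            "X": n["Y"] * (1.0 - n["U"]),
            "Y": sum(m["X"] for m in nb) / len(nb),
        })
    return new

for _ in range(100):
    nodes = step(nodes)

# inspect a slice of the U values and look (by eye) for spatial structure
print([round(n["U"], 2) for n in nodes[:10]])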


Believe me, it is childishly easy to write down equations/algorithms for 
a system like this that are so profoundly intractable that no 
mathematician would even think of touching them.  You have to trust me 
on this.  Call your local Math department at Harvard or somewhere, and 
che

Re: [agi] None of you seem to be able ...

2007-12-06 Thread A. T. Murray
Mike Tintner wrote on Thu, 6 Dec 2007:
>
> ATM:
>> http://mentifex.virtualentity.com/mind4th.html -- an AGI prototype --
>> has just gone through a major bug-solving update, and is now much
>> better at maintaining chains of continuous thought -- after the
>> user has entered sufficient knowledge for the AI to think about.
>>
> It doesn't have - you didn't try to give it - 
> independent curiosity (like an infant)?
 
No, sorry, but the Forthmind does have an Ask module at 
http://mentifex.virtualenty.com/ask.html for asking questions --
which, come to think of it, may be a form of innate curiosity.

Meanwhile a year and a half after receiving a bug report, 
the current bug-solving update has been posted at 
http://tech.groups.yahoo.com/group/win32forth/message/13048
as follows FYI:

> OK, the audRecog subroutine is not totally bugfree
> when it comes to distinguishing certain sequences 
> of ASCII characters. It may be necessary to not use
> MACHINES or SERVE if these words confuse the AI.
> In past years I have spent dozens of painful
> hours fiddling with the audRecog subroutine, 
> and usually the slightest change breaks it worse
> than it was before. It works properly probably 
> eighty percent of the time, if not more.
> Even though the audRecog module became suspect 
> to me over time, I pressed on for True AI.

On 14 June 2006 I responded above to a post by FJR.
Yesterday -- a year and a half later -- I finally 
tracked down and eliminated the bug in question.

http://mind.sourceforge.net/audrecog.html -- 
the auditory recognition "audRecog" module -- 
was sometimes malfunctioning by misrecognizing 
one word of input as the word of a different 
concept, usually if both words ended the same. 

The solution was to base the selection of an 
auditory recognition upon finding the candidate 
word-match with the highest incremental activation, 
rather than merely taking the most recent match. 
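A minimal Python sketch of that selection change (the data structure and
the crude character matching are invented for illustration; this is not
the actual Forth audRecog code): matching characters add incremental
activation to each stored word, and the recognized concept is the
candidate with the highest activation rather than simply the most
recently matched word.

def aud_recog(heard, lexicon):
    # Accumulate incremental activation for each stored word as its
    # characters match the incoming string, then select the candidate
    # with the highest activation -- not merely the last one that matched.
    activation = {word: 0 for word in lexicon}
    for word in lexicon:
        for a, b in zip(heard, word):
            if a != b:
                break
            activation[word] += 1
    return max(activation, key=activation.get)

# "books" activates the concept "book" most strongly, even though the
# match covers less than the entire word.
print(aud_recog("books", ["book", "boot", "machines"]))   # -> book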

By what is known as serendipity or sheer luck, 
the present solution to the old audRecog problem 
opens up a major new possibility for a far more 
advanced version of the audRecog module -- one 
that can recognize the concept of, say, "book" 
as input of either the word "book" or "books." 
Since audRecog now recognizes a word by using 
incremental activation, it should not be too 
hard to switch the previous pattern-recognition 
algorithm into one that no longer insists upon 
dealing only with entire words, but can instead 
recognize less than an entire word because so 
much incremental activation has built up.

The above message may not be very crystal clear, 
and so it is posted here mainly as a show of 
hope and as a forecasting of what may yet come.

http://mind.sourceforge.net/mind4th.html is 
the original Mind.Forth with the new audRecog.

http://AIMind-I.com is FJR's AI Mind in Forth.
(Sorry I can't help in the matter of timers.)

ATM
-- 
http://mentifex.virtualentity.com/mind4th.html 



Re: [agi] None of you seem to be able ...

2007-12-06 Thread Richard Loosemore

Mike Tintner wrote:


JVPB: You seem to have missed what many A(G)I people (Ben, Richard, etc.) 
mean by 'complexity' (as opposed to the common usage of complex meaning 
difficult).


Well, I, as an ignoramus, was wondering about this - so thank you. And it 
wasn't clear at all to me from Richard's paper what he meant. 


Well, to be fair to me, I pointed out in a footnote at the very 
beginning of the paper that the term "complex system" was being used in 
the technical sense, and then shortly afterwards I gave some references 
to anyone who needed to figure out what that technical sense actually was...


Could I have done more?

Look up the Waldrop book that I gave as a reference:  at least that is a 
nice non-technical read.




Richard Loosemore


What I'm
taking away from your account is that it involves random inputs...? Is 
there a fuller account of it? Is it the random dimension that he/others 
hope will produce emergent/human-like behaviour? (..because if so, I'd 
disagree - I'd argue the complications of human behaviour flow from 
conflict/ conflicting goals - which happens to be signally missing from 
his (and cognitive science's) ideas about emotions).









Re: [agi] None of you seem to be able ...

2007-12-06 Thread Richard Loosemore
as
built without having to first research the global-local disconnect in any
great depth, as you have suggested is necessary.

Similarly, although the computation in a Novamente type AGI
architecture would be much more complex than in Hecht-Neilsen's
confabulation, it would share certain important similarities.  And although
the complexity issues in appropriately controlling the inferencing of a
human-level Novamente-type machine will be challenging, it is far from clear
that such design will require substantial advances in the understanding of
global-local interconnect.  


I am confident that valuable (though far less than human-level)
computation can be done in a Novamente type system with relatively simple
control mechanisms.  So I think it is worth designing such Novamente-type
systems and saving the fine tuning of the inference control system until we
have systems to test such control systems on.  And I think it is best to
save whatever study of complexity that may be needed to get such control
systems to operate relatively optimally in a dynamic manner until we
actually have initial such control systems up and running, so that we have a
better idea about what complexity issues we are really dealing with.  


I think this makes much more sense than spending a lot of time now
exploring the -- it would seem to me -- extremely large space of
possible global-local disconnects.

Ed Porter

-Original Message-
From: Richard Loosemore [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, December 05, 2007 10:41 AM

To: agi@v2.listbox.com
Subject: Re: [agi] None of you seem to be able ...

Ed Porter wrote:

RICHARD LOOSEMORE> There is a high prima facie *risk* that intelligence
involves a significant amount of irreducibility (some of the most crucial
characteristics of a complete intelligence would, in any other system,
cause the behavior to show a global-local disconnect),


ED PORTER=> Richard, "prima facie" means obvious on its face.  The above
statement and those that followed it below may be obvious to you, but it is
not obvious to a lot of us, and at least I have not seen (perhaps because of
my own ignorance, but perhaps not) any evidence that it is obvious.
Apparently Ben also does not find your position to be obvious, and Ben is no
dummy.

Richard, did you ever just consider that it might be "turtles all the way
down", and by that I mean experiential patterns, such as those that could be
represented by Novamente atoms (nodes and links) in a gen/comp hierarchy
"all the way down".  In such a system each level is quite naturally derived
from levels below it by learning from experience.  There is a lot of dynamic
activity, but much of it is quite orderly, like that in Hecht-Neilsen's
Confabulation.  There is no reason why there has to be a "GLOBAL-LOCAL
DISCONNECT" of the type you envision, i.e., one that is totally impossible
to architect in terms of until one totally explores global-local disconnect
space (just think how large an exploration space that might be).

So if you have prima facie evidence to support your claim (other than your
paper which I read which does not meet that standard


Ed,

Could you please summarize for me what your understanding is of my claim 
for the "prima facie" evidence (that I gave in that paper), and then, if 
you would, please explain where you believe the claim goes wrong.


With that level of specificity, we can discuss it.

Many thanks,



Richard Loosemore



), then present it.  If
you make me eat my words you will have taught me something sufficiently
valuable that I will relish the experience.





Re: [agi] None of you seem to be able ...

2007-12-06 Thread Richard Loosemore

Mike Tintner wrote:

Richard: Now, interpreting that result is not easy,

Richard, I get the feeling you're getting understandably tired with all 
your correspondence today. Interpreting *any* of the examples of *hard* 
cog sci that you give is not easy. They're all useful, stimulating 
stuff, but they don't add up to a hard pic. of the brain's cognitive 
architecture. Perhaps Ben will back me up on this - it's a rather 
important point - our overall *integrated* picture of the brain's 
cognitive functioning is really v. poor, although certainly we have a 
wealth of details about, say, which part of the brain is somehow 
connected to a given operation.


You make an important point, but in your haste to make it you may have 
overlooked the fact that I really agree with you ... and have gone on to 
say that I am trying to fix that problem.


What I mean by that:  if you look at cog psy/cog sci in a superficial 
way you might come away with the strong impression that "they don't add 
up to a hard pic. of the brain's cognitive architecture".  Sure.  But 
that is what I meant when I said that "cog sci has a huge amount of 
information stashed away, but it is in a format that makes it very hard 
for someone trying to build an intelligent system to actually use".


I believe I can see deeper into this problem, and I think that cog sci 
can be made to add up to a consistent picture, but it requires an extra 
organizational ingredient that I am in the process of adding right now.


The root of the problem is that the cog sci and AI communities both have 
extremely rigid protocols about how to do research, which are 
incompatible with each other.  In cog sci you are expected to produce a 
micro-theory for every experimental result, and efforts to work on 
larger theories or frameworks without introducing new experimental 
results that are directly explained are frowned upon.  The result is a 
style of work that produces "local patch" theories that do not have any 
generality.


The net result of all this is that when you say that "our overall 
*integrated* picture of the brain's cognitive functioning is really v. 
poor" I would point out that this is only true if you replace the "our" 
with "the AI community's".



Richard: I admit that I am confused right
now:  in the above paragraphs you say that your position is that the
human mind is 'rational' and then later that it is 'irrational' - was
the first one of those a typo?

Richard, No typo whatsoever if you just reread. V. clear. I say and 
said: *scientific psychology* and *cog sci* treat the mind as rational. I 
am the weirdo who is saying this is nonsense - the mind is 
irrational/crazy/creative - rationality is a major *achievement* not 
something that comes naturally. "Mike Tintner= crazy/irrational"- 
somehow, I don't think you'll find that hard to remember.


The problem here is that I am not sure in what sense you are using the 
word "rational".  There are many usages.  One of those usages is very 
common in cog sci, and if I go with *that* usage your claim is 
completely wrong:  you can pick up an elementary cog psy textbook and 
find at least two chapters dedicated to a discussion about the many ways 
that humans are (according to the textbook) "irrational".


I suspect what is happening is that you are using the term in a 
different way, and that this is the cause of the confusion.  Since you 
are making the claim, I think the ball is in your court:  please try to 
explain why this discrepancy arises so I can understand your claim.  Take 
a look at e.g. Eysenck and Keane (Cognitive Psychology) and try to 
reconcile what you say with what they say.


Richard Loosemore



Re: [agi] None of you seem to be able ...

2007-12-06 Thread Benjamin Goertzel
ttern activations a second, so you would be talking about a
> fairly trivial system, but with say 100K patterns, it would be a good first
> indication of how easy or hard agi systems will be to control.
>
> Ed Porter
>
> -Original Message-
> From: Jean-Paul Van Belle [mailto:[EMAIL PROTECTED]
> Sent: Thursday, December 06, 2007 1:34 AM
> To: agi@v2.listbox.com
>
> Subject: RE: [agi] None of you seem to be able ...
>
> Hi Ed
>
> You seem to have missed what many A(G)I people (Ben, Richard, etc.) mean by
> 'complexity' (as opposed to the common usage of complex meaning difficult).
> It is not the *number* of calculations or interconnects that gives rise to
> complexity or chaos, but their nature. E.g. calculating the eigen-values of
> a n=10^1 matrix is *very* difficult but not complex. So the large matrix
> calculations, map-reduces or BleuGene configuration are very simple. A
> map-reduce or matrix calculation is typically one line of code (at least in
> Python - which is where Google probably gets the idea from :)
>
> To make them complex, you need to go beyond.
> E.g. a 500K-node 3 layer neural network is simplistic (not simple:),
> chaining only 10K NNs together (each with 10K input/outputs) in a random
> network (with only a few of these NNs serving as input or output modules)
> would produce complex behaviour, especially if for each iteration, the input
> vector changes dynamically. Note that the latter has FAR FEWER interconnects
> i.e. would need much fewer calculations but its behaviour would be
> impossible to predict (you can only simulate it) whereas the behaviour of
> the 500K is much more easily understood.
> BlueGene has a simple architecture, a network of computers who do mainly the
> same thing (e.g the GooglePlex) has predictive behaviour, however if each
> computer acts/behaves very differently (I guess on the internet we could
> classify users into a number of distinct agent-like behaviours), you'll get
> complex behaviour. It's the difference in complexity between a 8Gbit RAM
> chip and say an old P3 CPU chip. The latter has less than one-hundredth of
> the transistors but is far more complex and displays interesting behaviour,
> the former doesn't.
>
> Jean-Paul
> >>> On 2007/12/05 at 23:12, in message
> <[EMAIL PROTECTED]>,
> "Ed Porter" <[EMAIL PROTECTED]> wrote:
> >   Yes, my vision of a human AGI would be a very complex machine.  Yes,
> > a lot of its outputs could only be made with human level reasonableness
> > after a very large amount of computation.  I know of no shortcuts around
> the
> > need to do such complex computation.  So it arguably falls in to what you
> > say Wolfram calls "computational irreducibility."
> >   But the same could be said for any of many types of computations,
> > such as large matrix equations or Google's map-reduces, which are
> routinely
> > performed on supercomputers.
> >   So if that is how you define irreducibility, its not that big a
> > deal.  It just means you have to do a lot of computing to get an answer,
> > which I have assumed all along for AGI (Remember I am the one pushing for
> > breaking the small hardware mindset.)  But it doesn't mean we don't know
> how
> > to do such computing or that we have to do a lot more complexity research,
> > of the type suggested in your paper, before we can successfully designing
> > AGIs.
> [...]
> >   Although it is easy to design system where the systems behavior
> > would be sufficiently chaotic that such design would be impossible, it
> seems
> > likely that it is also possible to design complex system in which the
> > behavior is not so chaotic or unpredictable.  Take the internet.
> Something
> > like 10^8 computers talk to each other, and in general it works as
> designed.
> > Take IBM's supercomputer BlueGene L, 64K dual core processor computer each
> > with at least 256MBytes all capable of receiving and passing messages at
> > 4Ghz on each of over 3 dimensions, and capable of performing 100's of
> > trillions of FLOP/sec.  Such a system probably contains at least 10^14
> > non-linear separately functional elements, and yet it works as designed.
> If
> > there is a global-local disconnect in the BlueGene L, which there could be
> > depending on your definition, it is not a problem for most of the
> > computation it does.
>
> --
>
> Research Associate: CITANDA
> Post-Graduate Section Head
> Department of Information Systems
> Phone: (+27)-(0)21-6504256
> Fax: (+27)-(0)21-6502280
> Office: Leslie Commerce 4.21
>
>



RE: [agi] None of you seem to be able ...

2007-12-06 Thread Ed Porter
Jean-Paul,

Although complexity is one of the areas associated with AI where I have less
knowledge than many on the list, I was aware of the general distinction you
are making.  

What I was pointing out in my email to Richard Loosemore was that the
definitions in his paper "Complex Systems, Artificial Intelligence and
Theoretical Psychology" for "irreducible computability" and "global-local
interconnect" are themselves not totally clear about this distinction.  As a
result, when Richard says that those two issues are an unavoidable part of AGI
design that must be much more deeply understood before AGI can advance, then
under the looser definitions (which would cover the types of complexity
involved in large matrix calculations and the design of a massive
supercomputer) those issues would of course arise in AGI design, but that is
no big deal, because we have a long history of dealing with them.

But in my email to Richard I said I was assuming he was not using these looser
definitions of those words, because if he were, they would not present the
unexpected difficulties of the type he has been predicting.  I said I thought
he was dealing more with the potentially unruly type of complexity that I
assume you were talking about.

I am aware of that type of complexity being a potential problem, but I have
designed my system to hopefully control it.  A modern-day well functioning
economy is complex (people at the Santa Fe Institute often cite economies as
examples of complex systems), but it is often amazingly unchaotic
considering how loosely it is organized and how many individual entities it
has in it, and how many transitions it is constantly undergoing.  Usually,
unless something bangs on it hard (such as having the price of a major
commodity all of a sudden triple), it has a fair amount of stability, while
constantly creating new winners and losers (which is a productive form of
mini-chaos).  Of course in the absence of regulation it is naturally prone
to boom and bust cycles.  

So the system would need regulation.

Most of my system operates on a message-passing basis with little concern for
synchronization; it does not require low latencies, and most of its units
operate under fairly similar code.  But hopefully, when you get it all working
together, it will be fairly dynamic, and that dynamism will be under multiple
controls.

I think we are going to have to get such systems up and running to find out
just how hard or easy they will be to control, which I acknowledged in my
email to Richard.  I think that once we do we will be in a much better
position to think about what is needed to control them.  I believe such
control will be one of the major intellectual challenges to getting AGI to
function at a human level.  This issue is not only about preventing runaway
conditions; it is also about optimizing the intelligence of the inferencing,
which I think will be even more important and difficult.  (There are all sorts
of damping mechanisms and selective biasing mechanisms that should be able to
prevent many types of chaotic behaviors.)  But I am quite confident that, with
multiple teams working on it, these control problems could be largely overcome
in several years, with the systems themselves doing most of the learning.

Even a little OpenCog AGI on a PC could be an interesting first indication of
the extent to which complexity will present control problems.  As I said, if
you had 3 GB of RAM for representation, that should allow about 50 million
atoms.  Over time you would probably end up with at least hundreds of
thousands of complex patterns, and it would be interesting to see how easy it
would be to properly control them, and get them to work together as a properly
functioning thought economy in whatever small interactive world they developed
their self-organizing pattern base.  Of course, on such a PC-based system you
would only, on average, be able to do about 10 million pattern-to-pattern
activations a second, so you would be talking about a fairly trivial system,
but with say 100K patterns it would be a good first indication of how easy or
hard AGI systems will be to control.
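
For concreteness, a quick back-of-envelope check of those figures in Python
(nothing here beyond the numbers above; the per-atom and per-pattern rates are
only rough averages):

    ram_bytes   = 3 * 2**30        # 3 GB set aside for representation
    atoms       = 50_000_000       # roughly 50 million atoms
    activations = 10_000_000       # pattern-to-pattern activations per second on a PC
    patterns    = 100_000          # say 100K complex patterns

    print(f"{ram_bytes / atoms:.0f} bytes available per atom")          # ~64 bytes
    print(f"{activations / patterns:.0f} activations per pattern/sec")  # ~100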

Ed Porter

-Original Message-
From: Jean-Paul Van Belle [mailto:[EMAIL PROTECTED] 
Sent: Thursday, December 06, 2007 1:34 AM
To: agi@v2.listbox.com
Subject: RE: [agi] None of you seem to be able ...

Hi Ed

You seem to have missed what many A(G)I people (Ben, Richard, etc.) mean by
'complexity' (as opposed to the common usage of complex meaning difficult).
It is not the *number* of calculations or interconnects that gives rise to
complexity or chaos, but their nature. E.g. calculating the eigen-values of
a n=10^1 matrix is *very* difficult but not complex. So the large matrix
calculations, map-reduces or BleuGene configuration are very simple. A
map-reduce or matrix calculation is typically one line of code (at least in
Python - which is where Google probably gets the idea from :)

To make them complex, yo

Re: [agi] None of you seem to be able ...

2007-12-06 Thread Mike Dougherty
On Dec 6, 2007 8:23 AM, Benjamin Goertzel <[EMAIL PROTECTED]> wrote:
> On Dec 5, 2007 6:23 PM, Mike Tintner <[EMAIL PROTECTED]> wrote:
> > resistance to moving onto the second stage. You have enough psychoanalytical
> > understanding, I think, to realise that the unusual length of your reply to
> > me may possibly be a reflection of that resistance and an inner conflict.
>
> What is bizarre to me, in this psychoanalysis of Ben Goertzel that you 
> present,
> is that you overlook [snip]
>
> Mike, you can make a lot of valid criticisms against me, but I don't
> think you can
> claim I have not originated an "interdependent network of creative ideas."
> I certainly have done so.  You may not like or believe my various ideas, but
> for sure they form an interdependent network.  Read "The Hidden Pattern"
> for evidence.

I just wanted to comment on how well Ben "accepted" Mike's 'analysis.'
 Personally, I was offended by Mike's inconsiderate use of language.
Apparently we have different ideas of etiquette, so that's all I'll
say about it.  (rather than be drawn into a completely off-topic
pissing contest over who is right to say what, etc.)



Re: [agi] None of you seem to be able ...

2007-12-06 Thread Benjamin Goertzel
On Dec 5, 2007 6:23 PM, Mike Tintner <[EMAIL PROTECTED]> wrote:
>
> Ben:  To publish your ideas
> > in academic journals, you need to ground them in the existing research
> > literature,
> > not in your own personal introspective observations.
>
> Big mistake. Think what would have happened if Freud had omitted the 40-odd
> examples of slips in The Psychopathology of Everyday Life (if I've got the
> right book!)

Obviously, Freud's reliance on introspection and qualitative experience had
plusses and minuses.  He generated a lot of nonsense as well as some
brilliant ideas.

But anyway, I was talking about style of exposition, not methodology of
doing work.  If Freud were a professor today, he would write in a different
style in order to get journal publications; though he might still write some
books in a more expository style as well.

I was pointing out that, due to the style of exposition required in contemporary
academic culture, one can easily get a false impression that no one in academia
is doing original thinking -- but the truth is that, even if you DO
original thinking,
you are required in writing your ideas up for publication to give them
the appearance
of minimal originality via grounding them exorbitantly in the prior
literature (even if in fact
their conception had nothing, or very little, to do with the prior
literature).  I'm not
saying I like this -- I'm just describing the reality.  Also, in the
psych literature, grounding
an idea in your own personal observations is not acceptable and is not
going to get
you published -- unless of course you're a clinical psychologist,
which I am not.

> The scientific heavyweights are the people who are heavily
> grounded. The big difference between Darwin and Wallace is all those
> examples/research, and not the creative idea.

That is an unwarranted overgeneralization.

Anyway YOU were the one who was harping on the lack of creativity in AGI.

Now you've changed your tune and are harping on the lack of {creativity coupled
with a lot of empirical research}

Ever consider that this research is going on RIGHT NOW?  I don't know why you
think it should be instantaneous.  A number of us are doing concrete
research work
aimed at investigating our creative ideas about AGI.  Research is
hard.  It takes
time.  Darwin's research took time.  The Manhattan Project took time.  etc.

> And what I didn't explain in my simple, but I believe important, two-stage
> theory of creative development is that there's an immense psychological
> resistance to moving onto the second stage. You have enough psychoanalytical
> understanding, I think, to realise that the unusual length of your reply to
> me may possibly be a reflection of that resistance and an inner conflict.

What is bizarre to me, in this psychoanalysis of Ben Goertzel that you present,
is that you overlook
the fact that I am spending most of my time on concrete software projects, not
on abstract psychological/philosophical theory -- including the Novamente
Cognition Engine project, which is aimed precisely at taking some of my
creative ideas about AGI and realizing them in useful software.

As it happens, my own taste IS more for theory, math and creative arts than
software development -- but, I decided some time ago that the most IMPORTANT
thing I could do would be to focus a lot of attention on
implementation and detailed
design rather than "just" generating more and more funky ideas.  It is
always tempting to me to
consider my role as being purely that of a thinker, and leave all
practical issues to others
who like that sort of thing better -- but I consider the creation of
AGI *so* important
that I've been willing to devote the bulk of my time to activities
that run against my
personal taste and inclination, for some years now.  And fortunately I have
found some great software engineers as collaborators.


> P.S. Just recalling a further difference between the original and the
> creative thinker - the creative one has greater *complexes* of ideas - it
> usually doesn't take just one idea to produce major creative work, as people
> often think, but a whole interdependent network of them. That, too, is v.
> hard.

Mike, you can make a lot of valid criticisms against me, but I don't
think you can
claim I have not originated an "interdependent network of creative ideas."
I certainly have done so.  You may not like or believe my various ideas, but
for sure they form an interdependent network.  Read "The Hidden Pattern"
for evidence.

-- Ben Goertzel



Re: [agi] None of you seem to be able ...

2007-12-06 Thread Jean-Paul Van Belle
Well, as one ignoramus speaking to another (hopefully the smart ones on the 
list will correct us) I think not. It's not the random inputs (no intelligence 
or complex system can deal with randomness and turn it into something 
meaningful - just like random walk share prices would mean you cannot 
consistently beat the stock market :) that make a system complex. It is at 
least its structure and internal diversity as well as a challenging environment. 
Like so much, true complexity (both behaviour and structure) is likely to exist 
on/walk a very fine line between 'boring' and 'random' (see Wolfram et al or 
anything fractal), which means randomly linking up modules is IMHO not likely to 
give rise to AGI (one of my criticisms against traditional connectionism) and 
completely random environments are not likely to give rise to AGI either. 
You need a well-configured (through evolution - Ben?, design - my approach - or 
experimentation/systematic exploration - Richard's) modular system 
working/growing/learning/evolving in a rich i.e. not boring but not random 
either environment. Randomness just produces static or noise (i.e. more 
randomness). {The parallel with data is too obvious not to point out: neither 
completely random data nor highly regular (e.g. infinitely repeating) data 
contain much information; the interesting stuff is in between those two extremes.}

So there is no(t necessarily any) complexity hiding in 'difficult algorithms', 
'complex mathematics', 'random data', 'large datasets', etc. Solving a system 
of 1000 linear equations is simple; solving two or three quadratic 
differential equations is complex. A map-reduce (assuming a straightforward 
transform function) of 20TB of data may require lots of computing power but is 
far less complex than running ALife on your PDA.

The reason why I responded to this post is that my AGI architecture relies 
very much on the emerging complexity arising from a "moderately massively 
modular" design, which seems to be queried by many on this list but is one of 
the more mainstream hypotheses (tho not necessarily the dominant one) in CogSci 
(eg Carruthers). (Also note that for CogScientists 'massive modular' is quite 
significantly less massive than what it means to CompScientists :)
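
To make the linear-equations versus differential-equations contrast concrete,
here is a minimal Python sketch (sizes and constants are arbitrary
illustrations): the 1000-equation system is one predictable library call,
while three quadratic ODEs (the Lorenz system) turn a one-part-in-a-billion
difference in starting point into a macroscopic one.

    import numpy as np

    # "Difficult but not complex": a 1000x1000 linear system, solved in one line.
    A = np.random.rand(1000, 1000) + 1000.0 * np.eye(1000)   # kept well-conditioned
    x = np.linalg.solve(A, np.random.rand(1000))

    # "Complex": three quadratic differential equations (Lorenz), crude Euler steps.
    def step(s, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        sx, sy, sz = s
        return s + dt * np.array([sigma * (sy - sx),
                                  sx * (rho - sz) - sy,
                                  sx * sy - beta * sz])

    a = np.array([1.0, 1.0, 1.0])
    b = a + np.array([1e-9, 0.0, 0.0])        # perturb one coordinate by 1e-9
    for _ in range(50_000):                   # about 50 units of simulated time
        a, b = step(a), step(b)
    print(np.linalg.norm(a - b))              # the tiny difference is now attractor-sized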
 
=Jean-Paul
 

Research Associate: CITANDA
Post-Graduate Section Head 
Department of Information Systems
Phone: (+27)-(0)21-6504256
Fax: (+27)-(0)21-6502280
Office: Leslie Commerce 4.21

>>> "Mike Tintner" <[EMAIL PROTECTED]> 2007/12/06 14:05 >>>

JVPB:You seem to have missed what many A(G)I people (Ben, Richard, etc.) 
mean by 'complexity' (as opposed to the common usage of complex meaning 
difficult).

Well, I as an ignoramus, was wondering about this - so thankyou. And it 
wasn't clear at all to me from Richard's paper what he meant. What I'm 
taking out from your account is that it involves random inputs...? Is there 
a fuller account of it? Is it the random dimension that he/others hope will 
produce emergent/human-like behaviour? (..because if so, I'd disagree - I'd 
argue the complications of human behaviour flow from conflict/ conflicting 
goals - which happens to be signally missing from his (and cognitive 
science's) ideas about emotions).



Re: [agi] None of you seem to be able ...

2007-12-06 Thread Mike Tintner


ATM:> http://mentifex.virtualentity.com/mind4th.html -- an AGI prototype --

has just gone through a major bug-solving update, and is now much
better at maintaining chains of continuous thought -- after the
user has entered sufficient knowledge for the AI to think about.

It doesn't have - you didn't try to give it - independent curiosity (like an 
infant)? 





Re: [agi] None of you seem to be able ...

2007-12-06 Thread Mike Tintner


JVPB:You seem to have missed what many A(G)I people (Ben, Richard, etc.) 
mean by 'complexity' (as opposed to the common usage of complex meaning 
difficult).


Well, I, as an ignoramus, was wondering about this - so thank you. And it 
wasn't clear at all to me from Richard's paper what he meant. What I'm 
taking out from your account is that it involves random inputs...? Is there 
a fuller account of it? Is it the random dimension that he/others hope will 
produce emergent/human-like behaviour? (..because if so, I'd disagree - I'd 
argue the complications of human behaviour flow from conflict/ conflicting 
goals - which happens to be signally missing from his (and cognitive 
science's) ideas about emotions).





Re: [agi] None of you seem to be able ...

2007-12-06 Thread Mike Tintner


Ben:  To publish your ideas

in academic journals, you need to ground them in the existing research
literature,
not in your own personal introspective observations.


Big mistake. Think what would have happened if Freud had omitted the 40-odd 
examples of slips in The Psychopathology of Everyday Life (if I've got the 
right book!) The scientific heavyweights are the people who are heavily 
grounded. The big difference between Darwin and Wallace is all those 
examples/research, and not the creative idea.


And what I didn't explain in my simple, but I believe important, two-stage 
theory of creative development is that there's an immense psychological 
resistance to moving onto the second stage. You have enough psychoanalytical 
understanding, I think, to realise that the unusual length of your reply to 
me may possibly be a reflection of that resistance and an inner conflict. 
The resistance occurs inpart because you have to privilege a normally 
underderprivileged level of the mind - the level that provides and seeks 
actual, historical examples of  generalisations, as opposed to the normally 
more privileged level that provides hypothetical, made-up examples . Look at 
philosophers and you will see virtually an entire profession/field that has 
not moved beyond providing hypothetical examples. It's much harder to deal 
in actual examples/ evidence  - things that have actually happened - because 
they take longer to locate in memory. You have to be patient while your 
brain drags them out. But you can normally make up examples almost 
immediately. (If only Richard's massive parallel, cerebral computation were 
true!)


But BTW an interesting misunderstanding on your part is that evidence here 
means *introspective* observations. Freud's evidence for the unconscious 
consisted entirely of publicly observable events - the slips. You must do 
similarly for your multiple selves - not tell me, say, how fragmented you 
feel! Try and produce such evidence  & I think you'll find you will rapidly 
lose enthusiasm for your idea. Stick to the same single, but divided self 
described with extraordinary psychological consistency by every great 
religion over 1000's of years and a whole string of humanist psychologists 
including Freud,  - and make sure your AGI has something similar.


P.S. Just recalling a further difference between the original and the 
creative thinker - the creative one has greater *complexes* of ideas - it 
usually doesn't take just one idea to produce major creative work, as people 
often think, but a whole interdependent network of them. That, too, is v. 
hard.







RE: [agi] None of you seem to be able ...

2007-12-05 Thread Jean-Paul Van Belle
Hi Ed

You seem to have missed what many A(G)I people (Ben, Richard, etc.) mean by 
'complexity' (as opposed to the common usage of complex meaning difficult).
It is not the *number* of calculations or interconnects that gives rise to 
complexity or chaos, but their nature. E.g. calculating the eigen-values of an 
n=10^1 matrix is *very* difficult but not complex. So the large matrix 
calculations, map-reduces or BlueGene configuration are very simple. A 
map-reduce or matrix calculation is typically one line of code (at least in 
Python - which is where Google probably gets the idea from :)

To make them complex, you need to go beyond. 
E.g. a 500K-node 3 layer neural network is simplistic (not simple:), chaining 
only 10K NNs together (each with 10K input/outputs) in a random network (with 
only a few of these NNs serving as input or output modules) would produce 
complex behaviour, especially if for each iteration, the input vector changes 
dynamically. Note that the latter has FAR FEWER interconnects i.e. would need 
much fewer calculations but its behaviour would be impossible to predict (you 
can only simulate it) whereas the behaviour of the 500K is much more easily 
understood.
BlueGene has a simple architecture; a network of computers that do mainly the 
same thing (e.g. the GooglePlex) has predictable behaviour. However, if each 
computer acts/behaves very differently (I guess on the internet we could 
classify users into a number of distinct agent-like behaviours), you'll get 
complex behaviour. It's the difference in complexity between an 8Gbit RAM chip 
and say an old P3 CPU chip. The latter has less than one-hundredth of the 
transistors but is far more complex and displays interesting behaviour; the 
former doesn't.
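
As a minimal sketch of that distinction in Python (the module counts here are
far smaller than the 10K-unit modules above, purely for illustration): the
map-reduce is one predictable line, while a handful of small modules wired
together at random, with feedback and a changing input, can in general only be
understood by running them.

    import numpy as np

    data = np.random.rand(1_000_000)
    total = sum(map(lambda v: v * v, data))    # a map then a reduce: one line, predictable

    # A few small modules wired together at random, with feedback and changing input.
    rng = np.random.default_rng(0)
    n_modules, width = 50, 32
    weights = [rng.normal(scale=0.5, size=(width, width)) for _ in range(n_modules)]
    wiring  = rng.integers(0, n_modules, size=n_modules)   # which module feeds which
    state   = [rng.normal(size=width) for _ in range(n_modules)]

    for t in range(100):
        external = rng.normal(size=width)                  # input changes every iteration
        state = [np.tanh(weights[i] @ state[wiring[i]] + (external if i == 0 else 0.0))
                 for i in range(n_modules)]
    # Far fewer arithmetic operations than the map-reduce, yet the trajectory of
    # `state` cannot be read off the code; you have to simulate it.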

Jean-Paul
>>> On 2007/12/05 at 23:12, in message <[EMAIL PROTECTED]>,
"Ed Porter" <[EMAIL PROTECTED]> wrote:
>   Yes, my vision of a human AGI would be a very complex machine.  Yes,
> a lot of its outputs could only be made with human level reasonableness
> after a very large amount of computation.  I know of no shortcuts around the
> need to do such complex computation.  So it arguably falls in to what you
> say Wolfram calls "computational irreducibility."  
>   But the same could be said for any of many types of computations,
> such as large matrix equations or Google's map-reduces, which are routinely
> performed on supercomputers.
>   So if that is how you define irreducibility, its not that big a
> deal.  It just means you have to do a lot of computing to get an answer,
> which I have assumed all along for AGI (Remember I am the one pushing for
> breaking the small hardware mindset.)  But it doesn't mean we don't know how
> to do such computing or that we have to do a lot more complexity research,
> of the type suggested in your paper, before we can successfully designing
> AGIs.
[...]
>   Although it is easy to design system where the systems behavior
> would be sufficiently chaotic that such design would be impossible, it seems
> likely that it is also possible to design complex system in which the
> behavior is not so chaotic or unpredictable.  Take the internet.  Something
> like 10^8 computers talk to each other, and in general it works as designed.
> Take IBM's supercomputer BlueGene L, 64K dual core processor computer each
> with at least 256MBytes all capable of receiving and passing messages at
> 4Ghz on each of over 3 dimensions, and capable of performing 100's of
> trillions of FLOP/sec.  Such a system probably contains at least 10^14
> non-linear separately functional elements, and yet it works as designed.  If
> there is a global-local disconnect in the BlueGene L, which there could be
> depending on your definition, it is not a problem for most of the
> computation it does.

-- 

Research Associate: CITANDA
Post-Graduate Section Head 
Department of Information Systems
Phone: (+27)-(0)21-6504256
Fax: (+27)-(0)21-6502280
Office: Leslie Commerce 4.21



Re: [agi] None of you seem to be able ...

2007-12-05 Thread Benjamin Goertzel
Tintner wrote:
> Your paper represents almost a literal application of the idea that
> creativity is ingenious/lateral. Hey it's no trick to be just
> ingenious/lateral or fantastic.

Ah ... before creativity was what was lacking.  But now you're shifting
arguments and it's something else that is lacking ;-)

>
> You clearly like producing new psychological ideas - from a skimming of your
> work, you've produced several. However, I didn't come across a single one
> that was grounded or where any attempt was made to ground them in direct,
> fresh observation (as opposed to occasionally referring to an existing
> scientific paper).

That is a very strange statement.

In fact nearly all my psychological ideas
are grounded in direct, fresh **introspective** observation ---
but they're not written up that way
because that's not the convention in modern academia.  To publish your ideas
in academic journals, you need to ground them in the existing research
literature,
not in your own personal introspective observations.

It is true that few of my psychological hypotheses are grounded in my own novel
lab experiments, though.  I did a little psych lab work in the late
90's, in the domain of
perceptual illusions -- but the truth is that psych and neuroscience
are not currently
sophisticated enough to allow empirical investigation of really
interesting questions about
the nature of cognition, self, etc.  Wait a couple decades, I guess.

>In terms of creative psychology, that is consistent with
> your resistance to producing prototypes - and grounding your
> invention/innovation.

Well, I don't have any psychological resistance to producing working
software, obviously.

Most of my practical software work has been proprietary for customers; but,
check out MOSES and OpenBiomind on Google Code -- two open-source projects that
have emerged from my Novamente LLC and Biomind LLC work ...

It just happens that AGI does not lend itself to prototyping, for
reasons I've already tried
and failed to explain to you.

We're gonna launch trainable, adaptive virtual animals in Second Life sometime
in 2008.  But I won't consider them real "prototypes" of Novamente
AGI, even though in
fact they will use several aspects of the Novamente Cognition Engine
software.  They
won't embody the key emergent structures/dynamics that I believe need
to be there to have
human-level cognition -- and there is no simple prototype system that
will do so.

You celebrate Jeff Hawkins' prototype systems, but have you tried
them?  He's built
(or, rather Dileep George has built)
an image classification engine, not much different in performance from
many others out there.
It's nice work but it's not really an AGI prototype, it's an image classifier.
He may be sort-of labeling it a prototype of his AGI approach -- but
really, it doesn't prove anything
dramatic about his AGI approach.  No one who inspected his code and
ran it would think that it
did provide such proof.

> There are at least two stages of creative psychological development - which
> you won't find in any literature. The first I'd call simply "original"
> thinking, the second is truly "creative" thinking. The first stage is when
> people realise they too can have new ideas and get hooked on the excitement
> of producing them. Only much later comes the second stage, when thinkers
> realise that truly creative ideas have to be grounded. Arguably, the great
> majority of people who may officially be labelled as "creatives", never get
> beyond the first stage - you can make a living doing just that. But the most
> beautiful and valuable ideas come from being repeatedly refined against the
> evidence. People resist this stage because it does indeed mean a lot of
> extra work , but it's worth it.  (And it also means developing that inner
> faculty which calls for actual evidence).

OK, now you're making a very different critique than what you started
with though.

Before you were claiming there are no creative ideas in AGI.

Now, when confronted with creative ideas, you're complaining that they're not
grounded via experimental validation.

Well, yeah...

And the problem is that if one's creative ideas pertain to the
dynamics of large-scale,
complex software systems, then it takes either a lot of time or a lot
of money to achieve
this validation that you mention.

It is not the case that I (and other AGI researchers) are somehow
psychologically
undesirous of seeing our creative ideas explored via experiment.  It
is, rather, the case
that doing the relevant experiments requires a LOT OF WORK, and we are
few in number
with relatively scant resources.

What I am working toward, with Novamente and soon with OpenCog as
well, is precisely
the empirical exploration of the various creative ideas of myself,
others whose work has
been built on in the Novamente design, and my colleagues...

-- Ben G


RE: [agi] None of you seem to be able ...

2007-12-05 Thread Ed Porter
is large the
inferencing from each of the many local rules would require a hell of a lot
of computing, so much computing that a human could not in a human lifetime
understand everything it was doing in a relatively short period of time.  

But because the system is a machine whose behavior is largely
dominated by sensed experience, and by what behaviors and representations
have proven themselves to be useful in that experience, and because the
system has control mechanisms, such as markets and currency control
mechanisms, for modulating the general level of activity and discriminating
against unproductive behaviors and parameter settings -- the chance of more
than a small, and often beneficial, amount of chaotic behavior is greatly
reduced.  (But until we actually start running such systems we will not know
for sure.)
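
One minimal way to picture such a control mechanism, sketched in Python under
my own simplifying assumptions (this is not any actual Novamente or OpenCog
design): spreading activation in which the total activation handed out per
cycle is capped by a global budget, so no cluster of units can run away.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 1000
    links = (rng.random((n, n)) < 0.01).astype(float)   # sparse random link matrix
    activation = rng.random(n)
    budget = 100.0                                      # total "currency" per cycle

    for cycle in range(50):
        spread = links @ activation       # each unit receives activation over its in-links
        if spread.sum() > budget:         # the currency control: rescale the whole cycle
            spread *= budget / spread.sum()
        activation = 0.5 * activation + 0.5 * spread
    print(activation.sum())               # stays bounded instead of blowing up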

It seems to me (perhaps mistakenly) you have been saying that the
global-local disconnect is some great dark chasm which has to be
extensively explored before we humans can dare begin to seek to design
complex AGI's.

I have seen no evidence for that.  It seems to me that chaotic
behavior is, to a lesser degree, like combinatorial explosion.  It is a
problem we should always keep in mind, which limits some of the things we
can do, but which in general we know how to avoid.  More knowledge about it
might be helpful, but it is not clear at this point how much it is needed,
and, if it were needed, which particular aspects of it would be needed.

Your paper says 

"We sometimes talk of the basic units of knowledge-concepts
or symbols- as if they have little or no internal structure, and as if they
exist at the base level of description of our system. This could be wrong:
we could be looking at the equivalent of the second level in the Life
automaton, therefore seeing nothing more than an approximation of how the
real system works."  

I don't see why the atoms (nodes and links) of an AGI cannot be
represented as relatively straightforward digital representations, such as
a struct or object class.  A more complex NL-level concept (such as "Iraq" to
use Ben's common example) might involve hundreds of thousands or millions of
such nodes and links, but it seems to me there are ways to deal with such
complexity in a relatively orderly, relatively computationally efficient (by
that I mean scalable, but not computationally cheap) manner.
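
For what it is worth, a representation as plain as the following Python sketch
would do at the atom level (the field names are illustrative guesses, not the
actual Novamente schema); the hard part is the orderly control of the millions
of such atoms behind a concept like "Iraq", not the data structure itself.

    from dataclasses import dataclass

    @dataclass
    class Node:
        name: str
        strength: float = 0.5      # e.g. a truth/strength value
        activation: float = 0.0

    @dataclass
    class Link:
        kind: str                  # e.g. "inheritance", "similarity"
        source: Node
        target: Node
        weight: float = 1.0

    iraq, country = Node("Iraq"), Node("country")
    links = [Link("inheritance", iraq, country, weight=0.9)]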

My approach would not involve anything as self-defeating as using a
representation that has such a convoluted non-linear temporal causality as
that in the Game of Life, as your quotation suggests.  I have designed my
system to largely avoid the unruliness of complexity whenever possible. 

Take Hecht-Nielsen's confabulation.  It uses millions of inferences
for each of the multiple words and phrases it selects when it generates an
NL sentence.  But unless his papers are dishonest, it does them in an
overall manner that is amazingly orderly, despite the underlying complexity.
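
For readers who have not seen confabulation, a toy Python paraphrase of the
flavour of its selection rule (my simplification, not Hecht-Nielsen's actual
formulation, and the link strengths are invented): each candidate word is
scored by the product of its pairwise link strengths to the words already in
context, and the top scorer wins; however many link lookups that involves, the
decision rule itself stays that orderly.

    from collections import defaultdict
    from math import prod

    link = defaultdict(lambda: 1e-6)   # invented toy link strengths, smoothed
    link.update({("the", "cat"): 0.02, ("black", "cat"): 0.05,
                 ("the", "dog"): 0.03, ("black", "dog"): 0.01})

    def confabulate(context, candidates):
        # Pick the candidate whose product of link strengths to the context is largest.
        return max(candidates, key=lambda w: prod(link[(c, w)] for c in context))

    print(confabulate(["the", "black"], ["cat", "dog"]))   # -> cat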


Would such computation be "irreducibly complex"?  Very arguably by
the Wolfram definition, it would be.  Would there be a "global-local
disconnect"?   It depends on the definition.  The conceptual model of how
the system works is relatively simple, but the actual
inference-by-inference computation would be very difficult for a human to
follow at a detailed level.  But what is clear is that such a system was
built without having to first research the global-local disconnect in any
great depth, as you have suggested is necessary.

Similarly, although the computation in a Novamente-type AGI
architecture would be much more complex than in Hecht-Nielsen's
confabulation, it would share certain important similarities.  And although
the complexity issues in appropriately controlling the inferencing of a
human-level Novamente-type machine will be challenging, it is far from clear
that such a design will require substantial advances in the understanding of
the global-local disconnect.

I am confident that valuable (though far less than human-level)
computation can be done in a Novamente type system with relatively simple
control mechanisms.  So I think it is worth designing such Novamente-type
systems and saving the fine tuning of the inference control system until we
have systems to test such control systems on.  And I think it is best to
save whatever study of complexity that may be needed to get such control
systems to operate relatively optimally in a dynamic manner until we
actually have initial such control systems up and running, so that we have a
better idea about what complexity issues we are really dealing with.  

I think this makes much more sense than spending a lot of time now
exploring the -- it would seem to me -- extremely large space of
possible global-local disconnects.

Ed Porter


Re: [agi] None of you seem to be able ...

2007-12-05 Thread Mike Tintner

Richard: Now, interpreting that result is not easy,

Richard, I get the feeling you're getting understandably tired with all your 
correspondence today. Interpreting *any* of the examples of *hard* cog sci 
that you give is not easy. They're all useful, stimulating stuff, but they 
don't add up to a hard pic. of the brain's cognitive architecture. Perhaps 
Ben will back me up on this - it's a rather important point - our overall 
*integrated* picture of the brain's cognitive functioning is really v. poor, 
although certainly we have a wealth of details about, say, which part of the 
brain is somehow connected to a given operation.


Richard:I admit that I am confused right
now:  in the above paragraphs you say that your position is that the
human mind is 'rational' and then later that it is 'irrational' - was
the first one of those a typo?

Richard, No typo whatsoever if you just reread. V. clear. I say and said: 
*scientific psychology* and *cog sci* treat the mind as rational. I am the 
weirdo who is saying this is nonsense - the mind is 
irrational/crazy/creative - rationality is a major *achievement* not 
something that comes naturally. "Mike Tintner= crazy/irrational"- somehow, I 
don't think you'll find that hard to remember. 





Re: [agi] None of you seem to be able ...

2007-12-05 Thread Richard Loosemore

Mike Tintner wrote:

Richard:  science does too know a good deal about brain
architecture!  I *know* cognitive science.  Cognitive science is a friend 
of mine.

Mike, you are no cognitive scientist :-).

Thanks, Richard,  for keeping it friendly - but -   are you saying cog 
sci knows the:


*'engram' - how info is encoded
*any precise cognitive form or level of the hierarchical processing 
vaguely defined by Hawkins et al

*how ideas are compared at any level -
*how analogies are produced
*whether templates or similar are/are not used in visual object processing

etc. etc ???


Well, you are crossing over between levels here in a way that confuses me.

Did you mean "brain architecture" when you said brain architecture? 
that is, are you taking about brain-level stuff, or cognitive-level 
stuff?  I took you to be talking quite literally about the neural level.


More generally, though, we understand a lot, but of course the picture 
is extremely incomplete.  But even though the picture is incomplete that 
would not mean that cognitive science knows almost nothing.


My position is that cog sci has a *huge* amount of information stashed 
away, but it is in a format that makes it very hard for someone trying 
to build an intelligent system to actually use.  AI people make very 
little use of this information at all.


My goal is to deconstruct cog sci in such a way as to make it usable in 
AI.  That is what I am doing now.



Obviously, if science can't answer the engram question, it can hardly 
answer anything else.


You are indeed a cognitive scientist but you don't seem to have a very 
good overall scientific/philosophical perspective on what that entails - 
and the status of cog. sci. is a fascinating one, philosophically. You 
see, I utterly believe in the cog. sci. approach of applying 
computational models to the brain and human thinking.  But what that has 
produced is *not* hard knowledge. It has made us aware of the 
complexities of what is probably involved, got us to the point where we 
are, so to speak, v. "warm" / close to the truth. But no, as, I think 
Ben asserted, what we actually *know* for sure about the brain's 
information processing is v. v. little.  (Just look at our previous 
dispute, where clearly there is no definite knowledge at all about how 
much parallel computation is involved in the brain's processing of any 
idea [like a sentence]). Those cog. sci, models are more like analogies 
than true theoretical models. And anyway most of the time though by no 
means all, cognitive scientists are like you & Minsky - much more 
interested in the AI applications of their models than in their literal 
scientific truth.


If you disagree, point to the hard knowledge re items like those listed 
above,  which surely must be the basis of any AI system that can 
legitimately claim to be based on the brain's architecture.


Well, it is difficult to know where to start.  What about the word 
priming results?  There is an enormous corpus of data concerning the 
time course of activation of words as a result of seeing/hearing other 
words.  I can use some of that data to constrain my models of activation.
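
To give one tiny, made-up illustration of what "constrain" means here (the
numbers below are invented, not real priming data): if a model says the
priming benefit decays exponentially with the delay between prime and target,
then even a handful of time-course measurements pins down the decay constant.

    import numpy as np

    delays  = np.array([100.0, 300.0, 600.0, 1000.0])   # prime-target delay in ms (invented)
    benefit = np.array([40.0, 25.0, 12.0, 5.0])         # priming speed-up in ms (invented)

    # Model: benefit = A * exp(-delay / tau).  Fit tau with a least-squares line on the log.
    slope, intercept = np.polyfit(delays, np.log(benefit), 1)
    tau = -1.0 / slope
    print(f"decay constant tau ~ {tau:.0f} ms")          # the kind of parameter such data fixes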


Then there are studies of speech errors that show what kinds of events 
occur during attempts to articulate sentences:  that data can be used to 
say a great deal about the processes involved in going from an intention 
to articulation.


On and on the list goes:  I could spend all day just writing down 
examples of cognitive data and how it relates to models of intelligence.


Did you know, for example, that certain kinds of brain damage can leave 
a person with the ability to name a visually presented object, but then 
be unable to pick the object up and move it through space in a way that 
is consistent with the object's normal use ... and that another type 
of brain damage can result in a person having exactly the opposite 
problem:  they can look at an object and say "I have no idea what that 
is", and yet when you ask them to pick the thing up and do what they 
would typically do with the object, they pick it up and show every sign 
that they know exactly what it is for (e.g. object is a key:  they say 
they don't know what it is, but then they pick it up and put it straight 
into a nearby lock).


Now, interpreting that result is not easy, but it does seem to tell us 
that there are two almost independent systems in the brain that handle 
vision-for-identification and vision-for-action.  Why?  I don't know, 
but I have some ideas, and those ideas are helping to constrain my 
framework.




Another example of where you are not so hot on the *philosophy* of cog. 
sci. is our v. first dispute.  I claimed and claim that it is 
fundamental to cog sci to treat the brain/mind as rational. And I'm 
right - and produced and can continue endlessly producing evidence. (It 
is fundamental to all the social sciences to treat humans as rational 
decisionmaking agents). Oh no it doesn't, you said,

Re: [agi] None of you seem to be able ...

2007-12-05 Thread Mike Tintner

Richard:  science does too know a good deal about brain
architecture!  I *know* cognitive science. Cognitive science is a friend of 
mine.

Mike, you are no cognitive scientist :-).

Thanks, Richard,  for keeping it friendly - but -   are you saying cog sci 
knows the:


*'engram' - how info is encoded
*any precise cognitive form or level of the hierarchical processing vaguely 
defined by Hawkins et al

*how ideas are compared at any level -
*how analogies are produced
*whether templates or similar are/are not used in visual object processing

etc. etc ???

Obviously, if science can't answer the engram question, it can hardly answer 
anything else.


You are indeed a cognitive scientist but you don't seem to have a very good 
overall scientific/philosophical perspective on what that entails - and the 
status of cog. sci. is a fascinating one, philosophically. You see, I 
utterly believe in the cog. sci. approach of applying computational models 
to the brain and human thinking.  But what that has produced is *not* hard 
knowledge. It has made us aware of the complexities of what is probably 
involved, got us to the point where we are, so to speak, v. "warm" / close 
to the truth. But no, as, I think Ben asserted, what we actually *know* for 
sure about the brain's information processing is v. v. little.  (Just look 
at our previous dispute, where clearly there is no definite knowledge at all 
about how much parallel computation is involved in the brain's processing of 
any idea [like a sentence]). Those cog. sci. models are more like analogies 
than true theoretical models. And anyway most of the time though by no means 
all, cognitive scientists are like you & Minsky - much more interested in 
the AI applications of their models than in their literal scientific truth.


If you disagree, point to the hard knowledge re items like those listed 
above,  which surely must be the basis of any AI system that can 
legitimately claim to be based on the brain's architecture.


Another example of where you are not so hot on the *philosophy* of cog. sci. 
is our v. first dispute.  I claimed and claim that it is fundamental to cog 
sci to treat the brain/mind as rational. And I'm right - and produced and 
can continue endlessly producing evidence. (It is fundamental to all the 
social sciences to treat humans as rational decisionmaking agents). Oh no it 
doesn't, you said, in effect - sci psychology is obsessed with the 
irrationalities of the human mind. And that is true, too. If you hadn't gone 
off in high dudgeon, we could have resolved the apparent contradiction. Sci 
psych does indeed love to study and point out all kinds of illusions and 
mistakes of the human mind. But to cog. sci. these are all so many *bugs* in 
an otherwise rational system. The system as a whole is still rational, as 
far as cog sci is concerned, but some of its parts - its heuristics, 
attitudes etc - are not. They, however, can be fixed.


So what I have been personally asserting elsewhere - namely that the brain 
is fundamentally irrational or "crazy" - that the human mind can't follow a 
logical, "joined up" train of reflective thought for more than a relatively 
few seconds on end - and is positively designed to be like that, and can't 
and isn't meant to be fixed  - does indeed represent a fundamental challenge 
to cog. sci's current rational paradigm of mind. (The flip side of that 
craziness is that it is a fundamentally *creative* mind - & this is utterly 
central to AGI)






Re: [agi] None of you seem to be able ...

2007-12-05 Thread Richard Loosemore

Mike Tintner wrote:


Ben: > Obviously the brain contains answers to many of the unsolved 
problems of
AGI (not all -- e.g. not the problem of how to create a stable goal 
system

under recursive self-improvement).   However, current neuroscience does
NOT contain these answers.> And neither you nor anyone else has ever 
made a cogent argument that

emulating the brain is the ONLY route to creating powerful AGI.


Absolutely agree re neuroscience's lack of answers (hence Richard's 
assertion that his system is based on what cognitive science knows about 
brain architecture is not a smart one -  the truth is "not much at all".)


Um, excuse me?

Let me just make sure I understand this:  you say that it is not smart 
of me to say that my system is based on what cognitive science knows 
about brain architecture, because cognitive science knows not much at 
all about brain architecture?


Number one:  I don't actually say that (brain architecture is only a 
small part of what is involved in my system).


Number two:  Cognitive science does too know a good deal about brain 
architecture!


I *know* cognitive science.  Cognitive science is a friend of mine. 
Mike, you are no cognitive scientist :-).



Richard Loosemore



Re: [agi] None of you seem to be able ...

2007-12-05 Thread Mike Tintner


Ben: > Obviously the brain contains answers to many of the unsolved problems 
of

AGI (not all -- e.g. not the problem of how to create a stable goal system
under recursive self-improvement).   However, current neuroscience does
NOT contain these answers.> And neither you nor anyone else has ever made 
a cogent argument that

emulating the brain is the ONLY route to creating powerful AGI.


Absolutely agree re neuroscience's lack of answers (hence Richard's 
assertion that his system is based on what cognitive science knows about 
brain architecture is not a smart one -  the truth is "not much at all".)


The cogent argument for emulating the brain - in brief - is simply that it's 
the only *all-rounder* cognitive system, the only multisensory, multimedia, 
multi-sign system that can solve problems in language AND maths 
(arithmetic/algebra/geometry) AND diagrams AND maps AND photographs AND 
cinema AND painting AND sculpture & 3-D models AND "body language" etc  - 
and switch from solving problems in any one sign or sensory system to 
solving the same problems in any other sign or sensory system. And it's by 
extension the only truly multidomain system that can switch from solving 
problems in any one subject domain to any other, from solving problems of 
how to play football to how to marshal troops on a battlefield to how to do 
geometry,  applying the same knowledge across domains.  (I'm just 
formulating this argument for the first time - so it will no doubt need 
revisions!)  But  - correct me - I don't think there's any AI system that's 
a "two-rounder", able to work across two domains and sign systems, let 
alone, of course all of them. (And it's taken a billion years to evolve this 
all-round system which is clearly grounded in a body)


It LOOKS relatively straightforward to emulate or supersede this system, 
when you make the cardinal error of drawing specialist comparisons - your 
we-can-make-a-plane-that-flies-faster-than-a-bird argument (and of course we 
already have machines that can think billions of times faster than the 
brain). But inventing general, all-round systems that are continually alive, 
complex psychoeconomies managing whole sets of complex activities in the 
real, as opposed to artificial world(s) and not just isolated tasks, is a 
whole different ballgame, to inventing specialist systems.


It represents a whole new stage of machine evolution - a step as drastic as 
the evolution of life from matter - and you, sir, :), have scant respect for 
the awesomeness of the undertaking (even though, paradoxically, you're much 
more aware than most of its complexity). Respect to the brain, bro!


It's a little as if you - not, I imagine, the very finest athletic 
specimen -  were to say:  hey, I can take the heavyweight champ of the world 
... AND Federer... AND Tiger Woods... AND the champ of every other sport. 
Well, yeah, you can indeed box and play tennis and actually do every other 
sport, but there's an awful lot more to beating even one of those champs let 
alone all or a selection of them than meets the eye (even if you were in 
addition to have a machine that could throw super-powerful punches or play 
superfast backhands).


Ben/MT:  none of the unsolved

problems are going to be solved - without major creative leaps. Just look
even at the ipod & iphone -  major new technology never happens without 
such

leaps.


Ben: The above sentence is rather hilarious to me. > If the Ipod and Iphone 
are your measure for "creative leaps" then
there have been  loads and loads of major creative leaps in AGI and 
narrow-AI research. As an example of a creative leap (that is speculative 
and may be wrong, but is certainly creative), check out my hypothesis of 
emergent social-psychological

intelligence as related to mirror neurons and octonion algebras:

http://www.goertzel.org/dynapsyc/2007/mirrorself.pdf


Ben,

Name ONE major creative leap in AGI  (in narrow AI, no question, there's 
loads).


Some background here: I am deeply interested in, & have done a lot of work, 
on the psychology & philosophy of creativity, as well as intelligence.


So your "creative" paper is interesting to me, because it helps refine 
definitions of creativity and "creative leaps".  The ipod & iphone do indeed 
represent brilliant leaps in terms of interfaces - with the touch-wheel and 
the "pinch" touchscreen [as distinct from the touchscreen itself] - v. neat 
lateral ideas which worked. No, not revolutionary in terms of changing vast 
fields of technology, just v. lateral, unexpected, albeit simple ideas. I 
have seen no similarly lateral approaches in AGI.


Your paper represents almost a literal application of the idea that 
creativity is ingenious/lateral. Hey it's no trick to be just 
ingenious/lateral or fantastic. How does memory work? -  well, you see, 
there's this system of angels that ferry every idea you have and file it in 
an infinite set of multiverses...etc...  Anyone can come up with f

Re: [agi] None of you seem to be able ...

2007-12-05 Thread Richard Loosemore

Ed Porter wrote:

RICHARD LOOSEMOORE> There is a high prima facie *risk* that intelligence
involves a 
significant amount of irreducibility (some of the most crucial 
characteristics of a complete intelligence would, in any other system, 
cause the behavior to show a global-local disconnect),



ED PORTER=> Richard, "prima facie" means obvious on its face.  The above
statement and those that followed it below may be obvious to you, but it is
not obvious to a lot of us, and at least I have not seen (perhaps because of
my own ignorance, but perhaps not) any evidence that it is obvious.
Apparently Ben also does not find your position to be obvious, and Ben is no
dummy.

Richard, did you ever just consider that it might be "turtles all the way
down", and by that I mean experiential patterns, such as those that could be
represented by Novamente atoms (nodes and links) in a gen/comp hierarchy
"all the way down".  In such a system each level is quite naturally derived
from levels below it by learning from experience.  There is a lot of dynamic
activity, but much of it is quite orderly, like that in Hecht-Neilsen's
Confabulation.  There is no reason why there has to be a "GLOBAL-LOCAL
DISCONNECT" of the type you envision, i.e., one that is totally impossible
to architect in terms of until one totally explores global-local disconnect
space (just think how large an exploration space that might be).

So if you have prima facie evidence to support your claim (other than your paper, which I read and which does not meet that standard


Ed,

Could you please summarize for me what your understanding is of my claim 
for the "prima facie" evidence (that I gave in that paper), and then, if 
you would, please explain where you believe the claim goes wrong.


With that level of specificity, we can discuss it.

Many thanks,



Richard Loosemore



), then present it.  If

you make me eat my words you will have taught me something sufficiently
valuable that I will relish the experience.





RE: [agi] None of you seem to be able ...

2007-12-04 Thread John G. Rose
> From: Benjamin Goertzel [mailto:[EMAIL PROTECTED]
> 
> As an example of a creative leap (that is speculative and may be wrong,
> but is
> certainly creative), check out my hypothesis of emergent social-
> psychological
> intelligence as related to mirror neurons and octonion algebras:
> 
> http://www.goertzel.org/dynapsyc/2007/mirrorself.pdf
> 
> I happen to think the real subtlety of intelligence happens on the
> emergent level,
> and not on the level of the particulars of the system that gives rise
> to the emergent
> phenomena.  That paper conjectures some example phenomena that I believe
> occur on the emergent level of intelligent systems.
> 

This paper really takes the reader through a detailed walk of a really nice
application of octonionic structure to the mind. The concept of
mirrorhouses is really creative and thought provoking, especially applied in
this way. I like thinking about a mind in this sort of crystallographic
structure, yet there is no way I could comb through the details like this.
This type of methodology has so many advantages, such as - 

* being visually descriptive yet highly complex
* modular and building-block friendly
* computers love this sort of structure, it's what they do best
* there is an enormous amount of existing math related to this, already worked out
* scalable, extremely extensible, systematic
* it fibrillates out to sociological systems
* etc.

Even if this phenomenon is not emergent, or only partially emergent (I favor
partially at this point, as crystal-clear structure can be a precursor of
emergence), you can build AGI based on the optimal emergent structures that
the human brain might be coalescing in a perfect world, and also come up with
new and better ones that the human brain hasn't got to yet, either by building
them directly or by baking new ones into a programmed complex system.

John
 



RE: [agi] None of you seem to be able ...

2007-12-04 Thread Ed Porter
RICHARD LOOSEMORE> There is a high prima facie *risk* that intelligence
involves a 
significant amount of irreducibility (some of the most crucial 
characteristics of a complete intelligence would, in any other system, 
cause the behavior to show a global-local disconnect),


ED PORTER=> Richard, "prima facie" means obvious on its face.  The above
statement and those that followed it below may be obvious to you, but it is
not obvious to a lot of us, and at least I have not seen (perhaps because of
my own ignorance, but perhaps not) any evidence that it is obvious.
Apparently Ben also does not find your position to be obvious, and Ben is no
dummy.

Richard, did you ever just consider that it might be "turtles all the way
down", and by that I mean experiential patterns, such as those that could be
represented by Novamente atoms (nodes and links) in a gen/comp hierarchy
"all the way down".  In such a system each level is quite naturally derived
from levels below it by learning from experience.  There is a lot of dynamic
activity, but much of it is quite orderly, like that in Hecht-Neilsen's
Confabulation.  There is no reason why there has to be a "GLOBAL-LOCAL
DISCONNECT" of the type you envision, i.e., one that is totally impossible
to architect in terms of until one totally explores global-local disconnect
space (just think how large an exploration space that might be).
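As a toy illustration of that "experiential patterns all the way down" idea - my own sketch, not Novamente's actual Atomspace or learning algorithms - here are a few lines of Python in which each level of composite nodes is derived purely from frequently co-occurring nodes at the level below:

from collections import Counter

def build_level(sequence, threshold=2):
    # Create a composite node for every adjacent pair seen at least
    # `threshold` times, then rewrite the sequence in terms of those nodes.
    counts = Counter(zip(sequence, sequence[1:]))
    composites = {pair: "(" + pair[0] + "+" + pair[1] + ")"
                  for pair, n in counts.items() if n >= threshold}
    rewritten, i = [], 0
    while i < len(sequence):
        pair = tuple(sequence[i:i + 2])
        if pair in composites:
            rewritten.append(composites[pair])
            i += 2
        else:
            rewritten.append(sequence[i])
            i += 1
    return composites, rewritten

# Level 0: raw "experience" (characters standing in for low-level percepts).
level = list("the cat sat on the mat and the cat sat on the hat")
hierarchy = []
for depth in range(4):               # derive a few levels, bottom-up
    composites, level = build_level(level)
    if not composites:
        break
    hierarchy.append(composites)

for depth, composites in enumerate(hierarchy, start=1):
    print("level", depth, ":", len(composites), "composite nodes")

Each pass plays the role of one gen/comp level: composition is explicit in the pair nodes, and a generalization step could be bolted on in the same bottom-up style by merging nodes that occur in similar contexts.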

So if you have prima facie evidence to support your claim (other than your
paper, which I read and which does not meet that standard), then present it.  If
you make me eat my words you will have taught me something sufficiently
valuable that I will relish the experience.

Ed Porter




-Original Message-
From: Richard Loosemore [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 04, 2007 9:17 PM
To: agi@v2.listbox.com
Subject: Re: [agi] None of you seem to be able ...

Benjamin Goertzel wrote:
> On Dec 4, 2007 8:38 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
>> Benjamin Goertzel wrote:
>> [snip]
>>> And neither you nor anyone else has ever made a cogent argument that
>>> emulating the brain is the ONLY route to creating powerful AGI.  The
closest
>>> thing to such an argument that I've seen
>>> was given by Eric Baum in his book "What Is
>>> Thought?", and I note that Eric has backed away somewhat from that
>>> position lately.
>> This is a pretty outrageous statement to make, given that you know full
>> well that I have done exactly that.
>>
>> You may not agree with the argument, but that is not the same as
>> asserting that the argument does not exist.
>>
>> Unless you were meaning "emulating the brain" in the sense of emulating
>> it ONLY at the low level of neural wiring, which I do not advocate.
> 
> I don't find your nor Eric's nor anyone else's argument that brain-emulation
> is the "golden path" very strongly convincing...
> 
> However, I found Eric's argument by reference to the compressed nature of
> the genome more convincing than your argument via the hypothesis of
> irreducible emergent complexity...
> 
> Sorry if my choice of words was not adequately politic.  I find your argument
> interesting, but it's certainly just as speculative as the various AGI theories
> you dismiss.  It basically rests on a big assumption, which is that the
> complexity of human intelligence is analytically irreducible within pragmatic
> computational constraints.  In this sense it's less an argument than a
> conjectural assertion, albeit an admirably bold one.

Ben,

This is even worse.

The argument I presented was not a "conjectural assertion", it made the 
following coherent case:

   1) There is a high prima facie *risk* that intelligence involves a 
significant amount of irreducibility (some of the most crucial 
characteristics of a complete intelligence would, in any other system, 
cause the behavior to show a global-local disconnect), and

   2) Because of the unique and unusual nature of complexity there is 
only a vanishingly small chance that we will be able to find a way to 
assess the exact degree of risk involved, and

   3) (A corollary of (2)) If the problem were real, but we were to 
ignore this risk and simply continue with an "engineering" approach 
(pretending that complexity is insignificant), then the *only* evidence 
we would ever get that irreducibility was preventing us from building a 
complete intelligence would be the fact that we would simply run around 
in circles all the time, wondering why, when we put large systems 
together, they didn't quite make it, and

   4) Therefore we need to adopt a "Precautionary Principle" and treat 
the problem as if irreducibility really is significant.

Re: [agi] None of you seem to be able ...

2007-12-04 Thread Benjamin Goertzel
Richard,

Well, I'm really sorry to have offended you so much, but you seem to be
a mighty easy guy to offend!

I know I can be pretty offensive at times; but this time, I wasn't
even trying ;-)

> The argument I presented was not a "conjectural assertion", it made the
> following coherent case:
>
>1) There is a high prima facie *risk* that intelligence involves a
> significant amount of irreducibility (some of the most crucial
> characteristics of a complete intelligence would, in any other system,
> cause the behavior to show a global-local disconnect), and

The above statement contains two fuzzy terms -- "high" and "significant" ...

You have provided no evidence for any particular quantification of
these terms...
your evidence is qualitative/intuitive, so far as I can tell...

Your quantification of these terms seems to me a conjectural assertion
unsupported by evidence.

>2) Because of the unique and unusual nature of complexity there is
> only a vanishingly small chance that we will be able to find a way to
> assess the exact degree of risk involved, and
>
>3) (A corollary of (2)) If the problem were real, but we were to
> ignore this risk and simply continue with an "engineering" approach
> (pretending that complexity is insignificant),

The engineering approach does not pretend that complexity is
insignificant.  It just denies that the complexity of intelligent systems
leads to the sort of irreducibility you suggest it does.

Some complex systems can be reverse-engineered in their general
principles even if not in detail.  And that is all one would need to do
in order to create a brain emulation (not that this is what I'm trying
to do) --- assuming one's goal was not to exactly emulate some
specific human brain based on observing the behaviors it generates,
but merely to emulate the brainlike character of the system...

> then the *only* evidence
> we would ever get that irreducibility was preventing us from building a
> complete intelligence would be the fact that we would simply run around
> in circles all the time, wondering why, when we put large systems
> together, they didn't quite make it, and

No.  Experimenting with AI systems could lead to evidence that would
support the irreducibility hypothesis more directly than that.  I doubt they
will but it's possible.  For instance, we might discover that creating more and
more intelligent systems inevitably presents more and more complex
parameter-tuning problems, so that parameter-tuning appears to be the
bottleneck.  This would suggest that some kind of highly expensive evolutionary
or ensemble approach as you're suggesting might be necessary.
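To make that concrete, here is a minimal sketch (illustrative only - the evaluate() function is a placeholder for the genuinely expensive step of instantiating and scoring a whole candidate system configuration) of what such an evolutionary parameter search looks like:

import random

def evaluate(params):
    # Placeholder fitness: distance to an arbitrary "good" setting.  In the
    # scenario above this would mean running the candidate system and
    # measuring its performance, which is the expensive part.
    target = [0.3, 0.7, 0.1, 0.9]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def evolve(pop_size=20, dims=4, generations=50, sigma=0.1, elite=5):
    population = [[random.random() for _ in range(dims)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(population, key=evaluate, reverse=True)[:elite]
        children = [[min(1.0, max(0.0, p + random.gauss(0, sigma)))
                     for p in random.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=evaluate)

print([round(p, 2) for p in evolve()])

The point being that every fitness evaluation is a full system run, which is exactly what would make this route "highly expensive".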

>4) Therefore we need to adopt a "Precautionary Principle" and treat
> the problem as if irreducibility really is significant.
>
>
> Whether you like it or not - whether you've got too much invested in the
> contrary point of view to admit it, or not - this is a perfectly valid
> and coherent argument, and your attempt to try to push it into some
> lesser realm of a "conjectural assertion" is profoundly insulting.

The form of the argument is coherent and valid; but the premises involve
fuzzy quantifiers whose values you are apparently setting by
intuition, and whose
specific values sensitively impact the truth value of the conclusion.

-- Ben



Re: [agi] None of you seem to be able ...

2007-12-04 Thread Richard Loosemore

Benjamin Goertzel wrote:

On Dec 4, 2007 8:38 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:

Benjamin Goertzel wrote:
[snip]

And neither you nor anyone else has ever made a cogent argument that
emulating the brain is the ONLY route to creating powerful AGI.  The closest
thing to such an argument that I've seen
was given by Eric Baum in his book "What Is
Thought?", and I note that Eric has backed away somewhat from that
position lately.

This is a pretty outrageous statement to make, given that you know full
well that I have done exactly that.

You may not agree with the argument, but that is not the same as
asserting that the argument does not exist.

Unless you were meaning "emulating the brain" in the sense of emulating
it ONLY at the low level of neural wiring, which I do not advocate.


I don't find your nor Eric's nor anyone else's argument that brain-emulation
is the "golden path" very strongly convincing...

However, I found Eric's argument by reference to the compressed nature of
the genome more convincing than your argument via the hypothesis of
irreducible emergent complexity...

Sorry if my choice of words was not adequately politic.  I find your argument
interesting, but it's certainly just as speculative as the various AGI theories
you dismiss.  It basically rests on a big assumption, which is that the
complexity of human intelligence is analytically irreducible within pragmatic
computational constraints.  In this sense it's less an argument than a
conjectural assertion, albeit an admirably bold one.


Ben,

This is even worse.

The argument I presented was not a "conjectural assertion", it made the 
following coherent case:


  1) There is a high prima facie *risk* that intelligence involves a 
significant amount of irreducibility (some of the most crucial 
characteristics of a complete intelligence would, in any other system, 
cause the behavior to show a global-local disconnect), and


  2) Because of the unique and unusual nature of complexity there is 
only a vanishingly small chance that we will be able to find a way to 
assess the exact degree of risk involved, and


  3) (A corollary of (2)) If the problem were real, but we were to 
ignore this risk and simply continue with an "engineering" approach 
(pretending that complexity is insignificant), then the *only* evidence 
we would ever get that irreducibility was preventing us from building a 
complete intelligence would be the fact that we would simply run around 
in circles all the time, wondering why, when we put large systems 
together, they didn't quite make it, and


  4) Therefore we need to adopt a "Precautionary Principle" and treat 
the problem as if irreducibility really is significant.



Whether you like it or not - whether you've got too much invested in the 
contrary point of view to admit it, or not - this is a perfectly valid 
and coherent argument, and your attempt to try to push it into some 
lesser realm of a "conjectural assertion" is profoundly insulting.





Richard Loosemore




Re: [agi] None of you seem to be able ...

2007-12-04 Thread Benjamin Goertzel
On Dec 4, 2007 8:38 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Benjamin Goertzel wrote:
> [snip]
> > And neither you nor anyone else has ever made a cogent argument that
> > emulating the brain is the ONLY route to creating powerful AGI.  The closest
> > thing to such an argument that I've seen
> > was given by Eric Baum in his book "What Is
> > Thought?", and I note that Eric has backed away somewhat from that
> > position lately.
>
> This is a pretty outrageous statement to make, given that you know full
> well that I have done exactly that.
>
> You may not agree with the argument, but that is not the same as
> asserting that the argument does not exist.
>
> Unless you were meaning "emulating the brain" in the sense of emulating
> it ONLY at the low level of neural wiring, which I do not advocate.

I don't find your nor Eric's nor anyone else's argument that brain-emulation
is the "golden path" very strongly convincing...

However, I found Eric's argument by reference to the compressed nature of
the genome more convincing than your argument via the hypothesis of
irreducible emergent complexity...

Sorry if my choice of words was not adequately politic.  I find your argument
interesting, but it's certainly just as speculative as the various AGI theories
you dismiss.  It basically rests on a big assumption, which is that the
complexity of human intelligence is analytically irreducible within pragmatic
computational constraints.  In this sense it's less an argument than a
conjectural assertion, albeit an admirably bold one.

-- Ben G



Re: [agi] None of you seem to be able ...

2007-12-04 Thread Richard Loosemore

Benjamin Goertzel wrote:
[snip]

And neither you nor anyone else has ever made a cogent argument that
emulating the brain is the ONLY route to creating powerful AGI.  The closest
thing to such an argument that I've seen
was given by Eric Baum in his book "What Is
Thought?", and I note that Eric has backed away somewhat from that
position lately.


This is a pretty outrageous statement to make, given that you know full 
well that I have done exactly that.


You may not agree with the argument, but that is not the same as 
asserting that the argument does not exist.


Unless you were meaning "emulating the brain" in the sense of emulating 
it ONLY at the low level of neural wiring, which I do not advocate.




Richard Loosemore



Re: [agi] None of you seem to be able ...

2007-12-04 Thread Benjamin Goertzel
> More generally, I don't perceive any readiness to recognize that  the brain
> has the answers to all the many unsolved problems of AGI  -

Obviously the brain contains answers to many of the unsolved problems of
AGI (not all -- e.g. not the problem of how to create a stable goal system
under recursive self-improvement).   However, current neuroscience does
NOT contain these answers.

And neither you nor anyone else has ever made a cogent argument that
emulating the brain is the ONLY route to creating powerful AGI.  The closest
thing to such an argument that I've seen
was given by Eric Baum in his book "What Is
Thought?", and I note that Eric has backed away somewhat from that
position lately.

> I think it
> should be obvious that AGI isn't going to happen - and none of the unsolved
> problems are going to be solved - without major creative leaps. Just look
> even at the ipod & iphone -  major new technology never happens without such
> leaps.

The above sentence is rather hilarious to me.

If the Ipod and Iphone are your measure for "creative leaps" then there have
been loads and loads of major creative leaps in AGI and narrow-AI research.

Anyway it seems to me that you're not just looking for creative leaps, you're
looking for creative leaps that match your personal intuition.  Perhaps the
real problem is that your personal intuition about intelligence is largely
off-base ;-)

As an example of a creative leap (that is speculative and may be wrong, but is
certainly creative), check out my hypothesis of emergent social-psychological
intelligence as related to mirror neurons and octonion algebras:

http://www.goertzel.org/dynapsyc/2007/mirrorself.pdf

I happen to think the real subtlety of intelligence happens on the emergent
level, and not on the level of the particulars of the system that gives rise
to the emergent phenomena.  That paper conjectures some example phenomena
that I believe occur on the emergent level of intelligent systems.

Loosemore agrees with me on the importance of emergence, but he feels there
is a fundamental irreducibility that makes it pragmatically impossible to
figure out via science, math and intuition which concrete structures/dynamics
will give rise to the right emergent structures, without doing a massive body
of simulation experiments.  I think he overstates the degree of
irreducibility.

-- Ben G



Re: [agi] None of you seem to be able ...

2007-12-04 Thread Mike Tintner


Dennis:
MT: none of you seem able to face this to my mind obvious truth.


Who do you mean under "you" in this context?
Do you think that everyone here agrees with Matt on everything?

Quite the opposite is true -- almost every AI researcher has his own unique
set of beliefs.


I'm delighted to be corrected, if wrong. My hypothesis was that in 
processing ideas - especially in searching for analogies - the brain will 
search through v. few examples in any given moment, all or almost all of 
them relevant, whereas computers will search blindly through vast numbers. 
(I'm just reading a neuroeconomics book which puts the ratio of computer 
communication speed to that of the brain at 30 million to one). It seems to 
me that the brain's principles of search are fundamentally different from 
those of computers. My impression is that "none of you are able to face" 
that particular truth - correct me.
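In crude computational terms the contrast is something like the following toy sketch (mine, not a claim about how any particular system actually works): a blind matcher scores every stored item against a query, while an indexed matcher only ever touches items that share a feature with it.

memories = {
    "violin": {"music", "strings", "wood"},
    "guitar": {"music", "strings", "wood"},
    "harp":   {"music", "strings"},
    "hammer": {"tool", "wood"},
    "recipe": {"food", "cooking"},
}

def blind_search(query):
    # Score every stored item, relevant or not.
    return max(memories, key=lambda m: len(memories[m] & query))

# Inverted index: feature -> items having that feature (built once, reused).
index = {}
for item, feats in memories.items():
    for f in feats:
        index.setdefault(f, set()).add(item)

def indexed_search(query):
    # Only ever examine items sharing at least one feature with the query.
    candidates = set().union(*(index.get(f, set()) for f in query))
    return max(candidates, key=lambda m: len(memories[m] & query))

query = {"music", "strings"}
print(blind_search(query), indexed_search(query))
# Same kind of answer, but the indexed search never even looked at "hammer" or "recipe".

Scale the memory up by many orders of magnitude and the blind version does vastly more work for the same answer.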


More generally, I don't perceive any readiness to recognize that  the brain 
has the answers to all the many unsolved problems of AGI  - answers which 
mostly if not entirely involve *very different kinds* of computation. I 
believe, for example, that the brain extensively uses direct shape-matching/ 
mappings to compare - and only some new form of analog computation will be 
able to handle that.  I don't see anyone who's prepared for that kind of 
creative leap -  for revolutionary new kinds of hardware and software.  In 
general, everyone seems to be starting from the materials that exist, and 
praying to God that minor adaptations will work. (You too, no?) Even Richard, 
who just possibly may agree with me on the importance of emulating the 
brain, opines that the brain uses massive parallel computation above - 
because, I would argue, that's what fits his materials - that's what he 
*wants*, not knows, to be true.  I've argued about this with Ed - I think it 
should be obvious that AGI isn't going to happen - and none of the unsolved 
problems are going to be solved - without major creative leaps. Just look 
even at the ipod & iphone -  major new technology never happens without such 
leaps. Whom do you see as a creative "high-jumper" here - even in their 
philosophy?










Re: [agi] None of you seem to be able ...

2007-12-04 Thread Matt Mahoney
--- Dennis Gorelik <[EMAIL PROTECTED]> wrote:
> For example, I disagree with Matt's claim that AGI research needs
> special hardware with massive computational capabilities.

I don't claim you need special hardware.


-- Matt Mahoney, [EMAIL PROTECTED]



[agi] None of you seem to be able ...

2007-12-04 Thread Dennis Gorelik
Mike,

> Matt::  The whole point of using massive parallel computation is to do the
> hard part of the problem.

> The whole idea of massive parallel computation here, surely has to be wrong.
> And yet none of you seem able to face this to my mind obvious truth.

Who do you mean under "you" in this context?
Do you think that everyone here agrees with Matt on everything?
:-)

Quite the opposite is true -- almost every AI researcher has his own unique
set of beliefs. Some beliefs are shared with one set of researchers -- others
with another set. Some beliefs may even be unique.

For example, I disagree with Matt's claim that AGI research needs
special hardware with massive computational capabilities.

However, I agree with Matt on quite a large set of other issues.

