Re: Last word for the time being [WAS Re: [agi] None of you seem to be able ...]

2007-12-06 Thread Benjamin Goertzel
 Show me ONE other example of the reverse engineering of a system in
 which the low level mechanisms show as many complexity-generating
 characteristics as are found in the case of intelligent systems, and I
 will gladly learn from the experience of the team that did the job.

 I do not believe you can name a single one.

Well, I am not trying to reverse engineer the brain.  Any more than
the Wright Brothers were trying to reverse engineer a bird -- though I
do imagine the latter will eventually be possible.

 You know, I sympathize with you in a way.  You are trying to build an
 AGI system using a methodology that you are completely committed to.
 And here am I coming along like Bertrand Russell writing his letter to
 Frege, just as poor Frege was about to publish his Grundgesetze der
 Arithmetik, pointing out that everything in the new book was undermined
 by a paradox.  How else can you respond except by denying the idea as
 vigorously as possible?

It's a deeply flawed analogy.

Russell's paradox is a piece of math, and once Frege
was confronted with it he "got it".  The discussion between the two of them
did not devolve into long, rambling dialogues about the meanings of terms
and the uncertainties of various intuitions.

-- Ben


Last word for the time being [WAS Re: [agi] None of you seem to be able ...]

2007-12-06 Thread Richard Loosemore

Benjamin Goertzel wrote:

Richard,

Well, I'm really sorry to have offended you so much, but you seem to be
a mighty easy guy to offend!

I know I can be pretty offensive at times; but this time, I wasn't
even trying ;-)


The argument I presented was not a conjectural assertion; it made the
following coherent case:

   1) There is a high prima facie *risk* that intelligence involves a
significant amount of irreducibility (some of the most crucial
characteristics of a complete intelligence would, in any other system,
cause the behavior to show a global-local disconnect), and


The above statement contains two fuzzy terms -- "high" and "significant" ...

You have provided no evidence for any particular quantification of
these terms...
your evidence is qualitative/intuitive, so far as I can tell...

Your quantification of these terms seems to me a conjectural assertion
unsupported by evidence.


[This is going to cross over your parallel response to a different post. 
No time to address that other argument, but the comments made here are 
not affected by what is there.]


I have answered this point very precisely on many occasions, including 
in the paper.  Here it is again:


If certain types of mechanisms do indeed give rise to complexity (as all
complex systems theorists agree), then BY DEFINITION it will never be
possible to quantify the exact relationship between:


   1)  The precise characteristics of the low-level mechanisms (both 
the type and the quantity) that would lead us to expect complexity, and


   2)  The amount of complexity thereby caused in the high-level behavior.

Even if the complex systems effect were completely real, the best we
could ever do would be to come up with suggestive characteristics that
lead to complexity.  Nevertheless, there is a long list of such
suggestive characteristics, and everyone (including you) agrees that all
those suggestive characteristics are present in the low-level mechanisms
that must be in an AGI.
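
(An illustrative sketch, not drawn from this exchange: even in a toy
complex system such as the logistic map, the local mechanism is one
line of arithmetic, yet nearly identical parameter values produce
qualitatively different global regimes, and nothing in the rule itself
says which.)

# Logistic map: x -> r*x*(1-x).  A one-line "low-level mechanism"
# whose high-level behavior must be discovered by running it.
def orbit(r, x0=0.5, warmup=1000, samples=6):
    x = x0
    for _ in range(warmup):        # discard the transient
        x = r * x * (1 - x)
    out = []
    for _ in range(samples):       # sample the long-run behavior
        x = r * x * (1 - x)
        out.append(round(x, 4))
    return out

print(orbit(3.52))  # settles into a stable period-4 cycle
print(orbit(3.70))  # chaotic: aperiodic and sensitive to x0

Even in this trivially simple system, the global regime is found by
running the system, not by inspecting the rule.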


So the one most important thing we know about complex systems is that if
complex systems really do exist, then we CANNOT say "Give me precise
quantitative evidence that we should expect complexity in this
particular system."


And what is your response to this most important fact about complex systems?

Your response is: "Give me precise quantitative evidence that we should
expect complexity in this particular system."


And then, when I explain all of the above (as I have done before, many 
times), you go on to conclude:


"[You are giving] a conjectural assertion unsupported by evidence."

Which is, in the context of my actual argument, a serious little bit of 
sleight-of-hand (to be as polite as possible about it).





   2) Because of the unique and unusual nature of complexity there is
only a vanishingly small chance that we will be able to find a way to
assess the exact degree of risk involved, and

   3) (A corollary of (2)) If the problem were real, but we were to
ignore this risk and simply continue with an engineering approach
(pretending that complexity is insignificant),


The engineering approach does not pretend that complexity is
insignificant.  It just denies that the complexity of intelligent systems
leads to the sort of irreducibility you suggest it does.


It denies it?  Based on what?  My argument above makes it crystal clear 
that if the engineering approach is taking that attitude, then it does 
so purely on the basis of wishful thinking, whilst completely ignoring 
the above argument.  The engineering approach would be saying: "We
understand complex systems well enough to know that there isn't a
problem in this case" -- a nonsensical position when, by definition, it
is not possible for anyone to really understand the connection, and the
best evidence we can get is actually pointing to the opposite conclusion.


So this comes back to the above argument:  the engineering approach has 
to address that first, before it can make any such claim.




Some complex systems can be reverse-engineered in their general
principles even if not in detail.  And that is all one would need to do
in order to create a brain emulation (not that this is what I'm trying
to do) -- assuming one's goal was not to exactly emulate some
specific human brain based on observing the behaviors it generates,
but merely to emulate the brainlike character of the system...


This has never been done, but that is exactly what I am trying to do.

Show me ONE other example of the reverse engineering of a system in 
which the low level mechanisms show as many complexity-generating 
characteristics as are found in the case of intelligent systems, and I 
will gladly learn from the experience of the team that did the job.


I do not believe you can name a single one.




then the *only* evidence
we would ever get that irreducibility was preventing us from building a
complete intelligence would be the fact that we would simply run around
in circles all the time, 

Re: Last word for the time being [WAS Re: [agi] None of you seem to be able ...]

2007-12-06 Thread Richard Loosemore

Benjamin Goertzel wrote:

Show me ONE other example of the reverse engineering of a system in
which the low level mechanisms show as many complexity-generating
characteristics as are found in the case of intelligent systems, and I
will gladly learn from the experience of the team that did the job.

I do not believe you can name a single one.


Well, I am not trying to reverse engineer the brain.  Any more than
the Wright Brothers were trying to reverse engineer a bird -- though I
do imagine the latter will eventually be possible.


You know, I sympathize with you in a way.  You are trying to build an
AGI system using a methodology that you are completely committed to.
And here am I coming along like Bertrand Russell writing his letter to
Frege, just as poor Frege was about to publish his Grundgesetze der
Arithmetik, pointing out that everything in the new book was undermined
by a paradox.  How else can you respond except by denying the idea as
vigorously as possible?


It's a deeply flawed analogy.

Russell's paradox is a piece of math, and once Frege
was confronted with it he "got it".  The discussion between the two of them
did not devolve into long, rambling dialogues about the meanings of terms
and the uncertainties of various intuitions.


Believe me, I know -- which is why I envy Russell for the positive
response he got from Frege.  You could help the discussion enormously by
not pushing it in the direction of long, rambling dialogues, and by not
trying to argue about the meanings of terms and the uncertainties of
various intuitions, which have nothing to do with the point that I made.


I for one hate that kind of pointless discussion, which is why I keep 
trying to make you address the key point.


Unfortunately, you never do address the key point:  in the above, you 
ignored it completely!  (Again!)


At least Frege did actually "get it".



Richard Loosemore
