Benjamin Goertzel wrote:
Richard,
Well, I'm really sorry to have offended you so much, but you seem to be
a mighty easy guy to offend!
I know I can be pretty offensive at times; but this time, I wasn't
even trying ;-)
The argument I presented was not a conjectural assertion, it made the
following coherent case:
1) There is a high prima facie *risk* that intelligence involves a
significant amount of irreducibility (some of the most crucial
characteristics of a complete intelligence would, in any other system,
cause the behavior to show a global-local disconnect), and
The above statement contains two fuzzy terms -- high and significant ...
You have provided no evidence for any particular quantification of
these terms...
your evidence is qualitative/intuitive, so far as I can tell...
Your quantification of these terms seems to me a conjectural assertion
unsupported by evidence.
[This is going to cross over your parallel response to a different post.
No time to address that other argument, but the comments made here are
not affected by what is there.]
I have answered this point very precisely on many occasions, including
in the paper. Here it is again:
If certain types of mechanisms do indeed give rise to complexity (as all
the complex systems theorists agree), then BY DEFINITION it will never be
possible to quantify the exact relationship between:
1) The precise characteristics of the low-level mechanisms (both
the type and the quantity) that would lead us to expect complexity, and
2) The amount of complexity thereby caused in the high-level behavior.
Even if the complex systems effect were completely real, the best we
could ever do would be to come up with suggestive characteristics that
lead to complexity. Nevertheless, there is a long list of such
suggestive characteristics, and everyone (including you) agrees that all
those suggestive characteristics are present in the low level mechanisms
that must be in an AGI.
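[As a concrete illustration of the kind of global-local disconnect under discussion (not an example from either post; Rule 110 is chosen only because it is a standard textbook case), consider an elementary cellular automaton. The local rule is trivial, three bits in and one bit out, yet the global pattern it generates cannot be characterized from the rule except by running the system:]

```python
# Elementary cellular automaton, Rule 110: a minimal sketch of a
# global-local disconnect. The local update rule is an 8-entry table
# (encoded in the bits of the number 110), yet the global behavior it
# produces is notoriously hard to predict without simply simulating it.

RULE = 110  # rule number; bit k gives the output for neighborhood k


def step(cells):
    """Apply the local rule to every cell (boundaries fixed at 0)."""
    padded = [0] + cells + [0]
    return [
        (RULE >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
        for i in range(1, len(padded) - 1)
    ]


def run(width=64, steps=30):
    """Start from a single live cell on the right; return all rows."""
    cells = [0] * width
    cells[-1] = 1
    history = []
    for _ in range(steps):
        history.append(cells)
        cells = step(cells)
    return history


if __name__ == "__main__":
    for row in run():
        print("".join("#" if c else "." for c in row))
```

[Nothing in the three-bit rule table announces the intricate structures that appear in the printed output; that gap between what the mechanism says and what the behavior does is the point at issue.]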
So the one most important thing we know about complex systems is that if
complex systems really do exist, then we CANNOT say "Give me precise
quantitative evidence that we should expect complexity in this
particular system."
And what is your response to this most important fact about complex systems?
Your response is: "Give me precise quantitative evidence that we should
expect complexity in this particular system."
And then, when I explain all of the above (as I have done before, many
times), you go on to conclude:
[You are giving] a conjectural assertion unsupported by evidence.
Which is, in the context of my actual argument, a serious little bit of
sleight-of-hand (to be as polite as possible about it).
2) Because of the unique and unusual nature of complexity there is
only a vanishingly small chance that we will be able to find a way to
assess the exact degree of risk involved, and
3) (A corollary of (2)) If the problem were real, but we were to
ignore this risk and simply continue with an engineering approach
(pretending that complexity is insignificant),
The engineering approach does not pretend that complexity is
insignificant. It just denies that the complexity of intelligent systems
leads to the sort of irreducibility you suggest it does.
It denies it? Based on what? My argument above makes it crystal clear
that if the engineering approach is taking that attitude, then it does
so purely on the basis of wishful thinking, whilst completely ignoring
the above argument. The engineering approach would be saying: "We
understand complex systems well enough to know that there isn't a
problem in this case," which is a nonsensical position when by definition it
is not possible for anyone to really understand the connection, and the
best evidence we can get is actually pointing to the opposite conclusion.
So this comes back to the above argument: the engineering approach has
to address that first, before it can make any such claim.
Some complex systems can be reverse-engineered in their general
principles even if not in detail. And that is all one would need to do
in order to create a brain emulation (not that this is what I'm trying
to do) --- assuming one's goal was not to exactly emulate some
specific human brain based on observing the behaviors it generates,
but merely to emulate the brainlike character of the system...
This has never been done, but that is exactly what I am trying to do.
Show me ONE other example of the reverse engineering of a system in
which the low level mechanisms show as many complexity-generating
characteristics as are found in the case of intelligent systems, and I
will gladly learn from the experience of the team that did the job.
I do not believe you can name a single one.
then the *only* evidence
we would ever get that irreducibility was preventing us from building a
complete intelligence would be the fact that we would simply run around
in circles all the time,