Benjamin Goertzel wrote:
Richard,

Well, I'm really sorry to have offended you so much, but you seem to be
a mighty easy guy to offend!

I know I can be pretty offensive at times; but this time, I wasn't
even trying ;-)

The argument I presented was not a "conjectural assertion"; it made the
following coherent case:

   1) There is a high prima facie *risk* that intelligence involves a
significant amount of irreducibility (some of the most crucial
characteristics of a complete intelligence would, in any other system,
cause the behavior to show a global-local disconnect), and

The above statement contains two fuzzy terms -- "high" and "significant" ...

You have provided no evidence for any particular quantification of
these terms...
your evidence is qualitative/intuitive, so far as I can tell...

Your quantification of these terms seems to me a conjectural assertion
unsupported by evidence.

[This is going to cross over your parallel response to a different post. No time to address that other argument, but the comments made here are not affected by what is there.]

I have answered this point very precisely on many occasions, including in the paper. Here it is again:

If certain types of mechanisms do indeed give rise to complexity (as all the complex systems theorists agree), then BY DEFINITION it will never be possible to quantify the exact relationship between:

   1) The precise characteristics of the low-level mechanisms (both the type and the quantity) that would lead us to expect complexity, and

   2) The amount of complexity thereby caused in the high-level behavior.

Even if the complex systems effect were completely real, the best we could ever do would be to come up with suggestive characteristics that lead to complexity. Nevertheless, there is a long list of such suggestive characteristics, and everyone (including you) agrees that all of those suggestive characteristics are present in the low-level mechanisms that must be in an AGI.

So the single most important thing we know about complex systems is this: if complex systems really do exist, then we CANNOT say "Give me precise quantitative evidence that we should expect complexity in this particular system".

And what is your response to this most important fact about complex systems?

Your response is: "Give me precise quantitative evidence that we should expect complexity in this particular system".

And then, when I explain all of the above (as I have done before, many times), you go on to conclude:

"[You are giving] a conjectural assertion unsupported by evidence."

Which is, in the context of my actual argument, a rather serious piece of sleight-of-hand (to be as polite as possible about it).
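
A concrete, toy-scale picture of the kind of global-local disconnect I am talking about may help here. Consider Wolfram's Rule 110 cellular automaton: the entire low-level mechanism is the single number 110, and yet the rule is known to be Turing-complete, so in general there is no way to predict its global behavior short of running it step by step. The little Python sketch below is purely my own illustration (it is not code from the paper):

# Rule 110: the complete low-level mechanism is one byte.  The global
# behavior it generates is Turing-complete, so in general the only way
# to know what it will do is to run it, step by step.

def step(cells):
    """Advance one row of a Rule 110 automaton (periodic boundary)."""
    n = len(cells)
    out = [0] * n
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (center << 1) | right  # 3-bit neighborhood
        out[i] = (110 >> pattern) & 1                  # look up the rule bit
    return out

cells = [0] * 60 + [1] + [0] * 60  # start from a single live cell
for _ in range(30):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)

Knowing the mechanism exactly, down to the last bit, buys you essentially nothing when it comes to anticipating the global pattern. My claim is that the low-level mechanisms of an AGI put us in the same position, only on a vastly larger scale.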



   2) Because of the unique and unusual nature of complexity there is
only a vanishingly small chance that we will be able to find a way to
assess the exact degree of risk involved, and

   3) (A corollary of (2)) If the problem were real, but we were to
ignore this risk and simply continue with an "engineering" approach
(pretending that complexity is insignificant),

The engineering approach does not pretend that complexity is
insignificant.  It just denies that the complexity of intelligent systems
leads to the sort of irreducibility you suggest it does.

It denies it? Based on what? My argument above makes it crystal clear that if the engineering approach takes that attitude, then it does so purely on the basis of wishful thinking, while completely ignoring the above argument. The engineering approach would be saying: "We understand complex systems well enough to know that there isn't a problem in this case" ... a nonsensical position, when by definition it is not possible for anyone to really understand the connection, and when the best evidence we can get actually points to the opposite conclusion.

So this comes back to the above argument: the engineering approach has to address that first, before it can make any such claim.


Some complex systems can be reverse-engineered in their general
principles even if not in detail.  And that is all one would need to do
in order to create a brain emulation (not that this is what I'm trying
to do) --- assuming one's goal was not to exactly emulate some
specific human brain based on observing the behaviors it generates,
but merely to emulate the brainlike character of the system...

This has never been done, but that is exactly what I am trying to do.

Show me ONE other example of the reverse engineering of a system in which the low level mechanisms show as many complexity-generating characteristics as are found in the case of intelligent systems, and I will gladly learn from the experience of the team that did the job.

I do not believe you can name a single one.



then the *only* evidence
we would ever get that irreducibility was preventing us from building a
complete intelligence would be the fact that we would simply run around
in circles all the time, wondering why, when we put large systems
together, they didn't quite make it, and

No.  Experimenting with AI systems could lead to evidence that would
support the irreducibility hypothesis more directly than that.  I doubt it
will, but it's possible.  For instance, we might discover that creating more
and more intelligent systems inevitably presents more and more complex
parameter-tuning problems, so that parameter-tuning appears to be the
bottleneck.  This would suggest that some kind of highly expensive
evolutionary or ensemble approach, such as the one you're suggesting, might
be necessary.

I addressed this response in the paper (or at least I did at one point: I hope it was in the final draft).

The relationship between cause and effect here (complexity causing the AI systems to not work as they are scaled up) would be so intangible that it could be decades or centuries before anyone would begin to make the connection.

We have had fifty years already of EXACTLY this sort of evidence (that complexity is stopping us from getting full AI systems working), but has anyone (except me) stood up and said "Hey, just a second: could this be the result of complexity? Maybe we should consider a change of strategy"?

No. Instead, people go into denial and claim that their favorite design is all we need to get out of this bad patch. If the denial has already gone on for 50 years, how much longer do you think it will continue?
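
To make your own parameter-tuning scenario vivid: if the parameters of a system interact (and in a complex system they do, by definition), then tuning them one at a time fails, and naive exhaustive tuning explodes combinatorially. A back-of-envelope sketch, again purely my own illustration, assuming k interacting parameters with n candidate settings each:

# Exhaustive tuning of k interacting parameters, n settings each,
# costs n**k full-system evaluations.  (Illustrative assumption: the
# parameters interact, so no per-parameter shortcut is available.)

n = 10  # candidate settings per parameter
for k in (5, 10, 20, 40):
    print(f"{k:2d} parameters -> {float(n ** k):.1e} full-system trials")

At even one trial per second, 10**20 trials already takes over two hundred times the age of the universe. That is the kind of wall I mean when I say we would simply run around in circles.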


   4) Therefore we need to adopt a "Precautionary Principle" and treat
the problem as if irreducibility really is significant.


Whether you like it or not - whether you've got too much invested in the
contrary point of view to admit it, or not - this is a perfectly valid
and coherent argument, and your attempt to try to push it into some
lesser realm of a "conjectural assertion" is profoundly insulting.

The form of the argument is coherent and valid; but the premises involve
fuzzy quantifiers whose values you are apparently setting by intuition, and
whose specific values sensitively impact the truth value of the conclusion.

This is nonsense for the reasons stated in the first section of my commentary, above: this is just a repeat of the "Give me precise quantitative evidence that we should expect complexity in this particular system" line of argument.

*****

You know, I sympathize with you in a way. You are trying to build an AGI system using a methodology that you are completely committed to. And here I am, coming along like Bertrand Russell writing his letter to Frege, just as poor Frege was about to publish the second volume of his "Grundgesetze der Arithmetik", pointing out that everything in the new book was undermined by a paradox. How else can you respond except by denying the idea as vigorously as possible?

Unfortunately, I also have no choice but to try to undermine your position by asking that the argument be addressed directly, rather than evaded.

Sorry to be such a pain.




Richard Loosemore


P.S. I don't offend easily.
