Benjamin Goertzel wrote:
> On Dec 4, 2007 8:38 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
>> Benjamin Goertzel wrote:
>>> [snip]
>>> And neither you nor anyone else has ever made a cogent argument that
>>> emulating the brain is the ONLY route to creating powerful AGI.  The
>>> closest thing to such an argument that I've seen was given by Eric
>>> Baum in his book "What Is Thought?", and I note that Eric has backed
>>> away somewhat from that position lately.
>>
>> This is a pretty outrageous statement to make, given that you know full
>> well that I have done exactly that.
>>
>> You may not agree with the argument, but that is not the same as
>> asserting that the argument does not exist.
>>
>> Unless you were meaning "emulating the brain" in the sense of emulating
>> it ONLY at the low level of neural wiring, which I do not advocate.
>
> I don't find your argument, or Eric's, or anyone else's, that brain
> emulation is the "golden path", very convincing...
>
> However, I found Eric's argument, by reference to the compressed nature
> of the genome, more convincing than your argument via the hypothesis of
> irreducible emergent complexity...
>
> Sorry if my choice of words was not adequately politic.  I find your
> argument interesting, but it's certainly just as speculative as the
> various AGI theories you dismiss...  It basically rests on one big
> assumption: that the complexity of human intelligence is analytically
> irreducible within pragmatic computational constraints.  In this sense
> it's less an argument than a conjectural assertion, albeit an admirably
> bold one.

Ben,

This is even worse.

The argument I presented was not a "conjectural assertion"; it made the following coherent case:

1) There is a high prima facie *risk* that intelligence involves a significant amount of irreducibility (some of the most crucial characteristics of a complete intelligence would, in any other system, cause the behavior to show a global-local disconnect; see the sketch after this list for a concrete picture of such a disconnect), and

2) Because of the unique and unusual nature of complexity, there is only a vanishingly small chance that we will be able to find a way to assess the exact degree of risk involved, and

3) (A corollary of (2)) If the problem were real, but we were to ignore this risk and simply continue with an "engineering" approach (pretending that complexity is insignificant), then the *only* evidence we would ever get that irreducibility was preventing us from building a complete intelligence would be the fact that we kept running around in circles, wondering why the large systems we put together never quite made it, and

4) Therefore we need to adopt a "Precautionary Principle" and treat the problem as if irreducibility really is significant.
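
To make point (1) concrete: the textbook picture of a global-local disconnect is Wolfram's elementary cellular automaton Rule 110, where the entire local rule is an eight-entry lookup table, yet the global behavior is rich enough to be Turing-complete (as Matthew Cook proved), so in general there is no analytic shortcut to predicting what the system will do; you have to run it. Below is a minimal Python sketch of that automaton; the code is purely my illustration of the concept, not part of any AGI design.

# Rule 110: a trivially simple LOCAL rule whose GLOBAL behavior is
# effectively irreducible (Turing-complete), illustrating how global
# behavior can disconnect from the local mechanism that produces it.

RULE = 110  # the 8 bits of 110 map each 3-cell neighborhood to a new bit

def step(cells):
    """Apply the local lookup-table rule to every cell (wrapping edges)."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live cell and watch structure emerge that no
# inspection of the eight-entry rule table would have predicted.
row = [0] * 64
row[32] = 1
for _ in range(32):
    print("".join("#" if c else "." for c in row))
    row = step(row)

None of this proves that intelligence is irreducible; it only shows how cheaply a global-local disconnect can arise, which is exactly why the *risk* in point (1) deserves to be taken seriously.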


Whether you like it or not - whether you've got too much invested in the contrary point of view to admit it, or not - this is a perfectly valid and coherent argument, and your attempt to push it into some lesser realm of "conjectural assertion" is profoundly insulting.




Richard Loosemore

