Hi Bill,

An excellent reply to my post, since it gives me good points to respond to directly . . . .

I am not making the two assumptions that you list in the absolute sense, although I am making them in the practical sense (which turns out to be a very important difference). Let me explain . . . .

We are debating what is necessary for AGI. I am certainly contending that no idea necessary for AGI is too complicated for an ordinary human to understand.

I am also contending that, while there may be ideas that are too difficult for humans to comprehend, the world is messy enough and variable-interlinked enough that we currently don't have the data that would allow a system to find such a concept (nor a system that would truly *understand* such a concept -- using understand in the sense of being able to build upon it). If you wanted to debate this latter point by saying that Google has sufficient data, I wouldn't argue it except to say that Google really can't build upon that data.

There's also the argument that humans are not limited to what's currently in their working memory. When I am doing system design and working at the top level, I can only keep the major salient features of the subsystems in mind. Then I go through each of the subsystems individually and see whether it indicates that I should re-evaluate any decisions made at the top level, and I continue down through the levels . . . . With proper encapsulation, etc., this always works. It is not necessarily optimal, but it is certainly functional. If I use paper and other outside assistance, I can do even more.
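For concreteness, here is a minimal sketch of that loop in Python. Every name in it (Subsystem, local_design, summarize, refine) is hypothetical scaffolding I made up for illustration, not any real system's API:

    # A minimal sketch of the top-down design loop described above.
    # All names here are hypothetical, for illustration only.

    from dataclasses import dataclass, field

    @dataclass
    class Subsystem:
        name: str
        children: list = field(default_factory=list)
        # The only thing the level above keeps "in mind":
        salient_features: dict = field(default_factory=dict)

    def local_design(level: Subsystem) -> bool:
        # Placeholder for the real design work at one level.
        # Return True if a decision here conflicts with an
        # assumption the parent level made about this subsystem.
        return False

    def summarize(sub: Subsystem) -> dict:
        # Reduce a subsystem to the few features that matter one
        # level up -- the encapsulation boundary.
        return {"interface": sub.name, "parts": len(sub.children)}

    def refine(level: Subsystem) -> bool:
        """Design this level, then descend one subsystem at a time.
        Return True if the level above should re-evaluate."""
        revise_parent = local_design(level)
        for child in level.children:
            if refine(child):
                # A lower level contradicted an assumption made here,
                # so redo this level with the updated summary in mind.
                level.salient_features[child.name] = summarize(child)
                revise_parent = local_design(level) or revise_parent
        return revise_parent

    # Example: a two-level design; no more than one level's salient
    # features is ever held in "working memory" at once.
    top = Subsystem("system", [Subsystem("ui"), Subsystem("storage")])
    refine(top)

The point is only that bounded working memory plus external structure (the salient_features summaries standing in for paper and notes) lets the process scale to designs that nobody could hold in their head all at once.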

If storage and access are the concern, your own argument says that a sufficiently enhanced human can understand anything, and I am at a loss as to why an above-average human with a computer and computer skills can't be considered nearly indefinitely enhanced.

Regarding chess or Go masters -- while you couldn't point to a move and say "we shouldn't have done that", I'm sure that the master could (probably in several instances) point to a move, say "I wouldn't have done that", and provide a better move (most often along with a variable-quality explanation of why it was better).

I consider all of this an engineering problem rather than a science problem. Yes, my bridge isn't going to hold up near a black hole, but it is certainly sufficient for near-human conditions.

       Mark

----- Original Message ----- From: "BillK" <[EMAIL PROTECTED]>
To: <agi@v2.listbox.com>
Sent: Saturday, December 02, 2006 2:31 PM
Subject: Re: [agi] A question on the symbol-system hypothesis


On 12/2/06, Mark Waser wrote:

My contention is that the pattern it found was simply never translated
into terms you could understand, nor explained.

Further, and more importantly, the pattern matcher *doesn't* understand its
results either and certainly couldn't build upon them -- thus, it *fails* the
test as far as being the central component of an RSIAI or being able to
provide evidence as to the required behavior of such.


Mark, I think you are making two very basic wrong assumptions.

1) That humans are able to understand everything if it is explained to
them simply enough and they are given unlimited time.

2) That it is even possible to explain some very complex ideas in a
simple enough fashion.

Consider teaching the sub-normal. After much repetition they can be
trained to do simple tasks. They don't understand 'why', but they can
eventually remember the instructions. Even high-IQ humans have the same
equipment, just a bit better. They still have limits on how much they
can remember and how much information they can hold in their heads and
access. If you can't remember all the factors at once, then you can't
understand the result. You can write down the steps and all the different
data that affect the result, but you can't assemble it all in your brain
to get a result.

And I think chess and Go are a good example. People who think that
they can look through the game records and understand why they lost
are just not trained chess or Go players. There is a good reason some
people are called 'Go masters' or 'chess masters'. I used to play
competitive chess, and I can assure you that when our top board player
consistently beat us lesser mortals, we could rarely point at move 23
and say 'we shouldn't have done that'. It is *far* more subtle than
that. If you think you can do that, then you just don't understand the
problem.

BillK

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


