A human doesn't have enough time to look through millions of pieces of
data, doesn't have enough memory to retain them all, and certainly
doesn't have the time or the memory to examine all of the
10^(insert large number here) different relationships between these
pieces of data.

True. However, I would argue that the same is true of an AI. If you assume that an AI can do this, then *you* are not being pragmatic.

Understanding is compiling data into knowledge. If you're just brute-forcing millions of pieces of data, then you don't understand the problem -- though you may be able to solve it -- and you cannot validate your answers or place intelligent, rational boundaries and caveats on them.
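
To make the distinction concrete, here is a minimal sketch in Python
(hypothetical -- the data and the rule are invented for illustration)
contrasting a system that merely stores a million observations with one
that has compiled them into a compact rule.  Only the compiled form can
say where its answers are valid:

    # Brute force: memorize a million observations of some process.
    observations = {x: 2 * x + 1 for x in range(1_000_000)}  # invented data

    def answer_by_lookup(x):
        # No understanding: retrieve the stored answer, if any.
        # Outside the memorized data it fails silently -- no caveats.
        return observations.get(x)

    def answer_by_rule(x):
        # Compiled knowledge: the rule y = 2x + 1, plus an explicit
        # boundary marking where it has actually been validated.
        if not 0 <= x < 1_000_000:
            raise ValueError("outside the range the rule was validated on")
        return 2 * x + 1

The lookup table "solves" every memorized case without understanding;
the rule is smaller, checkable, and can refuse to answer outside its
validated range.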

----- Original Message -----
From: "Philip Goetz" <[EMAIL PROTECTED]>
To: <agi@v2.listbox.com>
Sent: Wednesday, November 29, 2006 1:14 PM
Subject: Re: Re: Re: [agi] A question on the symbol-system hypothesis


On 11/14/06, Mark Waser <[EMAIL PROTECTED]> wrote:
> Even now, with a relatively primitive system like the current
> Novamente, it is not pragmatically possible to understand why the
> system does each thing it does.

    "Pragmatically possible" obscures the point I was trying to make with
Matt. If you were to freeze-frame Novamente right after it took an action,
it would be trivially easy to understand why it took that action.

> because
> sometimes judgments are made via the combination of a large number of
> weak pieces of evidence, and evaluating all of them would take too
> much time....

    Looks like a time problem to me . . . . NOT an incomprehensibility
problem.
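
    As a toy illustration (hypothetical -- this is not Novamente's
actual mechanism), a judgment that combines a hundred thousand weak
pieces of evidence can be a single pass of additions in log-odds space.
Freeze-framed, any one contribution is trivially auditable; the only
obstacle to reviewing all of them is time:

    import math
    import random

    random.seed(0)

    # Hypothetical weak evidence: each source shifts the log-odds a little.
    contributions = [random.gauss(0.001, 0.05) for _ in range(100_000)]

    # The judgment itself is just a sum -- fast for the machine, but
    # far too many terms for a human to review one by one.
    log_odds = sum(contributions)
    probability = 1.0 / (1.0 + math.exp(-log_odds))

    # Yet any frozen frame is comprehensible on demand:
    print(f"evidence #42 contributed {contributions[42]:+.4f} to the log-odds")
    print(f"combined judgment: p = {probability:.4f}")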

This argument started because Matt said that the wrong way to design
an AI is to try to make it human-readable, constantly looking inside
to figure out what it is doing, and that the right way is to use math,
statistics, and learning.

A human doesn't have enough time to look through millions of pieces of
data, doesn't have enough memory to retain them all, and certainly
doesn't have the time or the memory to examine all of the
10^(insert large number here) different relationships between these
pieces of data.  Hence, a human shouldn't design AI systems in a way
that would require a human to have these abilities.
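
For scale, a back-of-the-envelope figure (assuming, purely for
illustration, n = 10^6 pieces of data): counting only pairwise
relationships already gives n(n-1)/2, about 5 x 10^11, and the number
of possible subsets of the data is 2^n, a number with over 300,000
digits.

    import math

    n = 1_000_000  # assumed number of data items, for illustration

    pairs = n * (n - 1) // 2                    # pairwise relationships only
    subset_digits = int(n * math.log10(2)) + 1  # digits in 2**n (all subsets)

    print(f"pairwise relationships: {pairs:.3e}")  # ~5.000e+11
    print(f"2^n has {subset_digits:,} digits")     # 301,030 digits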

The question is all about pragmatics.  If you dismiss pragmatics, you
are not part of this conversation.

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


