On 7/11/07, Don Dailey <[EMAIL PROTECTED]> wrote:

> The dirty hack I'm referring to is the robotic way this is implemented
> in programs, not how it's done in humans.  With a pattern-based program
> you essentially specify everything, and the program is not a participant
> in the process.  It comes down to a list of do's and don'ts, and if we
> can claim that knowledge was imparted, that might be true, but no wisdom
> or understanding was.

I'm compelled to point out that neural nets, _trained_ on patterns (the
patterns themselves then discarded), have the ability to "recognize"
novel patterns: ones that have never been seen before, let alone stored.
The list of do's and don'ts has been discarded, and what to do or not do
in a situation that may never have been encountered is inferred, not
looked up in a library of rules.

So, it is not true that with a pattern-based program "you essentially
specify everything".  At least, not if you have thrown the patterns away
and substituted a multilayer feedforward network for that _training data_.
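
To make this concrete, here is a minimal sketch in Python using
scikit-learn's MLPClassifier.  The 3x3-pattern encoding, the labels, and
the network size are all invented for illustration; the point is only the
shape of the procedure: train a small multilayer feedforward network on
labeled patterns, throw the training data away, and ask it about a
pattern it has never seen.

  import numpy as np
  from sklearn.neural_network import MLPClassifier

  # Hypothetical training set: 3x3 local board patterns flattened to
  # 9 values (0 = empty, 1 = our stone, -1 = their stone), each
  # labeled 1 ("play here") or 0 ("don't").
  X_train = np.array([
      [0,  1, 0,  1, 0,  1, 0,  1, 0],
      [0, -1, 0, -1, 0, -1, 0, -1, 0],
  ])
  y_train = np.array([1, 0])

  net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                      random_state=0)
  net.fit(X_train, y_train)

  del X_train, y_train   # the pattern library is now gone

  # A pattern the net was never shown; the answer is inferred from
  # the trained weights, not looked up in any list.
  novel = np.array([[1, 1, 0, 0, 0, 1, -1, 0, 0]])
  print(net.predict(novel))

Whether the net's answer is any good depends on the training set, of
course; the point is only that no stored pattern is consulted at
inference time.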

> UCT simulates understanding and wisdom; patterns just simulate
> knowledge.

This is a very strong assertion.  We eagerly await the proof.  :-)

I can just as easily assert:

Trained neural nets simulate understanding and wisdom.  (A static
pattern library merely simulates knowledge, I agree.)

> Again, this is largely philosophical because even UCT programs are
> robots just following instructions.  It's all about what you are trying
> to simulate and why it's called AI.  I think UCT tries to simulate
> understanding to a much greater extent than raw patterns in a
> conventional program.

Than raw patterns, yes.  Trained neural nets, too, try to simulate
understanding to a much greater extent than do raw patterns.
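
For readers who haven't met it: UCT is UCB1 applied to the game tree.
At each node, one descends to the child maximizing the observed win rate
plus an exploration bonus.  A minimal sketch in Python (the Node fields
and the constant c are assumptions for illustration, not taken from
anyone's actual program):

  import math

  class Node:
      """Win/visit statistics gathered from Monte Carlo playouts."""
      def __init__(self):
          self.wins = 0
          self.visits = 0
          self.children = []

  def uct_select(node, c=1.4):
      # Assumes every child has been visited at least once; in practice
      # unvisited children are expanded before this rule applies.
      return max(
          node.children,
          key=lambda ch: (ch.wins / ch.visits
                          + c * math.sqrt(math.log(node.visits)
                                          / ch.visits)))

Whether that exploration term "simulates understanding" is, of course,
exactly the philosophical question at issue.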

Of course Don is right, it boils down to philosophy.  And while we're
on that topic, ...

I regret some of the terms that have come into use with regard to AI,
due to the (misguided, in my humble opinion) philosophy of some.

The very name "artificial intelligence" bothers me; AI programs are
nothing of the kind.

When humans run certain computer programs, the programs may seem
"intelligent" enough to perform certain tasks.  By the implied reasoning,
taken to its logical conclusion, a hammer is "intelligent" enough to
drive a nail.

The military has its so-called "smart bombs", but in truth machines and
algorithms are no more intelligent than hammers.

By a similar token, "pattern recognition" bothers me.  Machines and
algorithms don't "recognize" anything, ever.  That's anthropomorphism.

A somewhat better term is "pattern classification", but machines don't really
classify anything, either.  It is we humans who classify, _using_ the machines.

It's like saying that the hammer drives the nail, when in fact it is the human
who does so, _using_ the hammer.

And there is nothing particularly "neural" about "neural networks", other
than their origins.  (True, they were first invented -- discovered, really -- by
someone who was trying to simulate a neuron, but they are much more
general than that.)  I prefer the term "multilayer feedforward network" for
the type of "neural net" commonly used in many domains.  (And now in go!)
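
To underline the point: strip away the biological vocabulary and a
multilayer feedforward network is nothing but alternating affine maps
and elementwise squashing functions.  A sketch (layer sizes, tanh, and
the random weights are arbitrary choices, shown only for shape):

  import numpy as np

  def feedforward(x, weights, biases):
      # Alternate a linear map with an elementwise nonlinearity.
      for W, b in zip(weights, biases):
          x = np.tanh(W @ x + b)
      return x

  # An arbitrary 9 -> 16 -> 1 network with random weights.
  rng = np.random.default_rng(0)
  weights = [rng.normal(size=(16, 9)), rng.normal(size=(1, 16))]
  biases = [np.zeros(16), np.zeros(1)]
  print(feedforward(rng.normal(size=9), weights, biases))

No neurons anywhere: just linear algebra and a nonlinearity.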

This sort of semantic nitpicking may seem too severe.  However, it keeps me
from falling into the camp of those who believe that machines will one day
literally become intelligent, develop self-awareness, and achieve consciousness.

Ain't gonna happen.
--
Rich

P.S. -- I hated the movie "AI".