On Monday 11 June 2007 09:47:38 pm James Ratcliff wrote:
> 
> "J Storrs Hall, PhD" <[EMAIL PROTECTED]> wrote: On Monday 11 June 2007 
08:12:08 pm James Ratcliff wrote:
> > 1. Is anyone taking an approach to AGI without the use of Symbol 
Grounding?
> 
> You'll have to go into that a bit more for me please.

Here's how Harnad defines it in his original paper:

" My own example of the symbol grounding problem has two versions, one 
difficult, and one, I think, impossible. The difficult version is: Suppose 
you had to learn Chinese as a second language and the only source of 
information you had was a Chinese/Chinese dictionary. The trip through the 
dictionary would amount to a merry-go-round, passing endlessly from one 
meaningless symbol or symbol-string (the definientes) to another (the 
definienda), never coming to a halt on what anything meant.
 The only reason cryptologists of ancient languages and secret codes seem to 
be able to successfully accomplish something very like this is that their 
efforts are grounded in a first language and in real world experience and 
knowledge. The second variant of the Dictionary-Go-Round, however, goes far 
beyond the conceivable resources of cryptology: Suppose you had to learn 
Chinese as a first language and the only source of information you had was a 
Chinese/Chinese dictionary! This is more like the actual task faced by a 
purely symbolic model of the mind: How can you ever get off the symbol/symbol 
merry-go-round? How is symbol meaning to be grounded in something other than 
just more meaningless symbols? This is the symbol grounding problem."

The reason this doesn't apply to AI the way philosophers tend to think it does is that there is a difference between a dictionary and a computer (or any other working machine): the computer has *mechanism* that can act out semantic primitives *by itself*. Thus the recursive construction of meaning does have a terminating base case.
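
To make that base case concrete, here is a toy sketch (my own illustration, in Python; the lexicon and the primitive names are invented for the example). Most symbols are defined only in terms of other symbols, dictionary-style, but a few are bound to procedures the machine can actually execute, and that is where the regress stops:

# Primitive symbols bound to executable mechanism -- the terminating base case.
PRIMITIVES = {
    "move":  lambda world, thing, place: {**world, thing: place},
    "climb": lambda world, agent, obj: {**world, agent: ("on", obj)},
}

# Dictionary-style entries: symbols defined only in terms of other symbols.
DEFINITIONS = {
    "fetch": ["move", "climb"],
    "get":   ["fetch"],
}

def ground(symbol):
    """Expand a symbol until every branch bottoms out in a primitive the
    machine can run; a pure dictionary never reaches this base case."""
    if symbol in PRIMITIVES:
        return [symbol]
    if symbol in DEFINITIONS:
        expansion = []
        for part in DEFINITIONS[symbol]:
            expansion.extend(ground(part))
        return expansion
    raise KeyError(f"{symbol!r} never bottoms out -- the dictionary-go-round")

print(ground("get"))   # ['move', 'climb']

A dictionary alone is all DEFINITIONS and no PRIMITIVES, so the expansion can never terminate; a working machine has the second table.
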
> 
> ... There's a whole raft of philosophical conundrums (qualia among them) that simply evaporate if you take the systems approach to AI and say "we're going to build a machine that does this kind of thing, and we're going to assume that the human brain is such a machine as well."
> 
> In what way?  I try to edge around most of the fuzzy, magic points of philosophy and just get to what needs to be programmed.

Good -- that's exactly what I was urging. DON'T try to get into the 
philosophical end of it -- you'll argue for 3000 years and come to no useful 
conclusions.

> On the other hand, the trend to building robots in AI can be a valuable tool to keep oneself from doing the hard part of the problem in preparing the input for the program, thus fooling oneself into thinking the program has solved a harder problem than it has.
> 
> What is the "hard part of the problem in preparing the input for the program"?

Forall u: place(u) implies can(monkey, move(monkey, box, u))

can(monkey, climbs(monkey,box))

place(under(bananas))

at(box, under(bananas)) and on(monkey, box) implies can(monkey, reach(monkey, bananas))

Forall p forall x: reach(p,x) implies cause(has(p,x))

etc.

The "feet of clay" stage is to put this problem into a computer in symbolic 
predicate logic. The hard part is going from a video/audio stream that would 
represent the monkey's experience to the rules that represent the monkey's 
model of how the world works.
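
To show how little there is to that first stage, here is a toy version of the axioms above as a blind search for a plan (my own sketch, in Python; the state encoding and the place names are invented). Every predicate here was chosen and hand-coded by the programmer; nothing in it ever looks at pixels or sound:

from collections import deque

# State: (monkey_at, box_at, monkey_on_box, has_bananas)
START = ("door", "corner", False, False)
PLACES = ("door", "corner", "under_bananas")

def successors(state):
    """Actions licensed by the axioms above, applied to a concrete state."""
    m_at, b_at, on_box, has = state
    if has:
        return
    if not on_box:
        for u in PLACES:
            # place(u) implies can(monkey, move(monkey, box, u))
            yield f"push box to {u}", (u, u, False, False)
            # the monkey can also just walk somewhere without the box
            yield f"walk to {u}", (u, b_at, False, False)
        if m_at == b_at:
            # can(monkey, climbs(monkey, box))
            yield "climb box", (m_at, b_at, True, False)
    elif b_at == "under_bananas":
        # at(box, under(bananas)) and on(monkey, box) implies can reach,
        # and reach(p, x) implies cause(has(p, x))
        yield "grab bananas", (m_at, b_at, True, True)

def plan(start):
    """Breadth-first search for the shortest plan ending in has(monkey, bananas)."""
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, path = frontier.popleft()
        if state[3]:
            return path
        for action, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [action]))
    return None

print(plan(START))   # ['push box to under_bananas', 'climb box', 'grab bananas']

The search itself is trivial; what this sketch quietly assumes away is exactly the hard part -- deciding, from raw experience, that monkey_at, box_at, and on_box are the right state variables in the first place.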

Josh
