I'll take a stab at both of these...

To me, the Chinese Room simply argues that understanding cannot be decomposed into sub-understanding pieces. I don't see it as addressing grounding, unless you believe that understanding can only come from the outside world and must enter the system as atomic pieces of understanding. I see no reason to think that, but proving it is another matter -- proving negatives is always difficult.
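
To make that concrete, here is a toy sketch (in Python) of the room as pure symbol manipulation. The rulebook entries are made up for illustration, and a real room would need an astronomically larger rulebook; the point is just that no piece of the system understands anything, which is the intuition the thought experiment trades on:

    # Toy model of the Chinese Room: purely syntactic rule-following.
    # The rulebook entries below are hypothetical, for illustration only.
    RULEBOOK = {
        "你好吗": "我很好",    # "How are you?" -> "I'm fine"
        "你是谁": "我是房间",  # "Who are you?" -> "I am the room"
    }

    def room(symbols):
        # Pure lookup: no component here "understands" Chinese.
        # The open question is whether understanding could emerge from
        # composing enough such non-understanding pieces.
        return RULEBOOK.get(symbols, "请再说一遍")  # "Please say that again"

    print(room("你好吗"))  # -> 我很好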

As to philosophy, I tend to think of its relationship to AI as somewhat like alchemy's relationship to chemistry. That is, it is one of the origins of the field and has some valid ideas, but it lacks the hard science and engineering needed to actually get things working. This is admittedly perhaps a naive view, and it reflects the traditional engineering distrust of the humanities. I state it not to be critical of philosophy, but to give you an idea of how some of us think of the area.

Terren Suydam wrote:
Abram,

If that's your response, then we don't actually agree. I agree that the Chinese 
Room does not disprove strong AI, but I think it is a valid critique of purely 
logical or non-grounded approaches. Why do you think the critique fails on that 
level? Anyone else who rejects the Chinese Room care to explain why?

(I know this has been discussed ad nauseam, but that should only make it easier 
to point to references that clearly demolish the arguments. It should be noted, 
however, that relatively recent advances regarding complexity and emergence 
aren't quite as well hashed out with respect to the Chinese Room. In the 
document you linked to, emergence isn't mentioned until a 2002 reference 
attributed to Kurzweil.)

If you can't explain your dismissal of the Chinese Room, it only reinforces my 
earlier point that some of you who are working on AI aren't doing your homework 
with the philosophy. It's ok to reject the Chinese Room, so long as you have 
arguments for doing so (and if you do, I'm all ears!). But if you don't think 
the philosophy is important, then you're more than likely doing Cargo Cult AI.

(http://en.wikipedia.org/wiki/Cargo_cult)

Terren

--- On Tue, 8/5/08, Abram Demski <[EMAIL PROTECTED]> wrote:

From: Abram Demski <[EMAIL PROTECTED]>
Subject: Re: [agi] Groundless reasoning --> Chinese Room
To: agi@v2.listbox.com
Date: Tuesday, August 5, 2008, 9:49 PM
Terren,
I agree. Searle's responses are inadequate, and the whole thought experiment 
fails to prove his point. I think it also fails to prove your point, for the 
same reason.

--Abram




