On Wed, Aug 6, 2008 at 12:04 AM, Terren Suydam <[EMAIL PROTECTED]> wrote:
>
> Abram,
>
> If that's your response then we don't actually agree.

Sorry, I meant "I agree that Searle's responses are inadequate".

>
> I agree that the Chinese Room does not disprove strong AI, but I think it is 
> a valid critique for purely logical or non-grounded approaches. Why do you 
> think the critique fails on that level?  Anyone else who rejects the Chinese 
> Room care to explain why?

I explained somewhat in my first reply to this thread. Basically, as I
understand you, you are saying that the original Chinese Room does not
have understanding, but that if we modify the setup to connect it to a
robot with adequate senses, it could have understanding (provided the
human inside could work fast enough to drive it). But if I am willing
to grant that such a robot has understanding (despite the human
operator having no understanding of the data being manipulated), then
I may just as well grant that the original Chinese Room has
understanding, which in fact I do.

I do distrust some philosophy, but I think other philosophical issues
are very important. For example, I am very interested in the
foundations of mathematics.

-Abram

>
> (I know this has been discussed ad nauseam, but that should only make it 
> easier to point to references that clearly demolish the arguments. It should 
> be noted however that relatively recent advances regarding complexity and 
> emergence aren't quite as well hashed out with respect to the Chinese Room. 
> In the document you linked to, mention of emergence didn't come until a 2002 
> reference attributed to Kurzweil.)
>
> If you can't explain your dismissal of the Chinese Room, it only reinforces 
> my earlier point that some of you who are working on AI aren't doing your 
> homework on the philosophy. It's OK to reject the Chinese Room, so long as 
> you have arguments for doing so (and if you do, I'm all ears!). But if you 
> don't think the philosophy is important, then you're more than likely doing 
> Cargo Cult AI.
>
> (http://en.wikipedia.org/wiki/Cargo_cult)
>
> Terren
>
> --- On Tue, 8/5/08, Abram Demski <[EMAIL PROTECTED]> wrote:
>
>> From: Abram Demski <[EMAIL PROTECTED]>
>> Subject: Re: [agi] Groundless reasoning --> Chinese Room
>> To: agi@v2.listbox.com
>> Date: Tuesday, August 5, 2008, 9:49 PM
>> Terren,
>> I agree. Searle's responses are inadequate, and the whole thought
>> experiment fails to prove his point. I think it also fails to prove
>> your point, for the same reason.
>>
>> --Abram
>>

