Substituting an actual human invalidates the experiment, because you are then 
bringing in something that can actually do semantics. The point of the argument 
is to show that merely manipulating symbols (i.e., the syntactical domain) does 
not demonstrate understanding, no matter what the global result of the 
manipulation is (e.g., a chess move).

--- On Wed, 8/6/08, Valentina Poletti <[EMAIL PROTECTED]> wrote:
From: Valentina Poletti <[EMAIL PROTECTED]>
Subject: Re: [agi] Groundless reasoning --> Chinese Room
To: agi@v2.listbox.com
Date: Wednesday, August 6, 2008, 11:27 AM

By translator I meant human translator, btw. What this experiment does suggest 
is that linguistic abilities require energy (the book alone would do nothing), 
and that they are independent of humanness (the machine could do it), whether 
or not they involve 'understanding'.



On 8/6/08, Valentina Poletti <[EMAIL PROTECTED]> wrote:
Ok, I really don't see how it proves that, then. In my view, the book could be 
replaced with a Chinese-English translator and the exact same outcome would 
result. Both use static knowledge for this process, not experience.



On 8/6/08, Terren Suydam <[EMAIL PROTECTED]> wrote:

Hi Valentina,

I think the distinction you draw between the two kinds of understanding is 
illusory; mutual human experience is also an emergent phenomenon. In any case, 
that's not the point of the Chinese Room argument, which doesn't say that a 
computer understands symbols in a different way than humans do; it says that a 
computer has no understanding, period.


Terren

--- On Wed, 8/6/08, Valentina Poletti <[EMAIL PROTECTED]> wrote:

My view is that the problem with the Chinese Room argument is precisely the 
manner in which it uses the word 'understanding'. It is implied that in this 
context the word refers to mutual human experience. But understanding has 
another meaning, namely the emergent process some of you described, which can 
happen in a computer in a different way than it happens in a human being. In 
fact, notice that the experiment says the computer will not understand Chinese 
the way humans do; it therefore implies the first meaning, not the second.

Regarding grounding, I think that any intelligence has to collect data from 
somewhere in order to learn. Where it collects data from will determine the 
type of intelligence it is. Collecting stories is still a way of collecting 
information, but such an intelligence will never be able to move in the real 
world, as it has no clue about it. On the other hand, an intelligence that 
learns by moving in the real world, yet has never read anything, will gather 
no information from a book.

-- 
A true friend stabs you in the front. - O. Wilde

Einstein once thought he was wrong; then he discovered he was wrong.


For every complex problem, there is an answer which is short, simple and wrong. 
- H.L. Mencken 

