"J Storrs Hall, PhD" <[EMAIL PROTECTED]> wrote: On Monday 11 June 2007 08:12:08 
pm James Ratcliff wrote:
> 1. Is anyone taking an approach to AGI without the use of Symbol Grounding?

You'll have to go into that a bit more for me please.

Symbol grounding is something of a red herring. There's a whole raft of 
philosophical conundrums (qualia among them) that simply evaporate if you take 
the systems approach to AI and say "we're going to build a machine that does 
this kind of thing, and we're going to assume that the human brain is such a 
machine as well."

In what way?  I try to edge around most of the fuzzy, magic points of 
philosophy and just get to what needs to be programmed.

On the other hand, the trend toward building robots in AI can be a valuable tool to 
keep oneself from doing the hard part of the problem in preparing the input for 
the program, and thus from fooling oneself into thinking the program has solved 
a harder problem than it has.

What is the "hard part of the problem in preparing the input for the program"?

Josh

_______________________________________
James Ratcliff - http://falazar.com
Looking for something...
       

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e
