On 10/23/06, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
> 2) the distinction between
> 2a) using ungrounded formal symbols to pretend to represent knowledge,  e.g. an explicit labeled internal symbol for "cat", one for "give", etc.
> 2b) having an AI system recognize patterns in its perception and action experience, and build up its own concepts (including symbolic ones) via learning; which means that concepts like "cat" and "give" will generally be represented as complex, distributed structures in the knowledge base, not as individual tokens
 
I think that in G0, symbols are grounded and exist in complex relations with other symbols.  What may be misleading is that when you see me talk about a symbol like "love" or "3" in isolation, you conclude that doing so is very un-AGI-like.  But I have a KB of facts about "love", "3", etc., even augmented with probabilities.  There is no real difference between this and your graphical representation: any graph can be completely described by listing all its nodes and edges.
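To make the point concrete, here is a minimal illustrative sketch (hypothetical, not actual G0 code): a probabilistic KB where concepts like "love" and "3" are nodes and facts are weighted edges.  Enumerating the nodes and edges describes exactly the same graph that a pictorial, "distributed" diagram would show.

```python
# Hypothetical toy KB: concepts as nodes, facts as weighted edges.
kb_nodes = {"love", "emotion", "3", "number", "prime"}

# (subject, relation, object, probability) -- illustrative facts only
kb_edges = [
    ("love", "is_a", "emotion", 0.95),
    ("3", "is_a", "number", 1.0),
    ("3", "has_property", "prime", 1.0),
]

def facts_about(symbol, edges):
    """Return every weighted edge that touches the given symbol."""
    return [e for e in edges if symbol in (e[0], e[2])]

# A symbol "in isolation" is still embedded in its web of relations:
print(facts_about("3", kb_edges))
```

The symbol "3" is just a token, but its meaning lives in the set of edges attached to it, which is the same information a graph drawing conveys.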
 
YKY

This list is sponsored by AGIRI: http://www.agiri.org/email To unsubscribe or change your options, please go to: http://v2.listbox.com/member/[EMAIL PROTECTED]
