Mike, we think alike, but there's a small point on which
our thoughts diverge. We agree that entirely symbolic architectures
will fail, possibly sooner than predicted by their creators.
But we've got to be careful regarding our notion of "symbol".
If "symbol" is understood in a broad enough sense, then I don't
think I'll follow you. For instance, the weight adaptation of a
neural net during learning can be thought of as a symbolic process of
some sort (the machine is manipulating strings of bits). This is
different from "the real thing" (read: our brain), which is a more
conventional dynamic physical system. That's not a "symbol", the
way I'm using the word.
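
Purely to illustrate that broad sense of "symbol", here is a toy sketch (my
own hypothetical example, not any particular connectionist system): one
delta-rule weight update. At bottom, the machine is just rewriting numbers
(ultimately bit strings) according to a fixed rule.

# Toy, hypothetical sketch: one step of weight adaptation in a single
# linear "neuron". The machine only rewrites numbers (bit strings)
# according to a rule, which is why, in a broad enough sense, even this
# can be called a "symbolic" process.
def update_weights(weights, inputs, target, learning_rate=0.1):
    """One delta-rule step: nudge each weight so the prediction error shrinks."""
    prediction = sum(w * x for w, x in zip(weights, inputs))
    error = target - prediction
    return [w + learning_rate * error * x for w, x in zip(weights, inputs)]

weights = [0.0, 0.0]
for _ in range(50):
    weights = update_weights(weights, inputs=[1.0, 2.0], target=1.0)
print(weights)  # converges toward weights whose weighted sum of [1.0, 2.0] is ~1.0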

So I consider that symbolic systems (in that broad sense that includes
numeric values) can be capable of some sort of "intelligence", even if
such a system is not directly fed with sensory signals (images, for
instance). Blind humans (and particularly Helen Keller) are
examples that may demonstrate the point.

To be clear, I should say that I'm talking about "computer
intelligence", not "human-level intelligence". We are still infants
in relation to the former, and very, very far from the latter (which
will probably include such far-fetched things as "consciousness").

For that "computer intelligence" to work I find it necessary to
use symbolic and statistical (inductive) machinery. Logical deduction
cannot create new knowledge, only inductive can (I know that Karl 
Popper, in his grave, may not agree).

So I'm trying to distinguish three kinds of systems, of which only two
are currently widely implemented. The first (already developed) are the
neural nets of the connectionists. They are hard to interface with
symbolic systems. The second are Cyc-like knowledge bases, which
excel in knowledge representation but fail in the generation of
knowledge (knowledge discovery). There's a third kind of system
(of which I can think of few example implementations) that deals with
symbols but can generate knowledge by statistical processing,
induction, analogical mapping of structures, categorization, etc.
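
To make that third kind a bit more concrete, here is a deliberately tiny,
hypothetical sketch (mine, not a description of any existing system): the
knowledge is stored as symbols, but a new, defeasible rule is generated by
induction, i.e., by counting over those symbols rather than by deduction alone.

from collections import defaultdict

# Hypothetical toy knowledge base: (subject, predicate, object) triples.
facts = [
    ("sparrow", "is_a", "bird"), ("sparrow", "can", "fly"),
    ("robin",   "is_a", "bird"), ("robin",   "can", "fly"),
    ("eagle",   "is_a", "bird"), ("eagle",   "can", "fly"),
    ("penguin", "is_a", "bird"),
]

def induce_rules(facts, min_support=0.7):
    """Propose rules of the form is_a(X) -> can(Y) when their observed frequency is high enough."""
    members = defaultdict(set)    # category -> known members
    abilities = defaultdict(set)  # entity -> what it is known to do
    for s, p, o in facts:
        if p == "is_a":
            members[o].add(s)
        elif p == "can":
            abilities[s].add(o)
    rules = []
    for category, insts in members.items():
        counts = defaultdict(int)
        for inst in insts:
            for ability in abilities[inst]:
                counts[ability] += 1
        for ability, n in counts.items():
            support = n / len(insts)
            if support >= min_support:
                rules.append((category, ability, support))
    return rules

print(induce_rules(facts))  # e.g. [("bird", "fly", 0.75)] -- a statement not present in any single fact

Nothing deep, of course, but it shows the flavor: symbols in, and a general
(defeasible) statement out that no deduction from those facts alone would license.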

Sergio Navega.



  ----- Original Message ----- 
  From: Mike Tintner 
  To: agi@v2.listbox.com 
  Sent: Tuesday, June 12, 2007 2:15 PM
  Subject: Re: [agi] Symbol Grounding


  Sergio: This is because in order to *create* knowledge
  (and it's all about self-creation, not of "external insertion"), it 
  is imperative to use statistical (inductive) methods of some sort. 
  In my way of seeing things, any architecture based solely on logical 
  (deductive) grounds is doomed to fail.

  Sergio, I liked your post, but I think you fudged the conclusion. Why not put
it more simply:

  any symbolic AGI architecture - based solely on symbols - will fail, i.e. it
will fail to create new knowledge (other than trivial logical deductions).

  Only architectures in which symbols are grounded in images can succeed
(although you can argue further that those images must in turn be grounded in a
sensorimotor system).

  To argue otherwise is to dream that you can walk on air, not on the ground
(or that you can understand walking without actually being able to walk or
having any motor capacity).

  You say that some AI researchers are still fooled here - I personally haven't
come across a single one who isn't still clinging, at least to some extent, to
the symbolic dream. No one wants to face the intellectual earthquake - the
collapse of the Berlin Wall between symbols and images - that is necessary and
inevitably coming. Can you think of anyone?

  P.S. As I finish this, another v.g. post related to all this - from Derek
Zahn:

  "Some people, especially those espousing a modular software-engineering type 
of approach seem to think that a perceptual system basically should spit out a 
token for "chair" when it sees a chair, and then a reasoning system can take 
over to reason about chairs and what you might do with them -- and further it 
is thought that the "reasoning about chairs" part is really the essence of 
intelligence, whereas chair detection is just discardable pre-processing.  My 
personal intuition says that by the time you have taken experience and boiled 
it down to a token labeled "chair" you have discarded almost everything 
important about the experience and all that is left is something that can be 
used by our logical inference systems.  And although that ability to do logical 
inference (probabilistic or pure) is a super-cool thing that humans can do, it 
is a fairly minor part of our intelligence."

  Exactly. New knowledge about chairs or walking or any other part of the
world is not created by logical manipulation of symbols in the abstract. It is
created by the exercise of "image-ination" in the broadest sense - going back
and LOOKING at chairs (either in one's mind or the real world) and observing in
sensory images those parts and/or connections between parts of things that
symbols have not yet reached/named.

  Image-ination is not peripheral to - it's the foundation (the grounding) of - 
reasoning and thought.

  P.P.S. This whole debate is analogous to, and grounded in, the debate that
Bacon had with the Scholastic philosophers. They too thought that new knowledge
could comfortably be created in one's armchair from the symbols of books, and
did not want to be dragged out into the field to confront the sensory images of
direct observation and experiment. There could only be one winner then - and
now.


