Re: [agi] Symbol Grounding

2007-06-12 Thread Sergio Navega

From: "J Storrs Hall, PhD" <[EMAIL PROTECTED]>

On Tuesday 12 June 2007 11:24:16 am David Clark wrote:

... What if models of how the world works
could be coded by "symbol grounded" humans so that, as the AGI learned, it
could test its theories and assumptions on these models without necessarily
actually having a direct connection to the real world?  Would this symbol
grounding merry-go-round still apply?


Nope, that would work fine. So would an oracle that came up with the same
program by magic (or a Solomonoff machine that did it by exhaustive search).
The key is having a valid model (i.e. one that correctly predicts the world),
not how you happened to get the model.

That said, the less contact you have with the real world, the harder it is to
come up with useful models :-)


That's certainly true, but only up to a certain level. A system
that has been "grounded" by humans would have problems reasoning
about sensorimotor activities, but would be fine reasoning in abstract
ways. Besides, if we think of this system not as an autonomous agent,
but as a cooperative helper to a human (a kind of "creative assistant"),
it can be seen as a very useful piece of machinery. I really don't
know how well this will fit into this world of ours, where google-like
things operate in a highly distributed fashion, but I still believe
that there's a competitive advantage in having our own "personal
assistant", with "local knowledge bases", that helps us come up with
good ideas. A system such as this could be seen as "interestingly
intelligent and useful" and still be eons from "human-like" intelligence.
A failure, if seen from the traditional AI perspective, but an important
step towards achieving it.

Sergio Navega.

Re: [agi] Symbol Grounding

2007-06-12 Thread Sergio Navega
Mike, we think alike, but there's a small point on which
our thoughts diverge. We agree that entirely symbolic architectures
will fail, possibly sooner than predicted by their creators. 
But we've got to be careful regarding our notion of "symbol".
If "symbol" is understood in a broad enough sense, then I don't
think I'll follow. For instance, the weight adaptation of a 
neural net during learning can be thought of as a symbolic process of 
some sort (the machine is manipulating strings of bits). This is 
different from "the real thing" (read: our brain), which is a more 
conventional dynamic physical system. That's not a "symbol", the 
way I'm using the word.

So I consider that symbolic systems (in that broad sense that includes
numeric values) can be capable of some sort of "intelligence", even if
the system is not directly fed with sensory signals (images, for 
instance). Blind humans (and particularly Helen Keller) are
examples that may demonstrate the point. 

For this to be clear, I have to say that I'm talking about "computer
intelligence", not "human-level intelligence". We are still infants
in relation to the first, and very, very far from the latter (which
will, probably, include such far-fetched things like "consciousness").

For that "computer intelligence" to work I find it necessary to
use symbolic and statistical (inductive) machinery. Logical deduction
cannot create new knowledge, only inductive can (I know that Karl 
Popper, in his grave, may not agree).

So I'm trying to discriminate three kinds of systems, with only two 
currently implemented at scale. The first (already developed) are the 
neural nets of the connectionists. They are hard to interface with 
symbolic systems. The second are Cyc-like knowledge bases, which
excel at knowledge representation but fail at the generation of
knowledge (knowledge discovery). There's a third kind of system
(of which I can think of few example implementations) that deals with
symbols but can generate knowledge by statistical processing,
induction, analogical mapping of structures, categorization, etc.
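
(As a rough, purely illustrative sketch of that third kind of system -- not any
particular implementation -- here is a toy induction step in Python: it
generalizes a rule from symbolic example facts by keeping only the attribute
values shared by all positive examples. All names and attributes are made up.)

# Toy induction over symbolic facts: generalize a concept from examples.
# Attributes shared by every positive example become the induced rule.
def induce_rule(positive_examples):
    """positive_examples: list of dicts mapping attribute -> value."""
    rule = dict(positive_examples[0])
    for example in positive_examples[1:]:
        # Drop any attribute whose value this example does not share.
        rule = {a: v for a, v in rule.items() if example.get(a) == v}
    return rule

examples = [
    {"color": "red", "shape": "round", "grows_on": "tree"},
    {"color": "green", "shape": "round", "grows_on": "tree"},
]
print(induce_rule(examples))   # {'shape': 'round', 'grows_on': 'tree'}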

Sergio Navega.




Re: [agi] Symbol Grounding

2007-06-12 Thread J Storrs Hall, PhD
On Tuesday 12 June 2007 12:49:12 pm Derek Zahn wrote:
> Often I see AGI types referring to physical embodiment as a costly sideshow 
or as something that would be nice if a team of roboticists were available.  
But really, a simple robot is trivial to build, and even a camera on a 
pan/tilt base pointed at an interesting physical location is way easier to 
build than a detailed simulation world.  

Hear, hear. It took me two days to build two successive versions of Tommy 
complete with dual Firewire cameras. Right now I'm grovelling thru swamps of 
low-level software, but that's because I'm trying to use a GPGPU on one 
machine and 8-way SMP on another and have them work together :-)

> The next objection is that "image processing" is too expensive and 
difficult.  I guess my only thought about that it doesn't inspire confidence 
in an approach if the very first layer of neural processing is too hard.  I 
suspect the real issue is that even if you do the "image processing", then 
what?  What do you do with the output?

In my approach there isn't a boundary -- the same basic algorithms get used 
for image segmentation and concept formation. One can make things as 
different as one likes by optimizing for common cases/dimensionalities, but 
I'm nowhere near far enough along to worry about that. "Premature 
optimization is the root of all evil."

Josh



Re: [agi] Symbol Grounding

2007-06-12 Thread J Storrs Hall, PhD
On Tuesday 12 June 2007 11:24:16 am David Clark wrote:
> ... What if models of how the world works
> could be coded by "symbol grounded" humans so that, as the AGI learned, it
> could test its theories and assumptions on these models without necessarily
> actually having a direct connection to the real world?  Would this symbol
> grounding merry-go-round still apply?

Nope, that would work fine. So would an oracle that came up with the same 
program by magic (or a Solomonoff machine that did it by exhaustive search). 
The key is having a valid model (i.e. one that correctly predicts the world), 
not how you happened to get the model. 

That said, the less contact you have with the real world, the harder it is to 
come up with useful models :-)

Josh



RE: [agi] Symbol Grounding

2007-06-12 Thread Derek Zahn
One last bit of rambling in addition to my last post:
 
When I assert that almost everything important gets discarded while merely 
distilling an array of rod and cone firings into a symbol for "chair", it's 
fair to ask exactly what that "other stuff" is.  Alas, I believe it is 
fundamentally impossible to tell you!
 
I have seen some people attempt to communicate it, perhaps with a phrase like 
"the play of shadow on the angle of the chair arm whose texture reminds me of 
the bus seat on that day with Julie in Madrid and the scratch on the leg which 
might be wood or might be plastic, sort of cone-like taking part of the chair's 
weight..."
 
The problem with trying to evoke the complexity and associative nature of the 
perceptual experience with a phrase like that is that every symbolist can 
easily nod and think about how all that gets encoded in their symbolic 
representation, with its nodes for bus and leg and the encoded memory of past 
events.
 
But actually, the "stuff" is not at the right level for communicating 
linguistically so the above type of description is a made-up sham, more 
misleading than revealing.
 
To the extent I have a theory about all this stuff, it's this:  animals, 
including our evolutionary forebears, have concepts much like we do.  However, 
somewhere recently in our history, something happened that greatly magnified 
our ability to use language, reason logically, and form dizzyingly abstract 
concepts.  I think it's likely that it was a single thing (or that these are 
aspects of the single thing) rather than postulating three different radical 
innovations occurring at once.  I'm not sure what that thing was, but I'd guess 
the following analogy:
 
Concepts formed in some part of the brain grew "handles" of some kind, which 
allows them to be manipulated in a flexible combinational way by some new or 
improved dynamic processing mechanism that is either unique to us or is maybe 
vastly expanded from the abilities of "lower" species.  Symbolists see the 
handles and the way they get tugged around and abstract it into combinatorial 
logics and linguistic grammars, but it doesn't do any good to tug handles 
around unless they are attached to the huge gooey blobs of mind-stuff, which 
are NOT logical or linguistic in nature.
 
I'm philosophically a "bottom upper" because I think the hard and interesting 
questions have to do with the nature of those gooey blobby concepts.  Examples 
of people who are trying to deal with that issue are Hawkins with his 
Hierarchical Temporal Memory, and Josh with his Interpolating Associative 
Memory (though those models are quite different in detail).  I don't have a 
model myself.
 
I do like to follow you "top downers" though as you do amazing things!
 


Re: [agi] Symbol Grounding

2007-06-12 Thread Mike Tintner
Sergio:This is because in order to *create* knowledge
(and it's all about self-creation, not of "external insertion"), it 
is imperative to use statistical (inductive) methods of some sort. 
In my way of seeing things, any architecture based solely on logical 
(deductive) grounds is doomed to fail.

Sergio, I liked your post, but I think you fudged the conclusion. Why not put 
it more simply?:

any symbolic AGI architecture - based solely on symbols - will fail, i.e. will 
fail to create new knowledge (other than trivial logical deductions).

Only architectures in which symbols are grounded in images can succeed 
(although you can argue further that those images must in turn be grounded in a 
sensorimotor system).

To argue otherwise is to dream that you can walk on air, not on the ground 
(or that you can understand walking without actually being able to walk or 
having any motor capacity).

You say that some AI researchers are still fooled here - I personally haven't 
come across a single one who isn't still clinging to at least some extent to 
the symbolic dream. No one wants to face the intellectual earthquake - the 
collapse of the Berlin Wall between symbols and images - that is necessary and 
inevitably coming. Can you think of anyone?

P.S. As I finish this, another v.g. post related to all  this -  from Derek 
Zahn:

"Some people, especially those espousing a modular software-engineering type of 
approach seem to think that a perceptual system basically should spit out a 
token for "chair" when it sees a chair, and then a reasoning system can take 
over to reason about chairs and what you might do with them -- and further it 
is thought that the "reasoning about chairs" part is really the essence of 
intelligence, whereas chair detection is just discardable pre-processing.  My 
personal intuition says that by the time you have taken experience and boiled 
it down to a token labeled "chair" you have discarded almost everything 
important about the experience and all that is left is something that can be 
used by our logical inference systems.  And although that ability to do logical 
inference (probabilistic or pure) is a super-cool thing that humans can do, it 
is a fairly minor part of our intelligence."

Exactly. New knowledge about chairs or walking or any other part of the world 
is not created by logical manipulation of symbols in the abstract. It is 
created by the exercise of "image-ination" in the broadest sense - going back 
and LOOKING at chairs (either in one's mind or the real world) and observing in 
sensory images those parts and/or connections between parts of things that 
symbols have not yet reached/named.

Image-ination is not peripheral to - it's the foundation (the grounding) of - 
reasoning and thought.

P.P.S. This whole debate is analogous to, and grounded in, the debate that Bacon 
had with the Scholastic philosophers. They too thought that new knowledge could 
comfortably be created in one's armchair from the symbols of books, and did 
not want to be dragged out into the field to confront the sensory images of 
direct observation and experiment. There could only be one winner then - and 
now.


Re: [agi] Symbol Grounding

2007-06-12 Thread Lukasz Stafiniak

On 6/12/07, Derek Zahn <[EMAIL PROTECTED]> wrote:


 Some people, especially those espousing a modular software-engineering type
of approach seem to think that a perceptual system basically should spit out
a token for "chair" when it sees a chair, and then a reasoning system can
take over to reason about chairs and what you might do with them -- and
further it is thought that the "reasoning about chairs" part is really the
essence of intelligence, whereas chair detection is just discardable
pre-processing.  My personal intuition says that by the time you have taken
experience and boiled it down to a token labeled "chair" you have discarded
almost everything important about the experience and all that is left is
something that can be used by our logical inference systems.


Assume that the inference systems do well. Therefore, not *that* much
information is discarded. Therefore, the inference systems have found
a workaround to collect the information about a particular "chair"
that is not directly accessible through a single token (e.g. by a
subtle context of a myriad of other tokens).



RE: [agi] Symbol Grounding

2007-06-12 Thread Derek Zahn
I think probably "AGI-curious" person has intuitions about this subject.  Here 
are mine:
 
Some people, especially those espousing a modular software-engineering type of 
approach seem to think that a perceptual system basically should spit out a 
token for "chair" when it sees a chair, and then a reasoning system can take 
over to reason about chairs and what you might do with them -- and further it 
is thought that the "reasoning about chairs" part is really the essence of 
intelligence, whereas chair detection is just discardable pre-processing.  My 
personal intuition says that by the time you have taken experience and boiled 
it down to a token labeled "chair" you have discarded almost everything 
important about the experience and all that is left is something that can be 
used by our logical inference systems.  And although that ability to do logical 
inference (probabilistic or pure) is a super-cool thing that humans can do, it 
is a fairly minor part of our intelligence.
 
Often I see AGI types referring to physical embodiment as a costly sideshow or 
as something that would be nice if a team of roboticists were available.  But 
really, a simple robot is trivial to build, and even a camera on a pan/tilt 
base pointed at an interesting physical location is way easier to build than a 
detailed simulation world.  The next objection is that "image processing" is 
too expensive and difficult.  I guess my only thought about that it doesn't 
inspire confidence in an approach if the very first layer of neural processing 
is too hard.  I suspect the real issue is that even if you do the "image 
processing", then what?  What do you do with the output?
 
Ignoring those issues -- inventing a way of representing and manipulating 
"knowledge", and assuming that sensory processes can create those data 
structures if built properly -- can work IF it turns out that brains are just 
really really bad at being "intelligent".  That is, if the extreme tip of the 
evolutionary iceberg (some thousands of generations of lightly-populated 
species) finally stumbled on the fluid symbol-manipulating abilities that 
define intelligence, and the rest of the historical structures are only mildly 
more important than organs that pump blood -- if that's true, thinking about 
all this low-level grunk is a waste of time.  I actually hope that it's true, 
but I doubt it.  To the first people who had the ability to code our magical 
symbol processing abilities on a machine, it must have seemed like an exciting 
theory.
 
 
 


Re: [agi] Symbol Grounding

2007-06-12 Thread James Ratcliff


David Clark <[EMAIL PROTECTED]> wrote: - Original Message - 
From: "J Storrs Hall, PhD" 
To: 
Sent: Tuesday, June 12, 2007 4:48 AM
Subject: Re: [agi] Symbol Grounding


> Here's how Harnad defines it in his original paper:
>
> " My own example of the symbol grounding problem has two versions, one
> difficult, and one, I think, impossible. The difficult version is: Suppose
> you had to learn Chinese as a second language and the only source of
> information you had was a Chinese/Chinese dictionary. The trip through the
> dictionary would amount to a merry-go-round, passing endlessly from one
> meaningless symbol or symbol-string (the definientes) to another (the
> definienda), never coming to a halt on what anything meant.
>  The only reason cryptologists of ancient languages and secret codes seem to
> be able to successfully accomplish something very like this is that their
> efforts are grounded in a first language and in real world experience and
> knowledge. The second variant of the Dictionary-Go-Round, however, goes far
> beyond the conceivable resources of cryptology: Suppose you had to learn
> Chinese as a first language and the only source of information you had was a
> Chinese/Chinese dictionary! This is more like the actual task faced by a
> purely symbolic model of the mind: How can you ever get off the symbol/symbol
> merry-go-round? How is symbol meaning to be grounded in something other than
> just more meaningless symbols? This is the symbol grounding problem."

What if instead of having a Chinese/Chinese dictionary, you had a human
being(s) that is/are grounded in the real world to teach you about Chinese?
I would argue that even human beings require other intelligent human beings
to become intelligent themselves.  What if models of how the world works
could be coded by "symbol grounded" humans so that, as the AGI learned, it
could test its theories and assumptions on these models without necessarily
actually having a direct connection to the real world?  Would this symbol
grounding merry-go-round still apply?

The real world connection and video image to symbol translation doesn't
necessarily need to come first!

Even with a good connection to the real world, I find it hard to believe
that the relationships between things in the real world (the models) will be
divinable any time soon.

David Clark
That goes along with my theory to date, most "grounding" that I will be giving 
the AGI is in the form of basic primitives given by humans or encoded 
originally, and allowing them to build on these grounding principles, rather 
than being able to directly experience them. Then the rest of the 
experiences are low-level variety try-test in a virtual environment. 
  Yeah directly trying to learn anything from real world vision and senses is 
still too hard at this stage.

From some other articles I was reading about Symbol Grounding, it appeared 
that many were saying, rather, that the symbols must be specifically grounded 
in a simple, semi-atomic type of object in the real world, not necessarily that 
it had to be directly experienced by the AGI.  So if we give a full and as 
complete as possible description of an "Apple" object, then we have in fact 
grounded it, by saying what its size and shape and properties are.  It 
reflects a real world object now that can be differentiated from a horse or 
whatever other object.  This type of grounding can be assisted by a human 
teacher.
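
(A hedged sketch of that kind of description-based grounding, with made-up
property names and values -- not a claim about how any existing AGI encodes it:)

# A hand-coded property frame standing in for direct experience of an apple.
apple = {
    "kind": "fruit",
    "shape": "roughly spherical",
    "size_cm": (6, 9),            # typical diameter range
    "colors": {"red", "green", "yellow"},
    "mass_g": (70, 250),
}

horse = {
    "kind": "animal",
    "shape": "quadruped",
    "size_cm": (140, 180),        # height at the withers
    "colors": {"brown", "black", "white", "grey"},
    "mass_g": (380000, 600000),
}

def differs(a, b):
    """Properties on which two described objects can be told apart."""
    return {k for k in a if k in b and a[k] != b[k]}

print(differs(apple, horse))      # the frames alone separate apple from horse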

James Ratcliff


___
James Ratcliff - http://falazar.com
Looking for something...
   

Re: [agi] Symbol Grounding

2007-06-12 Thread Sergio Navega
Harnad's symbol grounding paper has been criticized at
times, but it remains a seminal idea. The problem faced
by many traditional artificial cognition systems is the exclusive reliance 
on arbitrary symbols, such as linguistic inputs. That approach
is appealing, and has fooled (it still fools) many researchers 
in the field. But it is very difficult to associate intelligent 
behavior with the manipulation of arbitrary (amodal) symbols. Another
way of seeing this is by reading about Lawrence Barsalou's 
"perceptual symbol systems". Symbolic architectures could survive
if one thinks about using symbols that maintain some properties
of the proximal sensory data captured by the agent. That would
allow the "scaffolding" of such symbols with no danger of
incurring the problems reported by Harnad. And that also
means that the architecture must have some kind of "statistical
layer" capable of creating symbols (and of fading inappropriate 
ones to extinction). This happens, for instance, with blind
humans, who are living examples of this possibility.
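
(To make the "statistical layer" idea concrete, here is a minimal, purely
illustrative sketch in Python: perceptual symbols are prototypes formed online
from sensory feature vectors, and symbols that stop being reinforced fade to
extinction. None of this comes from Harnad or Barsalou; the thresholds and
update rules are arbitrary.)

# Minimal "statistical layer": online prototype formation with decay.
class StatisticalLayer:
    def __init__(self, match_dist=1.0, decay=0.95, min_strength=0.1):
        self.symbols = []                  # each entry: [prototype_vector, strength]
        self.match_dist = match_dist
        self.decay = decay
        self.min_strength = min_strength

    def observe(self, vec):
        # Decay every symbol; unused ones drift toward extinction.
        for s in self.symbols:
            s[1] *= self.decay
        # Reinforce the nearest symbol if it is close enough, else create one.
        for s in self.symbols:
            if sum((a - b) ** 2 for a, b in zip(s[0], vec)) ** 0.5 < self.match_dist:
                s[0] = [(a + b) / 2 for a, b in zip(s[0], vec)]   # keep modal properties
                s[1] += 1.0
                break
        else:
            self.symbols.append([list(vec), 1.0])
        # Fade inappropriate (rarely reinforced) symbols out of the inventory.
        self.symbols = [s for s in self.symbols if s[1] > self.min_strength]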

So now commenting on Waser's question, one may be able to build
a system that has "symbolic anchors" instead of real statistical
experience (the kind that is directly derived from sensory inputs).
However, that doesn't preclude the use of statistical methods later 
in the architecture. This is because in order to *create* knowledge
(and it's all about self-creation, not "external insertion"), it 
is imperative to use statistical (inductive) methods of some sort. 
In my way of seeing things, any architecture based solely on logical 
(deductive) grounds is doomed to fail.

Sergio Navega.
  

  - Original Message - 
  From: Mark Waser 
  To: agi@v2.listbox.com 
  Sent: Tuesday, June 12, 2007 9:33 AM
  Subject: Re: [agi] Symbol Grounding


  >> a question is whether a software program could tractably learn language 
without such associations, by relying solely on statistical associations within 
texts. 

  Isn't there an alternative (or middle ground) of starting the software 
program with a seed of initial structure and then letting it grow from there 
(rather than relying only on statistical associations -- which I believe will 
be intractable for quite some time).



Re: [agi] Symbol Grounding

2007-06-12 Thread David Clark
- Original Message - 
From: "J Storrs Hall, PhD" <[EMAIL PROTECTED]>
To: 
Sent: Tuesday, June 12, 2007 4:48 AM
Subject: Re: [agi] Symbol Grounding


> Here's how Harnad defines it in his original paper:
>
> " My own example of the symbol grounding problem has two versions, one
> difficult, and one, I think, impossible. The difficult version is: Suppose
> you had to learn Chinese as a second language and the only source of
> information you had was a Chinese/Chinese dictionary. The trip through the
> dictionary would amount to a merry-go-round, passing endlessly from one
> meaningless symbol or symbol-string (the definientes) to another (the
> definienda), never coming to a halt on what anything meant.
>  The only reason cryptologists of ancient languages and secret codes seem to
> be able to successfully accomplish something very like this is that their
> efforts are grounded in a first language and in real world experience and
> knowledge. The second variant of the Dictionary-Go-Round, however, goes far
> beyond the conceivable resources of cryptology: Suppose you had to learn
> Chinese as a first language and the only source of information you had was a
> Chinese/Chinese dictionary! This is more like the actual task faced by a
> purely symbolic model of the mind: How can you ever get off the symbol/symbol
> merry-go-round? How is symbol meaning to be grounded in something other than
> just more meaningless symbols? This is the symbol grounding problem."

What if instead of having a Chinese/Chinese dictionary, you had a human
being(s) that is/are grounded in the real world to teach you about Chinese?
I would argue that even human beings require other intelligent human beings
to become intelligent themselves.  What if models of how the world works
could be coded by "symbol grounded" humans so that, as the AGI learned, it
could test its theories and assumptions on these models without necessarily
actually having a direct connection to the real world?  Would this symbol
grounding merry-go-round still apply?

The real world connection and video image to symbol translation doesn't
necessarily need to come first!

Even with a good connection to the real world, I find it hard to believe
that the relationships between things in the real world (the models) will be
divinable any time soon.

David Clark




Re: [agi] Symbol Grounding

2007-06-12 Thread Lukasz Stafiniak

On 6/12/07, Mark Waser <[EMAIL PROTECTED]> wrote:



>> a question is whether a software program could tractably learn language
without such associations, by relying solely on statistical associations
within texts.

Isn't there an alternative (or middle ground) of starting the software
program with a seed of initial structure and then letting it grow from there
(rather than relying only on statistical associations -- which I believe
will be intractable for quite some time).


It is at least conceivable. The idea is that you give the system
reasonable means to build models (= simulations). The "initial
structure" lets the system build approximate models for at least some
minimal, but not isolated, set of texts. The system should then have
some explorative means to build new, more complicated models (the hard
part). The model extension should be guided by parts of (partially or
approximately) interpretable texts that "do not quite fit" (e.g. have
uninterpreted words). The extensions are then evaluated by their
"predictive" characteristics (how much new text can be consistently
interpreted in them).
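
(A minimal pseudocode-style sketch of that loop in Python; the coverage scoring
and propose_extensions are placeholders I invented, not part of any existing
system:)

# Grow the model toward texts that "do not quite fit", keeping extensions
# that raise how much text can be consistently interpreted.
def coverage(model, text):
    """Fraction of the text's words the model can interpret (0..1)."""
    words = text.split()
    return sum(1 for w in words if w in model) / max(len(words), 1)

def propose_extensions(model, misfits):
    """Placeholder: guess an interpretation for each uninterpreted word."""
    new_words = {w for t in misfits for w in t.split() if w not in model}
    return [dict(model, **{w: "guessed-meaning"}) for w in new_words] or [model]

def grow_model(model, texts, rounds=10):
    for _ in range(rounds):
        misfits = [t for t in texts if 0 < coverage(model, t) < 1.0]
        if not misfits:
            break
        best = max(propose_extensions(model, misfits),
                   key=lambda m: sum(coverage(m, t) for t in texts))
        if sum(coverage(best, t) for t in texts) <= sum(coverage(model, t) for t in texts):
            break
        model = best
    return model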

Also, have a look at Polyscheme, etc.



Re: [agi] Symbol Grounding

2007-06-12 Thread Mark Waser
>> a question is whether a software program could tractably learn language 
>> without such associations, by relying solely on statistical associations 
>> within texts. 

Isn't there an alternative (or middle ground) of starting the software program 
with a seed of initial structure and then letting it grow from there (rather 
than relying only on statistical associations -- which I believe will be 
intractable for quite some time).


Re: [agi] Symbol Grounding

2007-06-12 Thread J Storrs Hall, PhD
On Monday 11 June 2007 09:47:38 pm James Ratcliff wrote:
> 
> "J Storrs Hall, PhD" <[EMAIL PROTECTED]> wrote: On Monday 11 June 2007 
08:12:08 pm James Ratcliff wrote:
> > 1. Is anyone taking an approach to AGI without the use of Symbol 
Grounding?
> 
> You'll have to go into that a bit more for me please.

Here's how Harnad defines it in his original paper:

" My own example of the symbol grounding problem has two versions, one 
difficult, and one, I think, impossible. The difficult version is: Suppose 
you had to learn Chinese as a second language and the only source of 
information you had was a Chinese/Chinese dictionary. The trip through the 
dictionary would amount to a merry-go-round, passing endlessly from one 
meaningless symbol or symbol-string (the definientes) to another (the 
definienda), never coming to a halt on what anything meant.
 The only reason cryptologists of ancient languages and secret codes seem to 
be able to successfully accomplish something very like this is that their 
efforts are grounded in a first language and in real world experience and 
knowledge. The second variant of the Dictionary-Go-Round, however, goes far 
beyond the conceivable resources of cryptology: Suppose you had to learn 
Chinese as a first language and the only source of information you had was a 
Chinese/Chinese dictionary! This is more like the actual task faced by a 
purely symbolic model of the mind: How can you ever get off the symbol/symbol 
merry-go-round? How is symbol meaning to be grounded in something other than 
just more meaningless symbols? This is the symbol grounding problem."

The reason this doesn't apply to AI the way philosophers tend to think it does 
is that there is a difference between a dictionary and a computer (or any 
other working machine): the computer has *mechanism* which can act out 
semantic primitives *by itself*. Thus the recursive construction of meaning 
does have a terminating base case.
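
(A toy illustration of that base case -- my own, not from Harnad's paper:
definitions chase other definitions, but the chase terminates in a primitive
the machine can actually execute.)

# Symbols defined in terms of other symbols, bottoming out in executable code.
PRIMITIVES = {"succ": lambda n: n + 1, "zero": lambda: 0}

DEFINITIONS = {
    "one": ("succ", "zero"),       # one = succ(zero)
    "two": ("succ", "one"),        # two = succ(one)
}

def meaning(symbol):
    """Resolve a symbol; the base case is a primitive the machine runs."""
    if symbol == "zero":
        return PRIMITIVES["zero"]()        # terminating base case: mechanism, not more text
    op, arg = DEFINITIONS[symbol]
    return PRIMITIVES[op](meaning(arg))

print(meaning("two"))   # 2 -- the recursion halts because something actually executes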
> 
> ... There's a whole raft of 
> philosophical conundrums (qualia among them) that simply evaporate if you 
take the systems approach to AI and say "we're going to build a machine that 
does this kind of thing, and we're going to assume that the human brain is 
such a machine as well."
> 
> In what way?  I try to edge around most of the fuzzy, magic points of 
philosophy and just get to what needs to be programmed.

Good -- that's exactly what I was urging. DON'T try to get into the 
philosophical end of it -- you'll argue for 3000 years and come to no useful 
conclusions.

> On the other hand, the trend to building robots in AI can be a valuable tool 
to keep oneself from doing the hard part of the problem in preparing the 
input for the program, thus fooling oneself into thinking the program has 
solved a harder problem than it has.
> 
> What is the "hard part of the problem in preparing the input for the 
program"?

Forall u: place(u) implies can(monkey, move(monkey, box, u))

can(monkey, climbs(monkey, box))

place(under(bananas))

at(box, under(bananas)) and on(monkey, box) implies can(monkey, reach(monkey, bananas))

Forall p forall x: reach(p,x) implies cause(has(p,x))

etc.

The "feet of clay" stage is to put this problem into a computer in symbolic 
predicate logic. The hard part is going from a video/audio stream that would 
represent the monkey's experience to the rules that represent the monkey's 
model of how the world works.
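
(For concreteness, here is a toy forward-chaining pass over hand-instantiated
facts of roughly that shape -- my own illustrative encoding, not the formalism
above. It shows how little is left to do once the rules have been hand-coded,
which is exactly the "feet of clay" point.)

# Hand-coded facts and rules; the program merely chains them to a fixed point.
facts = {"place(under(bananas))", "at(box, under(bananas))", "on(monkey, box)"}

rules = [
    # (preconditions, conclusion)
    ({"place(under(bananas))"}, "can(monkey, move(monkey, box, under(bananas)))"),
    ({"at(box, under(bananas))", "on(monkey, box)"}, "can(monkey, reach(monkey, bananas))"),
    ({"can(monkey, reach(monkey, bananas))"}, "has(monkey, bananas)"),
]

changed = True
while changed:                       # naive forward chaining
    changed = False
    for pre, concl in rules:
        if pre <= facts and concl not in facts:
            facts.add(concl)
            changed = True

print("has(monkey, bananas)" in facts)   # True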

Josh



Re: [agi] Symbol Grounding

2007-06-12 Thread Benjamin Goertzel

"Symbol grounding" basically means the association of linguistic tokens
(words, linguistic concepts, etc.) with nonlinguistic (e.g.
perceptual-motor)
patterns.

E.g. associating the word "apple" with a set of visual images of apples, or
associating (some sense of) the word "from" with a set of remembered
episodes illustrating
that sense of "from-ness" ...

Clearly human language learning relies to a large extent on these kinds of
associations; a question is whether a software program could tractably learn
language without such associations, by relying solely on statistical
associations within texts.  (Note the word "tractably" in the prior sentence
-- I assume that given a sufficiently large corpus, language could be
learned solely from statistical text analysis.  But if "sufficiently large"
means octillions of documents, that doesn't matter until we get access to
the Galactic Empire's online library...)

Cyc would be an example of an AGI project that is trying to get deep
language understanding and general intelligence without any kind of symbol
grounding.

-- Ben G

On 6/11/07, James Ratcliff <[EMAIL PROTECTED]> wrote:


1. Is anyone taking an approach to AGI without the use of Symbol
Grounding?
Or is that intrinsic in everyone's approaches at this stage?
(short of some Neural Network approaches)

2. How do you describe Symbol Grounding for an AGI?
What do you consider the best ways to have the system get Symbol
Grounding?

James

*Joshua Fox <[EMAIL PROTECTED]>* wrote:

Josh,

Thanks for that answer on the layering of mind.


> It's not that any existing level is wrong, but there aren't enough of
them, so
> that the higher ones aren't being built on the right primitives in
current
> systems. Word-level concepts in the mind are much more elastic and
plastic
> than logic tokens.

Could I ask also that you take a stab at a psychological/sociological
question:  Why have not the leading minds of AI (considering for this
purpose only the true creative thinkers with status in the community,
however small a fraction that may be) taken a sufficiently multi-layered,
grounded  approach up to now? Isn't the need for grounding and deep-layering
obvious to the most open-minded and intelligent of researchers?

Joshua




___
James Ratcliff - http://falazar.com
Looking for something...





Re: [agi] Symbol Grounding

2007-06-11 Thread James Ratcliff


"J Storrs Hall, PhD" <[EMAIL PROTECTED]> wrote: On Monday 11 June 2007 08:12:08 
pm James Ratcliff wrote:
> 1. Is anyone taking an approach to AGI without the use of Symbol Grounding?

You'll have to go into that a bit more for me please.

Symbol grounding is something of a red herring. There's a whole raft of 
philosophical conundrums (qualia among them) that simply evaporate if you take 
the systems approach to AI and say "we're going to build a machine that does 
this kind of thing, and we're going to assume that the human brain is such a 
machine as well."

In what way?  I try to edge around most of the fuzzy, magic points of 
philosophy and just get to what needs to be programmed.

On the other hand, the trend to building robots in AI can be a valuable tool to 
keep oneself from doing the hard part of the problem in preparing the input for 
the program, thus fooling oneself into thinking the program has solved a harder 
problem than it has.

What is the "hard part of the problem in preparing the input for the program"?

Josh




___
James Ratcliff - http://falazar.com
Looking for something...
   

Re: [agi] Symbol Grounding

2007-06-11 Thread J Storrs Hall, PhD
On Monday 11 June 2007 08:12:08 pm James Ratcliff wrote:
> 1. Is anyone taking an approach to AGI without the use of Symbol Grounding?


Symbol grounding is something of a red herring. There's a whole raft of 
philosophical conundrums (qualia among them) that simply evaporate if you 
take the systems approach to AI and say "we're going to build a machine that 
does this kind of thing, and we're going to assume that the human brain is 
such a machine as well."

On the other hand, the trend to building robots in AI can be a valuable tool 
to keep oneself from doing the hard part of the problem in preparing the 
input for the program, thus fooling oneself into thinking the program has 
solved a harder problem than it has.

Josh



Re: [agi] Symbol Grounding

2007-06-11 Thread James Ratcliff
1. Is anyone taking an approach to AGI without the use of Symbol Grounding?
Or is that intrinsic in everyone's approaches at this stage?
(short of some Neural Network approaches)

2. How do you describe Symbol Grounding for an AGI?
What do you consider the best ways to have the system get Symbol Grounding?

James

Joshua Fox <[EMAIL PROTECTED]> wrote: Josh,

Thanks for that answer on the layering of mind.


> It's not that any existing level is wrong, but there aren't enough of them, so
> that the higher ones aren't being built on the right primitives in current 
> systems. Word-level concepts in the mind are much more elastic and plastic
> than logic tokens.

Could I ask also that you take a stab at a psychological/sociological question: 
 Why have not the leading minds of AI (considering for this purpose only the 
true creative thinkers with status in the community, however small a fraction 
that may be) taken a sufficiently multi-layered, grounded  approach up to now? 
Isn't the need for grounding and deep-layering obvious to the most open-minded 
and intelligent of researchers? 

Joshua


 


___
James Ratcliff - http://falazar.com
Looking for something...
   

Re: [agi] Symbol grounding (was Failure scenarios)

2006-09-26 Thread YKY (Yan King Yin)

> > > My guess at a good basis for KR is simply the cleanest, most powerful, and
> > > most general programming language I can come up with. That's because to learn
> > > new concepts and really understand them, the AI will have to do the
> > > equivalent of writing recognizers, simulators, experiment generators, etc,
> > > for the phenomenon in question. I propose to give it the same tools I'd want
> > > if I were doing the same job.
> > [...]
> The crux of AI is in the learning algorithms; any KR that is
> sufficiently general and is compatible with sufficiently powerful
> learning algorithms will do...

In fact, the expressiveness of the KR is inversely related to the efficiency of learning in that KR.  In other words, there is an expressiveness vs learning efficiency trade-off.

 
Too expressive --> too little inductive bias --> inefficient learning.
Too much inductive bias --> KR not expressive enough --> some knowledge/facts cannot be represented.
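
(One standard way to quantify this trade-off -- an aside of mine, not YKY's
formulation -- is the realizable-case PAC sample-complexity bound for a
consistent learner, m >= (1/eps) * (ln|H| + ln(1/delta)): a more expressive KR
means a larger hypothesis space H, and hence more examples needed.)

from math import log, ceil

def pac_sample_bound(hypothesis_space_size, eps=0.1, delta=0.05):
    """Examples sufficient for a consistent learner (realizable PAC bound)."""
    return ceil((log(hypothesis_space_size) + log(1 / delta)) / eps)

# A more expressive KR has a (vastly) larger hypothesis space:
print(pac_sample_bound(10 ** 3))     # strong inductive bias -> 100 examples
print(pac_sample_bound(10 ** 12))    # expressive KR         -> 307 examples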
 
YKY



Re: [agi] Symbol grounding (was Failure scenarios)

2006-09-26 Thread Ben Goertzel

> My guess at a good basis for KR is simply the cleanest, most powerful, and
> most general programming language I can come up with. That's because to learn
> new concepts and really understand them, the AI will have to do the
> equivalent of writing recognizers, simulators, experiment generators, etc,
> for the phenomenon in question. I propose to give it the same tools I'd want
> if I were doing the same job.


Yes, that's about right...

Novamente's KR for procedural knowledge is a cute little language
called Combo, which is sorta like LISP with a bad haircut and with
extra in-built operators corresponding to some Novamente cognitive
operations like inference

Novamente's KR for declarative knowledge consists of probabilistic
relationships (simple and complex ones); and, there are mechanisms to
convert btw procedural and declarative knowledge...

The crux of AI is in the learning algorithms; any KR that is
sufficiently general and is compatible with sufficiently powerful
learning algorithms will do...
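
(For readers unfamiliar with the idea, here is a tiny, generic stand-in for a
weighted, labeled hypergraph KR -- illustrative only, with invented link-type
names, and not Novamente's actual data structures:)

# Typed links over one or more nodes, each carrying (strength, confidence).
links = []                            # (link_type, nodes, strength, confidence)

def add_link(link_type, nodes, strength, confidence):
    links.append((link_type, tuple(nodes), strength, confidence))

add_link("is_a", ("cat", "animal"), 0.95, 0.9)
add_link("relation", ("chases", "cat", "mouse"), 0.8, 0.7)   # a 3-ary hyperedge

def query(link_type, node):
    return [l for l in links if l[0] == link_type and node in l[1]]

print(query("is_a", "cat"))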

Ben G



Re: [agi] Symbol grounding (was Failure scenarios)

2006-09-26 Thread Gregory Johnson
Alas, the poor AI will only have the internet, the grid, and anything accessible
therefrom, much like an alien studying Earth from 50 light years away will only
have radio and TV signals.

To be able to really learn, an AI must have some covert connection directly to
human neural systems and the proper interface to tinker and poke about.

That, and real-time programmer links 24/7, as a "help desk". 

A bit off topic, but bootstrapping is likely how the first AI will 
learn.  Perhaps it will then first design better systems to increase its capability to learn.

Perhaps the military internet/grid links to "supersoldiers" will be how the first
near-AI learns how to upgrade itself to become a fully functional AI.

My question then is: if the "3 laws of robotics a la Isaac Asimov"
are coded into the source code of the pre-AI in order to keep it
"friendly", then it will have to first solve the conundrum of obeying
the 3 laws while still
being the support system for human killing machines without becoming a schizophrenic AI.

Morris






Re: [agi] Symbol grounding (was Failure scenarios)

2006-09-26 Thread J. Storrs Hall, PhD.
On Monday 25 September 2006 21:11, Ben Goertzel wrote:
> [Harnad]
> Suppose you had to learn Chinese as a first language and the only
> source of information you had was a Chinese/Chinese dictionary![8]
> ...
> The standard reply of the symbolist (e.g., Fodor 1980, 1985) is that
> the meaning of the symbols comes from connecting the symbol system to
> the world "in the right way." 
> ...

I think that both Harnad and Fodor miss a bet. A moth flying around a light 
bulb acts like a clueless GOFAI system -- and its symbols are fully grounded 
in sensors and effectors. What's happened is that it's programmed with a 
simplistic heuristic (solar navigation) that fails in a world containing 
lightbulbs. What it's missing is just what we're looking for: a model that it 
can expand to understand new things in the world.

> My conjecture is that a probabilistic weighted, labeled hypergraph --
> with an appropriate collection of node/link types -- is a sufficiently
> and appropriately flexible KR format, which can be made to adapt
> itself based on the data within it, via coupling it with a careful
> combination of evolutionary and inferential learning mechanisms...

The solution to the symbol grounding problem is actually pretty simple. A 
computer is not a dictionary; it's a machine. Its "understanding" of 
arithmetic is not based on chasing definitions back forever. If it were, it 
would be in an infinite recursive loop. Instead it has primitives that have 
semantics because when the primitives add 2 and 2 they get 4. 

The Wright brothers built a glider that didn't fly very well. (Lift/drag of 2 
or thereabouts.) They came home and built a wind tunnel, did experiments, and 
next year's glider was immensely better (l/d over 10). How can a box in 
Dayton have semantics to answer important questions about hillsides in Kitty 
Hawk? Because it was part of the real world? No, no, no, no, no. It was 
because it was a *working model* and it gave the *right answer*. If they had 
had a Cray running fluid dynamics codes they would have gotten the *same 
answer* and thus the Cray would have had the same semantics.

For an AI program to have semantics about the world (or any part of it) it has 
to contain a working model. Its input and output mechanisms need to recognize 
the thing, whatever it is, but that's secondary. (and unfortunately that's 
ALL that current-practice machine learning is working on.) But the AI's model 
needs to be able to simulate, predict, facilitate the description of possible 
or likely variants, and so forth.
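
(A toy "working model" in that sense -- purely illustrative, nothing to do with
the wind tunnel or with any particular KR: it earns its semantics by predicting
correctly, not by touching the thing it models.)

# A working model of free fall: it can simulate, predict, and describe variants.
G = 9.81  # m/s^2

def fall_time(height_m):
    """Predict how long an object dropped from height_m takes to land."""
    return (2 * height_m / G) ** 0.5

def simulate(height_m, dt=0.01):
    """Step-by-step simulation of the same situation."""
    h, v, t = height_m, 0.0, 0.0
    while h > 0:
        v += G * dt
        h -= v * dt
        t += dt
    return t

print(round(fall_time(20.0), 2))   # about 2.0 s, closed-form prediction
print(round(simulate(20.0), 2))    # about 2.0 s -- the simulation agrees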

My guess at a good basis for KR is simply the cleanest, most powerful, and 
most general programming language I can come up with. That's because to learn 
new concepts and really understand them, the AI will have to do the 
equivalent of writing recognizers, simulators, experiment generators, etc, 
for the phenomenon in question. I propose to give it the same tools I'd want 
if I were doing the same job.

Josh
