Re: [agi] Re: Language learning

2008-04-23 Thread YKY (Yan King Yin)
There is no doubt that learning new languages at an older age is much
more difficult than at a younger age.  I wonder if there are hard
computational constraints that we must observe in order for the
learning algorithm to be tractable.  Perhaps sensory / linguistic
learning should be most intense during the earliest stage of AGI
knowledge acquisition, with the emphasis shifting to other cognitive
areas later.

YKY



[agi] Other AGI-like communities

2008-04-23 Thread Joshua Fox
To return to the old question of why AGI research seems so rare, Samsonovich
et al. say (
http://members.cox.net/alexei.v.samsonovich/samsonovich_workshop.pdf)

'In fact, there are several scientific communities pursuing the same or
similar goals, each unified under their own unique slogan: machine /
artificial consciousness, human-level intelligence, embodied cognition,
situation awareness, artificial general intelligence, commonsense
reasoning, qualitative reasoning, strong AI, biologically inspired
cognitive architectures (BICA), computational consciousness,
bootstrapped learning, etc. Many of these communities do not recognize
each other.'

Could this be the case: that there are many investigators outside the AGI
community who share the goals, and many of the methods, of AGI-ers?

Joshua



Re: [agi] Re: Language learning

2008-04-23 Thread Vladimir Nesov
On Wed, Apr 23, 2008 at 10:55 AM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:
 There is no doubt that learning new languages at an older age is much
  more difficult than at a younger age.  I wonder if there are hard
  computational constraints that we must observe in order for the
  learning algorithm to be tractable.  Perhaps sensory / linguistic
  learning should be most intense during the earliest stage of AGI
  knowledge acquisition, with the emphasis shifting to other cognitive
  areas later.


Can you give some reasonable references that support this position? It
doesn't seem obvious that it's indeed the case, and it's probably
difficult to come up with an adequate way of comparing adult learning
with child learning. An adult can learn a new language to an adult
level in several months of full-time effort, while it takes children
much longer to get there.

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-23 Thread J Storrs Hall, PhD
On Tuesday 22 April 2008 01:22:14 pm, Richard Loosemore wrote:

 The solar system, for example, is not complex:  the planets move in 
 wonderfully predictable orbits.

http://space.newscientist.com/article/dn13757-solar-system-could-go-haywire-before-the-sun-dies.html?feedId=online-news_rss20

How will life on Earth end? The answer, of course, is unknown, but two new 
studies suggest a collision with Mercury or Mars could doom life long before 
the Sun swells into a red giant and bakes the planet to a crisp in about 5 
billion years.
The studies suggest that the solar system's planets will continue to orbit 
the Sun stably for at least 40 million years. But after that, they show there 
is a small but not insignificant chance that things could go terribly awry.



Re: [agi] Other AGI-like communities

2008-04-23 Thread Pei Wang
As usual, it is a matter of degree --- each of the communities Alexei
listed has some similarity with AGI in its research goals and the
techniques explored, but at the same time there are noticeable
differences in assumptions and focus, which are not merely
differences in name.

Given what is going on, we can expect closer relationships among these
communities in the future, though probably not a complete merger very
soon.

Pei



Re: [agi] Other AGI-like communities

2008-04-23 Thread Ben Goertzel
On Wed, Apr 23, 2008 at 5:21 AM, Joshua Fox [EMAIL PROTECTED] wrote:

 To return to the old question of why AGI research seems so rare, Samsonovich
 et al. say
 (http://members.cox.net/alexei.v.samsonovich/samsonovich_workshop.pdf)

 'In fact, there are several scientific communities pursuing the same or
 similar goals, each unified under their own unique slogan: machine /
 artificial consciousness, human-level intelligence, embodied cognition,
 situation awareness, artificial general intelligence, commonsense
 reasoning, qualitative reasoning, strong AI, biologically inspired
 cognitive architectures (BICA), computational consciousness,
 bootstrapped learning, etc. Many of these communities do not recognize
 each other.'

I believe these various academic subcommunities ARE quite aware of each
other.

And I would divide them into two categories:

1)
Those that are concerned with rather specialized approaches to
intelligence, e.g. qualitative reasoning, commonsense reasoning, etc.

2)
Those that do not really constitute a coherent research community,
e.g. BICA, human-level AI ... but rather merely amount to a few
assorted workshops, journal special issues, etc.

-- Ben



Re: [agi] Other AGI-like communities

2008-04-23 Thread Ben Goertzel
On Wed, Apr 23, 2008 at 11:29 AM, Mike Tintner [EMAIL PROTECTED] wrote:
 Ben/Joshua:

  How do you think the AI and AGI fields relate to the embodied & grounded
 cognition movements in cog. sci? My impression is that the majority of
 people here (excluding you) still have only limited awareness of them -
 are still operating in total & totally doomed defiance of their findings:

My opinion is that the majority of people here are aware of these
ideas, and consider them unproven speculations not agreeing with their
own intuition ;-)

  Grounded cognition rejects traditional views that cognition is computation
  on amodal symbols in a modular system, independent of
  the brain's modal systems for perception, action, and introspection.
  Instead, grounded cognition proposes that modal simulations,
  bodily states, and situated action underlie cognition.  Barsalou

  Grounded cognition here obviously means not just pointing at things, but
 that all traditional rational operations are, and have to be, supported by
 image-inative simulation in any form of general intelligence.

I wouldn't agree with such a strong statement.  I think the grounding
of ratiocination in image-ination is characteristic of human
intelligence, and must thus be characteristic of any highly human-like
intelligent system ... but I don't see any reason to believe it's the
ONLY path.

The minds we know or can imagine almost surely constitute a
teeny-tiny little backwater of the overall space of possible minds ;-)

-- Ben G



Re: [agi] Other AGI-like communities

2008-04-23 Thread Mike Tintner

Ben/Joshua:

How do you think the AI and AGI fields relate to the embodied & grounded
cognition movements in cog. sci? My impression is that the majority of
people here (excluding you) still have only limited awareness of them -
are still operating in total & totally doomed defiance of their findings:

"Grounded cognition rejects traditional views that cognition is computation
on amodal symbols in a modular system, independent of
the brain's modal systems for perception, action, and introspection.
Instead, grounded cognition proposes that modal simulations,
bodily states, and situated action underlie cognition."  -- Barsalou

Grounded cognition here obviously means not just pointing at things, but
that all traditional rational operations are, and have to be, supported by
image-inative simulation in any form of general intelligence.





Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-23 Thread Richard Loosemore

J Storrs Hall, PhD wrote:

On Tuesday 22 April 2008 01:22:14 pm, Richard Loosemore wrote:

The solar system, for example, is not complex:  the planets move in 
wonderfully predictable orbits.


http://space.newscientist.com/article/dn13757-solar-system-could-go-haywire-before-the-sun-dies.html?feedId=online-news_rss20

How will life on Earth end? The answer, of course, is unknown, but two new 
studies suggest a collision with Mercury or Mars could doom life long before 
the Sun swells into a red giant and bakes the planet to a crisp in about 5 
billion years.
The studies suggest that the solar system's planets will continue to orbit 
the Sun stably for at least 40 million years. But after that, they show there 
is a small but not insignificant chance that things could go terribly awry.


I am confused about the intended message.

If you take the above quote from me in its original context, your 
illustration perfectly supports what I said, but with that one paragraph 
taken out of context it looks as if you are trying to contradict it.



Richard Loosemore



Re: [agi] Re: Language learning

2008-04-23 Thread J. Andrew Rogers


On Apr 22, 2008, at 11:55 PM, YKY (Yan King Yin) wrote:

There is no doubt that learning new languages at an older age is much
more difficult than at a younger age.



I seem to recall that recent research does not support this  
assertion.  Rate of language learning is essentially the same for both  
adults and children and is a function of the amount of time spent  
trying to learn it.  The apparent absolute differences in rate of  
learning turned out to be attributable to adults spending a smaller  
percentage of their time learning a new language than children on  
average, which gave the false impression that adults learn languages  
more slowly.
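
A back-of-envelope illustration of that confound, with invented numbers
rather than anything taken from the research in question: if proficiency
is roughly a function of hours of exposure, a child's apparent head start
can reflect nothing more than more hours.

# Assumed, purely illustrative exposure figures -- not data
child_hours_per_day = 8.0   # immersed in the language most waking hours
adult_hours_per_day = 1.5   # classes plus practice around a job
years = 3
child_total = child_hours_per_day * 365 * years   # 8760.0 hours
adult_total = adult_hours_per_day * 365 * years   # 1642.5 hours
print(child_total / adult_total)                  # ~5.3x the exposure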


I am too lazy to dig up cites at the moment, but I definitely remember  
discussions of this research in the not too distant past.


Cheers,

J. Andrew Rogers



Re: [agi] Other AGI-like communities

2008-04-23 Thread a

Ben Goertzel wrote:

I wouldn't agree with such a strong statement. I think the grounding
of ratiocination in image-ination is characteristic of human
intelligence, and must thus be characteristic of any highly human-like
intelligent system ... but, I don't see any reason to believe it's the
ONLY path
Yes, you are correct that it is not the only path. However, the
requirement of perceptual grounding depends on the definition of
intelligence.


If you want to make a mathematical intelligence, you do not need it. But
if you want to build life-extension technologies and nanobots,
perceptual grounding is needed: you would require visual intelligence to
build these nanobots. Replacing human physical labor would also require
visual-motor coordination. It is impossible to bootstrap perceptual
grounding from a purely symbolic AGI; it does not know how to build 3D
robots.


Purely symbolic ontologies can produce unsatisfying results. The
categorization of objects is arbitrarily assigned by humans via their
perception, which causes conflict and roundaboutness. If the AI goes
through long chains of deductive inference, the result will be
inaccurate, because small errors and ambiguities in the categorization
of symbols are magnified into huge errors.
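
To make the magnification concrete (a toy model, with an assumed rather
than measured per-step reliability): if each deductive step is
independently correct with probability p, an n-step chain is correct with
probability p**n, which decays quickly.

# Toy illustration: compounding of small per-step errors
p = 0.95                      # assumed probability a single step is sound
for n in (1, 5, 10, 20, 50):
    print(n, round(p ** n, 3))
# 1 0.95 / 5 0.774 / 10 0.599 / 20 0.358 / 50 0.077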


Probabilistic reasoning is an improvement, but it would ultimately
produce inaccurate results and errors just the same.


Thus, the AI engine needs perceptual grounding to refresh or prune 
the illogical inferences.


Furthermore, symbolic ontologies are /inductive/ and perceptual 
reasoning is /deductive/. Symbolic ontologies are inductive because 
categorization inevitably raises ambiguities. Perceptual reasoning, 
however, is deductive because it is not categorized arbitrarily and is 
global and continuous.




[agi] Why Symbolic Representation without Imaginative Simulation Won't Work

2008-04-23 Thread Mike Tintner
I think one can now present a convincing case why any symbolic/linguistic
approach to AGI that is not backed by imaginative simulation simply will
not work. For example, any attempt to build an AGI with a purely symbolic
database of knowledge mined from the Net or other texts is doomed.


This is obviously something I have long argued, but it has been difficult to 
find a truly focussed argument with sufficiently general application and 
power.


The basic argument:

Language depends on:

1) General Activity Language - a core, very extensive vocabulary of words
for basic kinds of movements, which we all acquire normally very early.
These words/movements are essential for moving about and manipulating the
world - and understanding how the world moves. They are also essential for
General Intelligence, because they apply to all activities and are central
to the acquisition of new physical activities.


2) Our movement words (like, in fact, all words) are general, open-ended
concepts which cover, in this case, vast, all-encompassing ranges of
specific, possible movements. In order to interpret them, we continually
have to decide which one of a vast range is appropriate in a given
environment - for example, just which direction and angle we are going to
decide is appropriate to reach out: horizontally, vertically, at 45, 60,
75 degrees, etc.


3) It is, if not absolutely impossible, utterly impractical and absurdly
complicated to instantiate a movement-word by any kind of symbolic
process - by, for example, first trying to symbolically label each and
every one of a range of possible movements.


The only practical way - and the ideal way - is to decide the specific
movement by an imaginative/sensorimotor simulation. Exactly what this
should entail is open to discussion (and is getting much discussion
elsewhere), but for the sake of focussing our minds here, let's think of
it, if only provisionally, as some kind of visual mapping process.
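
As a rough, purely illustrative sketch of the combinatorics (the parameter
names and granularity below are invented, and this is not a proposal for
how the simulation itself should work): symbolically labelling every
distinct "reach" explodes, while a parametric representation that a
simulator can instantiate on demand stays tiny.

import itertools, random

# Enumerating discrete labels for "reach" at a modest 5-unit granularity
azimuths   = range(0, 360, 5)    # 72 direction labels
elevations = range(-90, 91, 5)   # 37 elevation labels
extents    = range(10, 101, 5)   # 19 arm-extension labels (cm)
print(len(list(itertools.product(azimuths, elevations, extents))))  # 50616

def sample_reach():
    """One concrete instantiation of the open-ended concept 'reach'."""
    return {"azimuth": random.uniform(0, 360),
            "elevation": random.uniform(-90, 90),
            "extent_cm": random.uniform(10, 100)}

print(sample_reach())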


4) The same basic argument can be extended to every area of language. I am
focussing on this particular area because it is not only fundamental to
any worldview, but can be treated very concretely, and from a more or less
mathematical and robotics POV.



The argument in detail:

1) General Activity Language - it is acknowledged that we rapidly acquire a 
certain vocabulary of basic words. What I'm focussing on here is that we 
especially acquire a core of hundreds of basic movement words, such as:


reach, push, pull, hit, punch, throw, kick, wave, catch, handle, grab, put, 
move, enter, exit, slip, slide, remove, connect, disconnect, fit, step, 
stride, walk, run, climb, jump, hop, leap, press, lift, raise, lower, drop, 
pick up, fall, slip, knock, tap,  shake, rock, roll, scratch,  settle, 
unsettle, slap, slop, fix, propel, repel, rope, stick, withdraw, touch, 
finger, point, hold, snatch, thrust, scrape, grip, grasp, grope, back, 
support, circle, rotate,


These can be considered basic-level concepts which, like dog, cat,
bird, chair, are the easiest to visualise - in this case as movements.


We also acquire a range of superordinate movement concepts, involving much 
more general, and not so immediately obvious to visualise,  categories of 
movement, like:


come, go, make, start, stop, give, take, use, do, be, get, dance, play, 
heat, cool, add, subtract, travel, journey, advance, retreat


(These can be compared to similar superordinate, not so 
obvious-to-visualise, concepts such as: animal, furniture, etc.)


We also acquire a rich range of subordinate concepts, involving more 
specific types of movements, some of which may belong to specific 
activities, like:


hammer, nail, screw, chop, slice, net, bat, elbow, head-butt, pin, clip, 
vacuum, catapult, glue, brick, 


We also acquire a whole set of prepositions which give direction to those 
movements, such as:


in, into, on, onto, out, towards, away from, up, down, through, around,
inside, outside, over, under, along, underneath, about


An AGI POV allows us to appreciate that this core vocabulary is a brilliant
invention of the human mind, although, no doubt, animals share many of the
same concepts. These are general movements which can be applied to any
physical activity. They can be, and are, used to acquire new physical
activities/skills. Look at the instruction manuals for virtually any
activity and you will find that extensive use is made of these basic words.
A how-to-cook or a how-to-play-a-sport manual will liberally tell you to
move, put, take, go, add, etc., and won't be couched in entirely
activity-specific words, like play a forehand/backhand/drop shot, or
execute a pas-de-deux. (Any AGI must have this vocabulary to succeed.)


2) Our movement concepts are, like all our concepts, general and open-ended. 
They cover vast ranges of possible specific movements, typically 
all-encompassing. For example,  concepts like reach, push, pull can 
have a 

Re: [agi] Re: Language learning

2008-04-23 Thread YKY (Yan King Yin)
On Thu, Apr 24, 2008 at 2:20 AM, J. Andrew Rogers
[EMAIL PROTECTED] wrote:

 On Apr 22, 2008, at 11:55 PM, YKY (Yan King Yin) wrote:
   There is no doubt that learning new languages at an older age is much
   more difficult than at a younger age.

 I seem to recall that recent research does not support this assertion.  Rate
 of language learning is essentially the same for both adults and children
 and is a function of the amount of time spent trying to learn it.  The
 apparent absolute differences in rate of learning turned out to be
 attributable to adults spending a smaller percentage of their time learning
 a new language than children on average, which gave the false impression
 that adults learn languages more slowly.

 I am too lazy to dig up cites at the moment, but I definitely remember
 discussions of this research in the not too distant past.

I think a person thinks in his/her first language, and when talking in
a second language there is some extra processing going on (though it
may not be exactly a translation process), which slows things down,
giving the popular impression that immigrants are a bit dumber.  I'm
not sure how great this effect is, but I'd be very surprised if it
doesn't exist.  After all, I have spent a lot of time learning English
and I still find it a severe handicap when communicating in English.

PS:  Children don't spend a lot of time learning languages.  As far as
I know, when I was a kid I spent most of my time playing around ;)

YKY



Re: [agi] Why Symbolic Representation without Imaginative Simulation Won't Work

2008-04-23 Thread Mark Waser

I think one can now present a convincing case why any symbolic/linguistic
approach to AGI, that is not backed by THE SECRET SAUCE, simply will
not work.

The only practical way - and the ideal way - is to decide the specific
movement, by THE SECRET SAUCE. Exactly what this
should entail is open to discussion, (and getting much discussion
elsewhere), but for the sake of focussing our minds here, let's think of
it, if only provisionally, as some kind of TASTE TREAT.

3) The ideal and simplest way to work out which specific movement is
required is by THE SECRET SAUCE - here some TASTE TREAT.

And the neuroscientific evidence keeps piling up that we do indeed plan
movements by THE SECRET SAUCE .

4)It shouldn't be too hard to see that the necessity of testing symbolic
language by THE SECRET SAUCE applies, by extension, to many other
areas of the world, as well as that of the movements of objects and
creatures.  Descriptions of the forms of all objects and things. All
physical activities - hunting, sex, eating. All interactions between
creatures. Conversations. Emotions...  Statements about all these also
typically depend on physical, imaginative knowledge of things' forms,
movements and behaviour.

In fact, there is, as Lakoff argues, no area that can be understood without
THE SECRET SAUCE , But I accept the need to demonstrate this further
with respect to more abstract areas. By all means challenge me, and I'll
think about it.

In the meantime, I believe I have made a convincing case that you cannot
understand how the world moves - and the core movement vocabulary of
language - without THE SECRET SAUCE . And if you can't do that, you
can't have a viable worldview.

ROTFLMAO!

So, why don't you cite real neuroscientific evidence (as in journal 
citations) for THE SECRET SAUCE?


You haven't made any case at all.  You've simply made statements that most 
of us disagree with and call it a proof.  BAH!




Re: [agi] Why Symbolic Representation without Imaginative Simulation Won't Work

2008-04-23 Thread Abram Demski
On Wed, Apr 23, 2008 at 5:43 PM, Mike Tintner [EMAIL PROTECTED] wrote:
[..]
  And these different instantiations *have* to be fairly precise, if we are
 to understand a text, or effect an instruction, successfully. The next
 sentence in the text may demand that we know the rough angle of reaching -
 and that, say, it was impossible because there was a particular kind of
 object in the way.

The above paragraph is, as I see it, the crux of your argument. If you
can't prove that one point, the argument doesn't hold water. But it
seems to me that needing to know that there was a particular kind of
object in the way is not terribly common. I'd think the exact
physical circumstances are typically less important to understand than
the intentions of the people involved, the purposes of nearby objects,
etc. If so, the arguments you make earlier about how many possible
combinations of angles there are (and hand positions, etc.) are
irrelevant. Those details can be abstracted away.
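
For instance (a hypothetical sketch of that abstraction; the event fields
below are invented for illustration): a reader's model of "he reached for
it but couldn't" can bind agents, intentions and outcomes while leaving
the geometry unbound unless a later sentence forces it.

# Hypothetical abstract event representation -- no joint angles needed
event = {
    "agent":    "John",
    "action":   "reach",
    "target":   "cup",
    "outcome":  "blocked",   # enough to follow the next sentence
    "obstacle": "a particular kind of object in the way",
}
# Exact angle and hand position stay unspecified until some sentence
# actually depends on them.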

  It would be absurd and almost certainly impossible to try working out
 movements by symbolic means - by, say, listing every possible angle at which
 an arm can reach out, and listing the normal heights of different objects
 that can be reached for - or trying to apply some set of mathematical,
 formulaic approach to the problem.

It is not clear what you mean by symbolic here. Surely any
simulation, including those you suggest, will be symbolic -- all we've
got to work with are 1s and 0s. But that's not what you mean. It seems
as if you mean any representations that are abstract (as
opposed to concrete image-manipulation). But it seems odd to eliminate
abstract representations altogether... so perhaps you are suggesting
that the abstract must always be accompanied by the concrete?



Re: [agi] Why Symbolic Representation without Imaginative Simulation Won't Work

2008-04-23 Thread Mike Tintner

Abram,

Both to-the-point responses. One: how central, you're asking, are
statements about movement to language?  Extremely central. That's precisely
why we have this core general activity/movement language that we all
share - all those very basic movement words - we use them so often. How, if
it can't interpret those words specifically, is an AGI going to understand
sports reports, murder reports, recipes, or texts re machine assembly,
construction, manufacturing, and a million other very physical activities?
Or physics, biology, medicine, etc.? How is it going to understand how
people walk, run, and generally navigate their environments, houses,
cities?


You seem to be expressing a desperate hope that maybe language is mainly
just a set of general movement statements - generalisations about how
things move that don't need to be interpreted specifically.


As I discussed with Stephen Reed recently, it would seem that many texts
which AGI-ers apply themselves to do have this unreal, general nature. "He
hit him", it will say, and you only have to know that that was generally
possible, not the precise movement. But in reality, if you are going to
engage with specific environments and situations, then of course you have
to specify movements to an enormous extent.


And how could an AGI have any intelligence worth talking about if it can't
work out, say, how to navigate your cluttered house, or a crowded railway
station, or conduct a battle, or whatever? Of course, language isn't just
"All men move, Socrates is a man, therefore Socrates moves."

Two: yes, I very much believe that rationality (symbolic
language/logic/maths and schematic geometry) and imagination are
interdependent.  The abstract must always be accompanied by/grounded in
the concrete, but definitely not replaced.








RE: [agi] Adding to the extended essay on the complex systems problem

2008-04-23 Thread Ed Porter
Richard,

 

In your blog you said:

 

- Memory.  Does the mechanism use stored information about what it was
doing fifteen minutes ago, when it is making a decision about what to do
now?  An hour ago?  A million years ago?  Whatever:  if it remembers, then
it has memory.

 

- Development.  Does the mechanism change its character in some way over
time?  Does it adapt?

 

- Identity.  Do individuals of a certain type have their own unique
identities, so that the result of an interaction depends on more than the
type of the object, but also the particular individuals involved?

 

- Nonlinearity.  Are the functions describing the behavior deeply
nonlinear?

 

These four characteristics are enough. Go take a look at a natural system in
physics, or an engineering system, and find one in which the components of
the system interact with memory, development, identity and nonlinearity.
You will not find any that are understood.

.

Notice, above all, that no engineer has ever tried to persuade one of these
artificial systems to conform to a pre-chosen overall behavior..

 

 

I am quite sure there have been many AI systems that have had all four of
these features, that have worked pretty much as planned, whose behavior is
reasonably well understood (although not totally understood, as is nothing
that is truly complex in the non-Richard sense), and whose overall behavior
has been as chosen by design (with a little experimentation thrown in).  To
be fair, I can't remember any off the top of my head, because I have read
about so many AI systems over the years.  But recording episodes is very
common in many prior AI systems.  So is adaptation.  Nonlinearity is almost
universal, and identity as you define it would be pretty common.
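
For concreteness, here is a minimal toy sketch (purely illustrative -- it
is not any published AI system, and the update rules are invented) of
components that interact with all four properties at once:

import random

class Unit:
    """Toy component with memory, development, identity, nonlinearity."""
    def __init__(self, ident):
        self.ident = ident            # identity: unique per individual
        self.memory = []              # memory: stored past interactions
        self.state = random.random()  # development: drifts over time

    def interact(self, other):
        recent = self.memory[-1][1] if self.memory else 0.5
        x = (self.state * other.state + recent) / 2
        result = 4 * x * (1 - x)      # deeply nonlinear (logistic-style)
        self.memory.append((other.ident, result))
        self.state = 0.99 * self.state + 0.01 * result  # adaptation
        return result

units = [Unit(i) for i in range(10)]
for _ in range(1000):
    a, b = random.sample(units, 2)
    a.interact(b)

Whether the global behavior of even this little system can be chosen in
advance is, of course, exactly the point in dispute.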

 

So, please --- other people on this list help me out --- but I am quite
sure systems have been built that prove the above quoted statement to be
false.

Ed Porter

 

-Original Message-
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
Sent: Wednesday, April 23, 2008 4:11 PM
To: agi@v2.listbox.com
Subject: [agi] Adding to the extended essay on the complex systems problem

Yesterday and today I have added more posts (susaro.com) relating to the
definition of complex systems and why this should be a problem for AGI
research.

Richard Loosemore



Re: [agi] Adding to the extended essay on the complex systems problem

2008-04-23 Thread Mark Waser
Ed Porter wrote:
 So, please --- other people on this list help me out --- but I am quite
 sure systems have been built that prove the above quoted statement to be
 false.


Sorry, Ed, but I'm not aware of any tightly-coupled system that has all
four of the behaviors.  The closest that I can come is a website with a
growing twenty-questions or identification game with personalization (and
that does logging).  The real gotcha, though, is the "Are the functions
describing the behavior deeply nonlinear" criterion.  You're just not going
to find that with the first three.  Richard is being very careful here, and
I'd be really surprised if anyone can come up with anything close (that
actually exists, as opposed to being in the planning stage).






Re: [agi] Why Symbolic Representation P.S.

2008-04-23 Thread Mike Tintner

Abram,

Just to illustrate further, here are the opening lines of today's Times
sports report on a football match [Liverpool v Chelsea]. How on earth could
this be understood without massive imaginative simulation? [Stephen?] And
without mainly imaginative memories of football matches?


John Arne Riise stood doubled over in his tiny corner of football hell. 
Agony engulfed him. One by one, teammates offered a pat on the back, a 
handshake, or just a touch, some form of human contact to show they cared. 
None of it did much good. He walked, step by aching step, to the sanctuary 
of the dressing-room, discarding bits of the apparatus of the professional 
footballer as he went. A tie-up here, a shin pad there.


He clamped down on his water bottle and held it between his teeth, like a 
bit to stop him gnawing through his bottom lip. A camera zoomed in to show 
muscles around his eyes and mouth tensing as his mind worked overtime. He 
looked like Harold Shand being driven to his execution in the final scenes 
of The Long Good Friday. A replay of every mistake he had made to get there 
was showing on his face. 





Re: [agi] Adding to the extended essay on the complex systems problem

2008-04-23 Thread Richard Loosemore


Ed,

You have put words into my mouth:  I have never tried to argue that a 
narrow-AI system cannot work at all.


(Narrow AI is what you are referring to above:  it must be narrow AI,
because there have not been any fully functioning *AGI* systems
delivered yet, and you refer to systems that have been built.)

The point of my argument is to claim that such narrow AI systems CANNOT 
BE EXTENDED TO BECOME AGI SYSTEMS.  The complex systems problem predicts 
that when people allow those four factors listed above to operate in a 
full AGI context, where the system is on its own for a lifetime, the 
complexity effects will then dominate.


In effect, what I am claiming is that people have been masking the 
complexity effects by mollycoddling their systems in various ways, and 
by not allowing them to run for long periods of time, or in general 
environments, or to ground their own symbols.


I would predict that when people do this mollycoddling of their AI 
systems, the complex systems effects would not become apparent very soon.


Guess what?  That exactly fits the observed history of AI.  When people 
try to make these AI systems operate in ways that brings out the 
complexity, the systems fail.




Richard Loosemore




P.S.  Please don't call it Richard-complexity --- it has nothing to
do with me:  this is complexity the way that lots of people understand
the term.  If you need to talk about the concept that is the opposite of
simple, it would be better to use complicated.  Personalizing it just
creates confusion.













Re: [agi] Why Symbolic Representation P.S.

2008-04-23 Thread Stephen Reed
Hi Mike,

John Arne Riise stood doubled over in his tiny corner of football hell.


These sentences are great demonstrations of why I favor a construction
grammar.  It's not necessary to process the imagery from first principles.
These sentences are full of idioms that can simply be treated as
constructions (i.e. form -- meaning pairs):

doubled over -- from WordNet: bent over or curled up, usually with
laughter or pain

corner of X hell -- a very uncomfortable situation involving X

tiny corner of X hell -- a very uncomfortable situation involving X in
which the agent (i.e. John Arne Riise) does not share the situation with
anyone else

...and so forth for the rest of the passage.  The downside of construction
grammar is lots of constructions.  But human children learn them, by being
taught and by observation / induction, so I think a dialog system can too.

This sort of text, by the way, long ago put an end to the Cyc Project's
then ambition to read and comprehend an article in a newspaper.  Texai may
fail also, but certainly not in the same way Cyc did.
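
To make the form -- meaning pairing concrete, here is a minimal sketch
(purely illustrative; the regex patterns and the interpret function are
invented for this email, and this is not how Texai actually stores
constructions):

import re

# Each construction pairs a surface form (a crude regex here) with a
# meaning template; captured groups fill the template's slots.
constructions = [
    (r"tiny corner of (\w+) hell",
     "very uncomfortable situation involving {0}, shared with no one else"),
    (r"corner of (\w+) hell",
     "a very uncomfortable situation involving {0}"),
    (r"doubled over",
     "bent over or curled up, usually with laughter or pain"),
]

def interpret(text):
    meanings = []
    for form, meaning in constructions:
        m = re.search(form, text)
        if m:
            meanings.append(meaning.format(*m.groups()))
    return meanings

print(interpret("John Arne Riise stood doubled over in his tiny corner "
                "of football hell."))

A real matcher would of course prefer the most specific construction
rather than firing every overlapping one, and would learn the inventory
rather than hand-code it.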
 
-Steve


Stephen L. Reed

Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860

