Steve,

You raise huge issues. I broadly agree with the direction you're going with your multilevelled approach to physically implementing verbal commands. However, I'm quite sure there is still more involved than you think - including a whole level of image schemas. It's useful here to think of the analogy of geometry as a whole supportive level underlying science's upper level of words and other symbols.

I seriously recommend, in fact insist, that you have got to get into Lakoff-Johnson, and Rizzolatti-Gallese-Iacoboni and the mirror-neuron crowd. These guys are working together and doing some of the hottest research at the mo. Try Chap. 8 of Mark Johnson, The Meaning of the Body - and more. Basically, experiments show the brain does start to instantiate and process physical verbal commands and ideas on a pre-motor level all the time - and indeed has to, if you think about it. If someone says "come with me to the supermarket", your brain has to process that on a motor level for you to immediately reply: "I can't, I've got a weak ankle."

Actually, come to think of it, verbal porn is probably a truly great area to explore in terms of multilevelled, and very physical, processing!

I haven't really thought much about physical/robotic instantiation of commands, except that the starting point will normally be that the body and its limbs typically offer something like a 180-360 degree spectrum of freedom of movement on any given plane. Then, I guess, as you indicate, the brain-body will plump first for the easiest, most direct line of physical approach to a target, and then adjust to obstacles as they arise. Clearly it will have certain movement sets/skills - so even if you are trying to dance around, say, freely and improvisationally, you tend to fall into certain familiar kinds of moves and find it difficult to "branch out in new directions." As soon as one starts to think about these areas, it seems to me, the need for what I would call a loose "geoiconography" (as opposed to precise geometry/geography) of thought - i.e. a system of mental image schemas - becomes apparent.
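
To put that "most direct line first, then adjust" idea in concrete terms, here is a minimal Python sketch - every name and number in it is my own hypothetical illustration, not a claim about how any actual brain or robot does it:

import numpy as np

def reach(position, target, obstacles, step=0.1, max_steps=1000):
    """Head straight for the target; deflect around any obstacle
    that blocks the next step. Obstacles are (center, radius) pairs."""
    trajectory = [position.copy()]
    for _ in range(max_steps):
        if np.linalg.norm(target - position) <= step:
            break
        direction = (target - position) / np.linalg.norm(target - position)
        for center, radius in obstacles:
            ahead = position + step * direction
            if np.linalg.norm(ahead - center) < radius:
                # deflect: blend in a push away from the blocking obstacle
                away = (position - center) / np.linalg.norm(position - center)
                direction = (direction + away) / np.linalg.norm(direction + away)
        position = position + step * direction
        trajectory.append(position.copy())
    return trajectory

# e.g. reach(np.array([0., 0.]), np.array([5., 0.]),
#            [(np.array([2.5, 0.]), 0.5)])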
  ----- Original Message ----- 
  From: Stephen Reed 
  To: agi@v2.listbox.com 
  Sent: Friday, March 28, 2008 4:30 AM
  Subject: Re: [agi] Microsoft Launches Singularity




  ----- Original Message ----
  From: Mike Tintner <[EMAIL PROTECTED]>
  To: agi@v2.listbox.com
  Sent: Thursday, March 27, 2008 5:30:12 PM
  Subject: Re: [agi] Microsoft Launches Singularity


  Steve,

  Some odd thoughts in reply. Thanks, BTW, for the article.

  1. You don't seem to get what's implicit in the main point - you can't reliably work out the sense of an enormous number of words by any kind of word lookup whatsoever. How do you actually work out how to "handle the object" - the slimy, slippery, twisted, ropey thing-y, or whatever? By looking at it. By looking at images of it - either directly or by entertaining them mentally - not by consulting any kind of dictionary or word definitions at all. By imagining what parts of the object to grip, and how to configure your hands to grip it.


  Steve: Sorry that I missed that. But your clarifying issue is quite interesting. Let me try to tease apart your scenario and explain how the envisioned Texai system would process the command "handle the object". I assume that you agree that an AGI designed to our mutual satisfaction should in principle be able to process that particular command with at least the same competence as a human. So the issue for me is to explain in brief how Texai might do it.

  First, I assume that Texai has a body of commonsense knowledge about, and skills applicable to, the kinds of objects that can be handled. If not, then there is a knowledge acquisition phase, and a skill acquisition phase, that must be completed beforehand.

  Second, I assume that the linguistic concepts are expressed internally by the system as symbolic terms. Many terms, for example objects that can be handled, are grounded to the real world by an abstraction hierarchy. Descending this hierarchy, objects are represented less and less as symbols in logical statements, and more and more as clustered feature vectors, and perhaps, at the lowest levels, as no internal state at all - just sensors and actuators in contact with the real world.
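
  In Python-ish terms, a minimal sketch of such a grounding hierarchy might look like this - the class and field names are my illustration here, not Texai's actual representation:

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class GroundingNode:
    """One level of the abstraction hierarchy. Near the top, `symbol`
    carries logical content; lower down, clustered `features` take over;
    the lowest levels may hold no internal state at all, just a handle
    on sensors and actuators."""
    symbol: Optional[str] = None                  # e.g. "HandleableObject"
    features: List[List[float]] = field(default_factory=list)  # cluster centroids
    sensor_channel: Optional[str] = None          # raw contact with the world
    children: List["GroundingNode"] = field(default_factory=list)

# hypothetical fragment: grounding a ropey, handleable object
ropey_thing = GroundingNode(
    symbol="TwistedRopeyObject",
    children=[
        GroundingNode(features=[[0.8, 0.2, 0.5]]),          # visual-texture cluster
        GroundingNode(sensor_channel="gripper/tactile-0"),  # sensing only, no symbol
    ],
)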

  Third, I distinguish between understanding the command "handle the object" and generating the behavior required to perform the command. I think that you are conflating these two notions to make the scenario more difficult than it otherwise would be. As you perhaps know, Texai is a hierarchical control system. I expect that skills will be present to handle various kinds of objects, so for me the issue is to determine the correct skill to invoke in order to perform the given command. As I explained in my previous post, Fluid Construction Grammar does not determine semantics by word lookup; rather, it looks up constructions, which might be words, but often are not.
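
  As a toy illustration of construction lookup (rather than word lookup), consider something like the following, where the registry keys and skill names are entirely hypothetical:

# Constructions map to skills; a construction may be a single word plus
# a slot ("handle" + OBJ) or a multiword pattern.
SKILL_REGISTRY = {
    ("handle", "OBJ"): "grasp-and-manipulate",
    ("pick", "up", "OBJ"): "grasp-and-lift",
    ("OBJ", "is", "on", "LOC"): "assert-spatial-relation",
}

def skill_for(construction, bindings):
    """Return the skill bound to a construction produced by the grammar,
    with its slot bindings already filled in by parsing."""
    skill = SKILL_REGISTRY.get(construction)
    if skill is None:
        raise LookupError(f"no skill for construction {construction}")
    return skill, bindings

print(skill_for(("handle", "OBJ"), {"OBJ": "the slimy ropey thing"}))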

  Given these assumptions of mine, your scenario suggests that the object to be handled is one for which the system has no previous skill, or for which the existing skill cannot be recognized as applicable to the given object. Because I am now building a bootstrap dialog system that is motivated entirely by the need to process novel situations, I am tempted to say that the system should simply ask the user to teach it how to handle the novel object, or ask whether an existing skill can be applied to the given object. However, let's move beyond this approach, and I'll explain how the system uses existing perception and planning skills to handle the given object.
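
  That dialog-first fallback could be sketched like this - the helper functions are stubs standing in for components that don't exist yet:

def find_applicable_skill(command, obj):
    return None   # stub: no stored skill matches the novel object

def most_similar_skill(obj):
    return None   # stub: nothing close enough to reuse

def ask_user(prompt):
    return input(prompt + " ")   # the bootstrap dialog channel

def perform(command, obj):
    """Use a known skill if one applies; otherwise ask the user to
    license reuse of a similar skill, or to teach a new one."""
    skill = find_applicable_skill(command, obj)
    if skill is not None:
        return skill.execute(obj)
    candidate = most_similar_skill(obj)
    if candidate and ask_user(f"May I try my '{candidate.name}' skill on it?") == "yes":
        return candidate.execute(obj)
    return ask_user(f"Please teach me how to {command} {obj}.")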

  By way of simplification, I'll assume that your intent in asking the system to "handle the object" is for it to pick the object up with some physical actuator. And I'll preface my explanation of this step by stating without proof that this task is analogous to those already solved by state-of-the-art urban driverless cars, e.g. "drive yourself to location X", where the driverless car has never been to X. Rather than make a futile attempt to explain all the cases that come to mind, I'll discuss a couple to give a flavor of my approach.

  Case 1: The system can sense that the novel object is not dangerous and cannot be easily destroyed by its actuators. Then I propose that the first strategy tried should be to pick it up in the most direct fashion, and to compensate in subsequent attempts for the failure modes that resulted from the earlier attempts. This is like the pole-balancing task, which can be accomplished by connectionist methods and no symbolic planning.
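
  A minimal sketch of that attempt-and-compensate loop, with a stubbed-out actuator call and a made-up failure mode:

def try_direct_grasp(obj, grip_force):
    """Stub actuator call: returns (success, failure_mode)."""
    return (grip_force >= 3.0, "slipped")

def pick_up_by_trial(obj, max_attempts=10):
    grip_force = 1.0
    for attempt in range(max_attempts):
        success, failure_mode = try_direct_grasp(obj, grip_force)
        if success:
            return attempt + 1
        if failure_mode == "slipped":   # compensate for the observed failure
            grip_force += 1.0
    raise RuntimeError("could not pick up " + obj)

print(pick_up_by_trial("ropey thing"))  # succeeds on the third attempt here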

  Case 2: The system senses that the actions to pick up the object are not subject to experimentation, but must be performed correctly on the first attempt. For this task, the system must observe all the object state that it can, to remove uncertainty. It must create a symbolic model of the object and its dynamics at the right level of abstraction, and perform planning using symbolic representations of its possible actions in order to create a trajectory that satisfies the command to "handle the object". Then it must execute the plan, repairing it as needed when a problem state evolves that was not planned for in advance (e.g. the object starts slipping from the system's grasp). At lower abstraction levels, reactive behavior can substitute for planning (e.g. when slippage is detected by a sensor, tighten the gripping actuator).
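
  In outline, the plan-execute-repair loop for Case 2 might look like this; the callables are placeholders for the corresponding lower-layer components, supplied as parameters so the sketch defines everything it uses:

def pick_up_by_planning(obj, observe, plan_trajectory, execute, tighten_gripper):
    """Model, plan, execute, repair. A reactive rule handles slippage
    without replanning; unexpected states trigger plan repair."""
    model = observe(obj)                          # remove as much uncertainty as possible
    plan = plan_trajectory(model, goal=("holding", obj))
    while plan:
        action = plan.pop(0)
        state = execute(action)
        if state.get("event") == "slipping":      # reactive layer: no replanning
            execute(tighten_gripper)
        elif state.get("unexpected"):             # problem state not planned for
            model = observe(obj)                  # re-model and repair the plan
            plan = plan_trajectory(model, goal=("holding", obj))
    return ("holding", obj)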


  2. This discussion brings up an interesting question. I suspect that there is a great deal of selectivity going into which texts NLP chooses to process - and that they don't include how-to, instructional texts, like recipe books (and most educational texts), which tell you to do things - "take a cup", "add water", etc. - and deal with a real-world situation, in-the-world. If you're dealing more in historical texts - "the cat sat on the mat", etc. - you don't have to confront the open-ended nature of words quite so violently. Hey, the cat did some kind of sitting - as long as that's possible, who cares exactly what kind it was? But if you're a cool cat told to "sit" on a real mat that happens to be full of objects, and you have to put those instructions into deeds rather than more words, you care, and words' open-endedness becomes apparent.


  Steve: I agree with your insight. Much of NLU research is now focused on either information/document retrieval or machine translation. My main gripe while at Cycorp was that Cyc, in the same fashion you describe, concentrated on being taught facts and rules and then deductively answering queries. But what could Cyc do beyond that? An AGI-aspiring system should be capable of representing skills (e.g. codelets or procedures), of acquiring them by being taught, and of performing them as commanded, or on its own initiative. I speculate that it will be easier to ground linguistic symbolic terms in the rather precise world of computer programming and algorithms, but that remains to be seen (e.g. "Texai, compile and run the unit tests for the program that we wrote yesterday").
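
  For instance, a skill might be no more than a named, invokable procedure; here is a toy version (illustrative only, not Texai's actual skill representation, and the Maven call is just one assumption about how such a command could bottom out):

import subprocess

class Skill:
    """A skill as a procedure the system can perform on command."""
    def __init__(self, name, procedure):
        self.name = name
        self.procedure = procedure

    def perform(self, **args):
        return self.procedure(**args)

# "Texai, compile and run the unit tests for the program that we
# wrote yesterday" could ground out in something as precise as:
run_unit_tests = Skill(
    "run-unit-tests",
    lambda project_dir: subprocess.run(["mvn", "test"], cwd=project_dir),
)
# run_unit_tests.perform(project_dir="/path/to/yesterdays/project")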



  3. While philosophically, intellectually, most people dealing with this area may expect words to have precise meanings, they know practically and intuitively that this is impossible, and they work on the basis that words can have different meanings according to who uses them - and that they themselves keep shifting their usage of words. Philosophers, for example, may argue philosophically that words can and should have precise meanings and be treated as true or false, but know in practice that pretty well all the major words/concepts in philosophy - like "mind"/"consciousness"/"determinism" - have multiple, indeed endless, definitions. Or just think about AGI'ers and "intelligence."


  Steve: Actually, at one time at Cycorp we had dozens of Ph.D. philosophers whose responsibility was to add precise symbolic concepts to the Cyc knowledge base. The company likewise had a smaller staff of Ph.D. computational linguists whose job was to interface NLP to the rather precise Cyc concepts. My experiences at Cycorp with their parsers (i.e. Link Grammar, HPSG, Stanford Parser & Charniak Parser) have also strongly influenced my choice to embrace Fluid Construction Grammar. Despite FCG's current lack of English coverage, there is much less impedance mismatch between syntactic form and semantics.


  IOW, any general intelligence that wants to use language successfully must have a metacognitive/metalinguistic level of thought - where it asks explicitly, as we do, "what does that word mean?" / "do I like that definition?" / "is it reliable?" / "how should I use/order words?" / "what is the best kind of diction when talking about this subject?". Life's complicated!


  Steve:  Given this statement, you might agree with my bootstrap English 
dialog approach, in which metalinguistic skills are the first ones hard-coded.


  P.S. If you haven't read it, I recommend Lakoff's case study on "over" at the end of Women, Fire, and Dangerous Things - it shows the vast number of meanings and schemas that can be attached to that word, and amplifies this discussion.


  Steve: No, I actually do not yet have this text by Lakoff, but I have some recent experience with another preposition, "on". In my first use case, "the book is on the table", I accommodate the following alternative interpretations in order to test my design for disambiguation (a sketch of how these candidates might be represented follows the list):

    a. book - a bound book copy
    b. book - a sheaf of paper, e.g. a match book
    c. is - has as an attribute
    d. is - situation described as
    e. on - an operational device
    f. on - located on the surface of
    g. "on the table" - subject to negotiation [a multiword construction]
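
  Encoded and ranked, those candidate readings might look like the sketch below; the scoring heuristic is a toy stand-in for the real disambiguation machinery:

# Competing interpretations of "the book is on the table".
INTERPRETATIONS = [
    {"form": "book",         "sense": "bound-book-copy"},
    {"form": "book",         "sense": "sheaf-of-paper"},          # e.g. a match book
    {"form": "is",           "sense": "has-as-attribute"},
    {"form": "is",           "sense": "situation-described-as"},
    {"form": "on",           "sense": "operational-device"},
    {"form": "on",           "sense": "located-on-surface-of"},
    {"form": "on the table", "sense": "subject-to-negotiation"},  # multiword construction
]

def plausibility(interp, context):
    # toy heuristic: a physical-support context favors the surface reading
    if interp["sense"] == "located-on-surface-of" and "physical" in context:
        return 2
    return 1

def disambiguate(interpretations, context):
    return max(interpretations, key=lambda i: plausibility(i, context))

print(disambiguate(INTERPRETATIONS, {"physical"}))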
  I hope you don't mind me using your issues to explain how Texai should work.
  -Steve


  Stephen L. Reed

  Artificial Intelligence Researcher
  http://texai.org/blog
  http://texai.org
  3008 Oak Crest Ave.
  Austin, Texas, USA 78704
  512.791.7860





