Re: [agi] SOTA

2006-10-23 Thread Bob Mottram
On 20/10/06, Pei Wang <[EMAIL PROTECTED]> wrote: Because of this, I'm not sure how long robotics can keep its recent rate of improvement without major progress in AI in general. I wonder if there is anyone on this list who has been actually working in the field of robotics, and I would be very interested i

Re: [agi] Language modeling

2006-10-23 Thread Pei Wang
On 10/22/06, Matt Mahoney <[EMAIL PROTECTED]> wrote: Also to Novamente, if I understand correctly. Terms are linked by a probability and confidence. This seems to me to be an optimization of a neural network or connectionist model, which is restricted to one number per link, representing pr
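Concretely, the two-number link Matt describes might be sketched like this (illustrative Python, loosely NARS-style; the names and the deduction formula are assumptions for the sketch, not the actual NARS or Novamente code):

    from dataclasses import dataclass

    @dataclass
    class TruthValue:
        """Two numbers per link, versus the single weight of a connectionist link."""
        frequency: float   # probability estimate, in [0, 1]
        confidence: float  # how much evidence backs that estimate, in [0, 1)

    def deduction(ab: TruthValue, bc: TruthValue) -> TruthValue:
        # One common NARS-style deduction truth function: the conclusion is
        # weaker than either premise in both frequency and confidence.
        f = ab.frequency * bc.frequency
        c = ab.frequency * bc.frequency * ab.confidence * bc.confidence
        return TruthValue(f, c)

    # "robin -> bird" is well evidenced, "bird -> flyer" is weakly evidenced:
    print(deduction(TruthValue(1.0, 0.9), TruthValue(0.9, 0.2)))

The point of the second number shows up in the output: the conclusion's frequency is high (0.9) but its confidence is low (0.162), a distinction a single weight per link cannot express.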

Re: [agi] Language modeling

2006-10-23 Thread Ben Goertzel
Hi Matt, Regarding logic-based knowledge representation and language/perceptual/action learning -- I understand the nature of your confusion, because the point you are confused on is exactly the biggest point of confusion for new members of the Novamente AI team. A very careful distinction needs to

Re: [agi] Language modeling

2006-10-23 Thread Richard Loosemore
Matt Mahoney wrote: My concern is that structured knowledge is inconsistent with the development of language in children. As I mentioned earlier, natural language has a structure that allows direct training in neural networks using fast, online algorithms such as perceptron learning, rather than
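The kind of fast, online learning Matt has in mind could be as simple as a mistake-driven perceptron over word-bigram features (a toy sketch under that assumption, not his actual setup):

    # Minimal online perceptron over word-bigram features: one pass,
    # mistake-driven updates, no batch training.
    def features(sentence):
        words = sentence.lower().split()
        return set(zip(words, words[1:]))  # bigram indicator features

    weights = {}

    def predict(feats):
        return 1 if sum(weights.get(f, 0.0) for f in feats) >= 0 else -1

    def update(sentence, label):  # label: +1 grammatical, -1 not
        feats = features(sentence)
        if predict(feats) != label:
            for f in feats:
                weights[f] = weights.get(f, 0.0) + label

    for s, y in [("the cat sat", +1), ("cat the sat", -1), ("the dog sat", +1)]:
        update(s, y)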

Re: [agi] Language modeling

2006-10-23 Thread Starglider
On 23 Oct 2006 at 10:06, Ben Goertzel wrote: > A very careful distinction needs to be drawn between: > > 1) the distinction between > 1a) using probabilistic and formal-logical operators for representing > knowledge > 1b) using neural-net type operators (or other purely quantitative, non- > logic

Re: [agi] Language modeling

2006-10-23 Thread Ben Goertzel
Hi, > For instance, this means that the "cat" concept may well not be > expressed by a single "cat" term, but perhaps by a complex learned > (probabilistic) logical predicate. I don't think it's really useful to discuss representing word meanings without a sufficiently powerful notion of context (whic
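A crude illustration of the context point: treat "cat" not as an atomic term but as a graded, context-sensitive predicate (every name and weight here is invented for illustration):

    def cat_degree(features: dict, context: dict) -> float:
        """Graded membership in [0, 1] rather than a boolean is-a-cat."""
        score = (0.4 * features.get("furry", 0.0)
                 + 0.3 * features.get("meows", 0.0)
                 + 0.3 * features.get("small_quadruped", 0.0))
        # Context can shift the meaning entirely: "cat" in a Unix manual
        # is a file-concatenation command, not an animal.
        if context.get("domain") == "unix":
            return 0.0
        return min(score, 1.0)

    print(cat_degree({"furry": 1.0, "meows": 0.8}, {"domain": "household"}))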

Re: [agi] Language modeling

2006-10-23 Thread YKY (Yan King Yin)
On 10/23/06, Matt Mahoney <[EMAIL PROTECTED]> wrote: > [...] > One aspect of NARS and many other structured or semi-structured knowledge representations that concerns me is the direct representation of concepts such as "is-a", equivalence, logic ("if-then", "and", "or", "not"), quantifiers ("all",
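For readers unsure what the "direct representation" of such concepts looks like in practice, here is the pattern being discussed, reduced to a toy (relation names and terms are illustrative):

    # is-a and instance-of become explicit, labeled edges in a store of triples.
    kb = [
        ("frog",   "is-a",        "amphibian"),
        ("frog",   "has-color",   "green"),
        ("kermit", "instance-of", "frog"),
    ]

    def isa_closure(term, kb):
        """Follow is-a / instance-of edges transitively."""
        parents, frontier = set(), {term}
        while frontier:
            t = frontier.pop()
            for s, r, o in kb:
                if s == t and r in ("is-a", "instance-of") and o not in parents:
                    parents.add(o)
                    frontier.add(o)
        return parents

    print(isa_closure("kermit", kb))  # {'frog', 'amphibian'}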

Re: [agi] Language modeling

2006-10-23 Thread YKY (Yan King Yin)
On 10/23/06, Ben Goertzel <[EMAIL PROTECTED]> wrote: > 2) the distinction between > 2a) using ungrounded formal symbols to pretend to represent knowledge, e.g. an explicit labeled internal symbol for "cat", one for "give", etc. > 2b) having an AI system recognize patterns in its perception and a
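The 2a/2b difference can be shown in miniature: in 2b a "symbol" is just a label for a cluster the system itself found in its percepts. Here k-means stands in, very loosely, for real perceptual pattern recognition (toy sketch only):

    import random

    def kmeans(points, k, iters=20):
        centers = random.sample(points, k)
        for _ in range(iters):
            groups = [[] for _ in range(k)]
            for p in points:
                nearest = min(range(k),
                              key=lambda j: sum((a - b) ** 2
                                                for a, b in zip(p, centers[j])))
                groups[nearest].append(p)
            new_centers = []
            for i, g in enumerate(groups):
                if g:
                    new_centers.append(tuple(sum(dim) / len(g) for dim in zip(*g)))
                else:
                    new_centers.append(centers[i])
            centers = new_centers
        return centers

    # Each center is a grounded symbol: its meaning is the set of percepts
    # it summarizes, not a label a programmer typed in (the 2a approach).
    percepts = ([(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(50)]
                + [(random.gauss(5, 1), random.gauss(5, 1)) for _ in range(50)])
    symbols = kmeans(percepts, k=2)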

Re: [agi] Language modeling

2006-10-23 Thread Starglider
Ben Goertzel wrote: >> The limited expressive scope of classic ANNs was actually essential >> for getting relatively naïve and simplistic learning algorithms (e.g. >> backprop, Hebbian learning) to produce useful solutions to an >> interesting (if still fairly narrow) class of problems. > > Well,
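For reference, the two "simplistic" rules named above, each in essentially one line (classic textbook formulations over a fixed weight matrix; a numpy sketch, not anyone's production code):

    import numpy as np

    def hebbian_step(W, x, y, lr=0.01):
        """Hebb's rule: strengthen connections between co-active units."""
        return W + lr * np.outer(y, x)

    def perceptron_step(W, x, target, lr=0.1):
        """Mistake-driven variant: move weights only when the output is wrong."""
        y = (W @ x > 0).astype(float)
        return W + lr * np.outer(target - y, x)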

Re: [agi] Language modeling

2006-10-23 Thread Ben Goertzel
YKY, Of course there is no a priori difference between a set of nodes and links and a set of logical relationships... The question with your DB of facts about "love" and so forth is whether it captures the subtler uncertain patterns regarding love that we learn via experience. My strong suspicion is

Re: [agi] Language modeling

2006-10-23 Thread justin corwin
I don't exactly have the same reaction, but I have some things to add to the following exchange. On 10/23/06, Richard Loosemore <[EMAIL PROTECTED]> wrote: Matt Mahoney wrote: > Children also learn language as a progression toward increasingly complex patterns. > - phonemes beginning at 2-4 week

Re: [agi] Language modeling

2006-10-23 Thread Bob Mottram
In child development understanding seems to considerably precede the ability to articulate that understanding. Also, development seems to generally move from highly abstract representations (stick men, smiley suns) to more concrete adult-like ones. On 23/10/06, justin corwin <[EMAIL PROTECTED]> wrot

Re: [agi] SOTA

2006-10-23 Thread Neil H.
On 10/23/06, Bob Mottram <[EMAIL PROTECTED]> wrote: Another interesting development is the rise of the use of invariant feature detection algorithms together with geometric hashing for some kinds of object recognition. The most notable successes to date have been using David Lowe's SIFT method,
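For the curious, the core of such a SIFT pipeline is only a few lines in modern OpenCV (a minimal sketch; the file names are placeholders and 0.75 is Lowe's usual ratio-test heuristic):

    import cv2

    img1 = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < 0.75 * n.distance]  # Lowe's ratio test
    print(f"{len(good)} putative correspondences")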

Re: [agi] SOTA

2006-10-23 Thread Bob Mottram
It's a shame that Evolution Robotics weren't able to develop that system further.  A logical progression would be to extend the geometric hashing to 3D and eventually 4D, although that would require a stereo camera or some other way of measuring distances to the observed features.  Even so that dem
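Geometric hashing itself is simple enough to sketch in 2D, which also makes clear why the 3D extension needs real depth measurements (toy code; the quantization step q is an arbitrary choice):

    import math
    from collections import defaultdict

    def build_hash_table(points, model_id, q=0.25):
        """Encode each point relative to every ordered basis pair, so the
        same object hashes identically under translation/rotation/scale."""
        table = defaultdict(list)
        for i, a in enumerate(points):
            for j, b in enumerate(points):
                if i == j:
                    continue
                dx, dy = b[0] - a[0], b[1] - a[1]   # basis: a->(0,0), b->(1,0)
                s2 = dx * dx + dy * dy
                for p in points:
                    u = ((p[0] - a[0]) * dx + (p[1] - a[1]) * dy) / s2
                    v = (-(p[0] - a[0]) * dy + (p[1] - a[1]) * dx) / s2
                    table[(round(u / q), round(v / q))].append((model_id, (i, j)))
        return table

At recognition time the same basis trick is applied to scene features and votes are tallied per (model, basis) entry; extending the invariant frame to 3D requires the metric depth that a stereo rig or other range sensor provides.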

Re: [agi] SOTA

2006-10-23 Thread Neil H.
On 10/23/06, Bob Mottram <[EMAIL PROTECTED]> wrote: It's a shame that Evolution Robotics weren't able to develop that system further. A logical progression would be to extend the geometric hashing to 3D and eventually 4D, although that would require a stereo camera or some other way of measurin

Re: [agi] SOTA

2006-10-23 Thread Bob Mottram
You can get depth information from single camera motion (e.g. Andrew Davison's MonoSLAM), but this requires an initial size calibration and continuous tracking. If the tracking is lost at any time you need to recalibrate. This makes single camera systems less practical. With a stereo camera the ba
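The baseline point can be made concrete with the standard pinhole stereo relation (a sketch; the focal length and baseline values are made up):

    def depth_from_disparity(disparity_px: float,
                             focal_px: float = 700.0,
                             baseline_m: float = 0.10) -> float:
        """z = f * B / d: depth falls off as 1/disparity, so a wider
        baseline keeps disparity measurable at greater range."""
        if disparity_px <= 0:
            raise ValueError("feature not matched or at infinity")
        return focal_px * baseline_m / disparity_px

    print(depth_from_disparity(14.0))  # -> 5.0 metres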

Re: [agi] SOTA

2006-10-23 Thread Neil H.
On 10/23/06, Bob Mottram <[EMAIL PROTECTED]> wrote: My inside sources tell me that there's little or no software development going on at Evolution Robotics, and that longstanding issues and bugs remain unfixed. They did license their stuff to WowWee, and also Whitebox Robotics, so it's likely we'l

Re: [agi] Language modeling

2006-10-23 Thread Matt Mahoney
I am interested in identifying barriers to language modeling and how to overcome them. I have no doubt that probabilistic models such as NARS and Novamente can adequately represent human knowledge. Also, I have no doubt they can learn relations such as "all frogs are green" from examples
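As a sketch of what learning such a relation from examples amounts to, here is NARS-flavoured evidence counting (the formulas follow NARS's published truth-value definitions as best understood here; k is the evidential horizon constant):

    def induce(observations, k=1.0):
        """Estimate 'frogs are green' from (is_frog, is_green) pairs."""
        positive = sum(1 for is_frog, is_green in observations
                       if is_frog and is_green)
        total = sum(1 for is_frog, _ in observations if is_frog)
        frequency = positive / total if total else 0.5
        confidence = total / (total + k)  # grows toward 1 with more evidence
        return frequency, confidence

    obs = [(True, True)] * 9 + [(True, False)]  # 9 green frogs, 1 brown
    print(induce(obs))  # (0.9, ~0.91): high frequency, growing confidence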

Re: [agi] Language modeling

2006-10-23 Thread Ben Goertzel
> So my question is: what is needed to extend language models to the level of compound sentences? More training data? Different training data? A new theory of language acquisition? More hardware? How much? What is needed is: A better training approach, involving presentation of compound se
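One way to see why compound sentences are the sticking point: any fixed n-gram window misses dependencies that span an embedded clause (toy illustration):

    sentence = "the frog that the heron missed is green".split()

    n = 3
    for i in range(len(sentence) - n + 1):
        print(sentence[i:i + n])
    # No 3-word window contains both "frog" and "is green", so the
    # agreement across the relative clause is invisible to the model;
    # richer training presentation or structure is needed.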