Re: [agi] Language modeling

2006-10-25 Thread Matt Mahoney
----- Original Message ----- From: Richard Loosemore <[EMAIL PROTECTED]> To: agi@v2.listbox.com Sent: Tuesday, October 24, 2006 12:37:16 PM Subject: Re: [agi] Language modeling > Matt Mahoney wrote: >> Converting natural language to a formal representation requires language

Re: [agi] Language modeling

2006-10-24 Thread Pei Wang
On 10/23/06, Matt Mahoney <[EMAIL PROTECTED]> wrote: > I am interested in identifying barriers to language modeling and how to overcome them. I have no doubt that probabilistic models such as NARS and Novamente can adequately represent human knowledge. NARS isn't a probabilistic model, though

Re: [agi] Language modeling

2006-10-24 Thread Richard Loosemore
Matt Mahoney wrote: > I am interested in identifying barriers to language modeling and how to overcome them. I have no doubt that probabilistic models such as NARS and Novamente can adequately represent human knowledge. Also, I have no doubt they can learn e.g. relations such as "all frogs are green" ...

Re: [agi] Language modeling

2006-10-24 Thread Lukasz Kaiser
Hi. The state of the art in language modeling is at the level of simple sentences, modeling syntax using n-grams (usually trigrams) or hidden Markov models ... Just a remark: Google recently made their up-to-5-grams available through LDC http://googleresearch.blogspot.com/2006/08/all-our-n-gra
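
A minimal sketch of the trigram counting mentioned above, to make the idea concrete (the class and toy corpus below are invented for illustration; real systems add smoothing and backoff on top of these counts):

    # Toy trigram model: P(w3 | w1, w2) estimated by maximum likelihood
    # from raw counts.  Illustrative only.
    from collections import defaultdict

    class TrigramModel:
        def __init__(self):
            self.trigrams = defaultdict(int)
            self.bigrams = defaultdict(int)

        def train(self, words):
            for w1, w2, w3 in zip(words, words[1:], words[2:]):
                self.trigrams[(w1, w2, w3)] += 1
                self.bigrams[(w1, w2)] += 1

        def prob(self, w1, w2, w3):
            # Conditional probability of w3 given the two-word context;
            # zero if the context was never seen (hence smoothing in practice).
            context = self.bigrams[(w1, w2)]
            return self.trigrams[(w1, w2, w3)] / context if context else 0.0

    model = TrigramModel()
    model.train("the cat sat on the mat the cat ran".split())
    print(model.prob("the", "cat", "sat"))  # 0.5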

Re: [agi] Language modeling

2006-10-23 Thread Ben Goertzel
> So my question is: what is needed to extend language models to the level of compound sentences? More training data? Different training data? A new theory of language acquisition? More hardware? How much? What is needed is: a better training approach, involving presentation of compound sentences ...

Re: [agi] Language modeling

2006-10-23 Thread Matt Mahoney
I am interested in identifying barriers to language modeling and how to overcome them. I have no doubt that probabilistic models such as NARS and Novamente can adequately represent human knowledge. Also, I have no doubt they can learn e.g. relations such as "all frogs are green" from examples
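
As a concrete (and loudly hypothetical) reading of learning a relation such as "all frogs are green" from examples: a probabilistic model can estimate the relation's strength as a conditional frequency over observations. The data and function below are invented for illustration and are not from NARS or Novamente:

    # Estimate how strongly "frog implies green" is supported by examples.
    observations = [("frog", "green"), ("frog", "green"),
                    ("frog", "brown"), ("dog", "brown")]

    def relation_strength(category, prop, obs):
        matching = [o for o in obs if o[0] == category]
        if not matching:
            return 0.0  # no evidence either way
        return sum(1 for o in matching if o[1] == prop) / len(matching)

    print(relation_strength("frog", "green", observations))  # 0.666...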

Re: [agi] Language modeling

2006-10-23 Thread Bob Mottram
In child development understanding seems to considerably precede the ability to articulate that understanding. Also, development seems to generally move from highly abstract representations (stick men, smiley suns) to more concrete adult-like ones. On 23/10/06, justin corwin <[EMAIL PROTECTED]> wrote:

Re: [agi] Language modeling

2006-10-23 Thread justin corwin
I don't exactly have the same reaction, but I have some things to add to the following exchange. On 10/23/06, Richard Loosemore <[EMAIL PROTECTED]> wrote: Matt Mahoney wrote: > Children also learn language as a progression toward increasingly complex patterns. > - phonemes beginning at 2-4 weeks ...

Re: [agi] Language modeling

2006-10-23 Thread Ben Goertzel
YKY, of course there is no a priori difference between a set of nodes and links and a set of logical relationships... The question with your DB of facts about "love" and so forth is whether it captures the subtler uncertain patterns regarding love that we learn via experience. My strong suspicion is ...

Re: [agi] Language modeling

2006-10-23 Thread Starglider
Ben Goertzel wrote: >> The limited expressive scope of classic ANNs was actually essential for getting relatively naïve and simplistic learning algorithms (e.g. backprop, Hebbian learning) to produce useful solutions to an interesting (if still fairly narrow) class of problems. > Well,
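
For concreteness, a minimal sketch of the Hebbian rule named above: the weight between two units grows in proportion to their co-activation. The shapes, data, and learning rate below are arbitrary illustrations:

    # Pure-Python Hebbian update; weights[i][j] connects pre-unit j
    # to post-unit i.
    def hebbian_update(weights, pre, post, lr=0.1):
        return [[w + lr * post[i] * pre[j] for j, w in enumerate(row)]
                for i, row in enumerate(weights)]

    w = [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
    w = hebbian_update(w, pre=[1.0, 0.0, 1.0], post=[0.5, 1.0])
    print(w)  # [[0.05, 0.0, 0.05], [0.1, 0.0, 0.1]]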

Re: [agi] Language modeling

2006-10-23 Thread YKY (Yan King Yin)
On 10/23/06, Ben Goertzel <[EMAIL PROTECTED]> wrote: > 2) the distinction between > 2a) using ungrounded formal symbols to pretend to represent knowledge, e.g. an explicit labeled internal symbol for "cat", one for "give", etc. > 2b) having an AI system recognize patterns in its perception and a

Re: [agi] Language modeling

2006-10-23 Thread YKY (Yan King Yin)
On 10/23/06, Matt Mahoney <[EMAIL PROTECTED]> wrote: > [...] > One aspect of NARS and many other structured or semi-structured knowledge representations that concerns me is the direct representation of concepts such as "is-a", equivalence, logic ("if-then", "and", "or", "not"), quantifiers ("all",
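
A toy sketch of the kind of direct "is-a" representation being questioned here, with the transitive lookup such a scheme implies (invented for illustration; not the actual NARS data structures):

    # Explicit is-a links with upward transitive closure.
    is_a = {"frog": "amphibian", "amphibian": "animal"}

    def isa(x, y):
        # Follow is-a links upward until y is found or the chain ends.
        while x in is_a:
            x = is_a[x]
            if x == y:
                return True
        return False

    print(isa("frog", "animal"))  # True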

Re: [agi] Language modeling

2006-10-23 Thread Ben Goertzel
Hi, > For instance, this means that the "cat" concept may well not be > expressed by a single "cat" term, but perhaps by a complex learned > (probabilistic) logical predicate. I don't think it's really useful to discuss representing word meanings without a sufficiently powerful notion of context (which ...

Re: [agi] Language modeling

2006-10-23 Thread Starglider
On 23 Oct 2006 at 10:06, Ben Goertzel wrote: > A very careful distinction needs to be drawn between: > 1) the distinction between > 1a) using probabilistic and formal-logical operators for representing knowledge > 1b) using neural-net type operators (or other purely quantitative, non-logic

Re: [agi] Language modeling

2006-10-23 Thread Richard Loosemore
Matt Mahoney wrote: > My concern is that structured knowledge is inconsistent with the development of language in children. As I mentioned earlier, natural language has a structure that allows direct training in neural networks using fast, online algorithms such as perceptron learning, rather than
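
A minimal sketch of the fast, online perceptron rule referred to above: each labeled example updates the weights only when the current prediction is wrong, so training is a single streaming pass. The features, labels, and names are invented for illustration:

    # One online perceptron step over a binary (+1/-1) example.
    def perceptron_step(w, b, x, y, lr=1.0):
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
        if pred != y:  # update only on mistakes
            w = [wi + lr * y * xi for wi, xi in zip(w, x)]
            b = b + lr * y
        return w, b

    w, b = [0.0, 0.0], 0.0
    for x, y in [([1, 0], 1), ([0, 1], -1), ([1, 1], 1)]:
        w, b = perceptron_step(w, b, x, y)
    print(w, b)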

Re: [agi] Language modeling

2006-10-23 Thread Ben Goertzel
Hi Matt, regarding logic-based knowledge representation and language/perceptual/action learning -- I understand the nature of your confusion, because the point you are confused on is exactly the biggest point of confusion for new members of the Novamente AI team. A very careful distinction needs to be drawn between ...

Re: [agi] Language modeling

2006-10-23 Thread Pei Wang
On 10/22/06, Matt Mahoney <[EMAIL PROTECTED]> wrote: > Also to Novamente, if I understand correctly. Terms are linked by a probability and confidence. This seems to me to be an optimization of a neural network or connectionist model, which is restricted to one number per link, representing probability ...
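
To make the "one number per link" contrast concrete: NARS attaches a pair of numbers to each statement, frequency and confidence, defined in the NARS literature as f = w+/w and c = w/(w+k) for positive evidence w+, total evidence w, and a constant k. The helper below is an illustrative sketch, not actual NARS code:

    # NARS-style two-component truth value from evidence counts.
    K = 1.0  # evidential horizon constant

    def truth_value(positive, total):
        frequency = positive / total if total else 0.5
        confidence = total / (total + K)
        return frequency, confidence

    print(truth_value(3, 4))  # (0.75, 0.8)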

[agi] Language modeling

2006-10-22 Thread Matt Mahoney
----- Original Message ----- From: Pei Wang <[EMAIL PROTECTED]> To: agi@v2.listbox.com Sent: Saturday, October 21, 2006 7:03:39 PM Subject: Re: [agi] SOTA > Well, in that sense NARS also has some resemblance to a neural network, as well as many other AI systems. Also to Novamente, if I understand correctly ...