Re: [agi] Understanding Natural Language

2006-11-30 Thread James Ratcliff
Once you have these sentences in predicate form, it becomes much easier to do some statistical matching on them, to group and classify them together to generate a set of more logical statements, and to disambiguate the simple English term you used first into a single Term entity in the
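A minimal sketch of that grouping step, assuming the sentences have already been reduced to (predicate, subject, value) triples; the encoding and names here are hypothetical illustrations, not James's actual scheme:

    # Group predicate triples and count outcomes to suggest general rules.
    from collections import Counter

    triples = [
        ("can_fly", "sparrow", True),
        ("can_fly", "robin", True),
        ("can_fly", "penguin", False),
    ]

    outcome_counts = Counter((pred, value) for pred, _, value in triples)
    totals = Counter(pred for pred, _, _ in triples)
    for (pred, value), n in outcome_counts.items():
        print(f"{pred}={value}: {n}/{totals[pred]} instances")  # e.g. can_fly=True: 2/3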

Re: [agi] Understanding Natural Language

2006-11-29 Thread J. Storrs Hall, PhD.
On Tuesday 28 November 2006 17:50, Philip Goetz wrote: I see that a raster is a vector. I see that you can have rasters at different resolutions. I don't see what you mean by "map the regions that represent the same face between higher and lower-dimensional spaces," or what you are taking the

Re: [agi] Understanding Natural Language

2006-11-29 Thread Philip Goetz
On 11/28/06, Matt Mahoney [EMAIL PROTECTED] wrote: First order logic (FOL) is good for expressing simple facts like "all birds have wings" or "no bird has hair," but not for statements like "most birds can fly." To do that you have to at least extend it with fuzzy logic (probability and
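A sketch of the extension Matt names, assuming a probability attached to each predicate; the 0.9 figure is an illustrative assumption, not a measured value:

    # Probabilistic predicates: "most birds can fly" becomes P(can_fly|bird)=0.9,
    # which FOL's all-or-nothing universals cannot express.
    facts = {
        ("bird", "has_wings"): 1.0,  # all birds have wings
        ("bird", "has_hair"):  0.0,  # no bird has hair
        ("bird", "can_fly"):   0.9,  # most birds can fly
    }

    def prob(category, predicate):
        return facts.get((category, predicate), 0.5)  # 0.5 when unknown

    print(prob("bird", "can_fly"))  # 0.9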

Re: [agi] Understanding Natural Language

2006-11-29 Thread Philip Goetz
Oops - looking back at my earlier post, I said that English sentences translate neatly into predicate logic statements. I should have left out "logic." I like using predicates to organize sentences. I made that post because Josh was pointing out some of the problems with logic, but then making

Re: [agi] Understanding Natural Language

2006-11-29 Thread J. Storrs Hall, PhD.
On Wednesday 29 November 2006 13:56, Matt Mahoney wrote: How is a raster scan (a 16K vector) of an image useful? The difference between two images of faces is the RMS of the pixel-wise differences between the images. Given an image of Tom, how do you compute the set of all
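The distance Matt describes, in a minimal sketch: treat each raster as a point in image space and take the RMS of the pixel differences. Random arrays stand in for real 128x128 (= 16K-pixel) images:

    import numpy as np

    a = np.random.rand(128, 128)  # stand-in for an image of Tom
    b = np.random.rand(128, 128)  # stand-in for another face image

    # RMS of the pixel-wise differences between the two rasters.
    rms = np.sqrt(np.mean((a - b) ** 2))
    print(rms)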

Re: [agi] Understanding Natural Language

2006-11-29 Thread J. Storrs Hall, PhD.
On Wednesday 29 November 2006 16:04, Philip Goetz wrote: On 11/29/06, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote: There will be many occurrences of the smaller subregions, corresponding to all different sizes and positions of Tom's face in the raster. In other words, the "Tom's face" region
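A sketch of why those occurrences multiply: every position and scale at which a face can sit in the raster is a distinct point in image space. Counting the candidate windows for one 128x128 raster (the scale step is an arbitrary assumption):

    def candidate_windows(image_size=128, min_face=16):
        windows, size = [], min_face
        while size <= image_size:
            for x in range(image_size - size + 1):
                for y in range(image_size - size + 1):
                    windows.append((x, y, size))
            size *= 2  # double the scale each step
        return windows

    print(len(candidate_windows()))  # tens of thousands of placements for one face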

Re: [agi] Understanding Natural Language

2006-11-29 Thread Philip Goetz
On 11/29/06, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote: On Wednesday 29 November 2006 16:04, Philip Goetz wrote: On 11/29/06, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote: There will be many occurrences of the smaller subregions, corresponding to all different sizes and positions of Tom's

Re: [agi] Understanding Natural Language

2006-11-29 Thread Russell Wallace
On 11/28/06, Philip Goetz [EMAIL PROTECTED] wrote: I see evidence of dimensionality reduction by humans in the fact that adopting a viewpoint has such a strong effect on the kind of information a person is able to absorb. In conversations about politics or religion, I often find ideas that to

Re: [agi] Understanding Natural Language

2006-11-28 Thread J. Storrs Hall, PhD.
On Monday 27 November 2006 10:35, Ben Goertzel wrote: Amusingly, one of my projects at the moment is to show that Novamente's economic attention allocation module can display Hopfield net type content-addressable-memory behavior on simple examples. As a preliminary step to integrating it with

Re: Re: [agi] Understanding Natural Language

2006-11-28 Thread Ben Goertzel
On 11/28/06, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote: On Monday 27 November 2006 10:35, Ben Goertzel wrote: Amusingly, one of my projects at the moment is to show that Novamente's economic attention allocation module can display Hopfield net type content-addressable-memory behavior on

Re: [agi] Understanding Natural Language

2006-11-28 Thread J. Storrs Hall, PhD.
On Monday 27 November 2006 10:35, Ben Goertzel wrote: ... An issue with Hopfield content-addressable memories is that their memory capability gets worse and worse as the networks get sparser and sparser. I did some experiments on this in 1997, though I never bothered to publish the results
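A minimal version of the kind of experiment Josh alludes to, assuming Hebbian training, random symmetric connection deletion, and synchronous updates; the sizes and noise level are arbitrary choices:

    import numpy as np

    rng = np.random.default_rng(0)
    n, n_patterns, connectivity = 200, 10, 0.5  # keep 50% of connections

    patterns = rng.choice([-1, 1], size=(n_patterns, n))
    W = (patterns.T @ patterns) / n              # Hebbian outer-product rule
    np.fill_diagonal(W, 0)
    keep = np.triu(rng.random((n, n)) < connectivity, 1)
    W = W * (keep | keep.T)                      # sparse but still symmetric

    def recall(cue, steps=20):
        s = cue.copy()
        for _ in range(steps):
            s = np.sign(W @ s)
            s[s == 0] = 1
        return s

    ok = sum(np.array_equal(recall(p * rng.choice([1, -1], n, p=[0.9, 0.1])), p)
             for p in patterns)                  # cue = pattern with 10% flipped bits
    print(f"{ok}/{n_patterns} patterns recalled at {connectivity:.0%} connectivity")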

Re: Re: [agi] Understanding Natural Language

2006-11-28 Thread Ben Goertzel
My approach, admittedly unusual, is to assume I have all the processing power and memory I need, up to a generous estimate of what the brain provides (a petaword and 100 petaMACs), and then see if I can come up with operations that do what it does. If not, it would be silly to try and do the
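A back-of-envelope for that budget, using the common rough figures of ~1e14 synapses and peak firing rates of ~1e3 Hz; both numbers are loose assumptions, not measurements:

    synapses = 1e14
    max_rate_hz = 1e3
    macs_per_sec = synapses * max_rate_hz  # 1e17 MAC/s = 100 petaMACs
    memory_words = 1e15                    # a petaword: >= one word per synapse
    print(f"{macs_per_sec:.0e} MAC/s, {memory_words:.0e} words")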

Re: [agi] Understanding Natural Language

2006-11-28 Thread Philip Goetz
On 11/24/06, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote: On Friday 24 November 2006 06:03, YKY (Yan King Yin) wrote: You talked mainly about how sentences require vast amounts of external knowledge to interpret, but it does not imply that those sentences cannot be represented in (predicate)

Re: Re: [agi] Understanding Natural Language

2006-11-28 Thread Philip Goetz
On 11/27/06, Ben Goertzel [EMAIL PROTECTED] wrote: An issue with Hopfield content-addressable memories is that their memory capability gets worse and worse as the networks get sparser and sparser. I did some experiments on this in 1997, though I never bothered to publish the results ... some

Re: [agi] Understanding Natural Language

2006-11-28 Thread Philip Goetz
On 11/26/06, Pei Wang [EMAIL PROTECTED] wrote: Therefore, the problem of using an n-space representation for AGI is not its theoretical possibility (it is possible), but its practical feasibility. I have no doubt that for many limited applications, n-space representation is the most natural and

Re: [agi] Understanding Natural Language

2006-11-28 Thread Matt Mahoney
On 11/24/06, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote: On Friday 24 November 2006 06:03, YKY (Yan King Yin) wrote

Re: [agi] Understanding Natural Language

2006-11-28 Thread J. Storrs Hall, PhD.
On Tuesday 28 November 2006 14:47, Philip Goetz wrote: The use of predicates for representation, and the use of logic for reasoning, are separate issues. I think it's pretty clear that English sentences translate neatly into predicate logic statements, and that such a transformation is likely

Re: [agi] Understanding Natural Language

2006-11-28 Thread Philip Goetz
I think that Matt and Josh are both misunderstanding what I said in the same way. Really, you're both attacking the use of logic on the predicates, not the predicates themselves as a representation, and so ignoring the distinction I was trying to create. I am not saying that rewriting English

Re: [agi] Understanding Natural Language

2006-11-28 Thread Philip Goetz
Oops, Matt actually is making a different objection than Josh. "Now it seems to me that you need to understand sentences before you can translate them into FOL, not the other way around." Before you can translate to FOL you have to parse the sentence, and before you can parse it you have to

Re: [agi] Understanding Natural Language

2006-11-28 Thread Philip Goetz
On 11/28/06, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote: Sorry -- should have been clearer. Constructive Solid Geometry. Manipulating shapes in high- (possibly infinite-) dimensional spaces. Suppose I want to represent a face as a point in a space. First, represent it as a raster. That is in

Re: [agi] Understanding Natural Language

2006-11-28 Thread Matt Mahoney
On 11/28/06, Philip Goetz [EMAIL PROTECTED] wrote: Oops, Matt actually is making a different objection than Josh. Now it seems to me that you need to understand sentences before you can translate them

Re: [agi] Understanding Natural Language

2006-11-27 Thread J. Storrs Hall, PhD.
On Sunday 26 November 2006 18:02, Mike Dougherty wrote: I was thinking about the N-space representation of an idea... Then I thought about the tilting table analogy Richard posted elsewhere (sorry, I'm terrible at citing sources). Then I started wondering what would happen if the N-space

Re: Re: [agi] Understanding Natural Language

2006-11-27 Thread Ben Goertzel
Amusingly, one of my projects at the moment is to show that Novamente's economic attention allocation module can display Hopfield net type content-addressable-memory behavior on simple examples. As a preliminary step to integrating it with other aspects of Novamente cognition (reasoning,

Re: Re: [agi] Understanding Natural Language

2006-11-27 Thread YKY (Yan King Yin)
I'm not saying that the n-space approach wouldn't work, but I have used that approach before and faced a problem. It was because of that problem that I switched to a logic-based approach. Maybe you can solve it. To illustrate it with an example, let's say the AGI can recognize apples, bananas,

Re: Re: [agi] Understanding Natural Language

2006-11-27 Thread Mike Dougherty
On 11/27/06, YKY (Yan King Yin) [EMAIL PROTECTED] wrote: The problem is that this thing, "on," is not definable in n-space via operations like AND, OR, NOT, etc. It seems that "on" is not definable by *any* hypersurface, so it cannot be learned by classifiers like feedforward neural networks or
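Mike's question in code form: a crude geometric test for "A is on B" from two bounding boxes. Note that the predicate is a function of the *pair* of objects, not a region around either object alone, which is the nub of YKY's objection. The box format and tolerance are assumptions:

    def is_on(a, b, tol=0.01):
        # Boxes are (x, y, width, height), with y increasing upward.
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        touching = abs(ay - (by + bh)) < tol      # A's bottom meets B's top
        overlap = ax < bx + bw and bx < ax + aw   # horizontal overlap
        return touching and overlap

    cup = (1.0, 1.0, 0.2, 0.2)
    table = (0.0, 0.0, 3.0, 1.0)
    print(is_on(cup, table))  # True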

Re: Re: [agi] Understanding Natural Language

2006-11-27 Thread YKY (Yan King Yin)
On 11/28/06, Mike Dougherty [EMAIL PROTECTED] wrote: perhaps my view of a hypersurface is wrong, but wouldn't a subset of the dimensions associated with an object be the physical dimensions? (ok, virtual physical dimensions) Is "on" determined by a point of contact between two objects? (A is

Re: [agi] Understanding Natural Language

2006-11-27 Thread J. Storrs Hall, PhD.
On Monday 27 November 2006 11:49, YKY (Yan King Yin) wrote: To illustrate it with an example, let's say the AGI can recognize apples, bananas, tables, chairs, the face of Einstein, etc, in the n-dimensional feature space. So, Einstein's face is defined by a hypersurface where each point is

Re: [agi] Understanding Natural Language

2006-11-26 Thread J. Storrs Hall, PhD.
On Saturday 25 November 2006 13:52, Ben Goertzel wrote: About Teddy Meese: a well-designed Teddy Moose is almost surely going to have the big antlers characterizing a male moose, rather than the head-profile of a female moose; and it would be disappointing if a Teddy Moose had the head and

Re: Re: [agi] Understanding Natural Language

2006-11-26 Thread Ben Goertzel
Hi, Therefore, the problem of using an n-space representation for AGI is not its theoretical possibility (it is possible), but its practical feasibility. I have no doubt that for many limited applications, n-space representation is the most natural and efficient choice. However, for a general

Re: Re: [agi] Understanding Natural Language

2006-11-26 Thread Pei Wang
On 11/26/06, Ben Goertzel [EMAIL PROTECTED] wrote: Hi, Therefore, the problem of using an n-space representation for AGI is not its theoretical possibility (it is possible), but its practical feasibility. I have no doubt that for many limited applications, n-space representation is the most

Re: [agi] Understanding Natural Language

2006-11-26 Thread J. Storrs Hall, PhD.
My best ideas at the moment don't have one big space where everything sits, but something more like a Society of Mind where each agent has its own space. New agents are being tried all the time by some heuristic search process, and will come with new dimensions if that does them any good.

Re: [agi] Understanding Natural Language

2006-11-26 Thread Richard Loosemore
J. Storrs Hall, PhD. wrote: My best ideas at the moment don't have one big space where everything sits, but something more like a Society of Mind where each agent has its own space. New agents are being tried all the time by some heuristic search process, and will come with new dimensions if

Re: [agi] Understanding Natural Language

2006-11-26 Thread J. Storrs Hall, PhD.
On Sunday 26 November 2006 14:14, Pei Wang wrote: In this design, the tough job is to make the agents work together to cover all kinds of tasks, and for this part, I'm afraid that the multi-dimensional space representation won't help much. Also, we haven't seen much work on high-level

Re: Re: [agi] Understanding Natural Language

2006-11-26 Thread Andrii (lOkadin) Zvorygin
On 11/24/06, Matt Mahoney [EMAIL PROTECTED] wrote: Andrii (lOkadin) Zvorygin [EMAIL PROTECTED] wrote: I personally don't understand why everyone seems to insist on using ambiguous illogical languages to express things when there are viable alternatives available

Re: [agi] Understanding Natural Language

2006-11-26 Thread Mike Dougherty
On 11/26/06, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote: But I really think that the metric properties of the spaces continue to help even at the very highest levels of abstraction. I'm willing to spend some time giving it a shot, anyway. So we'll see! I was thinking about the N-space

Re: Re: [agi] Understanding Natural Language

2006-11-26 Thread Matt Mahoney
On 11/25/06, Matt Mahoney [EMAIL PROTECTED] wrote: Andrii

Re: Re: [agi] Understanding Natural Language

2006-11-25 Thread Ben Goertzel
I constructed a while ago (mathematically) a detailed mapping from Novamente Atoms (nodes/links) into n-dimensional vectors. You can certainly view the state of a Novamente system at a given point in time as a collection of n-vectors, and the various cognition methods in Novamente as mappings
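One generic way to realize such a mapping, for illustration only (not Ben's actual Novamente construction): give each node the corresponding row of a weighted adjacency matrix as its coordinates, so links become dimensions:

    import numpy as np

    nodes = ["cat", "animal", "pet"]
    links = {("cat", "animal"): 0.9, ("cat", "pet"): 0.8}  # weighted links

    index = {name: i for i, name in enumerate(nodes)}
    vectors = np.zeros((len(nodes), len(nodes)))
    for (a, b), w in links.items():
        vectors[index[a], index[b]] = w
        vectors[index[b], index[a]] = w  # treat links as symmetric here

    print(vectors[index["cat"]])  # cat's coordinates in node-space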

Re: [agi] Understanding Natural Language

2006-11-25 Thread J. Storrs Hall, PhD.
On Saturday 25 November 2006 12:42, Ben Goertzel wrote: I'm afraid the analogies between vector space operations and cognitive operations don't really take you very far. For instance, you map conceptual blending into quantitative interpolation -- but as you surely know, it's not just *any*
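Ben's caveat in miniature: a convex combination of two concept vectors is trivial to write down, but nothing in it says which dimensions or weights make the blend a sensible one. The vectors and feature labels below are invented for illustration:

    import numpy as np

    lion = np.array([0.9, 0.1, 0.8])  # [ferocity, flies, has_mane] (made up)
    bird = np.array([0.2, 0.9, 0.0])

    def blend(a, b, alpha=0.5):
        return alpha * a + (1 - alpha) * b  # just *one* of many interpolations

    print(blend(lion, bird))  # a crude "griffin" point between the two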

Re: Re: Re: [agi] Understanding Natural Language

2006-11-25 Thread Ben Goertzel
On 11/24/06, Matt Mahoney [EMAIL PROTECTED] wrote: Andrii (lOkadin

Re: [agi] Understanding Natural Language

2006-11-24 Thread J. Storrs Hall, PhD.
On Friday 24 November 2006 06:03, YKY (Yan King Yin) wrote: You talked mainly about how sentences require vast amounts of external knowledge to interpret, but it does not imply that those sentences cannot be represented in (predicate) logical form. Substitute "bit string" for "predicate logic"

Re: Re: [agi] Understanding Natural Language

2006-11-24 Thread Ben Goertzel
Oh, I think the representation is quite important. In particular, logic lets you in for gazillions of inferences that are totally inapropos, with no good way to say which is better. Logic also has the enormous disadvantage that you tend to have frozen the terms and levels of abstraction. Actual

Re: Re: [agi] Understanding Natural Language

2006-11-24 Thread Andrii (lOkadin) Zvorygin
It was a true solar-plexus blow, and completely knocked out, Perkins staggered back against the instrument-board. His outflung arm pushed the power-lever out to its last notch, throwing full current through the bar, which was pointed straight up as it had been when they made their landing.

Re: [agi] Understanding Natural Language

2006-11-23 Thread Richard Loosemore
Excellent. Summarizing: the idea of understanding something (in this case a fragment of (written) natural language) involves many representations being constructed on many levels simultaneously (from word recognition through syntactic parsing to story-archetype recognition). There is