Re: Re: [agi] Understanding Natural Language

2006-11-28 Thread Ben Goertzel
On 11/28/06, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote: On Monday 27 November 2006 10:35, Ben Goertzel wrote: Amusingly, one of my projects at the moment is to show that Novamente's economic attention allocation module can display Hopfield net type content-addressable-memory behavior on

Re: Re: [agi] Understanding Natural Language

2006-11-28 Thread Ben Goertzel
My approach, admittedly unusual, is to assume I have all the processing power and memory I need, up to a generous estimate of what the brain provides (a petaword and 100 petaMACs), and then see if I can come up with operations that do what it does. If not, it would be silly to try and do the
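The "petaword and 100 petaMACs" budget above can be made concrete with back-of-the-envelope arithmetic. A minimal sketch, assuming a 64-bit (8-byte) word and the SI meaning of "peta" = 10^15, neither of which is stated in the original post:

```python
# Scale of the memory/compute budget mentioned above.
# Assumptions (not from the post): 1 word = 8 bytes, "peta" = 10**15.

PETA = 10**15

memory_words = 1 * PETA          # "a petaword" of memory
bytes_per_word = 8               # assumed 64-bit words
memory_bytes = memory_words * bytes_per_word

macs_per_second = 100 * PETA     # "100 petaMACs": multiply-accumulates per second

print(f"memory:  {memory_bytes / 10**15:.0f} PB (at 8 bytes/word)")
print(f"compute: {macs_per_second:.1e} MAC/s")
```

Under these assumptions the budget works out to 8 petabytes of storage and 10^17 multiply-accumulates per second.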

Re: Re: [agi] Understanding Natural Language

2006-11-28 Thread Philip Goetz
On 11/27/06, Ben Goertzel [EMAIL PROTECTED] wrote: An issue with Hopfield content-addressable memories is that their memory capability gets worse and worse as the networks get sparser and sparser. I did some experiments on this in 1997, though I never bothered to publish the results ... some
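The sparsity effect described above is easy to reproduce in miniature: store a few random patterns with the Hebbian rule, randomly delete a growing fraction of the weights, and watch how well a corrupted cue is cleaned up. All sizes, pattern counts, and pruning fractions below are illustrative, not taken from the 1997 experiments:

```python
# Toy version of the sparsity experiment described above: store random
# patterns in a Hopfield net, prune weights at random, and measure the
# overlap between the recovered state and the stored pattern.
import numpy as np

rng = np.random.default_rng(0)

def hopfield_weights(patterns):
    # Standard Hebbian outer-product rule, zero diagonal.
    n = patterns.shape[1]
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0.0)
    return w

def recall_overlap(w, pattern, n_flips, rng):
    # Corrupt the cue, run one synchronous update, measure overlap in [-1, 1].
    s = pattern.copy()
    idx = rng.choice(len(s), size=n_flips, replace=False)
    s[idx] *= -1
    s = np.sign(w @ s)
    s[s == 0] = 1
    return float(s @ pattern) / len(pattern)

n = 100
patterns = rng.choice([-1.0, 1.0], size=(3, n))
w = hopfield_weights(patterns)

overlaps = []
for sparsity in (0.0, 0.5, 0.9):
    mask = rng.random((n, n)) >= sparsity
    mask = mask & mask.T          # keep the pruned matrix symmetric
    ov = recall_overlap(w * mask, patterns[0], n_flips=10, rng=rng)
    overlaps.append(ov)
    print(f"sparsity {sparsity:.1f}: recall overlap {ov:+.2f}")
```

Recall overlap of +1.0 means perfect retrieval; as the pruning fraction rises, the overlap degrades, which is the trend the post reports.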

Re: Re: [agi] Understanding Natural Language

2006-11-27 Thread Ben Goertzel
Amusingly, one of my projects at the moment is to show that Novamente's economic attention allocation module can display Hopfield net type content-addressable-memory behavior on simple examples. As a preliminary step to integrating it with other aspects of Novamente cognition (reasoning,
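The "Hopfield net type content-addressable-memory behavior" referred to above is the textbook effect that a partially corrupted pattern relaxes back to the stored original. This is not Novamente code, just a minimal demonstration of that behavior with a single stored pattern:

```python
# Minimal Hopfield-style content-addressable memory: store one +/-1 pattern
# with the Hebbian rule, present a corrupted cue, and recover the original
# in a single synchronous update. (Illustrative only -- not Novamente code.)
import numpy as np

pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1, 1, 1,
                    -1, 1, -1, -1, 1, 1, -1, 1, -1, -1], dtype=float)
n = len(pattern)

# Hebbian outer-product weights, zero self-connections.
w = np.outer(pattern, pattern)
np.fill_diagonal(w, 0.0)

# Corrupt the cue by flipping three components.
cue = pattern.copy()
cue[[2, 7, 13]] *= -1

# One synchronous update: s <- sign(W s).
recovered = np.sign(w @ cue)

print("cue overlap:      ", float(cue @ pattern) / n)        # 0.7
print("recovered overlap:", float(recovered @ pattern) / n)  # 1.0
```

With one stored pattern and three of twenty bits flipped, the update provably restores the pattern exactly: each unit's net input is 14 times its stored sign, minus a +/-1 term, so the sign never changes incorrectly.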

Re: Re: [agi] Understanding Natural Language

2006-11-27 Thread YKY (Yan King Yin)
I'm not saying that the n-space approach wouldn't work, but I have used that approach before and faced a problem. It was because of that problem that I switched to a logic-based approach. Maybe you can solve it. To illustrate it with an example, let's say the AGI can recognize apples, bananas,

Re: Re: [agi] Understanding Natural Language

2006-11-27 Thread Mike Dougherty
On 11/27/06, YKY (Yan King Yin) [EMAIL PROTECTED] wrote: The problem is that this thing, "on", is not definable in n-space via operations like AND, OR, NOT, etc. It seems that "on" is not definable by *any* hypersurface, so it cannot be learned by classifiers like feedforward neural networks or

Re: Re: [agi] Understanding Natural Language

2006-11-27 Thread YKY (Yan King Yin)
On 11/28/06, Mike Dougherty [EMAIL PROTECTED] wrote: perhaps my view of a hypersurface is wrong, but wouldn't a subset of the dimensions associated with an object be the physical dimensions? (ok, virtual physical dimensions) Is "on" determined by a point of contact between two objects? (A is
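The contact idea in the exchange above is straightforward to write down by hand. A toy sketch, using hypothetical axis-aligned boxes rather than any representation from the thread: the predicate is easy to compute from *relative* geometry (contact plus horizontal support), which is exactly why it does not correspond to any fixed region, or hypersurface, over the raw coordinates of the two objects:

```python
# Hand-written relational test for "A on B" using axis-aligned boxes
# (x, y, w, h), with y the bottom edge. Toy illustration of the point
# above: "on" is natural over relative geometry, not raw n-space regions.
from typing import NamedTuple

class Box(NamedTuple):
    x: float   # left edge
    y: float   # bottom edge
    w: float   # width
    h: float   # height

def on(a: Box, b: Box, tol: float = 0.01) -> bool:
    """True if box a rests on top of box b."""
    touching = abs(a.y - (b.y + b.h)) <= tol          # a's bottom meets b's top
    overlap = min(a.x + a.w, b.x + b.w) - max(a.x, b.x)
    return touching and overlap > 0                   # some horizontal support

table = Box(x=0.0, y=0.0, w=2.0, h=1.0)
apple = Box(x=0.5, y=1.0, w=0.2, h=0.2)
banana = Box(x=5.0, y=1.0, w=0.4, h=0.1)   # same height, off to the side

print(on(apple, table))    # True
print(on(banana, table))   # False: no horizontal overlap
```

Note that translating both boxes together leaves the answer unchanged, so no single threshold on the objects' absolute coordinates can capture it: the classifier would have to learn the relation for every position.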

Re: Re: [agi] Understanding Natural Language

2006-11-26 Thread Ben Goertzel
Hi, Therefore, the problem of using an n-space representation for AGI is not its theoretical possibility (it is possible), but its practical feasibility. I have no doubt that for many limited applications, n-space representation is the most natural and efficient choice. However, for a general

Re: Re: [agi] Understanding Natural Language

2006-11-26 Thread Pei Wang
On 11/26/06, Ben Goertzel [EMAIL PROTECTED] wrote: Hi, Therefore, the problem of using an n-space representation for AGI is not its theoretical possibility (it is possible), but its practical feasibility. I have no doubt that for many limited applications, n-space representation is the most

Re: Re: [agi] Understanding Natural Language

2006-11-26 Thread Andrii (lOkadin) Zvorygin
] Understanding Natural Language On 11/24/06, Matt Mahoney [EMAIL PROTECTED] wrote: Andrii (lOkadin) Zvorygin [EMAIL PROTECTED] wrote: I personally don't understand why everyone seems to insist on using ambiguous illogical languages to express things when there are viable alternatives available

Re: Re: [agi] Understanding Natural Language

2006-11-26 Thread Matt Mahoney
. -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: Andrii (lOkadin) Zvorygin [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Sunday, November 26, 2006 4:37:02 PM Subject: Re: Re: [agi] Understanding Natural Language On 11/25/06, Matt Mahoney [EMAIL PROTECTED] wrote: Andrii

Re: Re: [agi] Understanding Natural Language

2006-11-25 Thread Ben Goertzel
I constructed a while ago (mathematically) a detailed mapping from Novamente Atoms (nodes/links) into n-dimensional vectors. You can certainly view the state of a Novamente system at a given point in time as a collection of n-vectors, and the various cognition methods in Novamente as mappings
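The snippet above does not spell out the actual Atom-to-vector mapping, but the general idea of viewing a node/link system's state as n-vectors, and a cognition method as a mapping on those vectors, can be sketched. Everything below (the node names, link strengths, and the spreading-activation step standing in for "cognition method") is illustrative, not Novamente internals:

```python
# Illustrative sketch (NOT the actual Novamente mapping): each node is one
# coordinate of an n-vector, weighted links form an n x n matrix, and one
# cognition step -- here a spreading-activation pass -- is a linear map
# from state vectors to state vectors.
import numpy as np

nodes = ["cat", "animal", "pet", "dog"]
index = {name: i for i, name in enumerate(nodes)}
n = len(nodes)

# Weighted directed links (source, target, strength). Toy values.
links = [("cat", "animal", 0.9), ("cat", "pet", 0.8),
         ("dog", "animal", 0.9), ("dog", "pet", 0.8)]

W = np.zeros((n, n))
for src, dst, strength in links:
    W[index[dst], index[src]] = strength   # row = target, column = source

# System state at one instant as an n-vector: activate "cat" only.
state = np.zeros(n)
state[index["cat"]] = 1.0

# One cognition step = one mapping applied to the state vector.
next_state = W @ state
for name in nodes:
    print(f"{name:7s} {next_state[index[name]]:.2f}")
```

Activation flows from "cat" to "animal" (0.9) and "pet" (0.8); nonlinear cognition methods would replace the plain matrix multiply with richer mappings on the same vector space.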

Re: Re: Re: [agi] Understanding Natural Language

2006-11-25 Thread Ben Goertzel
. -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: Andrii (lOkadin) Zvorygin [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Saturday, November 25, 2006 5:01:04 AM Subject: Re: Re: [agi] Understanding Natural Language On 11/24/06, Matt Mahoney [EMAIL PROTECTED] wrote: Andrii (lOkadin

Re: Re: [agi] Understanding Natural Language

2006-11-24 Thread Ben Goertzel
Oh, I think the representation is quite important. In particular, logic lets you in for gazillions of inferences that are totally inapropos, with no good way to say which is better. Logic also has the enormous disadvantage that you tend to have frozen the terms and levels of abstraction. Actual

Re: Re: [agi] Understanding Natural Language

2006-11-24 Thread Andrii (lOkadin) Zvorygin
It was a true solar-plexus blow, and completely knocked out, Perkins staggered back against the instrument-board. His outflung arm pushed the power-lever out to its last notch, throwing full current through the bar, which was pointed straight up as it had been when they made their landing.