Once you have these sentences in predicate form, it becomes much easier to do
some statistical matching on them, to group and classify them together to
generate a set of more logical statements, and to disambiguate the simple
English term you use at first into a single Term entity in the
On Tuesday 28 November 2006 17:50, Philip Goetz wrote:
I see that a raster is a vector. I see that you can have rasters at
different resolutions. I don't see what you mean by "map the regions
that represent the same face between higher and lower-dimensional
spaces," or what you are taking the
On 11/28/06, Matt Mahoney [EMAIL PROTECTED] wrote:
First order logic (FOL) is good for expressing simple facts like "all birds have wings" or "no
bird has hair," but not for statements like "most birds can fly." To do that you have to at
least extend it with fuzzy logic (probability and
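Matt's extension can be sketched as a toy rule base where each predicate carries a probability instead of a hard truth value (the predicate names and the 0.9 are invented for illustration, not from any actual system):

```python
# Toy probabilistic predicate store: each (category, predicate) rule
# carries a probability, so "most birds can fly" becomes
# P(flies | bird) = 0.9 rather than a universal that one penguin falsifies.
rules = {
    ("bird", "has_wings"): 1.0,   # all birds have wings
    ("bird", "has_hair"): 0.0,    # no bird has hair
    ("bird", "flies"): 0.9,       # most birds can fly
}

def query(category, predicate):
    """Return the probability that members of `category` satisfy `predicate`."""
    return rules.get((category, predicate), 0.5)  # 0.5 = no evidence either way

print(query("bird", "flies"))      # 0.9
print(query("bird", "has_hair"))   # 0.0
```

Plain FOL has no slot for the 0.9; that is the whole point of the extension.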
Oops - looking back at my earlier post, I said that English sentences
translate neatly into predicate logic statements. I should have left
out "logic." I like using predicates to organize sentences. I made
that post because Josh was pointing out some of the problems with
logic, but then making
On Wednesday 29 November 2006 13:56, Matt Mahoney wrote:
How is a raster scan (a 16K vector) of an image useful? The difference
between two images of faces is the RMS of the image obtained by
subtracting them pixel by pixel. Given an image of Tom, how do you compute
the set of all
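The distance Matt is describing is just the root-mean-square of a pixel-wise difference; a minimal sketch with synthetic 128x128 rasters (the sizes and data here are made up):

```python
import numpy as np

def rms_distance(a, b):
    """RMS of the pixel-wise difference between two equal-sized rasters."""
    d = a.astype(float) - b.astype(float)
    return np.sqrt(np.mean(d * d))

# Two synthetic 128x128 "images": identical rasters are at distance 0,
# and a uniform +10 offset gives an RMS of exactly 10.
img = np.random.default_rng(0).integers(0, 256, size=(128, 128))
print(rms_distance(img, img))        # 0.0
print(rms_distance(img, img + 10))   # 10.0
```

Matt's question stands: this metric says how far apart two rasters are, not what "all images of Tom" looks like as a region of the space.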
On Wednesday 29 November 2006 16:04, Philip Goetz wrote:
On 11/29/06, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote:
There will be many occurrences of the smaller subregions, corresponding to
all different sizes and positions of Tom's face in the raster. In other
words, the Tom's-face region
On 11/28/06, Philip Goetz [EMAIL PROTECTED] wrote:
I see evidence of dimensionality reduction by humans in the fact that
adopting a viewpoint has such a strong effect on the kind of
information a person is able to absorb. In conversations about
politics or religion, I often find ideas that to
On Monday 27 November 2006 10:35, Ben Goertzel wrote:
Amusingly, one of my projects at the moment is to show that
Novamente's economic attention allocation module can display
Hopfield net type content-addressable-memory behavior on simple
examples. As a preliminary step to integrating it with
On Monday 27 November 2006 10:35, Ben Goertzel wrote:
...
An issue with Hopfield content-addressable memories is that their
memory capability gets worse and worse as the networks get sparser and
sparser. I did some experiments on this in 1997, though I never
bothered to publish the results
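A minimal Hopfield-style content-addressable memory, for readers who want to reproduce this kind of experiment (the sizes and the `density` knob are illustrative, not Ben's 1997 setup):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100
patterns = rng.choice([-1, 1], size=(3, N))   # three stored +/-1 patterns

def weights(density=1.0):
    """Hebbian weight matrix; density < 1 randomly deletes connections."""
    W = patterns.T @ patterns / N
    np.fill_diagonal(W, 0)
    mask = rng.random((N, N)) < density
    mask = np.triu(mask, 1)
    mask = mask | mask.T                      # keep the net symmetric
    return W * mask

def recall(W, probe, steps=20):
    """Iterate synchronous sign updates starting from `probe`."""
    s = probe.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

# A stored pattern should be a fixed point of the dense net, and a probe
# with 10 flipped bits should settle back onto the stored pattern.
W = weights(1.0)
probe = patterns[0].copy()
probe[:10] *= -1
print(np.array_equal(recall(W, probe), patterns[0]))
```

Rerunning `recall` with `weights(0.2)` on the same probe illustrates the degradation Ben describes: with most connections deleted, the corrupted probe often fails to settle back onto the stored pattern.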
My approach,
admittedly unusual, is to assume I have all the processing power and memory I
need, up to a generous estimate of what the brain provides (a petaword of memory
and 100 petaMACs), and then see if I can come up with operations that do what it
does. If not, it would be silly to try and do the
On 11/24/06, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote:
On Friday 24 November 2006 06:03, YKY (Yan King Yin) wrote:
You talked mainly about how sentences require vast amounts of external
knowledge to interpret, but it does not imply that those sentences cannot
be represented in (predicate)
On 11/26/06, Pei Wang [EMAIL PROTECTED] wrote:
Therefore, the problem of using an n-space representation for AGI is
not its theoretical possibility (it is possible), but its practical
feasibility. I have no doubt that for many limited applications,
n-space representation is the most natural and
- Original Message
From: Philip Goetz [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, November 28, 2006 2:47:41 PM
Subject: Re: [agi] Understanding Natural Language
On Tuesday 28 November 2006 14:47, Philip Goetz wrote:
The use of predicates for representation, and the use of logic for
reasoning, are separate issues. I think it's pretty clear that
English sentences translate neatly into predicate logic statements,
and that such a transformation is likely
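The distinction being drawn here, predicates as a way of organizing a sentence rather than as fodder for inference, can be illustrated with plain tuples (the event and role names are invented for the example):

```python
# "John gave Mary a book" organized as predicates -- nothing here commits
# us to doing logical inference; the tuples are just structured storage.
sentence = [
    ("give", "e1"),             # an event e1 of giving
    ("agent", "e1", "John"),
    ("recipient", "e1", "Mary"),
    ("theme", "e1", "book1"),
    ("isa", "book1", "book"),
]

def facts_about(entity, facts):
    """Retrieve every predicate mentioning `entity` -- matching, not deduction."""
    return [f for f in facts if entity in f[1:]]

print(facts_about("e1", sentence))
```

Whether you then run a theorem prover, a statistical matcher, or nothing at all over these tuples is a separate decision from choosing the representation.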
I think that Matt and Josh are both misunderstanding what I said in
the same way. Really, you're both attacking the use of logic on the
predicates, not the predicates themselves as a representation, and so
ignoring the distinction I was trying to draw. I am not saying that
rewriting English
Oops - Matt is actually making a different objection from Josh's.
Now it seems to me that you need to understand sentences before you can
translate them into FOL, not the other way around. Before you can translate to
FOL you have to parse the sentence, and before you can parse it you have to
On 11/28/06, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote:
Sorry -- should have been clearer. Constructive Solid Geometry. Manipulating
shapes in high- (possibly infinite-) dimensional spaces.
Suppose I want to represent a face as a point in a space. First, represent it
as a raster. That is in
From: Philip Goetz [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, November 28, 2006 5:45:51 PM
Subject: Re: [agi] Understanding Natural Language
On Sunday 26 November 2006 18:02, Mike Dougherty wrote:
I was thinking about the N-space representation of an idea... Then I
thought about the tilting-table analogy Richard posted elsewhere (sorry,
I'm terrible at citing sources). Then I started wondering what would
happen if the N-space
Amusingly, one of my projects at the moment is to show that
Novamente's economic attention allocation module can display
Hopfield net type content-addressable-memory behavior on simple
examples. As a preliminary step to integrating it with other aspects
of Novamente cognition (reasoning,
I'm not saying that the n-space approach wouldn't work, but I have used that
approach before and faced a problem. It was because of that problem that I
switched to a logic-based approach. Maybe you can solve it.
To illustrate it with an example, let's say the AGI can recognize apples,
bananas,
On 11/27/06, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
The problem is that this thing, "on," is not definable in n-space via
operations like AND, OR, NOT, etc. It seems that "on" is not definable by
*any* hypersurface, so it cannot be learned by classifiers like feedforward
neural networks or
On 11/28/06, Mike Dougherty [EMAIL PROTECTED] wrote:
perhaps my view of a hypersurface is wrong, but wouldn't a subset of the
dimensions associated with an object be the physical dimensions? (ok,
virtual physical dimensions)
Is "on" determined by a point of contact between two objects? (A is
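Mike's point-of-contact suggestion can be made concrete for axis-aligned boxes; this is a deliberately crude sketch (real scenes would need tolerances, gravity, and support reasoning):

```python
def is_on(a, b, eps=1e-6):
    """True if box `a` rests on box `b`.

    Boxes are (xmin, xmax, ymin, ymax) with y pointing up: `a` is "on" `b`
    when a's bottom touches b's top and their x-extents overlap.
    """
    ax0, ax1, ay0, ay1 = a
    bx0, bx1, by0, by1 = b
    touching = abs(ay0 - by1) < eps      # a's bottom meets b's top
    overlap = ax0 < bx1 and bx0 < ax1    # horizontal extents intersect
    return touching and overlap

cup = (1.0, 2.0, 5.0, 6.0)
table = (0.0, 10.0, 0.0, 5.0)
print(is_on(cup, table))   # True
print(is_on(table, cup))   # False
```

Note this is a hand-written geometric test, which is YKY's point: the relation is easy to *program* over physical coordinates but hard to carve out as a single hypersurface for a classifier to learn.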
On Monday 27 November 2006 11:49, YKY (Yan King Yin) wrote:
To illustrate it with an example, let's say the AGI can recognize apples,
bananas, tables, chairs, the face of Einstein, etc, in the n-dimensional
feature space. So, Einstein's face is defined by a hypersurface where each
point is
On Saturday 25 November 2006 13:52, Ben Goertzel wrote:
About Teddy Meese: a well-designed Teddy Moose is almost surely going
to have the big antlers characterizing a male moose, rather than the
head-profile of a female moose; and it would be disappointing if a
Teddy Moose had the head and
Hi,
Therefore, the problem of using an n-space representation for AGI is
not its theoretical possibility (it is possible), but its practical
feasibility. I have no doubt that for many limited applications,
n-space representation is the most natural and efficient choice.
However, for a general
My best ideas at the moment don't have one big space where everything sits,
but something more like a Society of Mind where each agent has its own space.
New agents are being tried all the time by some heuristic search process, and
will come with new dimensions if that does them any good.
On Sunday 26 November 2006 14:14, Pei Wang wrote:
In this design, the tough job is to make the agents work together
to cover all kinds of tasks, and for this part, I'm afraid that the
multi-dimensional space representation won't help much. Also, we
haven't seen much work on high-level
On 11/24/06, Matt Mahoney [EMAIL PROTECTED] wrote:
Andrii (lOkadin) Zvorygin [EMAIL PROTECTED] wrote:
I personally don't understand why everyone seems to insist on using
ambiguous, illogical languages to express things when there are viable
alternatives available
On 11/26/06, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote:
But I really think that the metric properties of the spaces continue to
help even at the very highest levels of abstraction. I'm willing to spend
some time giving it a shot, anyway. So we'll see!
I was thinking about the N-space
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Andrii (lOkadin) Zvorygin [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, November 26, 2006 4:37:02 PM
Subject: Re: Re: [agi] Understanding Natural Language
On 11/25/06, Matt Mahoney [EMAIL PROTECTED] wrote:
Andrii
I constructed a while ago (mathematically) a detailed mapping from
Novamente Atoms (nodes/links) into n-dimensional vectors. You can
certainly view the state of a Novamente system at a given point in
time as a collection of n-vectors, and the various cognition methods
in Novamente as mappings
On Saturday 25 November 2006 12:42, Ben Goertzel wrote:
I'm afraid the analogies between vector space operations and cognitive
operations don't really take you very far.
For instance, you map conceptual blending into quantitative
interpolation -- but as you surely know, it's not just **any**
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Andrii (lOkadin) Zvorygin [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Saturday, November 25, 2006 5:01:04 AM
Subject: Re: Re: [agi] Understanding Natural Language
On 11/24/06, Matt Mahoney [EMAIL PROTECTED] wrote:
Andrii (lOkadin
On Friday 24 November 2006 06:03, YKY (Yan King Yin) wrote:
You talked mainly about how sentences require vast amounts of external
knowledge to interpret, but it does not imply that those sentences cannot
be represented in (predicate) logical form.
Substitute "bit string" for "predicate logic"
Oh, I think the representation is quite important. In particular, logic lets
you in for gazillions of inferences that are totally inapposite, with no good
way to say which is better. Logic also has the enormous disadvantage that you
tend to have frozen the terms and levels of abstraction. Actual
It was a true solar-plexus blow, and completely knocked out, Perkins
staggered back against the instrument-board. His outflung arm pushed the
power-lever out to its last notch, throwing full current through the
bar, which was pointed straight up as it had been when they made their
landing.
Excellent.
Summarizing: the idea of understanding something (in this case a
fragment of (written) natural language) involves many representations
being constructed on many levels simultaneously (from word recognition
through syntactic parsing to story-archetype recognition). There is