Boris,
I am just not getting this.  So let me try starting with some simple questions.
I had said, "Forcing semantic values into 3-dimensional orthogonal
space seems amazingly confused to me."
You replied,
"You keep confusing source with destination, because you insist on
operating within your declarative memory, which is a rather
superficial subset of your cognitive model :)."

Are you replying using your theory as a model of the mind (indeed, as
a model of my mind!) with a smiley face to represent some humor about
doing that?  Did you think that my statement about forcing semantic
values was made in reference to something in your theory?  Because
that is not what I meant.  I was just saying that I have read papers
about using semantic vectors, and my thought is that trying to
force semantic vectors into 3-dimensional space seems confused.

And, are you saying that declarative memory is a destination in your
model rather than a source?  Is declarative memory derived?  That is
what you are saying, right?

Is your theory a theory of how the brain works, a theory for
artificial general intelligence using computers, or both?

Do you regularly see the kinds of thinking that people do in terms
of your model?
Jim Bromer



--------------- Previous Messages ---------------
 Jim,
> I don't understand your comments about detecting patterns. You said:

This is interactive pattern projection, but you have to discover those
patterns first. Technically, you simply multiply all the vectors in a
pattern by a relative distance to a target coordinate. And then you
compare multiple patterns projected to the same coordinate, & multiply
the difference by relative strength of each pattern. That gives you a
combined prediction, or probability distribution if the patterns are
mutually exclusive.
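A minimal sketch of that projection step, reading "multiply all the vectors in a pattern by a relative distance" as linear extrapolation and "relative strength" as a normalized weight. All names, the dict layout, and the linear reading are my assumptions, not anything from Boris's text:

```python
# Each pattern carries a value at a coordinate, a set of vectors
# (rates of change), and a strength. Projecting to a target scales
# each vector by the distance to that target; patterns projected to
# the same coordinate are then combined weighted by strength.

def project(pattern, target):
    """Extrapolate a pattern's value to a target coordinate."""
    distance = target - pattern["coord"]
    # multiply each vector (derivative) by relative distance to target
    return pattern["value"] + sum(v * distance for v in pattern["vectors"])

def combined_prediction(patterns, target):
    """Strength-weighted combination of projections to one coordinate."""
    total = sum(p["strength"] for p in patterns)
    return sum(p["strength"] / total * project(p, target) for p in patterns)

p1 = {"value": 10.0, "coord": 0.0, "vectors": [1.0], "strength": 3.0}
p2 = {"value": 14.0, "coord": 2.0, "vectors": [0.5], "strength": 1.0}
print(combined_prediction([p1, p2], target=4.0))  # → 14.25
```

If the patterns were mutually exclusive, the normalized strengths (0.75 and 0.25 here) would instead be read as a probability distribution over the two projected values.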

That comment was about projecting patterns, not detecting them.

> What kind of patterns are you talking about? How do the elemental 
> observations (from the sensory device) get turned into vectors?

Comparisons generate derivatives. A vector is d(input) over
d(coordinate). Conventionally, it's over multiple coordinates
(dimensions), & the input can be a lower coordinate, but that's not
essential.
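A toy illustration of "a vector is d(input) over d(coordinate)": comparing consecutive inputs along a coordinate yields finite-difference derivatives. The function name and the single-dimension simplification are mine:

```python
# d(input) / d(coordinate) from pairwise comparison of samples.

def derive(inputs, coords):
    """Finite-difference derivatives between consecutive samples."""
    return [(i1 - i0) / (c1 - c0)
            for (i0, i1), (c0, c1) in zip(zip(inputs, inputs[1:]),
                                          zip(coords, coords[1:]))]

print(derive([2.0, 4.0, 8.0], [0.0, 1.0, 3.0]))  # → [2.0, 2.0]
```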

> Are you saying that the "higher level of search and generalization" are 
> where/how the pattern vectors are created?

No, all levels.

> Why or how would you pick out a particular target coordinate to use to combine 
> a prediction?

Well, coordinate resolution is variable, so I am talking about a
min->max span. Basically, vector projection is part of input selection
for a higher-level search. The target coordinate span is a feedback
from that higher level, or, if there aren't any, current_search_span *
selection_rate: preset lossiness / sparseness of representation on the
higher level.
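The fallback Boris describes can be stated in a few lines. The `current_search_span * selection_rate` expression and the "preset lossiness / sparseness" reading are taken directly from the text; the function shape and the numbers are my own:

```python
# Target coordinate span: feedback from the higher level if any,
# otherwise current_search_span * selection_rate, where
# selection_rate is the preset lossiness / sparseness of
# representation on the higher level.

def target_span(current_search_span, selection_rate, feedback_span=None):
    if feedback_span is not None:
        return feedback_span          # higher-level feedback wins
    return current_search_span * selection_rate

print(target_span(100.0, 0.25))        # no feedback → 25.0
print(target_span(100.0, 0.25, 40.0))  # feedback present → 40.0
```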

> Are you saying that all predictions have individual coordinates?

Individual coordinate span. It's what + where; you've got to have both.

> That alone means that they would have to exist in dynamic virtual space of 
> many dimensions. Forcing semantic values into 3-dimensional orthogonal space 
> seems amazingly confused to me.

You keep confusing source with destination, because you insist on
operating within your declarative memory, which is a rather
superficial subset of your cognitive model :).

We *derive* all our "semantic" values from 4D-continuous observation,
no need to "force" them into it.

> What kind of space would your vectors exist in, how do they get there and why 
> do you choose a particular coordinate for a combination of predictions?

As I said, hierarchical search generates incremental syntax, &
variables within it are individually evaluated for search on
successive levels. The strongest variable, whether it's an original
coordinate | modality or a derivative thereof, becomes a coordinate
for a higher level. The strength here must be averaged over higher
level span.
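A sketch of that promotion step: each variable's strength is averaged over the higher-level span, and the strongest one (original coordinate, modality, or derivative) becomes the coordinate for the next level. The data layout and names are assumptions on my part:

```python
# Promote the strongest variable to the next level's coordinate,
# with strength averaged over the higher-level span.

def promote(variables, span):
    """Pick the variable with the highest mean strength over a span."""
    def mean_strength(var):
        window = var["strengths"][:span]
        return sum(window) / len(window)
    return max(variables, key=mean_strength)["name"]

variables = [
    {"name": "original_coord", "strengths": [3.0, 3.0, 3.0]},
    {"name": "derivative",     "strengths": [4.0, 2.0, 1.0]},
]
print(promote(variables, span=1))  # → derivative (wins on a short span)
print(promote(variables, span=3))  # → original_coord (wins averaged over the span)
```

The two calls show why averaging over the higher-level span matters: the winner can flip as the span grows.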

It's hard to explain this on "semantic" level, which is profoundly
confused in humans anyway. But a good intermediate example is Periodic
Table. You take atomic mass (which is a derived, not an original
variable) as top coordinate, compare pH value along that coordinate, &
notice recurrent periodicity in its variation. Since pH is a main
chemical property, you then use it as a primary dimension that defines
a period, & atomic mass becomes a secondary dimension that defines a
sequence of periods. Both dimensions are derived; they may seem kind
of halfway between original & "semantic", but the same derivation
process will get you to the latter.
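A toy version of the Periodic Table example: order the elements by the derived variable (atomic mass / number) and detect recurrence in a chemical property along it. I use the common valences of the first 18 elements as the property, since the "pH" above reads like a stand-in for some periodic chemical property; the mismatch-counting detector is my own, not anything from the theory:

```python
# Common valences of H..Ar, ordered by atomic number (a derived
# coordinate). The period of 8 emerges from the data itself.

valence = [1, 0,                          # H, He
           1, 2, 3, 4, 3, 2, 1, 0,        # Li .. Ne
           1, 2, 3, 4, 3, 2, 1, 0]        # Na .. Ar

def best_period(seq, max_lag):
    """Lag with the fewest mismatches between seq and seq shifted by lag."""
    def mismatches(lag):
        return sum(a != b for a, b in zip(seq, seq[lag:]))
    return min(range(1, max_lag + 1), key=mismatches)

print(best_period(valence, max_lag=10))  # → 8
```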

http://www.cognitivealgorithm.info/2012/01/cognitive-algorithm.html

Boris,

I don't understand your comments about detecting patterns. You said:

This is interactive pattern projection, but you have to discover those
patterns first. Technically, you simply multiply all the vectors in a
pattern by a relative distance to a target coordinate. And then you
compare multiple patterns projected to the same coordinate, & multiply
the difference by relative strength of each pattern. That gives you a
combined prediction, or probability distribution if the patterns are
mutually exclusive :).

What kind of patterns are you talking about?  How do the elemental
observations (from the sensory device) get turned into vectors?  Are
you saying that the "higher level of search and generalization" are
where/how the pattern vectors are created? Why or how would you pick
out a particular target coordinate to use to combine a prediction?  Are
you saying that all predictions have individual coordinates?

I have read papers on Semantic Vectors (I do not need to be told that
the sources of semantic vectors are different from the sources of the
products of your system), and I have always felt that they were
absurdly inappropriate for semantics (or concepts) because they forced
semantic concepts into a system that they did not fit into.  As is
so obvious to Two-Door, concepts are relativistic. That alone means
that they would have to exist in a dynamic virtual space of many
dimensions.  Forcing semantic values into 3-dimensional orthogonal
space seems amazingly confused to me.

What kind of space would your vectors exist in, how do they get there
and why do you choose a particular coordinate for a combination of
predictions?

(Incidentally, just to remind you, my ideas of concepts are not
necessarily expressed as vectors, although I am not close-minded about
the idea.)

Jim Bromer


On Tue, Aug 21, 2012 at 2:22 PM, Boris Kazachenko <[email protected]> wrote:

> On the other hand I am interested in conjectures about conceptual vectors and 
> stuff like that

You can't formalize "conceptual" vectors, except in terms of
"conceptual" coordinates .

Jim Bromer

Thanks for the smiley faces Boris...
I disagree that you have to multiply all the vectors in a pattern by
a relative distance to a target coordinate in order to combine
imagined complex ideas and related observations. Our theories are
very different. (On the other hand I am interested in conjectures
about conceptual vectors and stuff like that.)

I am interested in a continuation of the explanation of your theories
and I hope to get back to it soon.
Jim Bromer


On Tue, Aug 21, 2012 at 7:57 AM, Boris Kazachenko <[email protected]> wrote:

Jim,

> Where Boris and I disagree is that I feel that because of relativity the 
> input source of an idea may not be the most elemental source of the idea 
> that needs to be considered.

Right, but that's the simplest assumption; you must make it unless
you know otherwise. And you only know otherwise if you've
discovered a more "elemental" (stable) source on some higher level of
search & generalization. That would generate a focusing / motor
feedback, always derived from prior feedforward. As I keep saying,
complexity must be incremental :).

> One simple example is that we can use our imagination and study of the 
> subject of the concept in order to extend our ideas about the subject 
> beyond those ideas which came directly from observations of it.

This is interactive pattern projection, but you have to discover
those patterns first. Technically, you simply multiply all the
vectors in a pattern by a relative distance to a target coordinate.
And then you compare multiple patterns projected to the same
coordinate, & multiply the difference by relative strength of each
pattern. That gives you a combined prediction, or probability
distribution if the patterns are mutually exclusive :).

