Very interesting!  I just started reading the home page link.  I was struck 
by this statement:

" HD/VSA addresses these challenges by providing a binding operator 
associating individual (John, Mary) with roles 
<https://en.wikipedia.org/wiki/Thematic_relation> (AGENT, PATIENT) and a 
superposition <https://en.wikipedia.org/wiki/Superposition_principle> operator 
that allows multiple associations to be composed into a coherent whole."

The Topic Maps model for organizing knowledge has *topics* - a topic is 
anything that can be talked about - and *relationships*. A relationship 
type has a number of *roles*, and those roles are filled by topics. It 
sounds very similar, at a basic level. A Topic Maps relationship would be 
the equivalent of the HD/VSA binding operator.
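
To make the comparison concrete, here is a minimal sketch of one common 
HD/VSA scheme (random bipolar vectors, element-wise multiplication for 
binding, addition for superposition). The role and filler names are just 
illustrative, and the work linked above may well use a different binding 
operator:

    import numpy as np

    rng = np.random.default_rng(0)
    D = 10_000  # high dimensionality makes random vectors nearly orthogonal

    def rand_vec():
        return rng.choice([-1, 1], size=D)

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    AGENT, PATIENT = rand_vec(), rand_vec()   # roles
    JOHN, MARY = rand_vec(), rand_vec()       # fillers

    # "John loves Mary": bind each filler to its role, then superpose the pairs.
    sentence = AGENT * JOHN + PATIENT * MARY

    # Unbinding: multiplying by a role recovers a noisy copy of its filler,
    # because bipolar vectors are their own inverses under element-wise product.
    who_is_agent = sentence * AGENT
    print(cosine(who_is_agent, JOHN))   # roughly 0.7
    print(cosine(who_is_agent, MARY))   # roughly 0.0

The unbinding only works because, in thousands of dimensions, 
independently chosen random vectors are almost orthogonal - which is 
exactly where my reservation below comes in.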

I have some reservations about using cosine similarity with vectors like 
this.  I have experimented with them a little, not in the area of AI but 
for answering queries in a certain space of questions and knowledge.  The 
trouble is that the components of a vector are not often orthogonal, so the 
simple ways to compute their projections are not valid.  You can crank out 
the results, but they will not be correct, to a degree that depends on the 
other vector involved.  I will be interested to learn how these 
investigators handle this.

As an example of what I mean, consider a vector of words, and you want to 
know how similar it is to another vector of words. A simpleminded approach 
is to make each word into a vector component. So here are two sentences:

"Which comes first, the chicken or the egg"
"Evolutionarily speaking a bird can be considered to be the reason for an 
egg"

Now make vectors of these two sentences, where every word is on its own 
axis. You take the cosine by multiplying the value of each component in 
one vector by the value of the same component in the other vector, summing, 
and dividing by the lengths of the two vectors. Each component here has a 
value of 0 or 1 (since the word is either present or not). The only 
components that match are "the" and "egg". So the score - the cosine - 
will be very low. However, we can see that the two sentences are actually 
very similar in meaning.
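
Here is that calculation spelled out as a small Python sketch, using the 
0/1 word-per-axis encoding described above (the naive tokenization - 
lowercase, strip punctuation - is my own assumption):

    import math

    s1 = "Which comes first, the chicken or the egg"
    s2 = "Evolutionarily speaking a bird can be considered to be the reason for an egg"

    def words(sentence):
        # One axis per distinct word; a set gives the 0/1 encoding directly.
        return {w.strip(",.").lower() for w in sentence.split()}

    w1, w2 = words(s1), words(s2)
    shared = w1 & w2
    cosine = len(shared) / math.sqrt(len(w1) * len(w2))

    print(shared)             # {'the', 'egg'}
    print(round(cosine, 2))   # about 0.21 - very low for two related sentences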

And how can we determine how orthogonal a bird is to a chicken?

So this approach is too simple. It will be interesting to see what these 
folks are really doing. Personally, I expect that an approach using fuzzy 
logic would be promising. It would be similar to using cosines to project 
one vector onto another, but with fuzzy operators instead of 
multiplication. Why fuzzy logic? Because it matches how people (and no 
doubt animals) actually assess things in the real world. How do you judge 
how tall a person is? You don't divide up the arithmetic range into spans 
- 5 ft to 5'2", 5'2" - 5'4", etc. (sorry, non-US units people) - and see 
which bin the person falls into. No, you have a notion of what "tall", 
"medium", "short" and "very short" mean, and you see how well the person 
*matches* each of them. So the person might be "somewhat tall but not far 
from medium".
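
As a toy illustration of that membership idea, here is a sketch with 
made-up triangular membership functions for height categories (the 
breakpoints, in inches, are purely illustrative, not anyone's published 
model):

    def triangular(x, left, peak, right):
        # Membership rises linearly from `left` to `peak`, then falls to `right`.
        if x <= left or x >= right:
            return 0.0
        if x <= peak:
            return (x - left) / (peak - left)
        return (right - x) / (right - peak)

    # Overlapping categories, so one height can partly match several of them.
    categories = {
        "short":  lambda h: triangular(h, 50, 58, 66),
        "medium": lambda h: triangular(h, 60, 67, 74),
        "tall":   lambda h: triangular(h, 68, 76, 84),
    }

    height = 72  # 6 ft
    for name, member in categories.items():
        print(f"{name}: {member(height):.2f}")
    # short: 0.00, medium: 0.29, tall: 0.50 -
    # i.e. "somewhat tall but not far from medium".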

On Saturday, April 15, 2023 at 6:35:02 AM UTC-4 Edward K. Ream wrote:

> On Saturday, April 15, 2023 at 5:31:41 AM UTC-5 Edward K. Ream wrote:
>
> The article describes NVSA: Neuro-vector-symbolic Architecture. Googling 
> this term found the home page for (HD/VSA) Vector Symbolic Architecture 
> <https://www.hd-computing.com/#h.zgreogawc8qc>. This must be one of the 
> best home pages ever written!
>
>
> I recommend following *all* the links on this page. Most links point to 
> Wikipedia pages for *easy* math concepts. I bookmarked many of these 
> links.
>
> Edward
>
