Very refreshing, good article, I liked it. Interesting way of looking at it: a FFNN made of dimensional manifold manipulators that find the patterns and cluster related patterns together. Analyzing, hmm, my mind is twisting. A FFNN of related patterns = t, h, th, a, n, an, thank... wind = breeze = gust... "count the letters in this post" = another task... or count > inject q everywhere > count again... Or what if "cat hat dog, boat tall ship, zoo wind cage, loop truck ?" is made of "recognize item 1 as item 3" and/or "recognize tuple 1 as tuple 2", etc.? I mean, why are the patterns related/clustered? If they are related, then one must either be made of the other or translate to it (entailment... cat = dog because cat eats, dog eats). I know there are a LOT of patterns; most are rare, and many are the same ones.

Ok, now this FFNN made of dimensional manifold manipulators finding the patterns, analyzing, hmm: what does the fold for "I was walking down the road and saw a ?" look like, and why does it need nonlinear untangling? Or is it those weird patterns like "cat gum dog, bike wind car, ..."? Hmm, these curves/desired activations found a way to connect dimensions so that all the samples, and unseen similar samples, land in the same space... Assuming your data is 2D (or 45D images, with red and more if you want), the plotted dots form tangled swirls with a seemingly recognizable pattern; you could even see it on screen... Ok, maybe given a string we map it to a space as recognized, then map to what comes next, if we even need another mapping; the words/BPE inputs are each a single red dot plotted in some space... Nah, this doesn't seem right. I must go to bed, but this only seems to be a way to make it faster, not an empirical account of why/how/where the next word comes from. Markov models, by contrast, tell you why a word comes next: because it was seen, say, 55 times more often, conditioned on the prompt given.
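That count-based picture of a Markov model fits in a few lines. Here's a minimal bigram sketch on a toy corpus (the corpus and the `next_word` helper are my own illustration, not anything from the article): the transition counts are the entire model, and the "why" of a prediction is literally a frequency ratio.

```python
from collections import Counter, defaultdict

# Toy corpus; the transition counts ARE the whole model.
corpus = "the cat sat on the mat the cat ate the rat".split()

# counts[w][nxt] = how often nxt followed w in the corpus
counts = defaultdict(Counter)
for w, nxt in zip(corpus, corpus[1:]):
    counts[w][nxt] += 1

def next_word(w):
    """Most frequent successor of w, plus its conditional probability."""
    c = counts[w]
    word, n = c.most_common(1)[0]
    return word, n / sum(c.values())

print(next_word("the"))  # ('cat', 0.5): "cat" followed "the" 2 of 4 times
```

The explanation is fully transparent: "cat" is predicted after "the" because it occurred 2 out of 4 times in that context, which is exactly the kind of answer a FFNN's folded manifolds don't hand you directly.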

The eigenfaces part was interesting. How simply does this work? Isn't it really just word2vec? Wikipedia etc. is giving me an overload of text; the answer must be way simpler...
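For what it's worth, eigenfaces is just PCA on flattened face images: center the data, keep the top singular vectors, and every face becomes a handful of coordinates. Like word2vec it yields low-dimensional vectors, but it's a linear decomposition of pixel data, not a trained next-word predictor. A minimal sketch with random stand-in data (the shapes, seed, and variable names are my own illustration):

```python
import numpy as np

# Toy "faces": 20 random 8x8 images flattened to 64-dim vectors.
# (Stand-in data; real eigenfaces use aligned face photographs.)
rng = np.random.default_rng(0)
faces = rng.normal(size=(20, 64))

# Center the data, then SVD; rows of Vt are the "eigenfaces".
mean_face = faces.mean(axis=0)
centered = faces - mean_face
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = Vt[:5]  # keep the top 5 components

# Any face is approximated by the mean plus a few coordinates
# along the eigenface directions.
coords = centered @ eigenfaces.T          # 5 numbers per face
approx = mean_face + coords @ eigenfaces  # low-dim reconstruction
```

So the "embedding" of a face is just `coords`: its projection onto the top principal directions of the dataset.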

Though you didn't answer the question I posted asking whether backpropagation is just a way to make Markov models fast. E.g. adding word2vec to a Markov model may slow MM-based nets down, so maybe backprop is a way to make MMs efficient. I mean, Markov models are so easy; Transformers, what the hell are those? See what I mean? Where does this manifold dimensional folding with sigmoids even come from, and where is Next Word???
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta86fa089ebd8ca28-Mc8954cd7e10f3b4a7f335512
Delivery options: https://agi.topicbox.com/groups/agi/subscription