Yes, HHMMs are intractable, but you can make them more robust, and you can 
build on them a lot. PPM (Prediction by Partial Matching) is a good example: 
in short, it looks at the last 17 letters, then 16, 15, ... down to 2 and 1 
letters, and blends predictions of which letter usually comes next, based on 
frequencies from a dataset it has seen. It starts at 17 letters, and if that 
context gives it enough evidence, it doesn't have to fall back to shorter 
views, e.g. the last 3 letters.
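A minimal sketch of that back-off-and-blend idea, assuming a toy corpus and a small maximum context (the order-17 context from the text is shrunk to 3 here for readability; the 0.5 decay weight is an invented illustration, not PPM's actual escape mechanism):

```python
from collections import defaultdict

MAX_ORDER = 3  # stand-in for the 17-letter context in the text

def train(text, max_order=MAX_ORDER):
    """Count which letter follows each context of length 1..max_order."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(len(text)):
        for order in range(1, max_order + 1):
            if i - order < 0:
                continue
            ctx = text[i - order:i]
            counts[ctx][text[i]] += 1
    return counts

def predict(counts, history, max_order=MAX_ORDER):
    """Start at the longest context and back off to shorter ones,
    blending each order's letter frequencies with a decaying weight."""
    blended = defaultdict(float)
    weight = 1.0
    for order in range(min(max_order, len(history)), 0, -1):
        ctx = history[-order:]
        if ctx in counts:
            total = sum(counts[ctx].values())
            for letter, n in counts[ctx].items():
                blended[letter] += weight * n / total
            weight *= 0.5  # shorter contexts get less say
    return max(blended, key=blended.get) if blended else None

counts = train("the cat sat on the mat. the cat ran.")
print(predict(counts, "the c"))  # prints "a"
```

Real PPM compressors use a proper escape probability instead of a fixed decay, but the shape is the same: longest match first, shorter contexts only as backup.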

On Saturday, February 13, 2021, at 2:16 PM, Juan Carlos Kuri Pinto wrote:
> However, *HMMs need a classifier* to tell them the frontiers of each complex 
> state.
> And deep learning is the perfect classifier because it perfectly classifies 
> bizarre states with rough frontiers, i.e real-world patterns.
I'm not sure this makes sense... PPM works... My version of AI just looks at 
data and can later retrieve matches that show the frequencies of which letter 
or word follows. Input > output. Word2vec, another understandable algorithm, 
learns that "boat" is similar to "ship" by some amount X; again, my AI can use 
this to "classify" a in >> A out to do translation.

As for bizarre fuzzy patterns, again this needs some examples to be clear... 
You can approximate a pattern well, poorly, or really poorly, depending on the 
intelligence of the pattern finder and the size of the data it trained on. If 
we take an 8000-dimensional input and need 8000 layers to predict correctly, 
it's not that we need to untangle anything; it's simply narrowing down the 
matches if all 8000 letters are present in the prompt, among other ways of 
detecting patterns.

I know it looks complex if you think of an 8000-D image, or 8000 pixels, and 
picture the data points in that space (red, some blue, orange, etc.) looping 
around each other, so that the question becomes: is the input red or blue, 
what do I classify it as? But that is a rather wrong way to look at it, I 
believe. You need intelligent pattern finders and intelligent data; they are 
similar, and I can't say it any other way: this 8000-D way of looking at it 
says nothing about AI. Transformers use backprop / gradient descent, but that 
alone doesn't do everything; they need BPE, self-attention, translation 
embeddings, more data, activation functions, normalization, etc. We know what 
those do; there's a big pattern there. But backprop by itself doesn't do 
anything, it only listens to the mechanisms of AI....
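The "narrowing down the matches" idea above can be sketched without any high-dimensional picture at all: keep only the stored examples that contain every letter of the prompt, then rank survivors by overlap. The memory list and prompt here are invented for illustration:

```python
def narrow_matches(prompt, memory):
    """Filter stored strings to those containing every letter of the
    prompt, then rank by character overlap (a crude match score)."""
    needed = set(prompt)
    candidates = [m for m in memory if needed <= set(m)]
    return sorted(candidates, key=lambda m: len(needed & set(m)),
                  reverse=True)

memory = ["the cat sat", "a red boat", "cats eat fish"]
# "a red boat" has no "c", so it is filtered out immediately.
print(narrow_matches("cat", memory))  # prints ['the cat sat', 'cats eat fish']
```

Each letter in the prompt shrinks the candidate set; with 8000 letters present, very few stored matches survive, which is the narrowing being described.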
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta86fa089ebd8ca28-M26d629e5a42489c871fadbcf