Unless I'm wrong, an AI can't just talk to itself during a file-compression evaluation to get extra training data and thereby compress better, because every next letter it predicts is already known, with its probability, from whatever context predicts it. Example: you have the memory 'walking', you see 'walking' and predict 's', and you now store 'walkings', a string not found in the dataset but produced by your own brain. The problem is: great, but next time you see 'walking' you predict 's' anyway, whether 'walkings' was stored or not. If you see 'walkings' (and you know 'walkingsz' from talking to yourself further ahead, since you already knew 'kingsz'), you predict 'z' only because you already know 'kingsz', 'ingsz', 'ngsz', 'gsz', 'sz', and 'z'. So it would be the same if you had never stored 'walkingsz'.
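
To make that concrete, here is a minimal sketch (my own toy example, with made-up names like CharPredictor, not anything from an existing system): a count-based next-character model is trained, then "talks to itself" by greedily extending a prompt, and the self-generated text is folded back in as extra training data. Its top prediction for each probed context comes out the same either way, because the self-talk was generated from those very predictions in the first place.

    from collections import defaultdict, Counter

    class CharPredictor:
        """Count-based next-character predictor with simple backoff."""
        def __init__(self, order=4):
            self.order = order                      # longest context length kept
            self.counts = defaultdict(Counter)      # context -> next-char counts

        def train(self, text):
            for i in range(len(text)):
                for k in range(1, self.order + 1):  # store every context length
                    if i - k >= 0:
                        self.counts[text[i - k:i]][text[i]] += 1

        def predict(self, context):
            # Back off from the longest known context to the shortest.
            for k in range(min(self.order, len(context)), 0, -1):
                ctx = context[-k:]
                if ctx in self.counts:
                    return self.counts[ctx].most_common(1)[0][0]
            return None

        def self_talk(self, prompt, n=20):
            # Greedily extend the prompt using the model's own predictions.
            out = prompt
            for _ in range(n):
                c = self.predict(out)
                if c is None:
                    break
                out += c
            return out[len(prompt):]

    corpus = "the king was walking and talking, walking and talking "

    m = CharPredictor()
    m.train(corpus)
    generated = m.self_talk("walking")          # self-generated 'extra data'

    m2 = CharPredictor()
    m2.train(corpus)
    m2.train("walking" + generated)             # also store the self-talk

    for ctx in ["walking", "king", "talk", "wa"]:
        print(ctx, "->", repr(m.predict(ctx)), repr(m2.predict(ctx)))

For this toy corpus the two columns come out identical: the self-talk only reinforces counts the model already had, so nothing new is learned.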

Only if you do semantic discovery does your prediction get better without new data. Then you can generate new entailment predictions. Example: you didn't know walking>fast. You see that the 'ing' is recognizable and you predict, more or less at random, everything that could come next, but you don't yet know that 'fast' is likely to come after 'walking'. Then you discover that 'jogging' and 'walking' share predictions, so you carry jogging's predictions over to walking, and now you know 'walking>f' (for 'fast'). So, after seeing that the two words share predictions, it's important to store this at the 'walking' neuron: walking>fast. And to do that, you talk to yourself. Done. You made the sequence and collected new thoughts/insights/DATA for free: data you love, and data where you knew you were unsure which letter to predict next. Because, as said above, unless I'm wrong, storing plain self-talk is useless and doesn't improve prediction. You first have to take memories and get more out of them; then you can make new sequences.
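
Here is how that prediction transfer might look as a toy sketch (the names and the threshold are my own assumptions, just for illustration): two words whose observed continuations overlap enough get their missing continuations copied across, producing an entailment the data never contained.

    from collections import Counter

    # Observed next-word counts: what has actually followed each word in the data.
    continuations = {
        "jogging": Counter({"fast": 3, "slowly": 2, "home": 1}),
        "walking": Counter({"slowly": 2, "home": 1}),   # 'fast' never seen here
    }

    def overlap(a, b):
        """Fraction of a's continuation mass that b also predicts."""
        shared = sum(min(a[w], b[w]) for w in a)
        return shared / max(1, sum(a.values()))

    def transfer(src, dst, threshold=0.4):
        """If dst shares enough predictions with src, copy src's missing ones over."""
        if overlap(continuations[dst], continuations[src]) >= threshold:
            for w, c in continuations[src].items():
                if w not in continuations[dst]:
                    continuations[dst][w] += c      # new entailment, e.g. walking>fast

    transfer("jogging", "walking")
    print(continuations["walking"].most_common())
    # 'fast' now shows up after 'walking' even though the dataset never contained
    # it; that is the new sequence worth storing (talking to yourself usefully).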

Another way to do it is sparse matching and delayed-match models. You see 'walking', you know 'w_l_i_gs' (a memory with holes in it), so you predict walking>s, you store 'walkings', and that's it: you talked to yourself.
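
A toy sketch of that holed matching (again my own illustration, nothing standard): a stored pattern with wildcard positions still fires on the new input, and whatever follows the matched part becomes the prediction.

    def holes_match(pattern, text):
        """True if text fits the pattern, with '_' standing for 'any character'."""
        return len(pattern) == len(text) and all(
            p == "_" or p == t for p, t in zip(pattern, text)
        )

    def predict_from_sparse(memories, seen):
        """Return the continuation of the first holed memory whose prefix matches."""
        for mem in memories:
            if len(mem) > len(seen) and holes_match(mem[:len(seen)], seen):
                return mem[len(seen):]
        return None

    sparse_memories = ["w_l_i_gs"]              # a holed trace of 'walkings'
    print("walking>" + predict_from_sparse(sparse_memories, "walking"))
    # -> walking>s, so you can now store 'walkings' yourself.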

One problem with the above. You store the new predictions from translation, holed matching, and so on, so they don't need to be computed again, though you sit in a trade-off between memory and speed (a small sketch of that trade-off follows below). But this only pays off when new data comes in. If you just talk to yourself in your brain, there's no need to do it, because if you try to get new predictions for 'walking' using a sparse memory, you would already have found that match when 'walking' (or the sparse memory that resembles it, whichever came in last) arrived. So we see the need to store sequences and use up some memory, but then why do we seem to do it in the brain with no novel real-world stimuli? Because we are still recalling real-world data, we are still processing it; we don't ZIPADO through it in 10 seconds, we often ramble on, or at least the rarer pattern-finder mechanisms do, the ones that are not hardcoded but run using memories. So we are still processing the next letter, for all sorts of areas of the paragraph we read recently, or of one we love that rings permanently in our brain.
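
For the memory-versus-speed trade-off mentioned at the start of this paragraph, a minimal sketch (my own framing, with a stand-in for the slow mechanism): a prediction a rarer mechanism has already worked out gets stored so it never has to be recomputed, at the cost of one more stored entry.

    prediction_cache = {}                        # context -> stored prediction

    def slow_discover(context):
        # Stand-in for an expensive mechanism (translation, holed matching, ...).
        return "s" if context.endswith("ing") else None

    def predict(context):
        if context in prediction_cache:          # speed: the answer is already stored
            return prediction_cache[context]
        pred = slow_discover(context)            # otherwise compute it the slow way
        if pred is not None:
            prediction_cache[context] = pred     # memory: one more stored entry
        return pred

    predict("walking")   # computed the slow way, then stored
    predict("walking")   # second time: a pure lookup, nothing recomputed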

In conclusion, our many rarer softcoded mechanisms take longer to improve prediction, and are actually still reading the last-seen or loved contexts! Us talking to ourselves is the act of those softcoded mechanisms taking a long time, searching, ****and running using sequential memories****, which is why it seems conscious. It would be great if we could hardcode these in AGI on the fly, at least the common ones.