Good ideas link to their associated motor actions; all we need to do is find the 
maze path through the sensory data. Our text/image data on the internet describes 
the whole Earth, even the universe. More diverse big data gives you two 
exponentials at once: exponential compression shrinkage (e.g., 1 PB down to 1 GB) 
AND exponential free-energy (knowledge) extraction and expansion, since the 
system can evolve and generate new data from old by self-recursion and 
self-imitation, keeping an equilibrium between compression and decompression in 
order to survive. This big data gives you not only exponentially good predictions 
far into the future but also lets the system grow the big data itself, maybe a 
trillion times over!
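
Here's a quick toy sketch of that compression-shrinkage point (my own 
illustration, using zlib as a stand-in for a serious compressor): structured, 
redundant data compresses to a tiny fraction of its size, while random noise 
barely shrinks at all, so the compression ratio is a rough proxy for how much 
regularity (knowledge) the data contains.

import os
import zlib

def ratio(data: bytes) -> float:
    """Compressed size / original size; lower means more structure was found."""
    return len(zlib.compress(data, level=9)) / len(data)

structured = b"the cat sat on the mat. " * 1000   # highly redundant text
noise = os.urandom(len(structured))               # incompressible random bytes

print(f"structured: {ratio(structured):.3f}")     # tiny, e.g. ~0.005
print(f"noise:      {ratio(noise):.3f}")          # ~1.0, nothing to extract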

Compressing the big diverse data tells you things about it, so your translation, 
and hence next-word prediction, gets good. It arranges the segmentation paths 
three ways: Byte Pair Encoding (a|b|c), hierarchy (i am / am fun / fun if), and 
merging nodes that differ only in time delay or in node relation (the new dog / 
new but this cat). The third one is still messing with me, but I've almost got 
it, and then we're home free.
outta here. If you have removed nodes/connections by BPE/hierarchy, what more 
can you do? Well time delay and related nodes are shared. So we remove them. 
But does it increase translation accuracy? It should. But why? During lossless 
regeneration a given wiki8 context isn't exactly in the hierarchy but it 
matches a 'hash' like a collision. The key here is compressing wiki8 losslessly 
is good but allows for now unseen contexts to match correctly in hierarchy, 
fast. If unseen context x matches 50 contexts in the uncompressed hierarchy, 
that works, but, if (losslessly) compressed, it matches 8 contexts and, some it 
would have originally lighten up don't now as much I bet.
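
And here's a toy sketch of that matching idea (every name in it is mine, invented 
just for illustration): merge contexts that differ only in one slot into a single 
generalized node, so an unseen context lands on a few merged nodes instead of 
lighting up many exact ones.

from collections import defaultdict

contexts = ["the new dog", "the new cat", "a new dog", "a new cat"]

# Uncompressed: every context is its own node.
uncompressed = set(contexts)

# "Compressed": merge contexts that differ only in the first word
# (a stand-in for merging nodes that share time delay / related nodes).
compressed = defaultdict(set)
for c in contexts:
    first, rest = c.split(" ", 1)
    compressed[("*", rest)].add(first)

unseen = "this new dog"                      # never stored exactly
_, rest = unseen.split(" ", 1)

exact_hits = [c for c in uncompressed if c.endswith(rest)]
merged_hit = compressed.get(("*", rest))

print("uncompressed nodes hit:", exact_hits)  # several separate contexts
print("merged node hit:", merged_hit)         # one generalized node, fast lookup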