Almost done. Yes, I got both going!! Not fully, but maybe halfway. Now all that's
left is Translation and the grand mirror clone among the big ones, and perhaps
the neuron finish, though perhaps that's part of delay? Mirror ability is this:
Jane fell on Tom. > Sally fell on ? > It predicts a similar or same name here;
if it were a building, it'd predict a similar building there instead, e.g.
skyscraper fell on home. It is not translating/delaying/holing the prediction,
because that would be like this: Building fell on ? > it may match roof fell on
hut... but that way it can't use Jane fell on Tom. So here's how it can --> it
matches the 'fell on' part, but it can also see that the names mirror each
other! It isn't the priming ability either, because it has a clear clue, in a
similar memory, that it is a mirror, making it more sure.
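
Here's a minimal toy sketch of the mirror idea in Python (illustrative only,
not my real code; same_kind is just a crude stand-in for a proper name/building
similarity check, and mirror_predict is a made-up name):

    def same_kind(a, b):
        # Crude stand-in for a real category/similarity check:
        # here, "both capitalized" roughly means "both are names".
        return a[0].isupper() == b[0].isupper()

    def mirror_predict(memory, prompt):
        # memory: ['Jane', 'fell', 'on', 'Tom'], prompt: ['Sally', 'fell', 'on'].
        # The shared middle ('fell on') must match exactly, and the differing
        # front slot (Jane vs Sally) must at least be the same kind of thing.
        if len(prompt) >= len(memory):
            return None
        if memory[1:len(prompt)] == prompt[1:] and same_kind(memory[0], prompt[0]):
            # Predict the memory's next word as a same-kind candidate:
            # 'Tom', i.e. "a name goes here".
            return memory[len(prompt)]
        return None

    print(mirror_predict(['Jane', 'fell', 'on', 'Tom'], ['Sally', 'fell', 'on']))  # -> 'Tom'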

So my AI now can know the memory 'was _not _walking _fast_ down the road', see
the user prompt 'was _really _walking_ _down the ?', and predict road. The user
prompt matches many various memories: was not walking... was so yes walking...
was down the road... You can see there are many similar memories, and if it
matches them ALL, it can sum up their predictions to get a good understanding
of what comes next. Less similar matches get a discount, because they are not
exact matches. Resources have gone up, but I have not optimized the code yet!
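
A tiny sketch of that summing-over-holed-matches idea (again illustrative, not
the real code; the discount here is simply hits divided by prompt length):

    from collections import Counter

    memories = [
        "was not walking fast down the road".split(),
        "was so yes walking down the road".split(),
        "was down the road".split(),
    ]

    def holed_match(memory, prompt):
        # Score how much of the prompt appears in order in the memory (holes
        # allowed on both sides), and return the memory word just after the match.
        i, hits = 0, 0
        for w in prompt:
            j = i
            while j < len(memory) and memory[j] != w:
                j += 1
            if j < len(memory):
                hits += 1
                i = j + 1
            # else: this prompt word is a hole in the memory; keep scanning from i
        next_word = memory[i] if i < len(memory) else None
        return hits / len(prompt), next_word

    def predict_next(memories, prompt):
        votes = Counter()
        for mem in memories:
            score, nxt = holed_match(mem, prompt)
            if nxt is not None and score > 0:
                votes[nxt] += score  # less exact match = discounted vote
        return votes.most_common(1)[0][0] if votes else None

    print(predict_next(memories, "was really walking down the".split()))  # -> 'road'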

My AI code now also sees e.g. 12g3 and knows 12345, so it matches the 1, the 2,
now a delay, then the 3, and then it will predict 5 even though it never
reached it yet, therefore predicting the future ahead of time.
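
Something like this for the delay part (a toy version; max_delays and the
function name are my own knobs for illustration, not the actual code):

    def delay_match(memory, seen, max_delays=1):
        # Align `seen` against `memory`, skipping up to max_delays unknown
        # characters in the input (the 'g' in 12g3). Returns the memory index
        # just past the aligned region, or None if it doesn't fit.
        i, delays = 0, 0
        for c in seen:
            if i < len(memory) and memory[i] == c:
                i += 1
            elif delays < max_delays:
                delays += 1  # treat this input character as noise / a delay
            else:
                return None
        return i

    memory = "12345"
    pos = delay_match(memory, "12g3")
    if pos is not None:
        # Matched 1, 2, delayed over g, matched 3 -- so it can already see
        # ahead to 4 and 5 before ever reaching them.
        print(memory[pos:])  # -> '45'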

Lossless compression results for 100KB and 1MB:

OLD:
28,081 (100KB)
240,741 (1MB)

NEW:
27,865 (100KB)
239,436 (1MB) --- time was about 250 seconds instead of 45 seconds; I don't
think that is too bad unoptimized! RAM isn't much bigger really either. 10x
this would not be so terrible and would put me where I expect for compression
and resources.

BOOK1 old:
209,328 --- (my best was 207,619 for some reason, and yes, I tested
decompression, so I guess I can reach 206,619 with the current code)
BOOK1 new:
208,371

I have not exhausted it. I can at least add more holes etc. to get two or three
times more, i.e. up to 0.3-0.6MB off. I can, if I get it to work on 100MB, get
that much off there too. It seems harder on larger data, I don't know why; I
may need much more compute if there are many matches that are all rare, instead
of just a few common short holed etc. matches. And working on larger data
"seems" to be of less use, though I don't think that is theoretically
plausible. A second issue: I expected 2MB, not 0.6MB, in the end once I grind
it to full use. I actually have not yet added the recency ability for it; that
could add a few 0.1MBs. Any ideas? 0.6-0.8MB is then O-K, IF I can get it to
work on larger data. BTW I don't mix them all at once: I start at the longest
match, then when I get down to ~2-letter matches I mix in the 3-5 letter
matches before them, even though they are longer, because they need a discount
for having holes etc. in the middle or, especially, the front. I think Matt's
etc. mixes all at once but with a weighting preferring longer matches, but I
reasoned it should mostly ignore the shorter ones if the longer ones are well
experienced; it can later be done somewhat in parallel, yes, I think.
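
Roughly, the mixing order I mean looks like this (a sketch only, with made-up
weights and a made-up min_evidence cutoff, not the real code):

    def mix_predictions(matches, min_evidence=4, holed_discount=0.5):
        # matches: list of dicts like
        #   {'length': 6, 'holed': False, 'votes': {'road': 3}}
        # Longest matches go in first (exact before holed at the same length);
        # shorter ones are only mixed in if the long ones are thin on evidence.
        combined = {}
        for m in sorted(matches, key=lambda m: (-m['length'], m['holed'])):
            weight = holed_discount if m['holed'] else 1.0  # holed matches get a discount
            for sym, count in m['votes'].items():
                combined[sym] = combined.get(sym, 0.0) + weight * count
            if sum(combined.values()) >= min_evidence and m['length'] > 2:
                break  # long matches are well experienced; mostly ignore the short ones
        return combined

    matches = [
        {'length': 6, 'holed': False, 'votes': {'road': 3}},
        {'length': 4, 'holed': True,  'votes': {'road': 2, 'hut': 1}},
        {'length': 2, 'holed': False, 'votes': {'hut': 5}},
    ]
    print(mix_predictions(matches))  # the 2-letter match never gets mixed in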

GPT uses dropout to do holed matches, I had concluded (not being dependent on
all features of an input context), and for delay matching I don't know what it
does. Anyone? I can see how this is efficient; however, GPT does use a ton of
compute itself, modulating all nodes using GPUs. And dropout etc. does not
guarantee, from what I can see, that it does all holed matches with delay
possible too; no one mentions this stuff, just dropout and error-loss results
in papers etc., so maybe not, I conclude.
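
For reference, this is all I mean by dropout making it less dependent on every
input feature (standard inverted dropout, nothing GPT-specific):

    import numpy as np

    def dropout(x, p=0.2, training=True):
        # Standard inverted dropout: randomly zero out features so no single
        # one can be relied on -- loosely, a 'holed' view of the input.
        if not training or p == 0.0:
            return x
        mask = (np.random.rand(*x.shape) >= p).astype(x.dtype)
        return x * mask / (1.0 - p)

    print(dropout(np.ones(10), p=0.3))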

I can provide code but I think you guys believe me at this point lol. Will 
later though!