Scores at about 10,000, 100,000, 1,000,000, and 10,000,000 bytes:

0.44888181407357763
0.433259276532548
0.35976039165384843
0.43851910590941723

There must be an issue somewhere.

Well, I guess it was not sure at first, hence the large score; it doesn't 
change rapidly the more samples it uses (see the upper three), and it slowly 
gets better. The metric: for each byte, take the probability the model 
assigned to the correct next letter. If every correct letter were predicted 
at 100%, the average would be 1.0; higher average = better predictor.


code is so small now:
# Look up the probability the model assigned to the byte that actually
# arrived; indexing directly beats scanning all 256 values, and "probs"
# avoids shadowing the builtin list.
probs.append(predict[ord(window[-1])])

# Python floats only carry ~17 significant digits, so rounding to 200
# places was a no-op.
print(sum(probs) / len(probs))
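
For context, here's a self-contained sketch of the whole measurement at 
increasing sample sizes, like the four scores at the top. The model object 
and its predict(window) method returning a 256-entry distribution are my 
assumptions for illustration, not the original interface:

# Sketch only: assumes a hypothetical model whose predict(window)
# returns a list of 256 probabilities (one per byte value) summing to 1.
# data is a bytes object, so data[i] is already an int in 0..255.
def average_probability_score(data, model, n_bytes):
    probs = []
    for i in range(1, min(n_bytes, len(data))):
        predict = model.predict(data[:i])   # distribution for the next byte
        probs.append(predict[data[i]])      # prob given to the byte that occurred
    return sum(probs) / len(probs)

# for n in (10_000, 100_000, 1_000_000, 10_000_000):
#     print(n, average_probability_score(data, model, n))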


Now we have to include program size. Mine is basically 5,000 bytes 
uncompressed. If I'm compressing 100 MB, or rather trying to get a score of 
1.0, then if my code were 100 MB big I could just be storing every answer 
(not that this is BPC I'm doing). So the fair discount is the ratio of 
program size to data size: 5,000 / 100,000,000 = 0.00005. Subtract that from 
my 0.44888 score above: 0.44888 - 0.00005 = 0.44883.
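
Spelled out with the numbers above:

raw_score = 0.44888181407357763   # score from the 10,000-byte run above
program_bytes = 5_000             # my program, uncompressed
data_bytes = 100_000_000          # the 100 MB target

penalty = program_bytes / data_bytes  # 5,000 / 100,000,000 = 0.00005
print(raw_score - penalty)            # ~0.44883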


RIGHT? Isn't this cool?