I revisited a thought I was confused about, and I think I see a way now. If we 
threw GPT-2 at Matt's contest, why wouldn't it take the top rank? It's the RAM 
(the model) that is big, and that gets deleted after training, no? Then upon 
decompression you start training again from scratch. So why would the RAM count 
against the compressed size? All you store are the per-letter corrections (the 
"steers") for the model's slightly inaccurate predictions, not the model itself, 
and that gives you the compressed file for every letter in the dataset.
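To make that concrete, here is a minimal sketch in Python of the scheme I mean. 
Everything in it is my own illustration, not Matt's benchmark code: a toy 
adaptive character model stands in for GPT-2, and the only thing written to the 
"compressed file" is the per-letter correction (here, the rank of the true 
letter in the model's prediction list). The decompressor rebuilds the same model 
from scratch by training on the letters it has already decoded, so the model 
itself never has to be stored.

from collections import Counter, defaultdict
import zlib

class ToyModel:
    """Adaptive order-1 character model standing in for GPT-2."""
    def __init__(self):
        self.freq = defaultdict(Counter)

    def ranked_predictions(self, prev):
        # Most likely next characters first, then all remaining bytes
        # in a fixed order, so both sides agree on the ranking.
        counts = self.freq[prev]
        seen = sorted(counts, key=lambda c: (-counts[c], c))
        rest = [chr(b) for b in range(256) if chr(b) not in counts]
        return seen + rest

    def update(self, prev, ch):
        self.freq[prev][ch] += 1

def compress(text):
    model, prev, ranks = ToyModel(), "\x00", []
    for ch in text:
        # Store only the model's "miss distance" for this letter.
        ranks.append(model.ranked_predictions(prev).index(ch))
        model.update(prev, ch)   # train online on data already emitted
        prev = ch
    # Good predictions give mostly small ranks, which zlib squeezes well.
    return zlib.compress(bytes(ranks))

def decompress(blob):
    ranks = zlib.decompress(blob)
    model, prev, out = ToyModel(), "\x00", []
    for r in ranks:
        # Same model state as the compressor had at this point.
        ch = model.ranked_predictions(prev)[r]
        out.append(ch)
        model.update(prev, ch)   # retrain from scratch, in the same order
        prev = ch
    return "".join(out)

if __name__ == "__main__":
    s = "the quick brown fox jumps over the lazy dog, the quick brown fox"
    assert decompress(compress(s)) == s

Swap the toy model for GPT-2 trained online (and the rank stream for 
arithmetic-coded probabilities) and the same symmetry would hold: the 
corrections are the whole archive, and the model is thrown away and rebuilt.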

Also, if you say it would be too expensive to run, we could do it on the 100MB 
dataset instead. Though that isn't really a problem anyway, because you can 
still measure the compression score even if the run is expensive and slow.