I think it's like a new pipeline for them, where they compress the videos and 
store them in a separate ANN that acts as the storage device for any other 
ANN, so they can store, modify, and use the videos faster without ever 
decompressing them. And I "think" that while ANNs are normally already fed 
shrunken data to train on, which they then compress further into learned 
features inside the network, DeepMind goes a step further (I'm guessing) and 
feeds the network data that is already compressed, so it trains on data that 
maybe looks just a bit lower quality than it otherwise would. It sounds to me 
like they are training much more easily that way, maybe 700x easier/faster, by 
lossily compressing the videos while mostly keeping the quality. By lossily 
compressed I mean a code: small both because it is lossy and because it is a 
code - a code that can be reversed to decompress back into the video.
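
To make concrete what I mean, here is my own rough sketch in PyTorch - not 
anything DeepMind has published. The encoder architecture, the frame sizes, 
the 256-dim code, and the next-frame objective are all made up just for 
illustration: a small frozen encoder squashes raw frames into short latent 
codes, and the actual model only ever trains on those codes, never on pixels.

    # Minimal sketch (my guess at the idea, not DeepMind's actual pipeline):
    # a frozen encoder turns raw video frames into compact lossy codes,
    # and the downstream model trains on those codes instead of raw pixels.
    import torch
    import torch.nn as nn

    # Hypothetical frozen encoder: 3x64x64 frames -> 256-dim latent code.
    encoder = nn.Sequential(
        nn.Conv2d(3, 32, kernel_size=4, stride=2, padding=1),   # 64 -> 32
        nn.ReLU(),
        nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),  # 32 -> 16
        nn.ReLU(),
        nn.Flatten(),
        nn.Linear(64 * 16 * 16, 256),
    )
    for p in encoder.parameters():
        p.requires_grad = False  # compression net is fixed; only the downstream net trains

    # Downstream model works purely in latent space: predict the next frame's code.
    predictor = nn.Sequential(nn.Linear(256, 512), nn.ReLU(), nn.Linear(512, 256))
    opt = torch.optim.Adam(predictor.parameters(), lr=1e-4)

    def train_step(frames):
        """frames: (batch, time, 3, 64, 64) raw video clip."""
        b, t, c, h, w = frames.shape
        with torch.no_grad():
            # Compress every frame once; no gradients flow through the encoder.
            latents = encoder(frames.reshape(b * t, c, h, w)).reshape(b, t, -1)
        pred = predictor(latents[:, :-1])                 # predict next code from current one
        loss = nn.functional.mse_loss(pred, latents[:, 1:])
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()

    # Fake clip just to show the shapes; real data would be decoded video frames.
    loss = train_step(torch.rand(2, 8, 3, 64, 64))

The point of the sketch is only that the training loop touches 256 numbers 
per frame instead of 3x64x64 pixels, which is where a "train on already 
compressed data" saving would come from, if my reading is right.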

Like, if they can now train on hour-long videos, then the 5-second videos they 
did before must be that much easier to do.