On Sunday, July 09, 2023, at 2:19 PM, James Bowery wrote:

>> Good predictors (including AIXI, other AI, and lossless compression)
>> are necessarily complex...
>> Two examples:
>> 1. SINDy, mentioned earlier, predicts a time series of real numbers by
>> testing against a library of different functions and choosing the
>> simplest combination. The bigger the library, the better it works.
>
> Predictors deduce from previously compressed, or induced, models of
> observations or data.
>
> The dynamical models _produced_ by SINDy do not contain the library of
> different functions contained in the SINDy program.
>
> It is the dynamical models produced by AIT that do the predicting when
> called upon by the SDT aspect of AIXI.
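As a toy illustration of the SINDy idea described above (my own sketch, not the pysindy library or the thread's code): evaluate the data against a small library of candidate functions, then find a sparse combination by sequential thresholded least squares, so the fitted model keeps only a few library terms and need not carry the library itself.

```python
import numpy as np

def library(x):
    """Candidate library evaluated on the state x: [1, x, x^2, x^3]."""
    return np.column_stack([np.ones_like(x), x, x**2, x**3])

def sindy_fit(x, dxdt, threshold=0.1, iters=10):
    """Sparse fit of dx/dt = library(x) @ xi via thresholded least squares."""
    theta = library(x)
    xi, *_ = np.linalg.lstsq(theta, dxdt, rcond=None)
    for _ in range(iters):
        small = np.abs(xi) < threshold        # prune near-zero coefficients
        xi[small] = 0.0
        big = ~small
        if big.any():                         # refit only the surviving terms
            xi[big], *_ = np.linalg.lstsq(theta[:, big], dxdt, rcond=None)
    return xi

# Synthetic data from dx/dt = -2x, i.e. x(t) = exp(-2t)
t = np.linspace(0.0, 2.0, 400)
x = np.exp(-2.0 * t)
dxdt = np.gradient(x, t)                      # numerical derivative estimate

xi = sindy_fit(x, dxdt)
print(xi)   # the coefficient on the x term should be close to -2
```

The recovered model is just the few nonzero coefficients, which matches the point above: the dynamical model SINDy produces does not contain the library of functions used to find it.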
A "mathematical compression" would represent the libraries in a mathematically dense form, so that a constrained compressor can generate more libraries than it could store explicitly. Basically, it means mathematically re-representing math, iteratively, by various means, making heavy use of abstract algebraic structure. Math compresses into itself.

The strings to be compressed are formulas that can be recognized. All strings are mathematical formulas, so a lossless compressor would look for the shortest mathematical representation of its input; the more mathematically intelligent the compressor, the better the relative compression.

John

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: https://agi.topicbox.com/groups/agi/Taf667527679b18c3-Mfcfd2652d7defc97a6b9aea5
Delivery options: https://agi.topicbox.com/groups/agi/subscription
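A postscript sketch of the "shortest mathematical representation" idea (my own minimum-description-length toy, not a real compressor): try to describe an integer sequence by a formula from a tiny model class (arithmetic progressions a + b*n), and keep the formula only when its encoding is shorter than listing the data literally.

```python
import itertools

def describe(seq):
    """Return ('formula', enc) if a shorter formula encoding exists,
    else ('raw', enc) with the literal listing of the sequence."""
    raw = ",".join(map(str, seq))                # literal encoding
    best = ("raw", raw)
    # Search a small hypothesis space: a + b*n for small integers a, b.
    for a, b in itertools.product(range(-9, 10), repeat=2):
        if all(a + b * n == v for n, v in enumerate(seq)):
            formula = f"{a}+{b}*n[{len(seq)}]"   # compact model encoding
            if len(formula) < len(best[1]):
                best = ("formula", formula)
    return best

print(describe([3, 5, 7, 9, 11, 13, 15, 17]))  # compresses to a formula
print(describe([4, 1, 8, 2]))                  # no model fits; stays raw
```

A richer function library (more model classes) would recognize more strings as formulas, which is the sense in which a more mathematically intelligent compressor compresses better.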