Something else that has occurred to me about lossless vs lossy, a.k.a. AIT vs SDT (algorithmic information theory vs sequential decision theory):
The bit string to be losslessly compressed by AIT has to come from somewhere, and that "somewhere" has to be, in some sense, "embodied" in its environment so as to receive data from it. The structure of its input will not necessarily take in _all_ the data available in its environment. In fact, in all but trivially meaningless cases, it takes in only a tiny fraction of that data. This implies a kind of "genetic lossiness" -- a biological SDT that is enormously lossy, based on the evolutionary utility of its embodiment. This seems to me to stop the regress at one level; it's not "AIXIs all the way down". The level at which it stops is reproduction as the utility function for evolution.

On Wed, May 26, 2021 at 10:23 AM James Bowery <[email protected]> wrote:

>
> On Sun, May 23, 2021 at 9:47 PM Matt Mahoney <[email protected]> wrote:
>
>> On Sun, May 23, 2021 at 1:57 PM James Bowery <[email protected]> wrote:
>> > On Sun, May 23, 2021 at 12:32 PM Matt Mahoney <[email protected]> wrote:
>> >>
>> >> ...Data compression alone doesn't lead to AGI, but it does measure prediction in signals with a high signal to noise ratio, like text. It's less useful for vision and robotics.
>> >
>> > Seems to me that if a vision system can transform a 2D array of pixels into a 3D array of voxels, and then into a CAD model of the 3D environment, this would be both a highly compressed representation of the environment and highly useful for robotics.
>>
>> Image prediction is very useful. It is central to how we understand what we see. But the problem with turning the prediction algorithm into a compressor is that the input is mostly noise, which is meaningless and does not compress. So your compression ratio doesn't tell you much.
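The quoted point above -- that noisy input is incompressible, so the compression ratio "doesn't tell you much" -- is easy to demonstrate. A minimal sketch (my own illustration, not from the thread, using zlib as a stand-in for any lossless compressor):

```python
# Incompressible noise dominates the compressed size no matter how good
# the model is, while regular "signal" compresses dramatically.
import os
import zlib

noise = os.urandom(100_000)                    # high-entropy input
signal = b"the cat sat on the mat. " * 4_000   # highly regular input

c_noise = zlib.compress(noise, 9)
c_signal = zlib.compress(signal, 9)

print(len(c_noise) / len(noise))    # close to (or slightly above) 1.0
print(len(c_signal) / len(signal))  # a tiny fraction of 1.0

# Lossless compression is still an exact round trip in both cases.
assert zlib.decompress(c_noise) == noise
```

On the noise, the ratio is pinned near 1.0 regardless of the compressor's quality, which is exactly why it carries so little information about the model.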
>
>
> This goes back to the issue raised by Ben Rudiak-Gould, inventor of the reversible programming language Kayak, when the Hutter Prize was first announced
> <https://groups.google.com/g/comp.compression/c/Pwlq6pkyc8s/m/ZdHC8HvgCYAJ>.
> I didn't feel all that comfortable with my answer (or yours) to his question at the time, which relied on your present position: that the signal to noise ratio is decisive in determining the utility of lossless compression for prediction. At the time, I put a mental bookmark in my response so I could revisit it later, because I knew I was adopting his position while arguing against it in the case of Wikipedia's lower noise level -- and I didn't really believe his position, or yours.
>
> So, 15 years on, here's what I should have said at the time if I had been completely intellectually honest in my response to him:
>
> The "noise" level, in AIT terms, can be thought of as a constant of integration; what we're interested in while searching the space of algorithms is the differentiability of the loss function (i.e., the number of bits in the executable archive of the data).
>
> While it is true that as one approaches the Kolmogorov complexity limit, the derivative of the AIT loss function provides less "utility" (in the sense of determining an accurate model of the environment), it never goes away.
>
> But perhaps more important is the issue you raise about "subjectivity", which gets into a different sense of "utility":
>
>
>> If you use lossy compression, which is more appropriate, then you have to subjectively evaluate it for quality. Either way, you don't get a precise number like with text compression, which makes searching for better algorithms much slower.
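The asymmetry in the quoted point -- lossless compression yields one objective number with an exact round trip, while lossy compression requires judging what was thrown away -- can be sketched concretely. A hedged illustration of my own (not from the thread), using crude bit-truncation as the lossy step:

```python
# Lossless: exact recovery, so compressed size is an objective score.
# Lossy: smaller output, but the discarded detail must be evaluated
# subjectively -- there is no single number that settles "quality".
import random
import zlib

random.seed(0)
samples = bytes(random.randrange(256) for _ in range(50_000))  # stand-in data

lossless = zlib.compress(samples, 9)
assert zlib.decompress(lossless) == samples   # perfect round trip

quantized = bytes(b & 0xF0 for b in samples)  # discard the low 4 bits
lossy = zlib.compress(quantized, 9)

print(len(lossless), len(lossy))          # lossy pipeline is smaller...
assert zlib.decompress(lossy) != samples  # ...but the original is gone
```

The lossless size can be compared across algorithms mechanically; ranking the lossy pipeline requires deciding whether the low four bits mattered, which is precisely the "subjective evaluation" at issue.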
>>
>
> In AIXI terms, the difference between lossless and lossy compression is the difference between AIT's and SDT's notions of "utility": the former is concerned with what "is", the latter with what "ought" to be, via a "subjective evaluation".
>
> SDT's decision tree is where data is thrown away, implicit in the data's irrelevance to decisions that yield utility.
>
> So, yes, you're right that lossy compression is essential to AGI, but let's be careful about its proper place.
>
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: https://agi.topicbox.com/groups/agi/T95f11a183fb9b6e1-M8d59044bcd905f7f9fcd41bd
Delivery options: https://agi.topicbox.com/groups/agi/subscription
