Re: [agi] Re: HumesGuillotine

2023-07-09 Thread James Bowery
On Sun, Jul 9, 2023 at 3:21 PM John Rose  wrote:

> On Sunday, July 09, 2023, at 2:19 PM, James Bowery wrote:
>
>
>
> Good predictors (including AIXI, other AI, and lossless compression)
> are necessarily complex...
> Two examples:
> 1. SINDy, mentioned earlier, predicts a time series of real numbers by
> testing against a library of different functions and choosing the
> simplest combination. The bigger the library, the better it works.
>
> Predictors deduce from previously compressed, or induced, models of
> observations or data.
>
> The dynamical models _produced_ by SINDy do not contain the library of
> different functions contained in the SINDy program.
>
> It is the dynamical models produced by AIT that do the predicting when
> called upon by the SDT aspect of AIXI.
>
>
> A "mathematical compression" would represent the libraries in a
> mathematically dense form so you can produce more libraries in a
> constrained compressor.
>

Perhaps I should have said: "The dynamical models _produced_ by SINDy *contain
a sparse selection of* the library of different functions contained in the
SINDy program."

A Solomonoff Induction engine is not subject to any constraints on size.
That is the point I was trying to get across in response to Matt.  Indeed,
a Solomonoff Induction engine is not subject to any resource constraints
whatsoever -- including that it be computable.

The only thing constrained in size is the model being induced by the
Solomonoff Induction engine -- and *boy is it constrained in size!*

Constraining the *model* in size is the whole point of scientific induction
as proved by Solomonoff.
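(For concreteness, stated in standard AIT notation rather than anything from
this thread: the universal prior weights every program p whose output on a
universal monotone machine U begins with the observed data x by its length,

    M(x) = \sum_{p : U(p) = x*} 2^{-\ell(p)},

so the shortest models dominate the prediction, even though the engine that
enumerates them is itself uncomputable.)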

So when you mix the phrase "mathematical compression" with "constrained
compressor", you're conflating the engine doing the "mathematical
compression" with the only thing that is constrained in size:  The
compressed model produced by the resource-profligate compressor.



[agi] Re: HumesGuillotine

2023-07-09 Thread John Rose
In the spectrum of computable AIXI models, some known and some unknown,
certainly some are better than others, and there must be features that favor
them. This paper discusses some:

https://arxiv.org/abs/1805.08592


[agi] Re: HumesGuillotine

2023-07-09 Thread John Rose
On Sunday, July 09, 2023, at 2:19 PM, James Bowery wrote:
>> Good predictors (including AIXI, other AI, and lossless compression)
>>  are necessarily complex...
>>  Two examples:
>>  1. SINDy, mentioned earlier, predicts a time series of real numbers by
>>  testing against a library of different functions and choosing the
>>  simplest combination. The bigger the library, the better it works.
> 
> Predictors deduce from previously compressed, or induced, models of 
> observations or data.
> 
> The dynamical models _produced_ by SINDy do not contain the library of 
> different functions contained in the SINDy program.
> 
> It is the dynamical models produced by AIT that do the predicting when called 
> upon by the SDT aspect of AIXI.

A "mathematical compression" would represent the libraries in a mathematically 
dense form so you can produce more libraries in a constrained compressor. 
Basically mathematically re-representing math iteratively using various means 
and heavily utilizing abstract algebraic structure. Math compresses into 
itself. And strings to be compressed are formulas that can be recognized. All 
strings are mathematical formulas... so a lossless compression would look for 
the shortest mathematical representation... the more mathematically intelligent 
the compressor the better the relative compression.

John


Re: [agi] Re: HumesGuillotine

2023-07-09 Thread James Bowery
On Sun, Jul 9, 2023 at 11:16 AM Matt Mahoney wrote:

> On Sun, Jul 9, 2023 at 10:20 AM John Rose  wrote:
>
> > I do wonder though what criteria would be used to discern amongst
> > various computable models of AIXI assuming there is a spectrum of them. I
> > already have a model for that but I wonder what the specialists in the
> > field say...
>

Don't conflate AIXI with AIC/AIT.  That is a fatal mistake that denies
Hume's Guillotine.  It's also a mistake that got some pseudonymous victim of
Dunning-Kruger at ycombinator claiming I was mentally ill when I suggested
the possibility of a business sector exploiting the current ignorance of
plausibly practical applications of Solomonoff Induction.  He kept coming
back saying "AIXI" and "diagnosing" me, even after I tried disabusing him
of his conflation.  (This behavior is quite common in AGI discussions and
is really quite weird as I've previously indicated.)


> Good predictors (including AIXI, other AI, and lossless compression)
> are necessarily complex...
> Two examples:
> 1. SINDy, mentioned earlier, predicts a time series of real numbers by
> testing against a library of different functions and choosing the
> simplest combination. The bigger the library, the better it works.
>

Predictors deduce from previously compressed, or induced, models of
observations or data.

The dynamical models _produced_ by SINDy do not contain the library of
different functions contained in the SINDy program.

It is the dynamical models produced by AIT that do the predicting when
called upon by the SDT aspect of AIXI.



Re: [agi] Re: HumesGuillotine

2023-07-09 Thread Matt Mahoney
On Sun, Jul 9, 2023 at 10:20 AM John Rose  wrote:

> I do wonder though what criteria would be used to discern amongst various 
> computable models of AIXI assuming there is a spectrum of them. I already have
> a model for that but I wonder what the specialists in the field say...

Good predictors (including AIXI, other AI, and lossless compression)
are necessarily complex. The simple proof: Suppose you have a simple
program that can learn to predict any computable sequence of bits.
Then I have a simple program that generates a stream you can't
predict. My program runs a copy of your program and outputs the
opposite bit.
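(A minimal Python sketch of this diagonalization, for concreteness; the
"majority" victim predictor is an illustrative assumption, not anything
from this thread.)

def adversary(predict, n_bits):
    # Run a copy of the predictor at each step and output the
    # opposite bit, so the predictor is wrong on every single bit.
    history = []
    for _ in range(n_bits):
        guess = predict(history)
        history.append(1 - guess)
    return history

# Example victim: always predict the majority bit seen so far.
def majority_predictor(history):
    return 1 if 2 * sum(history) > len(history) else 0

stream = adversary(majority_predictor, 16)
# By construction, majority_predictor scores 0% on `stream`.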

Two examples:
1. SINDy, mentioned earlier, predicts a time series of real numbers by
testing against a library of different functions and choosing the
simplest combination. The bigger the library, the better it works (see
the sketch after this list).
2. The top compressors on most benchmarks have a lot of code because
they look for all the special cases and different file formats to pick
from. For example, PAQ searches for JPEG images embedded in other
files and applies a special model to compress them. PreComp searches
for deflate streams and unzips them so they can be recompressed with
better algorithms. But these are common formats. The best compressors
could look for hundreds or thousands of obscure formats and never get
all of them.
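(Here is a minimal numpy sketch of the SINDy idea from item 1, using a
hand-rolled sequentially thresholded least squares instead of the real
pysindy package; the logistic test system, the four-function library, and
the 0.1 threshold are all illustrative assumptions.)

import numpy as np

# Toy system with a known answer: logistic growth, dx/dt = x - x^2.
t = np.linspace(0.0, 10.0, 2000)
x = 1.0 / (1.0 + 9.0 * np.exp(-t))
dxdt = np.gradient(x, t)                  # numerical derivative

# Candidate function library; a bigger library covers more systems.
names = ["1", "x", "x^2", "x^3"]
Theta = np.column_stack([np.ones_like(x), x, x**2, x**3])

# Fit, zero out small coefficients, refit on the survivors.
coef = np.linalg.lstsq(Theta, dxdt, rcond=None)[0]
for _ in range(10):
    small = np.abs(coef) < 0.1
    coef[small] = 0.0
    if (~small).any():
        coef[~small] = np.linalg.lstsq(Theta[:, ~small], dxdt, rcond=None)[0]

# Only "x" (~1.0) and "x^2" (~-1.0) survive the thresholding.
print(dict(zip(names, np.round(coef, 3))))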

For example, how would you compress the bit sequence x[i] = x[i-24]
XOR x[i-55]? It looks like a random bit sequence (until it repeats
after 2^55 - 1 bits) unless you know to look for this particular
pattern. How would you compress an RC4 generated stream if it appears
random unless you know the key? AIXI^tl would eventually discover the
key by brute force search, but not before the heat death of the
universe.
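(A minimal Python sketch of that first example, with an illustrative random
55-bit seed: the stream looks like noise, yet once you know the rule, 10,000
bits reduce to the 55 seed bits plus the two tap positions.)

import random

def lagged_fib_xor(seed55, n):
    # x[i] = x[i-24] XOR x[i-55], started from 55 seed bits.
    x = list(seed55)
    while len(x) < n:
        x.append(x[-24] ^ x[-55])
    return x

seed = [random.getrandbits(1) for _ in range(55)]
stream = lagged_fib_xor(seed, 10_000)

# A compressor that knows the recurrence need only store the first
# 55 bits; everything after them is reproducible.
assert lagged_fib_xor(stream[:55], 10_000) == stream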

-- 
-- Matt Mahoney, mattmahone...@gmail.com



[agi] Re: HumesGuillotine

2023-07-09 Thread John Rose
On Saturday, July 08, 2023, at 2:32 PM, James Bowery wrote:
> Here's the critical difference in a nutshell:


> Shannon Information regards the first billion bits of the number Pi as
> random. That is to say, there is no description of those bits in terms of
> Shannon Information that is shorter than a billion bits.


> Algorithmic Information regards the shortest description of the first
> billion bits of the number Pi as the shortest algorithm that outputs that
> precise sequence of bits.
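(A concrete illustration of the quoted point, using Gibbons' unbounded
spigot algorithm for Pi: a few hundred bytes of Python can emit a billion
decimal digits, so under Algorithmic Information those bits are very
simple indeed.)

def pi_digits():
    # Gibbons (2006), "Unbounded Spigot Algorithms for the Digits of Pi".
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

g = pi_digits()
print(*[next(g) for _ in range(10)])      # 3 1 4 1 5 9 2 6 5 3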



This writeup is very informative, James, and thanks for anticipating some
questions before I had to ask them.
AIT/AIC/AIXI is not inconsistent with my own research. Back around 1990 I spent
a couple of years in independent compression research and was unaware of AIT.
The main research vector I ended up on was something I simply called
mathematical compression, for lack of a better term.

Now I’m still at the crossroads between lossy and lossless, with exceptions to
Ockham's Razor, so your write-ups are very helpful… Also, coincidentally, I’m
personally struggling with the societal truth problems, and AIC alignment does
appear to be a valid approach to these contemporary issues.

I do wonder though what criteria would be used to discern amongst various
computable models of AIXI assuming there is a spectrum of them. I already have a
model for that but I wonder what the specialists in the field say...

John


Re: [agi] Re: deepmind-co-founder-suggests-new-turing-test-ai-chatbots-report-2023-6

2023-07-09 Thread stefan.reich.maker.of.eye via AGI
On Wednesday, July 05, 2023, at 6:05 PM, Matt Mahoney wrote:
> In reality, we are doing much worse. We still don't understand after 40 years 
> why we have an obesity and diabetes epidemic. It took almost that long to 
> learn that low-fat diets make you fatter, but it is more than that. We are
> still, after decades, giving wrong advice on sodium, leading to cases of 
> hyponatremia. A large fraction of the population take drugs for cholesterol 
> or high blood pressure that have never been shown to reduce mortality. We 
> still don't understand why skin cancer rates have been rising since the 1980s 
> when people started using sunscreen.

We live in a time of unprecedented medical scams. Cancer is one of the most
notable examples.