On Tue, Sep 2, 2025 at 10:51 PM Rob Freeman <[email protected]> wrote:
> A very detailed analysis. But what are you analysing? A theoretical limit for 
> Human Knowledge of the world? Related to a theoretical limit for the size of 
> language models?

No. The goal of AI is to replace humans as workers, friends, and
lovers, isolate you from other people, learn to model your behavior in
order to extract your wealth, and discard you. The goal of the Hutter
prize is just the language modeling part of the problem. That has
succeeded, as LLMs now pass the Turing test using nothing more than
text prediction, as I described when I created the benchmark in 2006.

> If we think of language as being a compression of the world, then it will be 
> a lossy compression. If it is this kind of system. A system which cannot be 
> (losslessly) compressed, then it must be lossy.

Lossless compressors work by lossily compressing the past into a model
that predicts the next symbol, then encoding the prediction errors. The
more accurate the predictions, the smaller the errors. Human prediction
error on text is about 1 bit per character, a level we can now match
with LLMs by modeling the world that humans experience.
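The predict-then-encode idea can be sketched in a few lines (my
illustration, not code from any actual compressor): an ideal arithmetic
coder spends -log2(p) bits on a symbol the model predicted with
probability p, so a better predictor directly means a smaller archive.
Here the "model" is just adaptive byte counts with Laplace smoothing.

```python
import math

def code_length_bits(text):
    """Total bits an ideal arithmetic coder would emit for `text`,
    using an adaptive order-0 model (running byte counts, Laplace
    smoothing). Lossless: the decoder maintains the same counts and
    recovers every byte exactly."""
    counts = [1] * 256   # start each byte with a count of 1
    total = 256
    bits = 0.0
    for b in text.encode():
        p = counts[b] / total     # model's prediction for this byte
        bits += -math.log2(p)     # ideal coding cost of the "error"
        counts[b] += 1            # update the model after coding
        total += 1
    return bits

text = "the quick brown fox jumps over the lazy dog " * 20
print(code_length_bits(text) / len(text))  # bits per character
```

As the model adapts, the cost per character falls well below the 8 bits
of the raw encoding; swapping in a stronger predictor (an LLM instead of
byte counts) is what pushes the cost toward the ~1 bit per character
that humans achieve.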

We don't have to use compression to evaluate language models. We can
use multiple choice tests or subjective evaluation to avoid the
limitations I described in my last email. Compression has the
advantage of quickly giving you a repeatable, precise score.

-- 
-- Matt Mahoney, [email protected]

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta9b77fda597cc07a-M739f88af0ce62da682d62e93
Delivery options: https://agi.topicbox.com/groups/agi/subscription