On Mon, Sep 1, 2025 at 11:54 AM Matt Mahoney <[email protected]>
wrote:

> On Sun, Aug 31, 2025, 3:16 AM Rob Freeman <[email protected]>
> wrote:
>
>> Right at the beginning of the Hutter Prize I used to argue with you that
>> language models would be characterized by getting bigger, not compression.
>> 10 years later we got LLMs. Not known for their compactness...
>>
>
> The model representation in memory is several times larger than the input
>

I just want to emphasize that line.

What might be the theoretical limit on size, I wonder? Could there be no
limit at all?
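To put a rough number on that size gap, here's a back-of-envelope sketch. The figures are my own illustrative assumptions, not from the thread: the Hutter Prize corpus (enwik9) is 10^9 bytes, and a "small" 7B-parameter LLM stored in 16-bit floats occupies around 14 GB, before you count activations or optimizer state.

```python
# Illustrative numbers only: enwik9 vs. a hypothetical 7B-parameter model.
corpus_bytes = 10**9            # enwik9: 1 GB of Wikipedia text
params = 7 * 10**9              # parameter count of a typical "small" LLM
bytes_per_param = 2             # fp16 storage

model_bytes = params * bytes_per_param
ratio = model_bytes / corpus_bytes
print(f"model/input size ratio: {ratio:.0f}x")  # prints "model/input size ratio: 14x"
```

So even under generous assumptions, the representation is an order of magnitude larger than the Hutter Prize input, which is Matt's point.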

Could this be a good thing? Don't we want more meaning in the world? How
much meaning is enough? Could our error, and the size/energy/training
problem of LLMs, be that we try to find all possible meaning at once, and
so must inevitably fail to finitely enumerate what contains infinite
possibilities? Possibilities we want to be infinite, because no one wants
a finite end to creativity. And, perhaps more importantly in our current
social context, possibilities that contradict. Because I think the
capstone insight of AGI will be that meaning contradicts, so we must treat
it as something contextual. Contradiction is not a problem in context.

That question is currently upturning society. We see the contradictions,
but no one has a principled means to deal with them, let alone generate
them: no objective way to deal with subjectivity.

-R

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta9b77fda597cc07a-Mfe6981c0ffd8010c465ed4b4
Delivery options: https://agi.topicbox.com/groups/agi/subscription