Thanks Matt.

The funny thing, though, is that, as I recall, finding semantic
primitives was Marcus Hutter's stated goal when he established his prize.

That's fine. A negative experimental result is still a result.

I really want to emphasize that this is a solution, not a problem, though.

As the HNet paper argued, using relational categories, like language
embeddings, decouples category from pattern. It means we can still have
categories, even grammar "objects"; it's just that they may be
constantly new. And being constantly new, they can't be finitely
"learned".

LLMs may have been failing to reveal structure because there is too
much of it, an infinity of it, and it's all tangled up together.

We might pick it apart, and have language models which expose rational
structure (the Holy Grail of a neuro-symbolic reconciliation), if we
just embrace the constant novelty and seek it as a kind of
instantaneous energy collapse in the relational structure of the data:
either using a formal "Hamiltonian" or, as I suggest, by finding
prediction symmetries in a network of language sequences through
synchronized oscillations or spikes.
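To make "prediction symmetries" a bit more concrete, here's a toy
sketch of the kind of thing I mean (my own illustrative construction,
not anything from the HNet paper, and the corpus is invented): treat
two tokens as symmetric when they share successor contexts in a
network of sequences, so substituting one for the other leaves the
predictions unchanged. Categories then fall out of shared prediction
structure, recomputed on the fly rather than learned once.

```python
from collections import defaultdict

# Invented toy corpus of token sequences (illustrative only).
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat ran to the door",
    "a dog ran to the gate",
]

# Map each token to the set of tokens that follow it anywhere.
successors = defaultdict(set)
for line in corpus:
    tokens = line.split()
    for a, b in zip(tokens, tokens[1:]):
        successors[a].add(b)

# Two tokens are "prediction-symmetric" here when they share at least
# one successor: swapping them preserves some predictions of the network.
def symmetric(x, y):
    return bool(successors[x] & successors[y])

# Group tokens into ad hoc categories by identical prediction contexts.
groups = defaultdict(list)
for tok in successors:
    groups[frozenset(successors[tok])].append(tok)

print(symmetric("cat", "dog"))  # True: both predict "sat" and "ran"
```

In this crude distributional version the "categories" are just sets of
tokens with the same successor set; the point is that they are derived
from the relational structure of the data at query time, not stored as
a finite learned inventory.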

On Sat, May 25, 2024 at 11:33 PM Matt Mahoney <mattmahone...@gmail.com> wrote:
>
> I agree. The top ranked text compressors don't model grammar at all.

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T682a307a763c1ced-Meac024d4e635bb1d9e8f34e9