Electrodynamic Intelligence doesn’t argue that learning is tied to a
specific biological embodiment or that there’s only one valid substrate.
What it does argue is that *learning arises from the physical dynamics of
signal propagation and memory-energy coupling*, not from abstract
optimization rules. This isn't an anti-compression or anti-symbolic stance;
it is a move toward *physically embedded, self-organizing systems* that
learn because of how they are built, not because of what we train them to
do.
We agree that “can’t be compressed” doesn’t imply “must be embodied in one
way.” EDI doesn’t claim there's only one valid realization of learning or
intelligence. Rather, it highlights that certain energy-structured
conditions (e.g., coherence, memory capacity, signal velocity) are
necessary for physically grounded intelligence to occur
<https://zenodo.org/records/16997063>.
Propagation-driven learning is not about memorizing complexity but about
allowing local physical dynamics to shape the system's functional structure
over time. Compression is secondary; what's primary is whether the system
can self-organize adaptively through physical interaction to generate
self-learning and active memory consolidation.
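As a purely illustrative toy (the chain topology, Hebbian-style update
rule, and parameter values below are my own assumptions for this sketch,
not EDI's actual dynamics), learning driven by local propagation rather
than a global objective might look like:

```python
import numpy as np

# A pulse propagates along a 1D chain of units; each coupling adapts using
# only locally coincident activity -- there is no global loss function.
# All names and constants here are illustrative assumptions.

n = 20                     # units in the chain
w = np.full(n - 1, 0.5)    # coupling strengths between neighboring units
eta, decay = 0.1, 0.01     # local plasticity rate and passive decay

for _ in range(50):                     # repeated propagation episodes
    a = np.zeros(n)
    a[0] = 1.0                          # inject a pulse at one end
    for t in range(n - 1):
        a[t + 1] = w[t] * a[t]          # signal passes through the coupling
        # local rule: change depends only on pre/post activity at this link
        w[t] += eta * a[t] * a[t + 1] - decay * w[t]
        w[t] = min(w[t], 1.0)           # saturate (a crude energy bound)

# Couplings along the repeatedly used pathway consolidate toward saturation,
# while links the signal barely reaches passively decay.
print(w.round(2))
```

Structure here emerges from use: links near the injection point saturate
first and extend the pulse's reach over episodes, a crude analogue of
active memory consolidation.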

--- Dorian Aur

On Sun, Aug 31, 2025 at 12:16 AM Rob Freeman <[email protected]>
wrote:

> On Sun, Aug 31, 2025 at 9:05 AM Matt Mahoney <[email protected]>
> wrote:
>
>> On Fri, Aug 29, 2025 at 10:45 PM Rob Freeman <[email protected]>
>> wrote:
>> > The contribution of the Hutter Prize to our knowledge has been barren.
>> It didn't find the semantic primitives Hutter envisioned for it.
>>
>> The purpose is not to find semantic primitives. It is to find efficient
>> algorithms for language modeling.
>>
>
> A casual search pulls up a stated goal of "compressing human knowledge".
> Does "compressing human knowledge" differ from finding semantic primitives?
>
> Now "compressing human knowledge" is reframed to be "find efficient
> algorithms for language modeling"?
>
> OK.
>
> In practice the biggest change to this compression model in 20 years has
> been to allow models to be bigger?
>
> But this is an old argument. Right at the beginning of the Hutter Prize I
> used to argue with you that language models would be characterized by
> getting bigger, not compression. 10 years later we got LLMs. Not known for
> their compactness. But OK, arguments over this go round in circles over
> what all the words "mean". There will be some way to argue an LLM is a
> compression of something, I'm sure. Yeah, they're a compression of what
> they generate, even if they themselves are bigger than what generates
> them...
>
> In practice, what we have are LLMs, which get big. No-one has found an
> upper bound to the improvement that can be made by making an LLM bigger, so
> far, to my knowledge.
>
> The very label LARGE defines the field.
>
> What they do offer in terms of compactness is something of finite size,
> that appears to say new stuff/get bigger.
>
> You can look at the entire history of NNs as the reverse of compression,
> really. What's distributed representation? Compression?
>
> So is it a victory for compression to go from symbols to NNs? Is going
> from zero dimensions to N dimensions a victory for compression?
>
> And then "deep" nets. More layers. More layers another victory for
> compression?
>
> And then the next advance is to go from supervised NNs to unsupervised.
> Less structure, or just less imposed structure? Less structure is a victory
> for compression? And then from unsupervised to generative...
>
> All the time the field tells itself it is trying to compress stuff, and
> all the time what works turns out to be less compressed. We walk eternally
> backwards into the future, thinking we're compressing stuff, but in
> practice ending up with models that are less compressed.
>
> I guess the question comes down to whether order is necessarily always a
> compression. Is the order generated by one of Wolfram's cellular automata
> in some way a compression of something?
>
> Anyway, whether seeking compression was a wild goose chase for AI is a
> separate argument.
>
> I don't know what the OP of this thread was proposing. Sounds like Colin
> Hales' electromagnetic field, embodiment, arguments to me. (Embodiment
> being another form of a "can't be compressed" argument, BTW.) I'm skeptical
> of embodiment too. It takes non-compression to the opposite extreme to my
> view. That something can't be compressed, doesn't mean it can only have one
> physical realization.
>
> But the argument that AI is necessarily embodied as electromagnetic fields
> aside, I felt a need to query the idea that LLMs necessarily equate to
> rate-based neural models.
>
> I don't think rate based models in neuroscience are settled science at
> all. And rate based models certainly don't strike me as a slam dunk
> argument for language models in their current form (i.e. LARGE.)
>
> -R
> *Artificial General Intelligence List <https://agi.topicbox.com/latest>*
> / AGI / see discussions <https://agi.topicbox.com/groups/agi> +
> participants <https://agi.topicbox.com/groups/agi/members> +
> delivery options <https://agi.topicbox.com/groups/agi/subscription>
> Permalink
> <https://agi.topicbox.com/groups/agi/Ta9b77fda597cc07a-Mcba021e2f34c0476d420ac9b>
>

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta9b77fda597cc07a-M8114726331ba42af0eb4173d
Delivery options: https://agi.topicbox.com/groups/agi/subscription
