Thank you, Rob; that’s one of the most thoughtful and multidimensional
readings of this work I’ve seen, and I really appreciate how you’re
situating it among a range of frameworks, from Colin Hales and Lee Cronin
to Freeman, Edelman, and even the linguistic lineage through Chomsky and
corpus-based models.

You’re absolutely right: *this isn’t a “pure embodiment” argument*, nor is
it a rejection of abstract learning. The point about dynamics, especially
when organized around parameters like coherence, energy flow, and memory
capacity, is to find *a physically grounded path* between the symbolic and
the embodied camps, and perhaps help dissolve some of the old dichotomies.

*On “Memory-Energy” Coupling*

To clarify that term: “memory-energy coupling” refers to *how stored energy
in physical substrates (like microtubules, memristive circuits, or
dendritic fields)* isn’t just passive storage but participates dynamically
in how information propagates, reinforces, and adapts. Think of it less
like static attractors or "bubble memories" and more like *fields that
condition their own future flow*: the substrate “remembers” not by
freezing a state, but by shaping the next wave of propagation.

It’s admittedly a bit of a shortcut term; the aim is to capture a
non-symbolic, *field-mediated plasticity*, one where the history of signal
flow changes the energy landscape of future dynamics. In short, the
memory-energy model provides a physically grounded metric for how much
information a system can store *in an active, dynamic way*, which is
crucial for understanding learning and adaptation.

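To make that “active storage” idea concrete, here is a toy sketch of my own (not from the paper) of a memristive-style element whose conductance is reshaped by the energy of the signals passing through it, so the history of propagation conditions future propagation. The update rule and the constants `alpha` and `decay` are illustrative assumptions, not a claimed model.

```python
# Toy memristive element: stored state (conductance) is not passive --
# the energy of each propagation event reshapes it, so past signal flow
# changes how future signals propagate. All constants are illustrative.

def propagate(signal, g, alpha=0.05, decay=0.01):
    """One propagation step through a memristive-like element.

    signal : input amplitude
    g      : current conductance (the active memory state)
    Returns (output, updated conductance).
    """
    out = g * signal                     # transmission depends on stored state
    energy = signal * out                # energy carried by this event
    g = g + alpha * energy - decay * g   # history reshapes the landscape
    return out, g

g = 0.5
history = []
for s in [1.0, 1.0, 1.0, 0.2, 0.2]:
    out, g = propagate(s, g)
    history.append(round(g, 3))

print(history)  # conductance rises under strong drive, relaxes under weak drive
```

The point of the sketch is only the coupling itself: the same variable that transmits the signal is the one the signal’s energy rewrites.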
*On Shared Context and Prediction*

You bring up an important critique, *the lack of explicit modeling of
shared context or prediction*, and I agree that’s a central theme that
needs to be folded in more explicitly. At present, the model’s closest
analog to this is the *network coherence factor*, which weights
how phase-synchronized or field-aligned different nodes or regions are
during active processing. This isn’t prediction per se; however, it does
reflect *how distributed units align to form stable configurations*, which
are often *the substrate for expectations, resonances, and temporal
sequences*.
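For intuition, one standard way to score that kind of phase alignment is the Kuramoto order parameter, r = |mean(exp(i·θ))|. Whether the paper’s coherence factor is computed exactly this way is an assumption on my part; the sketch below just illustrates the kind of quantity involved.

```python
# Kuramoto order parameter: a scalar in [0, 1] measuring how
# phase-synchronized a population of oscillating nodes is.
import cmath
import math
import random

def coherence(phases):
    """Return r in [0, 1]: 1 = perfectly phase-locked, ~0 = incoherent."""
    return abs(sum(cmath.exp(1j * p) for p in phases) / len(phases))

random.seed(0)
aligned = [0.1 * random.gauss(0, 1) for _ in range(100)]        # tight phase cluster
scattered = [random.uniform(0, 2 * math.pi) for _ in range(100)]  # no alignment

print(round(coherence(aligned), 2))    # close to 1.0
print(round(coherence(scattered), 2))  # close to 0.0
```

A coherence factor of this form says nothing about *what* is being predicted, which is exactly the gap you point to: it measures alignment, not shared content.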

I agree this doesn't go far enough to explicitly encode *semantic
generalization or predictive symmetry*, and this may well be where your
point about “internal meaning” can expand the model. Right now, the
feedback loops are environmental, as you note (similar to Edelman). As you
suggest, *recursive internal structure*, especially if organized around
shared contexts, might allow *meaning to emerge endogenously*, without
waiting for extrinsic signals to do the filtering.

*On Phase Transitions vs. Chaos*

You mentioned a concern that this might lean too much toward static
attractors, and again, that's well taken. The goal isn’t to
reduce dynamics to fixed points; rather, it’s to explore the *regime around
the phase transition*, where stability and fluidity coexist. Walter
Freeman’s work on chaotic attractors is deeply aligned here, and I’m glad
you brought him up. The “quantized” phrasing may be a bit misleading; it’s
meant to describe *threshold phenomena* in energy-coherence space, not
rigid states.
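As a minimal analogy for such threshold phenomena (my analogy, not the paper’s model): the mean-field self-consistency m = tanh(β·m) has only the m = 0 solution below the critical coupling β = 1, and a nonzero one above it, a textbook phase transition where a smooth change in a control parameter produces an abrupt change in the ordered state.

```python
# Textbook phase transition: the order parameter solving m = tanh(beta * m)
# is zero below the critical coupling beta = 1 and jumps to a nonzero
# branch above it -- an analogy for thresholds in energy-coherence space.
import math

def order_parameter(beta, iters=200):
    """Solve m = tanh(beta * m) by fixed-point iteration from m = 0.5."""
    m = 0.5
    for _ in range(iters):
        m = math.tanh(beta * m)
    return m

for beta in [0.5, 0.9, 1.1, 1.5]:
    print(beta, round(order_parameter(beta), 3))  # ~0 below 1, nonzero above
```

The “quantized” language in the paper is meant in this spirit: not discrete rigid states, but a sharp threshold separating qualitatively different dynamical regimes.
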
*Final Thought: A Potential Synthesis?*

If we can bring coherence, context-sharing, and recursive reconfiguration
into a unified model, where *meaning is emergent from stable-but-fluid
predictive dynamics*, then I think we're close to something quite powerful.
Your framing of oscillations encoding shared context fits beautifully into
that trajectory, and I’d be interested in integrating that perspective
further.
--- Dorian Aur


On Sun, Aug 31, 2025 at 5:44 PM Rob Freeman <[email protected]>
wrote:

> Dorian,
>
> Ah, well I'm sympathetic to an approach looking at this as a problem of
> dynamics (though I'm not sure what "memory-energy" coupling could mean.)
>
> Colin Hales, who I think has corresponded here, has an electrodynamic
> field model which would seem to be more strictly embodied. He says it must
> be implemented as electrodynamic fields. Lee Cronin seems to have a similar
> argument, but from a chemistry perspective. George Lakoff, embodiment...
> basically as neurons, anyway. (The whole embodiment thing became something
> in the fields of linguistics which retained a basis in data after Chomsky:
> Functional, Cognitive Linguistics, and indeed generally in the "corpus
> based" fields most directly connected with machine learning. And quite
> rightly. They saw something. Insisting on basis in a "corpus" is an
> embodied form of non-compression. "Corpus" = body. Embodiment is a form of
> the non-compressibility argument, as I say. You need the whole corpus, said
> Corpus Linguistics. This as opposed to Chomsky, who argued language could
> be... not compressed... the whole point became that it could not be
> compressed, that's why Chomsky still rejects machine learning. Chomsky
> insisted any abstraction must be innate, exactly because it could not be
> compressed/learned. At one level, observable compressions contradicted. So
> linguistics resolved to embodied, or innate. But then LLMs ignored
> linguistics and went and "learned" over corpora anyway!)
>
> But you're not trapped by the embodiment interpretation of this. Good. Let
> alone Chomsky's "unlearnable" innate structure. You see a solution in
> dynamics. Good. (Once a dynamical system becomes chaotic, it does become
> embodied in a sense, but chaos is not limited to one embodiment.)
>
> So what are the parameters of this dynamics? You  say "certain
> energy-structured conditions (e.g., coherence, memory capacity, signal
> velocity) are necessary".
>
> "Coherence" I'm sympathetic to.
>
> I've been pushing the idea of a dynamical system solution parameterized by
> the "coherence" of oscillations. Driven by network symmetries of
> prediction. A dynamical system parametrized by similarity of context,
> anyway. A dynamical system in the sense the patterns grow and change.
> Actually I think its evolution is probably chaotic on some level.
> Concurring with Walter Freeman on that, from the neuroscience field.
>
> Checking your link, you try memristors.
>
> "emergent, quantized intelligence, analogous to a phase transition" sounds
> good. Phase transitions being a big theme of Walter Freeman.
>
> "adaptive behavior arises intrinsically from the physics of the
> substrate". Yes.
>
> Personally I think the crucial parameters are the ones already identified
> by LLMs: shared context or shared prediction. If you can harness dynamics
> which depend on those, it doesn't matter what your substrate is. I got
> focused on the potential of (neural activation) oscillations in a network
> of neurons representing language sequences. Just because synchronized
> oscillations struck me as dynamics to capture those parameters of shared
> context/prediction.
>
> From a casual glance at your paper I'm unable to tell if the parameters of
> your dynamics are also shared context/prediction. If they are, then it
> might be good.
>
> I feel you might be looking at static attractors in some sense though.
> Some kind of "bubble memory". Which then has meaning... how? I can't find
> the word "meaning" anywhere in your paper. Instead you have "adaptive,
> feedback-driven reconfiguration". So it seems you make something of the
> novelty of reconfiguration, but this has value only because of "feedback".
> So the system will give meaning to these reconfigurations by some kind of
> feedback from the environment?
>
> That sounds to me like George Edelman's Neural Darwinism. Endless random
> reconfigurations, which are selected for meaning by the environment (like
> his Nobel Prize winning immune system insight.)
>
> Of interest to me, I recently came across Eugene Izhikevich's
> "polychronizations". Stable sequences appearing (from the dynamics)
> spontaneously in networks of neurons. But Izhikevich didn't attribute
> meaning to these based on the way the sequences shared contexts either.
>
> It looks to me like you have the insights about dynamical systems, and the
> power of reordering/reconfiguration. But (like Edelman and Izhikevich) you
> may be missing shared context/prediction, as the key parameter. Making
> "meaning" internal to the system too, and not needing to be imposed by any
> external feedback.
>
> -R
>
> On Mon, Sep 1, 2025 at 2:23 AM Dorian Aur <[email protected]> wrote:
>
>>   Electrodynamic Intelligence doesn’t argue that learning is tied to a
>> specific biological embodiment or that there’s only one valid substrate.
>> What it does argue is that *learning occurs from the physical dynamics
>> of signal propagation and memory-energy coupling,*  not from abstract
>> optimization rules. This isn't an anti-compression or anti-symbolic stance,
>> it is a move toward *physically embedded, self-organizing systems *that
>> learn because of how they are built, not because of what we train them to
>> do.
>> We agree that “can’t be compressed” doesn’t imply “must be embodied in
>> one way.” EDI doesn’t claim there's only one valid realization of learning
>> or intelligence. Rather, it highlights that certain energy-structured
>> conditions <https://zenodo.org/records/16997063>(e.g., coherence, memory
>> capacity, signal velocity) are necessary for physically grounded
>> intelligence to occur..
>> Propagation-driven learning is not about memorizing complexity but about
>> allowing local physical dynamics to shape the system's functional structure
>> over time. Compression is secondary, what's primary is whether the system
>> can self-organize adaptively through physical interaction to generate
>> self-learning and active memory consolidation.
>>
>> --- Dorian Aur
>>
> *Artificial General Intelligence List <https://agi.topicbox.com/latest>*
> / AGI / see discussions <https://agi.topicbox.com/groups/agi> +
> participants <https://agi.topicbox.com/groups/agi/members> +
> delivery options <https://agi.topicbox.com/groups/agi/subscription>
> Permalink
> <https://agi.topicbox.com/groups/agi/Ta9b77fda597cc07a-M2ce345fbb62815fcb868ce18>
>

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta9b77fda597cc07a-Maf526e9fd7d1b20f5a90dd50
Delivery options: https://agi.topicbox.com/groups/agi/subscription
