On Mon, Sep 1, 2025 at 8:27 PM Rob Freeman <[email protected]>
wrote:

> On Mon, Sep 1, 2025 at 11:27 PM Dorian Aur <[email protected]> wrote:
>
>> ...
>>
> No issue with any of that Dorian. I'm broadly aligned on the dynamics. And
> the need for dynamics to model cognition.
>
> Note also the need for dynamics to come into it at some point is a core
> belief of the founder(?) of this list, Ben Goertzel. You may know he wrote
> a book "Chaotic Logic" in 1994. At the time of another bit of an uptick on
> emergent models back then. Emergent Computation was on the first Gartner
> Hype Cycle in 1995. People could see it, but it never really got traction.
> Very much like neural networks in general for the time. Emergent computing
> is awaiting its analogue of neural networks' Nvidia GPU moment to wake
> these 30 year old ideas.
>
> Ben characterizes what he's been doing since as seeking that (Nvidia GPU?)
> substrate on which to express the inevitable chaos. But there is broad
> agreement, chaos will be necessary.
>
> Perhaps the analogue is not Nvidia GPUs. Perhaps the analogue is "deep"
> networks. Emergent computing is awaiting its "deep" structural insight.
>
> I think the insight is... dynamics yes, leading to chaos (and a certain
> "quantum" quality.) But this emergent on the same "shared context and
> prediction" which is actually the basis of LLMs too. The single simplifying
> link between structure and meaning has been staring us in the face.
> Language leads you to it. Linguistics choked on it. It's just LLMs, because
> of their historical attachment to backprop, are not dynamic.
>

Absolutely. I’m fully with you on the importance of dynamics, and I agree
that emergent computation has been waiting for its structural moment, not
just its hardware one. The idea that *shared context and prediction* form
the missing bridge between structure and meaning really resonates,
especially as something that both chaotic systems and LLMs are circling
from different angles. Maybe what’s next isn’t just deeper networks, but deeper
dynamics <http://dx.doi.org/10.2139/ssrn.5421075>.

> *On Shared Context and Prediction*
>>
>> You bring up an important critique, *the lack of explicit modeling of
>> shared context or prediction*, and I agree that’s a central theme that
>> needs to be folded in more explicitly. At present, the model’s closest
>> analog to this is the *network coherence factor*, which weights
>> how phase-synchronized or field-aligned different nodes or regions are
>> during active processing. This isn’t prediction per se, but it does
>> reflect *how distributed units align to form stable configurations*,
>> which are often *the substrate for expectations, resonances, and
>> temporal sequences*.
>>
>> I agree this doesn't go far enough to explicitly encode *semantic
>> generalization or predictive symmetry*, and this may well be where your
>> point about “internal meaning” can expand the model. Right now, the
>> feedback loops are environmental, as you note (similar to Edelman). As you
>> suggest, *recursive internal structure*, especially if structured
>> around shared contexts, might allow *meaning to emerge endogenously*,
>> without waiting for extrinsic signals to do the filtering.
>>
> Yes. Good. You have a "network coherence factor". And specifically this
> is based on "phase synchrony"? Phase synchrony for memristors may not be
> the same as for neuron spikes, but I'm also looking at phase synchrony as
> the relevant parameter (in contrast to spike rate.)
>
> Phase synchrony need not mean prediction. But if the phases synchronize on
> shared predictions, then it will. The question is how to make them
> synchronize on shared prediction. You can make a sequence network easily
> enough. But I struggled for a long time with how to recurrently feedback
> information from the posterior context in the sequence. Given A->X->B and
> A->Y->B, how does B feedback to X and Y to synchronise their phase? The...
> energy, actually does cycle around recurrently for language, because most
> words connect to most others. But it's not clear how it carries information
> about the downstream context (B) when it does that.
>
> I have an idea for that which I'm working on now. But maybe you have your
> own ideas. I'm interested to hear suggestions.
>
> Also note, shared context of this kind is also the basis of
> Izhikevich's polychrony. But Izhikevich has X and Y jointly locking
> together with B not with synchrony, but with co-ordinated delays. This
> might be better than synchrony. Much greater coding depth. And it natively
> addresses sequence.
>
> But as I say, maybe you can think of another "network coherence factor" which
> will reflect posterior context in a sequence network (if you can, the
> sequence gives you meaning, and it is job done.)
>

Really appreciate this deep dive — you're putting your finger on a crucial
open problem in the model, and your framing around posterior context
feedback is spot on.

You're absolutely right that *phase synchrony alone* isn’t sufficient; it’s
more a structural potential for alignment than a guarantee of shared
prediction. The current "network coherence factor" in the EDI framework
measures the extent to which spatially distributed elements (e.g.,
memristors, nodes, or neural units) exhibit phase-locked dynamics, but *not
yet* whether they are aligned specifically around a shared internal model
or posterior expectation.
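To make "extent of phase-locked dynamics" concrete, one standard candidate is the Kuramoto order parameter. This is only a minimal sketch under a scalar-phase abstraction, not the actual EDI metric; the function name is invented for illustration:

```python
import math

def coherence_factor(phases):
    """Kuramoto order parameter: ~1.0 = fully phase-locked, ~0 = incoherent.

    `phases` are instantaneous phases (radians) of the distributed
    elements (memristors, nodes, or neural units).
    """
    n = len(phases)
    re = sum(math.cos(p) for p in phases) / n
    im = sum(math.sin(p) for p in phases) / n
    return math.hypot(re, im)

# Locked nodes score near 1; phases spread evenly around the circle score near 0.
locked = [0.1] * 8
spread = [2 * math.pi * k / 8 for k in range(8)]
print(coherence_factor(locked))   # ~1.0
print(coherence_factor(spread))   # ~0.0
```

Note that this measure is exactly what the critique above targets: it rewards any alignment, with no reference to what the nodes are aligned *about*.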

Your example — *A → X → B* and *A → Y → B* — nicely illustrates the
problem. Without some form of backward-influencing coherence from B,
there’s no endogenous pressure for X and Y to resolve toward a shared
prediction space. The system might form local attractors, but those won’t
necessarily encode semantically meaningful relationships unless there’s a
mechanism that binds downstream convergence (like B) back into earlier
divergent states (X, Y). I see now how this is related to your search
for *phase-based
recurrence* that carries posterior disambiguation upstream.
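As a toy of why the shared posterior B matters even before any feedback mechanism exists: in a count-based sequence network, the overlap of two states' successor distributions already signals that X and Y occupy the same prediction space. A hypothetical sketch, not part of the EDI model:

```python
from collections import Counter

def successor_overlap(succ_x, succ_y):
    """Overlap of the observed successor distributions of two states.

    A high score (X and Y both lead to B) is the endogenous evidence that
    X and Y belong to a shared prediction space -- the signal any
    backward-influencing coherence from B would need to amplify.
    """
    cx, cy = Counter(succ_x), Counter(succ_y)
    shared = sum((cx & cy).values())   # multiset intersection
    total = sum((cx | cy).values())    # multiset union
    return shared / total if total else 0.0

# A->X->B and A->Y->B: X and Y share the posterior context B.
print(successor_overlap(["B", "B", "C"], ["B", "B", "D"]))  # 0.5
```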

This is where your insight about polychrony really clicks — using *coordinated
delays* instead of pure synchrony might be key. Phase delays could provide
a richer encoding mechanism for temporal inference, especially in systems
where timing is more flexible than binary spiking. The idea that sequences
like A-X-B and A-Y-B could *resonate* based on shared downstream
convergence opens up the possibility for internal generalization — i.e.,
*meaning* emerging from structural overlap, not just external reinforcement.
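A rough sketch of the delay-coded alternative: membership in an Izhikevich-style polychronous group is a tolerance test on relative spike times rather than a simultaneity test. All names, values, and the tolerance here are illustrative:

```python
def in_polychronous_group(spike_times, delays, tol=1.0):
    """True if the units' relative spike times match the group's delays.

    spike_times: {unit: time of its spike}
    delays:      {unit: expected delay relative to the earliest spike}
    Coordinated delays give far more coding depth than synchrony:
    delays of 0 everywhere reduce this to a plain synchrony test.
    """
    ref = min(spike_times.values())
    return all(abs((t - ref) - delays[u]) <= tol
               for u, t in spike_times.items())

# Group encoding "X, then Y 3 ms later, then B 7 ms after X".
group = {"X": 0.0, "Y": 3.0, "B": 7.0}
print(in_polychronous_group({"X": 10.0, "Y": 13.2, "B": 16.8}, group))  # True
print(in_polychronous_group({"X": 10.0, "Y": 10.1, "B": 10.2}, group))  # False
```

The second call shows the coding-depth point: those three units *are* synchronous, yet they do not instantiate this group.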

In terms of building that into EDI, one thought is to expand the coherence
factor to include contextual phase alignment, where synchrony isn't just
measured in real time, but across *temporal offsets* informed by memory
traces or delay-based routing. This would be akin to allowing the system to
develop *internal echo patterns*, where future convergence states influence
present dynamics via phase delay channels. It's speculative, but
potentially testable in analog circuits or spiking memristive networks.
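One speculative way to prototype that contextual phase alignment: score coherence as the best normalized correlation over a range of temporal offsets rather than at zero lag only, so a delay channel can bring a downstream state into register with present dynamics. The lag range here is arbitrary; in hardware it would come from the memory traces or delay-based routing:

```python
def lagged_coherence(x, y, max_lag=5):
    """Best normalized correlation between x and y over offsets 0..max_lag.

    Zero-lag synchrony is the special case lag == 0; allowing nonzero
    lags lets delayed copies of a pattern line up ("internal echo
    patterns"). Returns (best correlation, best lag).
    """
    def corr(a, b):
        n = min(len(a), len(b))
        a, b = a[:n], b[:n]
        ma, mb = sum(a) / n, sum(b) / n
        num = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
        da = sum((ai - ma) ** 2 for ai in a) ** 0.5
        db = sum((bi - mb) ** 2 for bi in b) ** 0.5
        return num / (da * db) if da and db else 0.0
    return max((corr(x, y[lag:]), lag) for lag in range(max_lag + 1))

# y is x delayed by two steps: zero-lag synchrony misses the match,
# but searching over temporal offsets recovers it exactly.
x = [0, 1, 0, -1, 0, 1, 0, -1, 0, 1]
y = [9, 9] + x[:-2]
best_r, best_lag = lagged_coherence(x, y)
print(best_r, best_lag)  # 1.0 2
```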

Long story short, I think you're exactly right: meaning *is* rooted in
shared prediction, and that likely requires phase-coherence mechanisms
that *bind future states back into current processing*. Whether through
synchrony, delay coding, or a hybrid, that's the glue needed for semantics
to self-organize.

In EDI circuits, *recursive energy flow* should allow stabilized downstream
nodes (e.g., B) to influence earlier processing elements (X, Y) via
*field-modulated feedback and delay-weighted summation*. This *phase
reentrance mechanism* supports predictive alignment without external
supervision, allowing shared context to dynamically shape semantic
stability within the physical substrate. The process uses core memristor
properties, *hysteresis, temporal delays, and field sensitivity*, to
implement self-organizing, internally grounded representations, effectively
forming a substrate-level mechanism for *recursive internal semantics*.
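A toy relaxation of that reentrance loop, with invented coupling constants: B's settled phase repeatedly nudges X and Y until they converge on a shared phase. This is a stand-in for the field-modulated dynamics, not the actual EDI equations:

```python
import math

def reentrant_settle(phases, feedback_weight=0.3, steps=50):
    """Pull upstream phases (X, Y) toward the downstream phase (B).

    Each step nudges X and Y by a fraction of their phase difference
    with B -- a crude stand-in for field-modulated feedback and
    delay-weighted summation in the hardware.
    """
    x, y, b = phases["X"], phases["Y"], phases["B"]
    for _ in range(steps):
        x += feedback_weight * math.sin(b - x)   # B -> X reentrant nudge
        y += feedback_weight * math.sin(b - y)   # B -> Y reentrant nudge
    return {"X": x, "Y": y, "B": b}

start = {"X": 0.0, "Y": 2.0, "B": 1.0}
settled = reentrant_settle(start)
# X and Y, initially 2.0 rad apart, converge on B's phase.
print(abs(settled["X"] - settled["Y"]))  # ~0.0
```

The open question from the thread survives intact in the sketch: here B's phase is simply given, whereas the real problem is how the recurrent energy flow carries information *about* B upstream in the first place.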

> *On Phase Transitions vs. Chaos*
>>
>> You mentioned concern that this might be leaning too much toward static
>> attractors, and again, that's well taken. However, the goal isn’t to
>> reduce dynamics to fixed points; rather, it’s to explore the *regime
>> around the phase transition*, where stability and fluidity coexist.
>> Walter Freeman’s work on chaotic attractors is deeply aligned here, and I’m
>> glad you brought him up. The “quantized” phrasing may be a bit misleading;
>> it’s meant to describe *threshold phenomena* in energy-coherence space,
>> not rigid states.
>>
> I actually have no problem with the "quantized" phrasing. It was an early
> observation of mine that these groupings (actually before looking at the
> dynamics, just looking at meaningful groupings in language) had a kind of
> "quantum" indeterminacy, contradiction, or "uncertainty principle".
>
> This relates to the contradictory/subjective meaning idea which I think
> prevents compression of "meaning". And I believe is a key insight we're
> ignoring in AI. I slip between this and chaos as the key insights (for
> AGI?) Both seem to be powers of assemblies of elements to defy abstraction.
> Perhaps chaos captures the growth/expansion aspect of it, and quantum
> captures the contradiction/subjectivity aspect of it. So both may apply.
>
> So I don't mind the quantum analogy at all. Though you need to be careful
> it doesn't immediately make people think of a subatomic connection,
> Penrose, etc. But in recent years I've found more and more people making
> the quantum analogy. (Bob Coecke was one of the first, applying quantum
> maths to distributional models of meaning, around 2007.)
>

Thanks, that's a really rich perspective, and I appreciate how you're
bridging the linguistic and dynamical domains. I completely agree that
both *chaos* and *quantum-like* behavior capture something vital about
systems that defy naive compression, especially when it comes to meaning.

Your point about *contradiction and subjectivity* as intrinsic properties
of semantic groupings really resonates. Meaning often resists reduction
precisely because it's contextually entangled, a kind of informational
uncertainty that isn’t just noise but a structural feature. Framing this
as a kind of “semantic uncertainty principle” is both elegant and
pragmatically useful, especially when thinking about AGI.

I also take your note of caution about quantum analogies seriously. I’m not
trying to smuggle in Penrose-style quantum consciousness theories, but
rather to use the “quantized” language in the classical sense of *threshold
transitions*, where a system reorganizes qualitatively once certain
energy/integration parameters are crossed. Like you said, it’s more
about *emergent
regimes* than particles.

Walter Freeman’s ideas, and perhaps yours as well, suggest that the *semantic
indeterminacy* and *nonlinear coherence* we see in language and thought
aren't bugs of biological wetware, but features of systems operating
near a *critical point*, where meaning can both stabilize and remain
plastic.

So yes, chaos and quantum analogies aren’t mutually exclusive, but
complementary lenses on the same underlying dynamics: chaos modeling
the *growth, novelty, and sensitivity to context*, and quantum metaphors
capturing the *interference, ambiguity, and irreducibility of internal
states*. Happy to continue exploring this middle ground, especially how
these dynamics might inform architectures that support genuinely emergent
meaning.

> *Final Thought: A Potential Synthesis?*
>>
>> If we can bring coherence, context-sharing, and recursive reconfiguration
>> into a unified model, where *meaning is emergent from stable-but-fluid
>> predictive dynamics*, then I think we're close to something quite
>> powerful. Your framing of oscillations encoding shared context fits
>> beautifully into that trajectory, and I’d be interested in integrating that
>> perspective further.
>>
> Great. I have some ideas I'm working on in a spiking neuron context. But
> I'd be interested to hear any ideas you may have on the problem of "network
> coherence factor" for a sequence network in your hardware context. It may
> be all that you need is to confront the idea that shared context in
> sequence maps to meaning. You may immediately have ideas how to extract
> attractors based on that (which will then implement the "internal meaning"
> we seek) in your (memristor?) context.
>
> -R
>
Really appreciate this; it feels like we’re circling around a convergence
point that could actually be operationalized.

Yes, the idea that *shared context in sequence maps to meaning* is
something I’ve been circling as well, but hadn’t yet articulated as crisply
as you just did. That framing really helps clarify the challenge: if
meaning is the result of sequence-based attractors stabilized through
shared predictive context, then the job of a coherence factor is to detect
and reinforce those attractors, not just on the basis of phase synchrony,
but on semantic recurrence across diverging and converging paths.

In the EDI hardware context (yes, memristors are a key substrate), one
promising direction is to explore *coherence as a function of recursive
alignment across temporal windows*. Instead of treating coherence as a
static, real-time synchronization, we could design the coherence metric to
weight repeated trajectory alignment — i.e., if multiple distinct input
sequences collapse toward the same output state (like your A→X→B and A→Y→B
example), then the system builds an attractor not only on *B*, but on the
pattern of convergence itself. This gives us a kind of “predictive
resonance,” which can then shape upstream dynamics through reconfiguration.

What’s powerful about this in the EDI setting is that *propagation itself*
is the learning rule. If we can define energy-efficient paths that favor
convergence from divergent histories, and assign coherence to those
energy-preserving reentry loops, then we may already be modeling “internal
meaning”: attractors shaped not by external labeling, but by *structural
recurrence*.

So yes, I’m very interested in developing a coherence factor that reflects
these predictive closures in the network. If you’re working on the spiking
side, we may be able to sketch a dual formalism, one in delay-coded spikes,
the other in analog propagation patterns, both to encode *context-binding
attractors* as units of meaning.

----Dorian Aur

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta9b77fda597cc07a-Ma936efae3295d2c9c5668f59
Delivery options: https://agi.topicbox.com/groups/agi/subscription
