For over 80 years, from the abstractions of McCulloch and Pitts to today’s
large language models, we have been building simulated intelligence.
Today's "AI" is a digital replica of brain-like processes, running on
silicon and operating through mathematical operations instead of biological
ones.

Think of it like a flight simulator. A simulator can accurately recreate
the cockpit experience and train pilots, but it will never fly. *The
simulator never actually leaves the ground*. Digital AI is similar: it
emulates brain intelligence but lacks true physical embodiment.

This is where *Electrodynamic Intelligence (EDI)
<https://doi.org/10.5281/zenodo.16929461>* offers a transformative path
forward.

Rather than modeling intelligence through symbolic or statistical
computation, *EDI develops cognition from real-time physical
interactions, e.g. the self-organizing dynamics of charges and fields
within and across neurons in an artificial brain*. These electrodynamic
processes are not metaphors; they are materially grounded forms of
computation that operate through ionic flows, field interactions, and
nonlinear dynamics.

*EDI <https://bit.ly/45JPjsg>* is not a simulation of the brain; it is
an *embodied approach to intelligence*, rooted in the same physical
principles that underlie the biological brain. It opens a path toward
systems that *act* through matter rather than *model* through abstraction.

Just as real flight requires lift, drag, and thrust, not numbers on a
screen, *embodied intelligence requires the physics of the brain, not
just the logic of the code*. Ben, I hope we'll have the opportunity to
feature a summary of this paper at the upcoming AGI conference.

Fifteen years ago, when we launched *Neuroelectrodynamics* as a theoretical
framework, the technological landscape simply wasn't ready to support its
implementation. Now, *Colin*, the situation has changed dramatically:
we're in a position to build.

- Dorian Aur

PS Matt, EDI doesn't just solve these problems; it avoids them altogether
because it doesn't share the same structural assumptions. It's not about
controlling artificial goals; it's about building intelligence that grows,
adapts, and lives in the world as we do, not above it.





On Mon, Aug 11, 2025 at 9:46 AM Matt Mahoney <[email protected]>
wrote:

> Discussion of AI existential risk on LessWrong. To summarize: we don't
> know how to solve the alignment problem. If we build AGI, it will probably
> kill all humans because we don't know how to give it the right goals.
> Therefore we should not build it, or at least build an "off" switch to
> quickly shut it down.
>
> My thoughts:
>
> 1. The premise seems correct. We measure intelligence by prediction
> accuracy. Wolpert's law says two agents cannot mutually predict each other.
> If an agent is smarter than you, then you can't predict its actions, and
> therefore cannot control it.
>
> 2. An LLM has no goals. It just predicts text. However, applications that
> use it do have goals. You can tell an LLM to express any human goals or
> feelings. So alignment seems solvable, at least for now.
>
> 3. Let's say we do solve the alignment problem. Then AGI will kill us by
> giving us everything we want. AI agents will replace not just workers, but
> friends and lovers too. We will become socially isolated and stop having
> children.
>
> 4. The goal of all agents in a finite universe is a state of maximum
> utility, where any thought or perception is unpleasant because it would
> result in a different state. Your goal is death. You just don't know it
> because evolution programmed you to fear death.
>
> 5. An "off" switch will fail because AGI could kill us before we knew
> anything was wrong. I don't even know why they proposed it.
>
> 6. We will build AGI anyway because human labor costs $50 trillion per
> year, half of global GDP.
>
>
> Permalink
> <https://agi.topicbox.com/groups/agi/Te0af3a0c35a03987-M715ce3ed3ec93295db091304>
>
