What problem do you claim to solve? We already have LLMs that pass the Turing test, outscore humans on academic tests, and speak 200 languages. We have self-driving cars that are 12 times safer than humans. All of this by modeling spike rates, not spike direction. The majority of video and music is still human-generated, and AI hasn't replaced most jobs yet. But I think these problems can be solved with more computing power and training data, not by fundamental changes to the algorithms.
Can you solve any simple pattern recognition problem using fewer parameters, less computation, less power, or less time by modeling spike direction instead of spike rate? What would the learning algorithm look like? If the dendrites are doing some essential computation like adding time delays, I don't think it would be hard to demonstrate, even if the simulation is much slower. -- Matt Mahoney, [email protected] On Tue, Aug 26, 2025, 3:50 PM Dorian Aur <[email protected]> wrote: > *1. What can’t traditional neural network models handle (spiking > rate-based)?* > While spiking neural networks that model firing rates provide valuable > insights into temporal dynamics, they struggle to capture the *spatiotemporal > intricacies of signal propagation*, such as the directionality of spikes, > dendritic integration, and field-driven modulation of conductance. These > elements are crucial in biologically realistic computation but are often > omitted in conventional rate-based models. > > *2. Why isn’t modeling spike direction feasible on standard hardware?* > Spike directionality involves nuanced physical interactions: ionic flows, > membrane field dynamics, and propagation delays across branching dendrites. > Classical digital hardware lacks a native representation of these *analog > spatiotemporal properties*, so simulating them accurately would be both > inefficient and computationally expensive. Special-purpose hardware that > models *electrodynamic behavior directly* is needed to capture these > emergent properties. > > *3. Are there simulations showing improved AI performance via spike > direction models?* > Early simulations and theoretical models hint at enhancements in *pattern > differentiation, context-sensitive learning, and temporal encoding* when > directional propagation is taken into account. 
For instance, > propagation-driven learning suggests that the path of a spike, not just its > rate, encodes rich information, echoing how the same neuron may respond to > vastly different categories (e.g., a face, landscape, or object) depending > on pathway-driven plasticity. > > *4. How does spike direction work and what are electrode measurements > actually capturing?* > You're absolutely right that classical views confine action potentials to > travel from soma to axon terminals. However, in dendritic and local-field > contexts, the *spatial arrangement of synaptic inputs*, *branching > pathways*, and *local field potentials* influence the net signal > direction recorded by electrodes. The sub-millivolt voltages and > millisecond-level timing shifts you mentioned likely result from > overlapping neuronal activity in the vicinity. As I noted in my earlier > work (e.g., responses from the Nature preprint “the same neuron responds to > multiple categories…”), this diversity in response can be better explained > via *propagation-driven learning*, where *electrodynamic properties of > dendrites and axons* play a critical role in shaping network behavior > beyond what rate coding alone can describe. The specific pathways taken by > action potentials through axonal branches and dendritic arbors shape > information integration and memory encoding in *proteins and structural > components within axons and dendrites*, which show *memristive > characteristics*, modulating conductivity based on the history of ionic > and electrical activity. 
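[The electrode-timing question in point 4 can be made concrete. As a minimal sketch (hypothetical electrode geometry, a plane-wave propagation assumption, and NumPy least squares; none of this comes from the 2010 paper), arrival-time differences across four electrodes determine a slowness vector whose orientation is the apparent propagation direction:]

```python
import numpy as np

# Four electrode positions in a plane (mm) -- hypothetical geometry,
# not the actual layout used in the 2010 recordings.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])

def fit_plane_wave(X, t):
    """Least-squares fit of arrival times t_i = t0 + s . x_i.
    Returns the slowness vector s; the wave travels along s
    with speed 1/|s|."""
    A = np.hstack([X, np.ones((len(X), 1))])   # columns: x, y, constant t0
    coef, *_ = np.linalg.lstsq(A, t, rcond=None)
    return coef[:2]                            # slowness (ms per mm)

# Synthetic check: a wavefront moving along +x at 2 mm/ms,
# with a 0.1 ms common offset in the arrival times.
s_true = np.array([0.5, 0.0])
t = X @ s_true + 0.1
s_est = fit_plane_wave(X, t)
direction = s_est / np.linalg.norm(s_est)
print(direction)  # ≈ [1., 0.]
```

[Note that the fit only yields the apparent direction of whatever summed field the electrodes see; it cannot by itself distinguish one propagating spike from overlapping activity of several nearby neurons, which is exactly the ambiguity raised in the questions above.]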
> > As we develop EDI that embodies these dynamic, path-dependent properties, > we edge closer to creating machines that learn and generalize through > *embedded > physical adaptation, not digital simulation*. > > --- - Dorian Aur > > > On Mon, Aug 25, 2025 at 3:03 PM Matt Mahoney <[email protected]> > wrote: > >> In your 2010 Nature paper, you show that image recognition in human >> brains, as detected by 4 electrodes, is correlated with spike direction but >> not spike rate or inter-spike intervals. This is in contrast to current >> neural models such as in LLMs that model spike rate as the relevant signal. >> This raises some questions. >> >> 1. What aspect of AI are we not yet capable of solving using >> neural networks modeling spiking rate? >> >> 2. Why can't we model spike direction on a computer? Why do we need >> specialized hardware? >> >> 3. Do you have any simulations of pattern recognition or some other >> problems relevant to AI that are improved by modeling spike direction >> instead of spike rate? >> >> 4. How does spike direction even work? My understanding is that spikes >> always travel from the neuron cell body along the axon to the synapses. >> Looking at the data in your paper, you are measuring voltages around 0.1 mV >> in 1 ms spikes and detecting phase shifts around 0.5 ms between the 4 >> electrodes. Action potentials between the inside and outside of the cell >> are actually about 100 mV, but smaller outside the cell, of course. It >> seems to me that the electrodes are actually picking up signals from >> several surrounding neurons and the "direction" is actually measuring >> spikes from different input neurons at the dendrite. >> >> I am aware that stereoscopic sound perception up to 1500 Hz depends on >> spike timing from each ear to communicate relative phase information. But >> this would not be hard to model in a computer. >> >> LLMs pass the Turing test using nothing more than text prediction. Is >> there something else we need? 
You seem to make this distinction between >> intelligence and modeling intelligence, as if there were an important >> difference, like between flying and modeling flight. I'm not clear on what >> problem you are trying to solve, what your proposed solution is, how it >> would work any better, or what you would measure to know that it worked. >> Are you saying we need different hardware for consciousness or something? I >> didn't read your Medium article because it's paywalled. >> >> -- Matt Mahoney, [email protected] >> >> On Mon, Aug 25, 2025, 2:01 PM Dorian Aur <[email protected]> wrote: >> >>> Thank you, Matt, a valid and important point. >>> >>> The architecture goes beyond replacing RAM with memristors - it >>> introduces a fundamentally different physical substrate and *computational >>> model*. Unlike traditional digital neural networks, which >>> simulate activity symbolically on von Neumann machines, *EDI >>> <https://bit.ly/45JPjsg>* is grounded in continuous spatial dynamics >>> driven by real ionic and charge interactions, akin to what occurs in >>> biological neurons; see this paper: >>> https://www.nature.com/articles/npre.2010.5345.2 >>> >>> The distinction is not just in the hardware, it is in the *computational >>> paradigm*: information is processed and stored in the same medium >>> through dynamic field interactions, not separated across memory and >>> processing units. This allows true material-based learning and >>> self-organization, which digital LLMs do not display. >>> Regarding simulations, initial models are currently in development. >>> However, the goal is not to simulate EDI in its entirety, as many of its >>> properties cannot be meaningfully captured in a digital environment. EDI >>> fundamentally departs from conventional AI architectures. Much like you >>> can't simulate flight in a way that generates real lift, some properties of >>> EDI, e.g. 
emergent spatiotemporal behavior, only manifest in physical >>> substrates - a new class of intelligence *from the physics of the >>> system itself.* >>> By using LLMs initially to pre-instantiate the early stages of >>> intelligence within an EDI framework (see the paper's paragraph), you can >>> train the system efficiently. If you replace the LLM's memory architecture >>> with memristors, you begin to approach the architecture we envisioned in >>> the manuscript Electrodynamic Intelligence (EDI). Once EDI develops its >>> own internal adaptive dynamics, grounded in physical memory, coupling, and >>> energy-efficient computation, the need for LLMs themselves diminishes. >>> >>> As biologically fragile organisms, we face significant limitations when >>> it comes to colonizing Mars or other planetary environments. To thrive >>> beyond Earth, we’ll need systems that can adapt and build autonomously in >>> extreme conditions. We need an *Optimus 5.0* equipped with an EDI >>> brain, vital for off-world infrastructure, habitat construction, and >>> autonomous problem-solving - which today would feel almost like alien intelligence. >>> *Wouldn't >>> that feel like having an LLM in the 1940s? 
* >>> >>> With targeted investment from both public and private sectors, a >>> functional *EDI prototype* could realistically be developed within 2–3 >>> years, maybe less, given the current pace of innovation. >>> >>> --Dorian Aur >>> >>> *Artificial General Intelligence List <https://agi.topicbox.com/latest>* > / AGI / see discussions <https://agi.topicbox.com/groups/agi> + > participants <https://agi.topicbox.com/groups/agi/members> + > delivery options <https://agi.topicbox.com/groups/agi/subscription> > Permalink > <https://agi.topicbox.com/groups/agi/Ta9b77fda597cc07a-M904a84e6d31f1836f0d5226b>
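[On Matt's point above that interaural-timing perception "would not be hard to model in a computer": a Jeffress-style bank of coincidence detectors with graded internal delays recovers an interaural time difference (ITD) from spike times alone, and is also the kind of dendritic-delay demonstration his challenge asks for. A minimal sketch; all spike times, delays, and the coincidence window below are illustrative, not fitted to biology:]

```python
# Jeffress-style delay-line model: a bank of coincidence detectors,
# each with a different internal delay, decodes an interaural time
# difference (ITD) purely from spike timing.

def coincidences(left, right, delay, window=0.05):
    """Count left-ear spikes aligned with right-ear spikes after the
    right input is delayed by `delay` ms (right ear hears a
    right-side sound first)."""
    return sum(
        1 for tl in left
        if any(abs(tl - (tr + delay)) < window for tr in right)
    )

itd = 0.3                                  # sound arrives 0.3 ms earlier on the right
right = [1.0, 3.0, 5.0]                    # spike times at the right ear, ms
left = [t + itd for t in right]            # same spikes, later at the left ear

delays = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]    # internal delay line
responses = {d: coincidences(left, right, d) for d in delays}
best = max(responses, key=responses.get)
print(best)  # the detector whose internal delay matches the ITD responds most
```

[The point of the sketch is that timing-based computation of this sort runs in a few lines on a digital machine; whether dedicated electrodynamic hardware does it more efficiently at scale is the open question in the thread.]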
