On Thu, Aug 28, 2025, 4:28 AM Rob Freeman <[email protected]>
wrote:

But where are you getting the idea that LLMs correlate closely with spike
> rate based neural models Matt? You make it sound like it is settled
> neuroscience.
>

What do you think the activation level in an artificial neuron represents?
In the top text compressors that I am familiar with, neurons represent
features, which in LLMs can be letters, tokens, or grammatical or semantic
categories. Activation levels are calculated as a weighted sum of inputs
and then clamped. The network is trained by adjusting the weights to reduce
prediction errors. It is essentially the model described by Rumelhart and
McClelland in the 1980s. Modern LLMs and the top compressors like NNCP use
transformers, which model lateral inhibition and feedback, much as real
brains do, rather than just a feed-forward network.
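The weighted-sum-and-clamp neuron and error-driven weight update described above can be sketched in a few lines of plain Python. This is a minimal illustration of that general formulation (a single neuron trained with a simple delta rule), not the implementation of any particular LLM or compressor; the learning rate and weights are made-up values for the example.

```python
# Minimal sketch of an artificial neuron: weighted sum of inputs,
# clamped activation, and weight updates that reduce prediction error.
# Illustrative only -- the values and delta-rule update are assumptions.

def activation(inputs, weights, bias=0.0):
    """Weighted sum of inputs plus bias, clamped to [0, 1]."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return max(0.0, min(1.0, total))  # clamp the activation level

def train_step(inputs, weights, target, lr=0.1):
    """Nudge each weight to reduce the prediction error (delta rule)."""
    out = activation(inputs, weights)
    error = target - out
    return [w + lr * error * x for x, w in zip(inputs, weights)]

inputs = [1.0, 0.5, 0.8]        # feature activations on the inputs
weights = [0.2, -0.1, 0.4]      # arbitrary starting weights

for _ in range(50):             # repeated training reduces the error
    weights = train_step(inputs, weights, target=0.9)

print(round(activation(inputs, weights), 2))  # converges toward 0.9
```

A full network just stacks many of these units and propagates the error signal back through the layers, but the core operation is the same weighted sum, clamp, and weight adjustment.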

The evidence for biological plausibility is that all of this works, even
down to making the same kinds of mistakes as humans, and that the parts
that we do know about brains are consistent with our models. We still have
not directly observed synapse state changes like those proposed by Hebb in
1949. It might be that short-term memory works by adjusting activation
thresholds and long-term memory by physically growing or removing axon
branches and synapses. That's OK, because you can train an ANN by adjusting
anything that's adjustable.

But I think it is well established that neurons can increase or decrease
the firing rate of other neurons at their outputs. Neurons fire at 0 to 300
spikes per second, which is faster than the time resolution of any
computation we do, with the one exception of stereoscopic sound perception,
where precise spike timing is important.


------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta9b77fda597cc07a-Me4e0b9d0356f4e81528fcf26
Delivery options: https://agi.topicbox.com/groups/agi/subscription
