Steve,

> Picking one particular tiny illustrative detail of this - my realization
> that neurons MUST communicate derivatives like dP/dt rather than straight
> probabilities, to be capable of temporal learning without horrendous
> workarounds. I thoroughly explained it on this forum, and no one objected
> to any of it, yet it has changed nothing.
>

To those of us not working on neural net models, this sort of insight is
kinda irrelevant...

But still, this is an interesting observation.

It reminds me of work studying neural population coding using Fisher
information

http://prl.aps.org/abstract/PRL/v97/i9/e098102

[Fisher information being the negative expected second derivative of the
log probability density, it's kinda like the derivatives you reference...]
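
For reference, the standard definition (textbook material, not anything
specific to that paper) -- under the usual regularity conditions the two
forms agree:

```latex
% Fisher information of a parameter \theta, in its two equivalent forms:
I(\theta)
  \;=\; \mathbb{E}\!\left[\left(\frac{\partial}{\partial\theta}\,
        \log p(x;\theta)\right)^{2}\right]
  \;=\; -\,\mathbb{E}\!\left[\frac{\partial^{2}}{\partial\theta^{2}}\,
        \log p(x;\theta)\right]
```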

I'm curious: How would you modify, for instance, the Izhikevich neuron
equations

http://www.izhikevich.org/publications/spikes.htm

in accordance with your idea?  (I reference this just because it's the
neuron model I've worked with most recently.)
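
For concreteness, here's the baseline I have in mind -- a minimal Euler-step
sketch of Izhikevich's simple model, with the standard regular-spiking
parameters from his paper.  This is just the unmodified model; presumably a
dP/dt-style scheme would add or replace terms in these two update equations,
but I'm not assuming anything about what that modification looks like.

```python
import numpy as np

def izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0, I=10.0, T=200.0, dt=0.25):
    """Euler simulation of Izhikevich's simple spiking model.

    dv/dt = 0.04 v^2 + 5 v + 140 - u + I
    du/dt = a (b v - u)
    with reset v -> c, u -> u + d whenever v reaches 30 mV.
    Defaults are his regular-spiking parameters with a constant input I.
    """
    steps = int(T / dt)
    v, u = c, b * c                 # standard initial conditions
    vs = np.empty(steps)
    for t in range(steps):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:               # spike: clamp the trace, then reset
            vs[t] = 30.0
            v, u = c, u + d
        else:
            vs[t] = v
    return vs

trace = izhikevich()
n_spikes = int((trace == 30.0).sum())   # regular spiking under constant drive
```

(Small dt because plain Euler on the quadratic v-equation is touchy;
Izhikevich's own published code splits each 1 ms step for the same reason.)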

Regarding your idea for a cross-disciplinary math/AI/neuro research
institute -- I wish I had the power to get something like that formed.
Maybe I'll be able to do it in a few years' time, in HK or China or
Singapore, we'll see...

-- ben



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393