Steve,

>>> Picking one particular tiny illustrative detail of this - my realization
>>> that neurons MUST communicate derivatives like dP/dt rather than straight
>>> probabilities, to be capable of temporal learning without horrendous
>>> workarounds. I thoroughly explained it on this forum, and no one objected
>>> to any of it, yet it has changed nothing.
>>>
>>
>> To those of us not working on neural net models, this sort of insight is
>> kinda irrelevant...
>>
>
> I'm not so sure. Don't you use Bayesian methods to compute probabilities,
> that change with circumstances? If so, by converting the inputs to dP/dt
> notation, computation throughout remains in dP/dt. At the outputs, you will
> need to integrate to get back to P. The net result is that you no longer
> need past memory for temporal learning - all without a single
> neuron-equivalent in your code.
>


Certainly one can do probabilistic reasoning about rates of change of
probabilities.  But can you give an example to illustrate why this is a
profoundly better approach?
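To make sure we're talking about the same thing, here is a toy sketch of the boundary conversion as I understand it - differentiate the probability stream at the inputs, compute in dP/dt, integrate at the outputs. This is my own construction for discussion, not your actual scheme:

```python
# Toy sketch of the proposed boundary conversion: finite-difference an
# input probability stream P(t) into dP/dt at the input, and integrate
# at the output to recover P(t). My construction, not Steve's code.

def to_dpdt(p_stream, dt=1.0):
    """Finite-difference a probability time series at the input boundary."""
    return [(b - a) / dt for a, b in zip(p_stream, p_stream[1:])]

def from_dpdt(dp_stream, p0=0.0, dt=1.0):
    """Integrate a dP/dt series at the output boundary to recover P(t)."""
    out, p = [p0], p0
    for dp in dp_stream:
        p += dp * dt
        out.append(p)
    return out

probs = [0.1, 0.3, 0.6, 0.5]
derivs = to_dpdt(probs)                      # dP/dt between successive bins
recovered = from_dpdt(derivs, p0=probs[0])   # matches probs up to rounding
```

As far as I can see, the round trip is lossless given the initial value P(0), so the question remains what the derivative representation buys you in between.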


>> I'm curious: How would you modify, for instance, the Izhikevich neuron
>> equations
>>
>
> I wouldn't. This is SO simple, it is almost hard to even see it.
> Converting to/from dP/dt is something you do at the inputs and outputs.
> Within the NN/Bayesian "network" computations remain the SAME, only now
> things become naturally capable of temporal learning.
>
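For readers following along, here are the standard Izhikevich (2003) equations I mentioned, in a plain forward-Euler simulation - unmodified, since you say the conversion happens only at the boundaries. The current value and step count below are just illustrative choices:

```python
# The standard (unmodified) Izhikevich 2003 neuron equations:
#   v' = 0.04*v**2 + 5*v + 140 - u + I,   u' = a*(b*v - u)
#   with reset v <- c, u <- u + d whenever v >= 30 mV.
# Plain forward-Euler simulation of one regular-spiking neuron.

def izhikevich_spikes(I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.5, steps=2000):
    """Return spike times (ms) for a constant input current I."""
    v, u, spikes = c, b * c, []
    for step in range(steps):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                 # spike detected: record and reset
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes

spikes = izhikevich_spikes(I=10.0)    # steady current drives repetitive firing
```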


So, consider a standard firing rate model of a neuron.  There is a lot of
evidence that, in various cases, the firing rate of certain neurons can be
normalized into a probability value, representing a probability of some
relevant event observed by the organism.  Are you suggesting that, in
other cases, the firing rate can meaningfully be normalized into a number
representing the rate of change of some probability?  Can you point to some
neuroscience experiments validating this hypothesis?  Or if not, can you
suggest a specific experiment that you think could be done to
validate/refute the hypothesis?
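To make the contrast concrete, here is a toy illustration (again my own construction, with made-up calibration constants) of the two readings: the same normalized firing-rate signal taken directly as P(t), versus taken as dP/dt and integrated. For the derivative reading I assume a signed deviation from some baseline rate, since dP/dt, unlike a rate, can be negative:

```python
# Toy contrast between two readings of a normalized firing-rate signal:
#   (a) rate read directly as a probability: P(t) = r(t) / r_max
#   (b) rate read as a derivative: dP/dt ~ gain * (r(t) - baseline),
#       recovered by integration.
# R_MAX, BASELINE, and GAIN are hypothetical calibration constants.

R_MAX, BASELINE = 100.0, 50.0        # Hz; made-up calibration values
rates = [60.0, 70.0, 55.0, 40.0]     # firing rate measured per time bin

# (a) rate-as-probability
p_direct = [r / R_MAX for r in rates]

# (b) rate-as-derivative, integrated back into a probability
GAIN, DT, p = 0.01, 1.0, 0.5
p_integrated = []
for r in rates:
    p += GAIN * (r - BASELINE) * DT
    p = min(1.0, max(0.0, p))        # clip: P must remain in [0, 1]
    p_integrated.append(p)
```

An experiment would then need to distinguish which reading better predicts behavior - the two give quite different probability trajectories for the same rate sequence.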




>> Maybe I'll be able to do it in a few years time, in HK or China or
>> Singapore, we'll see...
>
> I wasn't really expecting a return message to report of China  8-:D>
>

Oh... well, I am living in Hong Kong now, and that's where the bulk of the
OpenCog project is now situated.

AGI researcher Wlodek Duch is now spending much of his time in Singapore...

Roboticist David Hanson is splitting most of his time between Singapore,
Guangzhou (China) and Hong Kong...

The AGI-13 conference will be in Beijing...

Seoul National University recently started a grad school of Convergence
Technology (nano-bio-info-cogno)

So, if I were going to try to start an interdisciplinary AGI/cognition
research center somewhere, it would probably be in Asia, yeah...

-- Ben



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com
