Re: [agi] The emergence of probabilistic inference from hebbian learning in neural nets

2003-12-25 Thread Brad Wyble
This is exactly backward, which makes using it as an unqualified presumption a little odd. Fetching an object from true RAM is substantially more expensive than executing an instruction in the CPU, and the gap has only gotten worse with time. That wasn't my point, which you may have
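
A rough way to see the gap being described: chase pointers through memory far larger than any cache, versus doing the same number of pure-arithmetic steps. A minimal Python sketch (the sizes and timing method are illustrative assumptions, not from the thread; CPython's interpreter overhead blunts the effect, so in C the gap is far larger):

    import random, time

    N = 1 << 22                   # ~4M entries, well past typical cache sizes
    perm = list(range(N))
    random.shuffle(perm)          # random permutation defeats prefetching

    # Pointer-chase: each step fetches from an unpredictable address.
    t0 = time.perf_counter()
    i = 0
    for _ in range(N):
        i = perm[i]
    t_fetch = time.perf_counter() - t0

    # The same number of steps of pure arithmetic, no memory dependence.
    t0 = time.perf_counter()
    acc = 0
    for k in range(N):
        acc += k
    t_arith = time.perf_counter() - t0

    print(f"random fetches: {t_fetch:.2f}s  arithmetic: {t_arith:.2f}s")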

Re: [agi] The emergence of probabilistic inference from hebbian learning in neural nets

2003-12-24 Thread deering
Ben, you haven't given us an update on how things are going with the Novamente A.I. engine lately. Is this because progress has been slow and there is nothing much to report, because you don't want to get people's hopes up while you are still so far from being done, or because you want to surprise

RE: [agi] The emergence of probabilistic inference from hebbian learning in neural nets

2003-12-24 Thread Ben Goertzel
Brad, Hmmm... yeah, the problem you describe is actually an implementation issue, which is irrelevant to whether one does synchronous or asynchronous updating. It's easy to use a software design where, when a neuron sends activation to another neuron, a check is done as to whether the target
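
A minimal sketch of the kind of design being described here (the names, decay scheme, and parameters are illustrative assumptions, not Webmind's actual code): neurons are queued for processing only when incoming activation pushes them over threshold, so sub-threshold neurons and their synapses cost nothing.

    from collections import deque

    THRESHOLD = 1.0
    DECAY = 0.5

    activation = {}    # neuron id -> current activation level
    synapses = {}      # neuron id -> list of (target id, weight)
    queue = deque()    # neurons awaiting processing
    queued = set()

    def send_activation(source):
        a = activation.get(source, 0.0)
        for target, weight in synapses.get(source, []):
            activation[target] = activation.get(target, 0.0) + a * weight
            # the check in question: enqueue a target only if the new
            # activation pushed it over threshold
            if activation[target] >= THRESHOLD and target not in queued:
                queue.append(target)
                queued.add(target)

    def run(seeds, max_steps=10_000):
        for n in seeds:
            activation[n] = THRESHOLD
            queue.append(n)
            queued.add(n)
        steps = 0
        while queue and steps < max_steps:
            n = queue.popleft()
            queued.discard(n)
            send_activation(n)
            activation[n] *= DECAY   # spent activation decays after firing
            steps += 1

    synapses.update({"a": [("b", 1.2), ("c", 0.3)], "b": [("c", 0.4)]})
    run(["a"])    # "b" crosses threshold and fires; "c" stays sub-threshold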

RE: [agi] The emergence of probabilistic inference from hebbian learning in neural nets

2003-12-24 Thread Brad Wyble
Guess I'm too used to more biophysical models in which that approach won't work. In the models I've used (which I understand aren't relevant to your approach) you can't afford to ignore a neuron or its synapses because they are under threshold. Interesting dynamics are occurring even when the
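
For contrast, a minimal leaky integrate-and-fire sketch of why that trick fails in more biophysical models (all parameters are illustrative): the membrane potential keeps integrating and leaking on every timestep, so a neuron's state changes even when it never crosses threshold, and skipping it would lose that history.

    DT = 0.1         # timestep, ms
    TAU = 10.0       # membrane time constant, ms
    V_THRESH = 1.0   # spike threshold
    V_RESET = 0.0

    def step(v, input_current):
        # sub-threshold evolution: leak toward rest plus injected current
        v += DT * (-v / TAU + input_current)
        if v >= V_THRESH:
            return V_RESET, True     # spike and reset
        return v, False              # no spike, but v has still changed

    v = 0.0
    for t in range(200):
        v, spiked = step(v, input_current=0.08)
        # even with no spike, v carries history that later input builds on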

RE: [agi] The emergence of probabilistic inference from hebbian learning in neural nets

2003-12-24 Thread Ben Goertzel
Yep, you're right of course. The trick I described is workable only for simplified formal NN models, and for formal-NN-like systems such as Webmind. It doesn't work for neural nets that more closely simulate physiology, and it also isn't relevant to systems like Novamente that are less NN-like

Re: [agi] The emergence of probabilistic inference from hebbian learning in neural nets

2003-12-23 Thread Pei Wang
Actually, in attractor neural nets it's well-known that using random asynchronous updating instead of deterministic synchronous updating does NOT change the dynamics of a neural network significantly. The attractors are the same and the path of approach to an attractor is about the same. The
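
A small Hopfield-style sketch of this point (illustrative, not from the thread): with symmetric Hebbian weights, deterministic synchronous updating and random asynchronous updating settle into the same stored attractor.

    import numpy as np

    pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
    W = np.outer(pattern, pattern).astype(float)
    np.fill_diagonal(W, 0.0)        # Hebbian weights, no self-connections

    noisy = pattern.copy()
    noisy[:2] *= -1                 # corrupt two units

    # deterministic synchronous updating: all units at once
    s = noisy.copy()
    for _ in range(10):
        s = np.where(W @ s >= 0, 1, -1)

    # random asynchronous updating: one unit at a time, in random order
    a = noisy.copy()
    rng = np.random.default_rng(0)
    for _ in range(10 * len(a)):
        i = rng.integers(len(a))
        a[i] = 1 if W[i] @ a >= 0 else -1

    print(np.array_equal(s, pattern), np.array_equal(a, pattern))  # True True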

RE: [agi] The emergence of probabilistic inference from hebbian learning in neural nets

2003-12-23 Thread Ben Goertzel
Hi, Actually, in attractor neural nets it's well-known that using random asynchronous updating instead of deterministic synchronous updating does NOT change the dynamics of a neural network significantly. The attractors are the same and the path of approach to an attractor is about the

Re: [agi] The emergence of probabilistic inference from hebbian learning in neural nets

2003-12-22 Thread Pei Wang
Ben, Some comments on this interesting article: *. S = space of formal synapses, each one of which is identified with a pair (x,y), with x ∈ N and y ∈ N∪S. Why not x ∈ N∪S? *. outgoing: N → S* and incoming: N → S*. Don't you want them to cover higher-order synapses? *. standard neural net

RE: [agi] The emergence of probabilistic inference from hebbian learning in neural nets

2003-12-22 Thread Ben Goertzel
Pei, Thanks for your thoughtful comments! Here are some responses... - *. S = space of formal synapses, each one of which is identified with a pair (x,y), with x ∈ N and y ∈ N∪S. Why not x ∈ N∪S? - No strong reason -- but I couldn't see a need for that degree of generality in
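
A minimal sketch of the data structure under discussion (the names are illustrative): a formal synapse pairs a source x ∈ N with a target y ∈ N ∪ S, so a synapse may end on another synapse, which is what makes "higher-order" synapses expressible.

    from dataclasses import dataclass
    from typing import Union

    @dataclass(frozen=True)
    class Node:
        name: str

    @dataclass(frozen=True)
    class Synapse:
        source: Node                      # x ∈ N
        target: Union[Node, "Synapse"]    # y ∈ N ∪ S
        weight: float = 0.0

    n1, n2 = Node("n1"), Node("n2")
    s1 = Synapse(n1, n2, 0.5)    # ordinary synapse: node -> node
    s2 = Synapse(n2, s1, 0.2)    # higher-order synapse: node -> synapse
    # Pei's question amounts to: why not allow source ∈ N ∪ S as well,
    # i.e. synapses that also originate from synapses?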

[agi] The emergence of probabilistic inference from hebbian learning in neural nets

2003-12-20 Thread Ben Goertzel
Hi, For those with the combination of technical knowledge and patience required to sift through some fairly mathematical and moderately speculative cog-sci arguments... some recent thoughts of mine have been posted at http://www.goertzel.org/dynapsyc/2003/HebbianLogic03.htm The topic is: **How
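
A minimal sketch of the subject-line idea under its simplest reading (the normalization scheme here is an illustrative assumption, not the paper's actual construction): a Hebbian count of co-activations, divided by presynaptic activity, converges toward the conditional probability P(post | pre).

    import random

    random.seed(0)
    p_a = 0.5            # P(neuron A active)
    p_b_given_a = 0.8    # true conditional probability to be recovered

    count_a = 0
    count_ab = 0
    for _ in range(10_000):
        a = random.random() < p_a
        b = a and (random.random() < p_b_given_a)
        if a:
            count_a += 1
            if b:
                count_ab += 1    # Hebbian: strengthen on co-activation

    # the normalized weight estimates P(B | A)
    print(count_ab / count_a)    # ~0.8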