Re: [agi] The emergence of probabilistic inference from hebbian learning in neural nets

2003-12-24 Thread deering



Ben, you haven't given us an update on how things are going with the Novamente AI engine lately. Is this because progress has been slow and there is nothing much to report, or because you don't want to get people's hopes up while you are still so far from being done, or because you want to surprise us one day with, "Hey guys, guess what? The Singularity has arrived!"?





[agi] Update on Novamente progress

2003-12-24 Thread Ben Goertzel




Hi Mike,

About Novamente project progress...

The reason I haven't given progress updates to this list lately is that I've been even more insanely busy than usual, due to a combination of AI work, (Novamente-related) business work, and personal-life developments. So recreational emailing has fallen by the wayside of late. Now that Christmas vacation has come, I have a little time to send emails! [I'm going on vacation for 4 days starting tomorrow, though, so I'll be offline for a little while.]

Progress on Novamente has actually stepped up considerably as of September, when a few new people were brought into the project (in connection with a commercial application of Novamente), some of them working on language processing and one working on more fundamental AI stuff (representation of complex knowledge, and evolutionary learning).

However, our recent progress has been of a technical nature; it hasn't yet yielded big milestones that are exciting to report to the world at large.

Regarding language processing, we have a system that "reads" English text and outputs the semantic relationships contained in the text into Novamente. This is one way to fill the system's mind up with information, though from an AGI perspective it must be considered a complement to experiential interactive learning, rather than the sole means of providing the system with knowledge. However, this "relationship extraction" software is far from complete; there's probably six months of work left on the "syntax" side, which will proceed in parallel with work on the semantic interpretation of the extracted relationships.
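
Just to make the output format concrete, here is a toy Python sketch of relationship extraction. The regex rule, the extract_relations function, and the triple format are all invented for illustration; they have nothing to do with the actual Novamente language pipeline:

    import re

    # Toy rule: "<Subject> <verb>s <object>." -- nothing like a real parser,
    # just the shape of the output: semantic relationship triples.
    PATTERN = re.compile(r"^(\w+)\s+(\w+?)s\s+(\w+)\.$")

    def extract_relations(sentence):
        """Map a simple English sentence to (relation, subject, object) triples."""
        m = PATTERN.match(sentence.strip())
        if not m:
            return []
        subj, verb, obj = m.groups()
        return [(verb, subj, obj)]

    print(extract_relations("Cat chases mouse."))  # [('chase', 'Cat', 'mouse')]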

Regarding reasoning, we've made a lot of progress on probabilistic inference, in the context of experimenting with inference on biological data (quantitative experimental data and data from relational DBs), and (more recently) on linguistic knowledge. This has involved a lot of technical math work on my part, working out various details in the inference system. The results here are very interesting, although there's a lot more testing and tweaking required, particularly regarding the control of inference.
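
As an illustration of the kind of rule involved (this is the standard probabilistic deduction formula under an independence assumption, not necessarily the exact rule used in Novamente), chaining A->B and B->C into A->C looks like this:

    def deduce(sAB, sBC, sB, sC):
        """Chain P(B|A) and P(C|B) into P(C|A), assuming C is independent
        of A given B and given not-B.  sXY = P(Y|X), sX = P(X)."""
        # P(C|~B), recovered from P(C) = P(B)P(C|B) + P(~B)P(C|~B)
        sC_given_notB = (sC - sB * sBC) / (1.0 - sB)
        return sAB * sBC + (1.0 - sAB) * sC_given_notB

    # e.g. P(B|A)=0.8, P(C|B)=0.9, P(B)=0.5, P(C)=0.6  ->  P(C|A)=0.78
    print(deduce(0.8, 0.9, 0.5, 0.6))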

We've got the code in place for our own generalization of combinatory logic, which is the scheme we're using to represent complex knowledge in Novamente (the knowledge that would be represented using variables and quantifiers in a traditional logic-based system). And we've generalized the Bayesian Optimization Algorithm (an extension of genetic algorithms based on probability theory, invented by Martin Pelikan and David Goldberg) to learn complex combinatory logic expressions. Experimentation with this is actively ongoing, and during the first half of 2004 this code will be integrated with the probabilistic reasoning code. I spent a lot of time last month working out the nasty software-design details of integrating inferential processing with combinatory logic; that design will be implemented in January.
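
For readers unfamiliar with combinatory logic: it represents, using only combinators and function application, knowledge that would otherwise need variables. A minimal Python sketch of the classical (ungeneralized) S/K/I calculus, purely illustrative and nothing like Novamente's generalized version:

    # Terms are the strings 'S', 'K', 'I', atoms like 'x', or pairs (f, x)
    # denoting the application of f to x.
    def reduce_step(term):
        """One leftmost-outermost reduction step; returns (new_term, changed)."""
        if isinstance(term, tuple):
            f, x = term
            if f == 'I':                                   # I a -> a
                return x, True
            if isinstance(f, tuple) and f[0] == 'K':       # (K a) b -> a
                return f[1], True
            if (isinstance(f, tuple) and isinstance(f[0], tuple)
                    and f[0][0] == 'S'):                   # ((S a) b) c -> (a c)(b c)
                a, b, c = f[0][1], f[1], x
                return ((a, c), (b, c)), True
            f2, changed = reduce_step(f)                   # otherwise reduce inside
            if changed:
                return (f2, x), True
            x2, changed = reduce_step(x)
            return ((f, x2) if changed else term), changed
        return term, False

    def normalize(term, limit=100):
        """Reduce until no rule applies (or the step limit is hit)."""
        for _ in range(limit):
            term, changed = reduce_step(term)
            if not changed:
                break
        return term

    # S K K behaves as the identity:  ((S K) K) x  ->  (K x)(K x)  ->  x
    assert normalize(((('S', 'K'), 'K'), 'x')) == 'x'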

I redesigned our previous "attention allocation" subsystem, which used to use neural-net-based ideas, to use a different approach based on probabilistic inference, thus simplifying the system and (I hope) inducing more emergence among components. This hasn't been implemented yet, though.
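
No details of the new design are given here, but as a toy illustration of the general flavor of "attention allocation as probabilistic inference", one might update a node's importance by Bayes' rule. The rule and all parameters below are invented for illustration and are not Novamente's:

    def update_importance(p_useful, p_access_if_useful, p_access_if_not, accessed):
        """Treat a node's 'importance' as P(useful) and update it by Bayes'
        rule on the evidence of whether the node was just accessed.
        All quantities here are hypothetical, purely for illustration."""
        like_useful = p_access_if_useful if accessed else 1 - p_access_if_useful
        like_not = p_access_if_not if accessed else 1 - p_access_if_not
        numerator = like_useful * p_useful
        return numerator / (numerator + like_not * (1 - p_useful))

    print(update_importance(0.2, 0.7, 0.1, accessed=True))  # ~0.636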

Yes, we STILL are not at the phase where we've hooked up Novamente to a "simulated body" in a simulated world and started teaching it experientially. We're very much looking forward to that day! At this point, I can at least say there are no major components that are unengineered ... the coding of the generalized-combinatory-logic framework was the biggest beast AI-wise. However, there's a lot of integration, testing and tuning work ahead, and some moderate-sized beasts remain uncoded (like probabilistic attention allocation, and probabilistic logical unification). Also, there is some nitty-gritty work as yet undone, such as extending the Novamente core to run on multiple machines as a distributed system (the design was made to support extension to distributed processing, but the work hasn't been done yet). And, we've ordered our first 64-bit machine, and in early 2004 we will undertake the task of porting the core to 64-bit Linux on 64-bit hardware. Depending on how much attention we can give to AGI as opposed to commercial Novamente applications, we could get to the "teaching the baby" phase in mid or late 2004, or it might not be till 2005.

Next, some comments on funding.

Our strategy of funding AGI work via commercial applications of the in-progress AI system is working, in the sense that we're making progress implementing and testing the AI system, while getting paid for it at a reasonable level. Some of the commercial applications are also very interesting in themselves; for instance, in our bioinformatics work we've made some real progress in creating original diagnostic tools for a couple of diseases (publications on this will come out in early 2004;

RE: [agi] The emergence of probabilistic inference from hebbian learning in neural nets

2003-12-24 Thread Ben Goertzel


Brad,

Hmmm... yeah, the problem you describe is actually an implementation issue, which is irrelevant to whether one does synchronous or asynchronous updating.

It's easy to use a software design where, when a neuron sends activation to another neuron, a check is done as to whether the target neuron is over threshold.  If it's over threshold, then it's put on the "ready-to-fire" queue.  Rather than iterating through all neurons in each cycle, one simply iterates through those neurons on the ready-to-fire queue.

Of course, one can use this approach with either synchronous or asynchronous
updating.
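
A minimal sketch of this event-driven pattern in Python (the names and the simple neuron model are invented for illustration; this is not Webmind code):

    from collections import deque

    class Neuron:
        def __init__(self, threshold=1.0):
            self.threshold = threshold
            self.activation = 0.0
            self.targets = []      # list of (target_neuron, weight) pairs
            self.queued = False    # already on the ready-to-fire queue?

    ready = deque()                # only over-threshold neurons live here

    def fire(source):
        """Propagate activation from `source`, checking each target's
        threshold at delivery time instead of sweeping the whole net."""
        for target, weight in source.targets:
            target.activation += weight
            if target.activation >= target.threshold and not target.queued:
                target.queued = True
                ready.append(target)
        source.activation = 0.0    # reset after firing

    def run(max_steps):
        """Asynchronous update loop: iterate only over queued neurons."""
        for _ in range(max_steps):
            if not ready:
                break
            neuron = ready.popleft()
            neuron.queued = False
            fire(neuron)

    # usage: seed one neuron over threshold and let activation spread
    a, b = Neuron(), Neuron(threshold=0.5)
    a.targets.append((b, 0.6))
    ready.append(a); a.queued = True
    run(10)   # fires a, which pushes b over threshold; b then fires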

We used this design pattern in Webmind, which had a neural net aspect to its
design; Novamente is a bit different, so such a strategy isn't relevant.

-- Ben G


 While I haven't read any of the documents in question, I'd like
 to expound a bit here.

 While you are certainly correct, I think Pei was referring to the wasted
 computational power of updating synapses that are inactive and have no
 chance of being activated in the near future.  In our current von Neumann
 architectures, memory is much cheaper than CPU cycles, which is
 not the case in the brain.

 So while the brain opts for minimal neurons, and keeps most of them active
 in any given situation, a silicon NN might have a factor of 10 more
 neurons, but use very sparse encoding and a well-optimized update
 algorithm.  This setup would emphasize only spending CPU time updating
 neurons that have a chance of being active.

 -Brad



RE: [agi] The emergence of probabilistic inference from hebbian learning in neural nets

2003-12-24 Thread Brad Wyble


Guess I'm too used to more biophysical models, in which that approach won't work.  In the models I've used (which I understand aren't relevant to your approach) you can't afford to ignore a neuron or its synapses because they are under threshold.  Interesting dynamics are occurring even when the neuron isn't firing.  You could ignore some neurons that are at rest and haven't received any direct or modulatory input for some time, but then you'd need some fancy optimizations to ensure you're not missing anything.
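
For contrast, a generic leaky integrate-and-fire update (a standard textbook model, not any specific model referred to here) shows why: the membrane potential evolves on every step even when no spike occurs, so a biophysical simulator can't simply skip subthreshold neurons.

    def lif_step(v, i_syn, dt=0.1, tau=10.0, v_rest=-65.0, r=1.0):
        """One Euler step of tau * dv/dt = -(v - v_rest) + r * i_syn.
        The potential drifts on every step, spike or no spike."""
        return v + dt * (-(v - v_rest) + r * i_syn) / tau

    v, v_thresh = -65.0, -50.0
    for t in range(200):
        v = lif_step(v, i_syn=20.0)   # constant drive; v climbs toward -45 mV
        if v >= v_thresh:
            print("spike at step", t)
            v = -65.0                  # reset after the spike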

But in the situation you're referring to, with a more abstract (and therefore more useful to AGI) implementation, these details are irrelevant.

I just wanted to chime in and ramble a bit :)

Very glad to hear things are going well with Novamente.  

Hope the holidays treat all of you well.

-Brad





RE: [agi] The emergence of probabilistic inference from hebbian learning in neural nets

2003-12-24 Thread Ben Goertzel

Yep, you're right of course.  The trick I described is workable only for simplified formal NN models, and for formal-NN-like systems such as Webmind.  It doesn't work for neural nets that more closely simulate physiology, and it also isn't relevant to systems like Novamente that are less NN-like.
ben
