Ben,

 

Thanks for your reply; it was helpful.

 

Your answer prompts me to ask: for which brain-like thinking processes would
MOSES be a win over simply having the hypergraph itself compute candidate
solutions?

 

Hofstadter's Copycat has shown that: (a) various relaxations of a given
multiple-constraint relaxation problem can give very different answers, (b)
perturbations of various degrees can be created from states of interest in a
solution space created by such relaxation, and (c) such relaxations all have
an inherent fitness function corresponding to the energy (or fit) of the
relaxation, a fitness function that can be applied to virtually any problem
solved with such relaxations.
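
To make the idea concrete, here is a rough Python sketch of the kind of
energy-based relaxation I have in mind (the names and the Metropolis-style
acceptance rule are my own illustrative assumptions, not Copycat's actual
mechanics):

===========
import math
import random

def energy(state, constraints):
    """Sum of constraint-violation penalties; lower energy = better fit.
    This residual energy doubles as a problem-independent fitness measure."""
    return sum(c(state) for c in constraints)

def relax(state, constraints, perturb, steps=1000, temperature=1.0):
    """Stochastic relaxation: accept perturbations that lower the energy,
    and occasionally worse ones, with probability set by the temperature."""
    current = energy(state, constraints)
    for _ in range(steps):
        candidate = perturb(state)                # small perturbation
        e = energy(candidate, constraints)
        if e < current or random.random() < math.exp((current - e) / temperature):
            state, current = candidate, e
        temperature *= 0.999                      # cool toward pure descent
    return state, current                         # relaxed state + its fitness
===========

Run from different starting states, or with perturbations of different
sizes, such a relaxation exhibits exactly the properties (a)-(c) above: very
different answers, graded perturbation, and a built-in fitness measure.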

 

Thus it would appear that controlled parallel perturbation within the
hypergraph could have all the potential of a genetic algorithm --- it would
appear to be more generally applicable to most human-like reasoning --- and it
would interface more seamlessly with the overall Novamente architecture.

 

I assume there are many problems, particularly ones where efficient fitness
functions can be found, where MOSES would greatly outperform such
Copycat-like relaxations.  But it seems to me that in many AGI problems ---
at least those based on deriving solutions, or proposed solutions, from
grounded experiential learning --- Copycat-like relaxations in the
hypergraph would often greatly outperform MOSES.

 

As I said, the Copycat-like approach would have a built-in, universal
fitness function.  The fitness function you proposed (i.e., the strength and
confidence of the "Context & Procedure ==> Goal" implication) would require a
relatively large sample size to be calculated with any accuracy, and each
calculation would have to be made using one of the following fitness
measuring methods you proposed:

 

==========="

-- by direct evaluation, i.e. by executing P1 and seeing what happens

-- by simulation, i.e. by executing P1 in some internal simulation world
rather than in the real world, and seeing what happens

-- by inference (PLN)

"===========

 

Since MOSES would presumably be a relatively blind trial-and-error process
(requiring many candidate solutions for a problem to be generated and
fitness-evaluated before it would have any probabilistic distributions to
guide its spawning of new candidates), it would seem that thousands to
millions of candidates would have to be evaluated by whatever fitness
function it used, often at considerable computational expense.
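
For concreteness, a generic estimation-of-distribution loop of the sort such
a search implies might look like this (a minimal sketch with hypothetical
names, not MOSES itself):

===========
def eda_search(sample_uniform, sample_model, fit_model, fitness,
               pop_size=1000, generations=50, elite_frac=0.2):
    """Generic estimation-of-distribution search.  With these defaults the
    fitness function is called pop_size * generations = 50,000 times."""
    population = [sample_uniform() for _ in range(pop_size)]  # blind start
    best_score, best = float("-inf"), None
    for _ in range(generations):
        scored = sorted(((fitness(c), c) for c in population),
                        key=lambda sc: sc[0], reverse=True)   # the costly step
        if scored[0][0] > best_score:
            best_score, best = scored[0]
        elite = [c for _, c in scored[:int(elite_frac * pop_size)]]
        model = fit_model(elite)     # distribution learned from good candidates
        population = [sample_model(model) for _ in range(pop_size)]
    return best
===========

Until fit_model has been handed at least one scored elite set, the sampling
is uniform --- i.e., blind --- so every one of those tens of thousands of
fitness calls must be paid for in full.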

 

(One would assume that a much higher percentage of solutions derived by
Copycat-like relaxation within the hypergraph would be reasonable (provided
the system had enough relevant learned knowledge), since they would take
into account much more of the relevant learned knowledge --- including
episodic, generalizational, analogical, and deeper semantic reasoning --- and
thus the number of proposed solutions that would have to be evaluated once
generated would be much smaller.)

 

Your first proposed fitness evaluation method --- direct evaluation --- would
often not be feasible when applied to thousands or millions of trial
solutions, both because of the time required for acting in the world and
because of the considerable amount of AGI computation that would often be
required to perceive and evaluate the feedback from such a trial solution.

 

Your second proposed fitness evaluation method --- simulation --- might in
many cases be more efficient.  On really powerful supercomputers, thousands
or millions of trial evaluations via action in a simulated world might be
viable, but it would still be computationally expensive, and perceiving the
feedback in such a simulated world --- using perceptual patterns learned by
experience within the simulation --- would often seem computationally
expensive as well, unless the desired goal state was very easy to evaluate,
or unless there had been a hand-programmed hack to speed things up.

 

Your third proposed fitness evaluation method, PLN inferencing, would often
not be much more efficient than merely attempting to solve the problem by
Copycat-like relaxation, since such PLN inferencing would itself, in any
case of reasonable complexity, require probabilistic constraint relaxation
for the evaluation of each of the many solutions spawned by MOSES.

 

Thus --- except in problems where MOSES's many candidate solutions could be
evaluated with much less total computation than would be required to produce
and evaluate the energy of the presumably smaller number of generally better
solutions proposed by a Copycat-like relaxation method --- it would seem
that such direct relaxations would be more efficient.  
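
A back-of-envelope comparison, with numbers that are purely hypothetical,
shows the shape of that trade-off:

===========
def total_cost(n_candidates, cost_generate, cost_evaluate):
    """Total work = candidates * (generation + evaluation cost each)."""
    return n_candidates * (cost_generate + cost_evaluate)

# MOSES-style: huge candidate count, cheap generation
moses   = total_cost(100_000, cost_generate=1,    cost_evaluate=50)
# Relaxation-style: few, better candidates, each costly to produce
copycat = total_cost(100,     cost_generate=2000, cost_evaluate=50)
print(moses, copycat)   # 5,100,000 vs. 205,000 -> relaxation wins here
===========

Only when per-candidate evaluation becomes cheap enough to offset MOSES's
much larger candidate count does the MOSES column come out ahead.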

 

But MOSES might be much more beneficial in exploring spaces for which the
system has no relevant experience to guide a Copycat-like hypergraph
relaxation, or spaces which, because of their mathematical or logical nature
--- such as the space of computer programs --- may be more methodically or
efficiently explored by programming than by hypergraph relaxation.  In
addition, MOSES could outperform hypergraph relaxation in both the speed and
the degree of difference with which it could generate novel patterns.

 

Thus, I can certainly imagine how MOSES could be used to supercharge a
hypergraph's ability to solve certain types of exploratory problems,
particularly ones humans are not especially good at.  But it is not clear
to me that it is a win for the majority of the types of problems the human
brain solves relatively well.

 

 

I would be interested in your thoughts (and those of any others on this
list) concerning the above.

 

Ed Porter

 

 

 

 

-----Original Message-----
From: open...@googlegroups.com [mailto:open...@googlegroups.com] On Behalf
Of Ben Goertzel
Sent: Tuesday, December 16, 2008 7:53 PM
To: open...@googlegroups.com
Cc: agi@v2.listbox.com
Subject: [OpenCog] Re: What is the role of MOSES in Novamente and Open
Cog?-----was---- internship opportunity at Google (Mountain View, CA)

 


Ed,

Consider a probabilistic implication of the general form

Context & Procedure ==> Goal

meaning

(if Context C is present) & (Procedure P is executed) ==> (Goal G is
satisfied)

Suppose that C and G are known but P is not known

Then, MOSES may be used to find P

That is, if the system knows what situation it is in (C) and knows what goal
it wants to achieve (G), then MOSES may be used to help find a procedure to
fulfill the goal in the context

Note that the goal may be derived via inferential subgoaling from other
goals.

Note that determining the right way to represent the current situation as a
predicate (a context) is a hard inference/concept-creation problem too

The fitness function may be, for instance, the

strength * confidence

of the implication link denoted ==> above.

The fitness of a candidate procedure P1 may be evaluated either

-- by direct evaluation, i.e. by executing P1 and seeing what happens

-- by simulation, i.e. by executing P1 in some internal simulation world
rather than in the real world, and seeing what happens

-- by inference (PLN)

All that is said plainly in the OpenCogPrime wikibook, but maybe the above
explanation will be helpful due to its compactness?

ben



On Tue, Dec 16, 2008 at 7:23 PM, Ed Porter <ewpor...@msn.com> wrote:

Moshe and Ben,

 

I feel I understand much of Novamente, even parts that I haven't heard
explained, because it is quite similar to ideas I had developed before ever
hearing about Novamente.  

 

But I have never quite understood the role of MOSES in Novamente.

 

I do not question the power of genetic programming.  Ever since I attended a
1999 lecture by Koza on what he had managed to accomplish with genetic
programming --- and then spent about half an hour talking with him after the
lecture and at other times during the supercomputing conference at which it
occurred --- I have been very aware of GP's potential power.  I am quite
certain he claimed that in roughly a tera-op (which could be done in one
thousandth of a second on the current fastest computer) his system derived a
band-pass filter that took humanity almost three decades (until the 1940s)
to develop after the appearance of the earliest band-pass filters.

 

But I haven't been able to figure out exactly how MOSES is used in the
Novamente environment.  For example, Koza said a key to the success of his
genetic programming was having a task for which there was an appropriate
fitness function.  He said that in his experiments using GP to design new
electronic circuitry to operate as relatively optimal band-pass filters, the
fitness function used to determine how well each proposed solution performed
was the electronic simulation software called SPICE.  He estimated that
roughly 99% of his network's compute time (i.e., the above-mentioned
tera-op) was spent evaluating this fitness function.

 

From a recent re-reading of all the portions of a January 2007 version of
Ben's Novamente book (an earlier version of the OpenCog documentation) that
relate to MOSES, I don't remember any clear explanation of what fitness
function would be used to evaluate the presumably many thousands, or
millions, of combo programs that would be generated in an attempt to solve a
single problem.

 

For example, in a pet brain, the pet presumably would not get a chance to
try out each of the thousands of individual combo programs on a human user,
to see which received proper feedback from the user, without thoroughly
exhausting the human user.

 

(?1) So what fitness function would be used to select combo programs or
direct the probabilistic distributions that are used to tune their spawning?
(If this is knowledge you intend to be in the public domain.) 

 

Also, I don't understand the relationship of combo programs to the
hypergraph.  Combo uses a functional language, which presumably seeks to do
away with, or greatly restrict, side effects (obviously a plus if you are
somewhat blindly cutting and pasting program fragments together) --- whereas
it seems spreading activation in a hypergraph is largely all about side
effects.  (The 1,000 to 10,000 synapses per neuron, plus the electromagnetic
field effects caused by neurons in the brain, sure sound like side effects
up the wazoo to me.)

 

I understand (a) that a combo program could be associated with individual
nodes and be computed when they are activated, (b) that hypergraph node or
edge values can be variables in a combo expression, (c) that some subset of
hypergraph spreading-activation inferencing (I didn't understand exactly
which) can be used as functions in combo expressions, and (d) that the
hypergraph can be used to record and generalize information about a combo
program, so as to enable inference in the hypergraph to appropriately
activate combo programs associated with particular hypergraph nodes.

 

(?2) Am I correct in understanding that item (d) just listed could be quite
important as a general concept in AGI learning how to program automatically,
because it would allow the non-combo aspects of Novamente to model combo
programs and to use probabilistic inference and attention focusing to reason
about when they should be used, combined, fed what input, or perhaps even
modified?

 

 

(?3) Am I also correct in guessing that (b) and (c) would seem to enable
combo programs, in effect, to create and try out (given a proper fitness
function) novel hypergraph nodes, which would function in a manner similar
to non-combo nodes, largely through spreading activation?

 

(?4) Other than what is explained above, how are combo programs and
hypergraphs synergistically used in Novamente?

 

Moshe and Ben, you are both very bright --- and you both place a lot of
importance on incorporating MOSES into Novamente --- so I assume there is
something important I am missing.  

 

I would appreciate it very much if you could tell me what it is that I am
missing.

 

Ed Porter

 

-----Original Message-----
From: Moshe Looks [mailto:madscie...@google.com] 
Sent: Monday, December 15, 2008 1:33 PM
To: agi@v2.listbox.com
Subject: [agi] internship opportunity at Google (Mountain View, CA)

 

Hi,

 

I am seeking an intern to work on the open-source probabilistic learning of
programs project over Summer 2009 at Google in Mountain View, CA.
Probabilistic learning of programs (plop) is a Common Lisp framework for
experimenting with meta-optimizing semantic evolutionary search (MOSES) and
related approaches to learning with probability distributions over program
spaces. Possible research topics to focus on include:

 * Learning procedural abstractions
 * Adapting estimation-of-distribution algorithms to program evolution
 * Applying plop to various interesting data sets
 * Adapting plop to do natural language processing or image processing
 * Better mechanisms for exploiting background knowledge in program evolution

This position is open to all students currently pursuing a BS, MS or PhD in
computer science or a related technical field. It is probably better-suited
to a grad student, but I'm open to considering an advanced undergrad as
well. The only hard and fast requirements for consideration are a strong
programming background (any language(s)) and some experience in AI and/or
machine learning. Some pluses:

 * Functional programming experience (esp. Lisp, but ML, Haskell, or even
the functional style of C++ count too)
 * Experience with evolutionary computation or stochastic local search
(esp. estimation-of-distribution algorithms and/or genetic programming)
 * Open-source contributor

More info on plop at http://code.google.com/p/plop/, more info on the
Google internship program at: http://www.google.com/jobs/students

Please contact me directly (off-list) if you are interested.

Thanks!

Moshe Looks

P.S. Disclaimer: I can't promise anyone an internship, you have to go
through the standard Google application & interview process for interns,
yada yada ...

 

 





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

"I intend to live forever, or die trying." 
-- Groucho Marx






