Re: [agi] Re: Can We Start P.S.

2008-07-13 Thread Lukasz Kaiser
Hi Steve,

> This will probably be my last post for a week or so, because I am off to
> WORLDCOMP. Also, I am in a BIG hurry here, so this will be all too brief...

No problem - as this diverges from AGI quite a lot, I'll send more
comprehensive info and links to your private email next week.
One thing that might be interesting for other people using cnx.org:
you can click on an author's name to get all the courses by that author.
For example, there are a few quite nice courses by Moshe Vardi - and
even if you already know the material, they are a way to compare
vocabulary and notation and make your own work more understandable
to others.

Lukasz




Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-13 Thread Jim Bromer
I have read about half of Shastri's 1999 paper "Advances in Shruti - A
neurally motivated model of relational knowledge representation and rapid
inference using temporal synchrony" and I see that he is describing a method
of encoding general information and then using it to do a certain kind of
reasoning, usually called inferential, although he seems to have a novel way
of doing this using what he calls "neural circuits". And he does seem to
touch on the multiple-level issues that I am interested in.  The problem is
that these kinds of systems, regardless of how interesting they are, are not
able to achieve extensibility, because they do not truly describe how the
complexities of the antecedents would themselves have been achieved
(learned) using the methodology described. The unspoken assumption behind
these kinds of studies always seems to be that the one or two systems of
reasoning used in the method should be sufficient to explain how learning
takes place, but the failure to achieve intelligent-like behavior (as is
seen in higher intelligence) gives us a lot of evidence that there must be
more to it.

But the real problem is just complexity (or complicatedity, for Richard's
sake), isn't it?  Doesn't that seem like it is the real problem?  If the
program had the ability to try enough possibilities, wouldn't it be likely
to learn after a while?  Well, another part of the problem is that it would
have to get a lot of detailed information about how good its efforts were,
and this information would have to be pretty specific using the methods that
are common to most current thinking about AI.  So there seem to be two
different kinds of problems.  But the thing is, I think they are both
complexity (or complicatedity) problems.  Get a working solution for one,
and maybe you'd have a working solution for the other.

I think a working solution is possible, once you get beyond the simplistic
perception of seeing everything as if it were ideologically commensurate
just because you believe you can understand it.
Jim Bromer



  




RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-13 Thread Ed Porter
Jim, 

 

Thanks for your questions.  

 

Ben Goertzel is coming out with a book on Novamente soon and I assume it
will have a lot of good things to say on the topics you have mentioned.  

 

Below are some of my comments 

 

Ed Porter

 

JIM BROMER WROTE===>

Can you describe some of the kinds of systems that you think would be
necessary for complex inference problems?  Do you feel that all AGI problems
(other than those technical problems that would be common to a variety of
complicated programs that use large databases) are essentially inference
problems?  Is your use of the term inference here intended to be inclusive
of the various kinds of problems that would have to be dealt with, or are
you referring to a class of problems which are inferential in the more
restricted sense of the term?  (I feel that the two senses of the term are
both legitimate; I am just a little curious about what it was that you were
saying.)



ED PORTER>

I think complex inference involves inferring from remembered instances or
learned patterns of temporal correlation, including those where the things
inferred occurred before, after, and/or simultaneously with the activation
from which the inference is to flow.  The events involved in such
correlations include not only sensory patterns but also emotional (i.e.,
value), remembered, and/or imagined mental occurrences.  I think complex
inference needs to be able to flow up and down compositional and
generalization hierarchies.  It needs to be sensitive to current context,
and to prior relevant memories.  Activations from prior activations should
continue to reside, in some form, at many nodes or node elements for various
lengths of time, to provide a rich representation of context.
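
Here is a toy Python sketch of that idea of lingering, decaying activation
serving as context (the decay rate, node names, and numbers are all made up
for illustration; this is not Shruti's actual mechanism):

    # Toy sketch: each node keeps a residual activation trace that fades
    # over time, so recent activity can bias later inference as "context".
    import math

    class ContextTrace:
        def __init__(self, decay_rate=0.1):
            self.decay_rate = decay_rate
            self.traces = {}                 # node -> (value, last_time)

        def activate(self, node, value, t):
            # add new activation on top of whatever residue remains
            self.traces[node] = (self.get(node, t) + value, t)

        def get(self, node, t):
            value, t0 = self.traces.get(node, (0.0, t))
            return value * math.exp(-self.decay_rate * (t - t0))

    ctx = ContextTrace()
    ctx.activate("lion", 1.0, t=0.0)
    print(ctx.get("lion", t=5.0))            # faded but nonzero residue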

 

The degree to which activation is spread at each hop of a given spreading
activation could be a function not only of the original energy allocated to
the origin of that spreading activation, but also of the probability and
importance of the node from which the next hop is being considered, both a
priori and given the current context of previous and other current
activations.  It should also be a function of the probability and
importance, both a priori and given the current context, of each link from
the current node for which a determination is to be made whether or not to
activate it.  The spreading activation should also be controlled by some
sort of global gain control, computational resource market, or other
competitive measure used to help focus the spreading activation on
better-scoring paths.
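
A toy Python sketch of that kind of scoring (the graph, node weights,
cutoff, and hop limit are all invented for illustration, not a real
design):

    # Toy spreading activation: the energy passed along each link is
    # scaled by the link strength and the target node's importance, and
    # paths whose energy falls below a cutoff are pruned (a crude
    # stand-in for competitive gain control).

    def spread(graph, node_weight, seeds, cutoff=0.05, max_hops=3):
        # graph maps node -> [(neighbor, link_strength), ...];
        # seeds maps origin node -> initial energy.
        activation = dict(seeds)
        frontier = dict(seeds)
        for _ in range(max_hops):
            next_frontier = {}
            for node, energy in frontier.items():
                for neighbor, strength in graph.get(node, []):
                    # energy after the hop depends on the origin energy,
                    # the link strength, and the target's importance
                    e = energy * strength * node_weight.get(neighbor, 1.0)
                    if e < cutoff:           # prune weak paths
                        continue
                    next_frontier[neighbor] = next_frontier.get(neighbor, 0) + e
                    activation[neighbor] = activation.get(neighbor, 0) + e
            frontier = next_frontier
        return activation

    graph = {"lion": [("danger", 0.9), ("cat", 0.6)],
             "danger": [("flee", 0.8)]}
    weights = {"danger": 1.2, "cat": 0.5}    # a priori node importance
    print(spread(graph, weights, {"lion": 1.0}))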

 

As in Shruti, AGI inferencing needs to be able to mix both forward and
backward chaining, and to mix inferencing up and down compositional and
generalization hierarchies.  Also, AGIs need to learn over time which
inferencing patterns are most successful for which types of problems, and to
learn to tune the parameters when applying one or more sets of inferencing
patterns to a given problem, based not only on experience learned from past
performances of the inferencing task, but also on feedback during a given
execution of such a task.
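
A toy sketch of what I mean by learning which inferencing patterns pay off
for which problem types (the problem types, strategy names, prior counts,
and exploration rate are all invented for illustration):

    # Keep a running success rate per (problem type, strategy) pair,
    # prefer the best-scoring strategy, and occasionally explore.
    from collections import defaultdict
    import random

    stats = defaultdict(lambda: [1.0, 2.0])  # -> [successes, attempts]

    def choose(problem_type, strategies, explore=0.1):
        if random.random() < explore:
            return random.choice(strategies)  # occasionally try others
        return max(strategies,
                   key=lambda s: (stats[(problem_type, s)][0]
                                  / stats[(problem_type, s)][1]))

    def feedback(problem_type, strategy, succeeded):
        record = stats[(problem_type, strategy)]
        record[0] += 1.0 if succeeded else 0.0
        record[1] += 1.0

    s = choose("diagnosis", ["forward", "backward", "mixed"])
    feedback("diagnosis", s, succeeded=True)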

 

Clearly something akin to a goal system is needed, and clearly something is
needed to focus attention on the patterns that currently appear most
relevant to current goals, sub-goals, or other things of importance.

 

Inferencing is clearly one of the major things an AGI has to do.  Pattern
recognition can be viewed as a form of inferencing.  Even motor behavior can
be viewed as a type of inference.  For years there have been real-world
control systems that have used if-then inference rules to control mechanical
outputs.

 

I don't know what you mean by the broad and narrow meanings of inferencing.
To me, inferencing means implying or concluding that one set of
representations is appropriate given another.  That's pretty broad.

 

I haven't thought about it enough to know whether I would go so far as to
say all AGI problems are essentially inference problems, but inference is
clearly one of the major things AGI is about.


JIM BROMER WROTE===>

I only glanced at a couple of papers about SHRUTI, and I may be looking at a
different paper than the one you were talking about, but from the website it
looks like you were talking about a connectionist model.  Do you think a
connectionist model (probabilistic or not) is necessary for AGI?  In other
words, I think a lot of us agree that some kind of complex (or complicated)
system of interrelated data is necessary for AGI, and this does correspond
to a network of some kind, but such networks are not necessarily
connectionist.

ED PORTER>

I don't know the exact definition of connectionist.  In its stricter sense
I think it tends to refer to systems where a high percentage of the
knowledge has been learned automatically and is represented in automatically
learned weights and/or automatically learned graph nodes or connections, and
there are no human-defined sy

RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-13 Thread Ed Porter
Richard,  

I think Wikipedia's definition of forward chaining (copied below) agrees
with my stated understanding as to what forward chaining means, i.e.,
reasoning from the "if" (i.e., conditions) to the "then" (i.e.,
consequences) in if-then statements.  

So, once again there is an indication you have unfairly criticized the
statements of another.

Ed Porter

==Wikipedia defines forward chaining as: ==

Forward chaining is one of the two main methods of reasoning when using
inference rules (in artificial intelligence). The other is backward
chaining.

Forward chaining starts with the available data and uses inference rules to
extract more data (from an end user for example) until an optimal goal is
reached. An inference engine using forward chaining searches the inference
rules until it finds one where the antecedent (If clause) is known to be
true. When found it can conclude, or infer, the consequent (Then clause),
resulting in the addition of new information to its data.

Inference engines will often cycle through this process until an optimal
goal is reached.

For example, suppose that the goal is to conclude the color of my pet Fritz,
given that he croaks and eats flies, and that the rule base contains the
following four rules:

If X croaks and eats flies - Then X is a frog 
If X chirps and sings - Then X is a canary 
If X is a frog - Then X is green 
If X is a canary - Then X is yellow 

This rule base would be searched and the first rule would be selected,
because its antecedent (If Fritz croaks and eats flies) matches our data.
Now the consequent (Then X is a frog) is added to the data. The rule base
is again searched and this time the third rule is selected, because its
antecedent (If Fritz is a frog) matches our data that was just confirmed.
Now the new consequent (Then Fritz is green) is added to our data. Nothing
more can be inferred from this information, but we have now accomplished our
goal of determining the color of Fritz.

Because the data determines which rules are selected and used, this method
is called data-driven, in contrast to goal-driven backward chaining
inference. The forward chaining approach is often employed by expert
systems, such as CLIPS.

One of the advantages of forward-chaining over backward-chaining is that the
reception of new data can trigger new inferences, which makes the engine
better suited to dynamic situations in which conditions are likely to
change.
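
For concreteness, here is a minimal Python sketch of the data-driven loop
described above, using the same four Fritz rules (the rule representation
is my own toy illustration, not CLIPS syntax or Shruti's mechanism):

    # Minimal forward-chaining sketch of the Fritz example above.
    # A rule is (set of antecedents, consequent); the loop fires any
    # rule whose antecedents are all present in the fact set, and
    # cycles until no rule adds new data.

    rules = [({"croaks", "eats flies"}, "is a frog"),
             ({"chirps", "sings"}, "is a canary"),
             ({"is a frog"}, "is green"),
             ({"is a canary"}, "is yellow")]

    facts = {"croaks", "eats flies"}      # what we know about Fritz

    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if antecedents <= facts and consequent not in facts:
                facts.add(consequent)     # infer the Then clause
                changed = True

    print(facts)    # now includes "is a frog" and "is green"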


-Original Message-
From: Richard Loosemore [mailto:[EMAIL PROTECTED] 
Sent: Saturday, July 12, 2008 7:42 PM
To: agi@v2.listbox.com
Subject: Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE
BINDING PROBLEM"?

Jim Bromer wrote:
> Ed Porter said:
> 
> It should be noted that Shruti uses a mix of forward chaining and
> backward chaining, with an architecture for controlling when and how
> each is used.
> ...
> 
> My understanding is that forward reasoning is reasoning from conditions
> to consequences, and backward reasoning is the opposite. But I think
> what is a condition and what is a consequence is not always clear, since
> one can use if A then B rules to apply to situations where A occurs
> before B, B occurs before A, and A and B occur at the same time. Thus I
> think the notion of what is forward and backward chaining might be
> somewhat arbitrary, and could be better clarified if it were based on
> temporal relationships. I see no reason that Shruti's "?" activation
> should not be spread across all those temporal relationships, and be
> distinguished from Shruti's "+" and "-" probabilistic activation by not
> having a probability, but just a temporary attentional characteristic.
> Additional inference control mechanisms could then be added to control
> which directions in time to reason with in different circumstances, if
> activation pruning was necessary.
> 

This is not correct.

Forward chaining is when the inference engine starts with some facts and 
then uses its knowledge base to explore what consequences can be derived 
from those facts.  Going in this direction the inference engine does not 
know where it will end up.

Backward chaining is when a hypothetical conclusion is given, and the 
engine tries to see what possible deductions might lead to this 
conclusion.  In general, the candidates generated in this first pass are 
not themselves directly known to be true (their antecedents are not 
facts in the knowledge base), so the engine has to repeat the procedure 
to see what possible deductions might lead to the candidates being true. 
  The process is repeated until it bottoms out in known facts that are 
definitely true or false, or until the knowledge base is exhausted, or 
until the end of the universe, or until the engine imposes a cutoff 
(this is one of the most common results).

The two procedures are quite fundamentally different.


Richard Loosemore





> Furthermore, Shruti does not use multi-level compositional hierarchies
> for many of

RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-13 Thread Ed Porter
Jim,

 

In my prior posts I have listed some of the limitations of Shruti.  The
lack of generalized generalization and compositional hierarchies directly
relates to the problem of learning generalized rules from experience in
complex environments, where the surface representations of many high-level
concepts are virtually never the same.  This relates to your issue about
failing to model the complexity of antecedents.

 

But as the Serre paper I have cited multiple times in this thread shows,
the type of gen/comp hierarchies needed is very complex.  His system models
a 160x160-pixel greyscale image patch with 23 million models, probably each
having something like 256 inputs, for a total of about 6 billion links, and
this is just to do very quick, feedforward, I-think-I-saw-a-lion uncertain
recognition for 1000 objects.  So for a Shruti system to capture all the
complexities involved in human-level perception or semantic reasoning would
require much more in the way of computer resources than Shastri had.
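
As a quick back-of-envelope check of those numbers (the 256 inputs per
model is my assumption, as noted above, not a figure from the paper):

    models = 23_000_000                # models over a 160x160 patch
    inputs_per_model = 256             # assumed
    print(models * inputs_per_model)   # 5,888,000,000: about 6 billion links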

 

So although Shruti's system is clearly very limited, it is amazing how much
it does considering how simple it is.

 

But the problem is not just complexity.  As I said, Shruti has some severe
architectural limitations.  But again, it was smart of Shastri to get his
simplified system up and running first, before making all the architectural
fixes required to make it more capable of generalized implication and
learning.

 

I have actually spent some time thinking about how to generalize Shruti.
If those ideas, or their equivalent, are not in Ben's new Novamente book I
may take the trouble to write them up, but I am expecting a lot from Ben's
new book.

 

I did not understand your last sentence.

 

Ed Porter

 







Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-13 Thread Richard Loosemore

Ed Porter wrote:
Richard,  


I think Wikipedia's definition of forward chaining (copied below) agrees
with my stated understanding as to what forward chaining means, i.e.,
reasoning from the "if" (i.e., conditions) to the "then" (i.e.,
consequences) in if-then statements.  


So, once again there is an indication you have unfairly criticized the
statements of another.


But ... nothing in what I said contradicted the Wikipedia 
definition of forward chaining.


Jim's statement was a misunderstanding of the meaning of forward and 
backward chaining because he oversimplified the two ("forward reasoning 
is reasoning from conditions to consequences, and backward reasoning is 
the opposite" ... this is kind of true, if you stretch the word 
"reasoning" a little, but it misses the point), and then he went from 
this oversimplification to a completely incorrect conclusion 
("...Thus I think the notion of what is forward and backward chaining 
might be somewhat arbitrary...").


This last conclusion was sufficiently inaccurate that I decided to point 
that out.  It was not a criticism, just a clarification;  a pointer in 
the right direction.
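
To make the contrast concrete, here is a toy backward-chaining sketch in
Python, reusing the Fritz rules from the Wikipedia passage Ed quoted (the
representation is illustrative only, not any real engine's):

    # Toy backward chaining: start from a hypothetical conclusion and
    # recurse on the antecedents of any rule that could conclude it,
    # bottoming out in known facts; the 'seen' set is a crude cutoff
    # against circular rule chains.

    rules = [({"croaks", "eats flies"}, "is a frog"),
             ({"chirps", "sings"}, "is a canary"),
             ({"is a frog"}, "is green"),
             ({"is a canary"}, "is yellow")]

    facts = {"croaks", "eats flies"}

    def prove(goal, seen=frozenset()):
        if goal in facts:
            return True
        if goal in seen:
            return False
        return any(consequent == goal and
                   all(prove(a, seen | {goal}) for a in antecedents)
                   for antecedents, consequent in rules)

    print(prove("is green"))    # True: green <- frog <- croaks, eats flies
    print(prove("is yellow"))   # False: the canary antecedents are unknown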



Richard Loosemore






