Jim Bromer wrote:
Ed Porter said:
It should be noted that SHRUTI uses a mix of forward chaining and backward
chaining, with an architecture for controlling when and how each is used.
...
My understanding is that forward reasoning is reasoning from conditions to
consequences, and backward reasoning is the opposite. But I think what is a
condition and what is a consequence is not always clear, since one can use
if-A-then-B rules in situations where A occurs before B, B occurs before A,
or A and B occur at the same time. Thus I think the notion of what counts as
forward or backward chaining might be somewhat arbitrary, and could be
better clarified if it were based on temporal relationships. I see no reason
that SHRUTI's "?" activation should not be spread across all those temporal
relationships, and be distinguished from SHRUTI's "+" and "-" probabilistic
activation by not having a probability, but just a temporary attentional
characteristic. Additional inference control mechanisms could then be added
to control which directions in time to reason with in different
circumstances, if activation pruning were necessary.
This is not correct.
Forward chaining is when the inference engine starts with some facts and
then uses its knowledge base to explore what consequences can be derived
from those facts. Going in this direction the inference engine does not
know where it will end up.
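To make the contrast concrete, here is a minimal forward-chaining sketch in
Python. The facts and the rule format are made up for illustration; this is
not SHRUTI's actual encoding:

    # Hypothetical facts and rules: each rule is (set of antecedents, consequent).
    facts = {"socrates_is_man"}
    rules = [
        ({"socrates_is_man"}, "socrates_is_mortal"),
        ({"socrates_is_mortal"}, "socrates_will_die"),
    ]

    # Fire any rule whose antecedents all hold, adding its consequent,
    # until no new facts appear. The engine does not know in advance
    # which conclusions it will reach.
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if antecedents <= facts and consequent not in facts:
                facts.add(consequent)
                changed = True

    print(sorted(facts))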
Backward chaining is when a hypothetical conclusion is given, and the
engine tries to see what possible deductions might lead to this
conclusion. In general, the candidates generated in this first pass are
not themselves directly known to be true (their antecedents are not
facts in the knowledge base), so the engine has to repeat the procedure
to see what possible deductions might lead to the candidates being true.
The process is repeated until it bottoms out in known facts that are
definitely true or false, or until the knowledge base is exhausted, or
until the end of the universe, or until the engine imposes a cutoff
(this is one of the most common results).
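A matching backward-chaining sketch, using the same hypothetical facts and
rules, including the engine-imposed depth cutoff mentioned above:

    facts = {"socrates_is_man"}
    rules = [
        ({"socrates_is_man"}, "socrates_is_mortal"),
        ({"socrates_is_mortal"}, "socrates_will_die"),
    ]

    # Try to prove a goal: succeed on a known fact, otherwise find rules
    # that conclude the goal and recursively prove their antecedents.
    def prove(goal, depth=0, limit=10):
        if goal in facts:          # bottomed out in a known fact
            return True
        if depth >= limit:         # the engine-imposed cutoff
            return False
        return any(consequent == goal and
                   all(prove(a, depth + 1, limit) for a in antecedents)
                   for antecedents, consequent in rules)

    print(prove("socrates_will_die"))  # True
    print(prove("socrates_is_god"))    # False: knowledge base exhausted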
The two procedures are quite fundamentally different.
Richard Loosemore
Furthermore, SHRUTI does not use multi-level compositional hierarchies for
many of its patterns, and it only uses generalizational hierarchies for slot
fillers, not for patterns. Thus, it lacks many of the general reasoning
capabilities that are necessary for NL understanding.... Much of the
spreading activation in a more general purpose AGI would be up and down
compositional and generalizational hierarchies, which is not necessarily
forward or backward chaining, but which is important in NL understanding. So
I agree that simple forward and backward chaining are not enough to solve
general inference problems of any considerable complexity.
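To illustrate the kind of propagation I mean, here is a rough sketch of
activation spreading over mixed compositional (part-of) and generalizational
(is-a) links. The node names, decay factor, and threshold are all
hypothetical, and this is not SHRUTI's mechanism; it just shows spreading
that follows hierarchy links rather than rule directions:

    # Hypothetical network mixing is-a edges (dog -> animal) with
    # part-of edges (dog -> tail, legs).
    links = {
        "dog": ["animal", "tail", "legs"],
        "animal": ["living_thing"],
        "tail": [],
        "legs": [],
        "living_thing": [],
    }

    # Spread activation outward from a source node, attenuating by
    # `decay` per hop and pruning anything below `floor`.
    def spread(source, decay=0.5, floor=0.1):
        activation = {source: 1.0}
        frontier = [source]
        while frontier:
            node = frontier.pop()
            for nbr in links.get(node, []):
                a = activation[node] * decay
                if a > activation.get(nbr, 0.0) and a >= floor:
                    activation[nbr] = a
                    frontier.append(nbr)
        return activation

    print(spread("dog"))
    # {'dog': 1.0, 'animal': 0.5, 'tail': 0.5, 'legs': 0.5, 'living_thing': 0.25}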
-----------------------------------
Can you describe some of the kinds of systems that you think would be
necessary for complex inference problems? Do you feel that all AGI
problems (other than those technical problems that would be common to a
variety of complicated programs that use large databases) are
essentially inference problems? Is your use of the term inference here
intended to be inclusive of the various kinds of problems that would
have to be dealt with, or are you referring to a class of problems which
are inferential in the more restricted sense of the term? (I feel that
the two senses of the term are both legitimate; I am just a little
curious about what it was that you were saying.)
I only glanced at a couple of papers about SHRUTI, and I may be looking
at a different paper than the one you were talking about, but from the
website it looks like you were talking about a connectionist model. Do
you think a connectionist model (probabilistic or not) is necessary for
AGI? In other words, I think a lot of us agree that some kind of
complex (or complicated) system of interrelated data is necessary for
AGI, and this does correspond to a network of some kind, but these are
not necessarily connectionist.
What were you thinking of when you talked about multi-level
compositional hierarchies that you suggested were necessary for general
reasoning?
If I understood what you were saying, you do not think that activation
synchrony is enough to create insightful binding given the complexities
that are necessary for higher level (or more sophisticated) reasoning.
On the other hand, you did seem to suggest that temporal synchrony spread
across a rhythmic flux of relational knowledge might be useful for
detecting some significant aspects during learning. What do you think?
I guess what I am getting at is that I would like you to make some
speculations about the kinds of systems that could work with complicated
reasoning problems. How would you go about solving the binding problem
that you have been talking about? (I haven't read the paper that I
think you were referring to, and I only glanced at one paper on SHRUTI,
but I am pretty sure that I got enough of what was being discussed to
talk about it.)
Jim Bromer