Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-09 Thread Richard Loosemore


Ed,

I only have time to look at one small part of your post today...


Ed Porter wrote:
The “Does Mary own a book?” example: once the own relationship is 
activated with Mary in the owner slot and “a book” in the owned-object 
slot, it spreads “?” activation, which asks whether any related 
relationships, instances, or generalizations support the statement 
that Mary owns a book.  The activation causes instances of the 
“give” relationship in which Mary was a recipient and a book was the 
thing given to be activated, since if Mary was given a book, that would 
indicate she owned a book.  Such an instance is found, tending to 
confirm that Mary does own a book, called book-17 in the example, which 
was given to her by John.


The “John fell in the hallway” example: when told that (1) “John fell in 
the hallway”, (2) “Tom had cleaned it”, and (3) “He was hurt”, the system 
automatically infers that it was John who was hurt, that the floor 
in the hallway was probably wet after Tom cleaned it, and that John slipped 
and fell when walking in the wet hallway. 

Tell me how you could perform the type of implication and cognition 
shown in these two Shruti examples without some form of binding?


I for one cannot figure out how to do this with anything like Poggio’s 
type of binding that would fit into a human brain.


Okay, so the question is what happens if the system is asked "Does Mary 
own a book?", given that the system does in fact know, as a result of 
some previous situation, that Mary received a gift which was a book.


How does the system achieve the "binding" that links the books referred 
to in the two situations, so that the question can be answered?  This is 
what would be called a "binding problem".
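
(For concreteness, here is a minimal symbolic sketch in Python -- my own
illustration, not Shastri's neural mechanism -- of the lookup Ed describes.
Note that it simply hard-wires the give-implies-own rule, which is exactly
the part argued below to be the real binding problem.)

```python
# Minimal sketch (NOT Shastri's mechanism): the query own(Mary, book) is
# confirmed by finding a "give" instance whose slots can be bound to the
# query's slots. All identifiers here are illustrative.

FACTS = [
    {"rel": "give", "giver": "John", "recipient": "Mary", "object": "book-17"},
]
ISA = {"book-17": "book"}  # toy type knowledge: book-17 is a book

def confirm_own(owner, object_type):
    """If X was given a Y, X owns that Y; return the bound instance."""
    for fact in FACTS:
        if (fact["rel"] == "give"
                and fact["recipient"] == owner
                and ISA.get(fact["object"]) == object_type):
            return fact["object"]  # the binding that answers the query
    return None

print(confirm_own("Mary", "book"))  # -> book-17
```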


First, you have to notice that there are two types of answer to this 
question.  One is (speaking very loosely) "deterministic" and one is 
(even more loosely) "emergent".


The deterministic answer would find some kind of mechanism that 
obviously, or clearly, results in a connection being established between 
the two book instances - the book given as a gift, and the hypothetical 
book mentioned in the question about whether she owns a book.  A 
deterministic answer would *convince* us that the two instances must 
become connected, as a result of the semantic (or other) properties of 
the two pieces of knowledge.


Now I must repeat what I said before about some (perhaps many?) claimed 
solutions to the binding problem:  these claimed solutions often 
establish the *mechanism* by which a connection could be established IF 
THE TWO ITEMS WANT TO TALK TO EACH OTHER.  In other words, what these 
people (e.g. Shastri and Ajjannagadde) do is propose a two step 
solution:  (1) the two instances magically decide that they need to get 
hooked up, and (2) then, some mechanism must allow these two to make 
contact and set up a line to one another.  Think of it this way:  (1) 
You decide that at this moment that you need to call Britney Spears, and 
(2) You need some mechanism whereby you can actually establish a phone 
connection that goes from your place to Britney's place.


The crazy part of this "solution" to the binding problem is that people 
often make the quiet and invisible assumption that (1) is dealt with 
(the two items KNOW that they need to talk), and then they go on to work 
out a fabulously powerful way (e.g. using neural synchronisation) to get 
part (2) to happen.  The reason this is crazy is that the first part IS 
the binding problem, not the second part!  The second phase (the 
practical aspects of making the phone call get through) is just boring 
machinery.  By the time the two parties have decided that they need to 
hook up, the show is already over... the binding problem has been 
solved.  But if you look at papers describing these so-called solutions 
to the binding problem you will find that the first part is never talked 
about.


At least, that was true of the S & A paper, and at least some of the 
papers that followed it, so I gave up following that thread in utter 
disgust.


It is very important to break through this confusion and find out 
exactly why the two relevant entities would decide to talk to each 
other.  Solving any other aspect of the problem is not of any value.


Now, going back to your question about how it would happen:  if you look 
for a deterministic solution to the problem, I am not sure you can come 
up with a general answer.  Whereas there is a nice, obvious solution to 
the question "Is Socrates mortal?" given the facts "Socrates is a man" 
and "All men are mortal", it is not at all clear how to do more complex 
forms of binding without simply doing massive searches.  Or rather, it 
is not clear how you can *guarantee* the finding of a solution.


Basically, I think the best you can do is to use various heuristics to 
shorten the computational problem of proving that the two books can 
relate.  For example, the system can learn the general rule "If you

RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-10 Thread Ed Porter
ed more to inferencing in general and
does not address the issue of binding specifically]

## RICHARD LOOSEMORE WROTE #>>
Overall, then, I believe that any attempts to find a guaranteed 
solution, or an explicit mechanism, that causes bindings to be 
established is actually a folly:  guarantees are not possible, and in 
practice the people who offer this style of explanation never do supply 
the guarantees anyway, but just solve peripheral problems. 

 MY REPLY >>
[The possible search space involved in many inferencing problems has many
more possible states than there are particles in the observable universe ---
so, of course, inference searches in such large spaces are not going to be
guaranteed to always come up with the best answer.  But human reasoning is
full of errors and failures to make appropriate inferences, so we should not
be surprised when human-level AGIs make somewhat similar mistakes.] 
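
(For scale, a quick check with illustrative numbers -- the branching factor
and depth below are arbitrary choices, not figures from the post:)

```python
# Quick arithmetic behind the claim: even a modest search tree dwarfs
# the ~1e80 particles in the observable universe. Branching factor and
# depth are illustrative choices only.
branching, depth = 30, 60
print(branching ** depth > 10 ** 80)  # -> True (30**60 is about 1e88)
```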

## RICHARD LOOSEMORE WROTE #>>
That is my view of the binding problem.  It is a variant of the general 
idea that things happen because of complexity (although that is putting 
it so crudely as to almost confuse the issue).



Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-10 Thread Mike Tintner
Ed: it is precisely because the human brain can do such massive searches 
(averaging roughly 3 to 300 trillion/second in the cortex alone) that we 
so often come up with the appropriate memory or reason at the 
appropriate time.  

Do you think the brain works by massive search in dealing with problems? Chess 
- a top master may consciously consider very roughly 150 moves in a minute. Do 
you think his unconscious brain is considering a lot more? How many, roughly, in 
what time?

"Name 10 famous Frenchmen". How many Frenchmen roughly do you think your brain 
is checking out and how fast as you deal with that?

Do you dispute Hawkins' "one hundred step rule"? He argues that the brain can 
recognize a face in 1/2 sec. - which can involve information traversing a chain 
of at most 100 neurons in that time. And "the largest conceivable parallel 
computer can't do anything useful in one hundred steps, no matter how large or 
how fast." [See "On Intelligence" pp 66-7] This rule would presumably severely 
limit the number  of associations that can be made with any idea in a given 
time, or no?
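
(The arithmetic behind the rule, assuming roughly 5 ms per neural step --
a common ballpark, not a figure quoted from the book:)

```python
# Arithmetic behind the "one hundred step rule", assuming ~5 ms per
# neural step (a common ballpark; not a figure quoted from the book).
recognition_time = 0.5   # seconds to recognize a face
step_time = 0.005        # seconds for one neuron to drive the next
print(recognition_time / step_time)  # -> 100.0 serial steps at most
```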




RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-10 Thread Ed Porter
ld recognize.  To top this
all off, if a person is scanning a changing scene, this process of many
millions of activations could repeat itself many times a second.

 

So I am not at all disputing Hawkins.  The type of visual recognition he is
talking about could easily involve 100 million to a billion messages a
second, and it is not necessarily one of the most complex types of searches
that would be involved in human thinking.

 

 



Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-10 Thread Richard Loosemore

Ed Porter wrote:

## RICHARD LOOSEMORE WROTE #>>
Now I must repeat what I said before about some (perhaps many?) claimed 
solutions to the binding problem:  these claimed solutions often 
establish the *mechanism* by which a connection could be established IF 
THE TWO ITEMS WANT TO TALK TO EACH OTHER.  In other words, what these 
people (e.g. Shastri and Ajjannagadde) do is propose a two step 
solution:  (1) the two instances magically decide that they need to get 
hooked up, and (2) then, some mechanism must allow these two to make 
contact and set up a line to one another.  Think of it this way:  (1) 
You decide that at this moment that you need to call Britney Spears, and 
(2) You need some mechanism whereby you can actually establish a phone 
connection that goes from your place to Britney's place.


The crazy part of this "solution" to the binding problem is that people 
often make the quiet and invisible assumption that (1) is dealt with 
(the two items KNOW that they need to talk), and then they go on to work 
out a fabulously powerful way (e.g. using neural synchronisation) to get 
part (2) to happen.  The reason this is crazy is that the first part IS 
the binding problem, not the second part!  The second phase (the 
practical aspects of making the phone call get through) is just boring 
machinery.  By the time the two parties have decided that they need to 
hook up, the show is already over... the binding problem has been 
solved.  But if you look at papers describing these so-called solutions 
to the binding problem you will find that the first part is never talked 
about.


At least, that was true of the S & A paper, and at least some of the 
papers that followed it, so I gave up following that thread in utter 
disgust. 


 MY REPLY >>
[Your description of Shastri's work is inaccurate --- at least from his
papers I have read, which include, among others, "Advances in Shruti -- A
neurally motivated model of relational knowledge representation and rapid
inference using temporal synchrony", Applied Intelligence, 11: 79-108, 1999 (
http://www.icsi.berkeley.edu/~shastri/psfiles/shruti_adv_98.ps ); and
"Massively parallel knowledge representation and reasoning: Taking a cue
from the Brain", by Shastri and Mani.

It is obvious from reading Shastri that his notion of what should talk to
what (i.e., what should be searched by spreading activation) is determined by a
form of forward and/or backward chaining, which can automatically be learned
from temporal associations between pattern activations; and the bindings
involved can be learned from the occurrence of the same pattern-element
instances as parts or attributes of one or more of those temporally related
patterns.

Shruti's representational scheme has limitations that make it ill suited
for use as the general representation scheme in an AGI (problems which I
think can be fixed with a more generalized architecture), but the particular
problem you are accusing his system of here --- i.e., that it provides no
guidance as to what should be searched when answering a given query ---
is not in fact a problem (other than the issue of possible exponential
explosion of the search tree, which is discussed in my answers below)]
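
(To make the temporal-synchrony idea concrete, here is a toy rendering with
invented names -- bookkeeping only, nothing like Shastri's actual circuitry:)

```python
# Toy rendering of SHRUTI-style dynamic binding: a role node and its
# filler node are "bound" by firing in the same phase of an oscillation
# cycle. Phases are modeled as integer tags; all names are invented.

phase_of = {}

def bind(role, filler, phase):
    phase_of[role] = phase      # role fires in this phase...
    phase_of[filler] = phase    # ...and so does its filler

# "John gave Mary book-17": one phase per role/filler pair.
bind("give.giver", "John", 1)
bind("give.recipient", "Mary", 2)
bind("give.object", "book-17", 3)

def filler_of(role, entities):
    # an entity is the role's current filler iff it shares the role's phase
    return [e for e in entities if phase_of.get(e) == phase_of[role]]

print(filler_of("give.recipient", ["John", "Mary", "book-17"]))  # ['Mary']
```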



## RICHARD LOOSEMORE WROTE #>>
It is very important to break through this confusion and find out 
exactly why the two relevant entities would decide to talk to each 
other.  Solving any other aspect of the problem is not of any value.


Now, going back to your question about how it would happen:  if you look 
for a deterministic solution to the problem, I am not sure you can come 
up with a general answer.  Whereas there is a nice, obvious solution to 
the question "Is Socrates mortal?" given the facts "Socrates is a man" 
and "All men are mortal", it is not at all clear how to do more complex 
forms of binding without simply doing massive searches.


 MY REPLY >>
[You often do have to do massive searches -- it is precisely because the
human brains can do such massive searches (averaging roughly 3 to 300
trillion/second in the cortex alone)  that lets us so often come up with the
appropriate memory or reason at the appropriate time.  But the massive
searches in a large Shruiti-like or Novamente-like system are not
totally-blind searches --- instead they are often massive search guided by
forward and/or backward chaining -- by previously learned and/or recently
activated probabilities and importances --- by relative scores of various
search threads or pattern activations --- by inference patterns that may
have proven successful in previous similar searches --- by similar episodic
memories --- and --- by interaction with the current context as represented
by the other current activations] 


Well, I will hold my fire until I get to your comments below, but I must 
insist that what I said was accurate:  his first major paper on this 
topic was a sleight of ha

Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-10 Thread Richard Loosemore
cing reason for why they are.  


Ben has said that when he gets back from Alaska he will try to refute many
of your claims that RL complexity prevents the engineering of AGI systems
because of the local-global disconnect.  I look forward to reading it.


 ED PORTERS LAST EMAIL >>
But this is not necessarily the mysterious and ill defined RL complexity,



## RICHARD LOOSEMORE LAST EMAIL #>>
Ahhh please don't take gratuitous pot shots:  you do not understand 
what I mean by "complexity".

..

And this is where you demonstrate that you do not.  You are using the word 
"complexity" as if it just meant "complicated", and this is a 
misunderstanding.


#ED PORTERS CURRENT RESPONSE >
Correct, I do not understand what you mean by RL complexity, even though I
have read multiple articles by you on the subject.  


One thing you discuss that I do understand is the concept of the
local-global disconnect, although I tend to think of it as a matter of
degree rather than a sharp dichotomy between regimes where you have it and
those where you don't.  


Certainly one can build systems where the relationship between local
behavior and higher level behavior is so convoluted that humans cannot
design higher level behaviors from lower level ones, but it is not at all
clear that this applies to all extremely complicated systems, nor that it
will apply to all human level AGIs.  


I do not think the local-global disconnect is going to be that big a
problem for many useful and powerful AGIs that can be built.  Such systems
will be complex, and the models for thinking about them may need to change
as you deal with higher levels of organization in such systems, but I think
one level can be reasonably engineered from the one below.

But I could be wrong; perhaps the local-global dichotomy is much more
important in AI than I think.  Time will tell.


## RICHARD LOOSEMORE LAST EMAIL #>>
Binding happens because of a complex emergent consequence of the 
mechanisms that you are calling inference control (but which are more 
general and powerful than that ... see previous comment).  Anyone who 
thinks that there is a binding problem that can be solved in some other 
(more deterministic) way is putting the cart before the horse.


#ED PORTERS CURRENT RESPONSE >
Talk about hand waving: "Binding happens because of a complex emergent
consequence of ..."

Binding IS part of inference control, but it is a special aspect of it,
because it requires that decisions made at a higher level of composition or
generalization take into account, either implicitly or explicitly, that
certain relationships existed between elements matched at lower levels.
Where this binding information can be handled implicitly, it can be handled
by normal inferencing mechanisms, but it often requires a much larger number
of models to ensure that the proper binding required for the matching of a
higher level pattern has occurred.  


In a prior response in this thread, you yourself said that in complex
spaces, such as those used in certain semantic reasoning, the number of
models required to do the necessary binding for some types of reasoning
would be too large to be practical.  But I have not received any description
from you as to how this would be performed without explicit binding
encoding, such as by synchrony, even though you have implied that by
constraint and emergence it can be handled without such explicit mechanisms.

You have provided no sound reason for believing your method of dealing with
the binding problem is any more promising than the combination of the type
of implicit binding Poggio's paper has shown can be provided by using many
models to encode bindings implicitly, and --- where such implicit binding is
not practical --- the use of explicit representations of binding, such as
Shruti-like synchrony or a numerical representation of binding information
in spreading activation.



Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-11 Thread Jim Bromer
> #ED PORTERS CURRENT RESPONSE >
> Forward and backward chaining are not hacks.  They have been two of the most
> commonly and often successfully used techniques in AI search for at least 30
> years.  They are not some sort of wave of the hand.  They are much more
> concretely grounded in successful AI experience than many of your much more
> ethereal, and very arguably hand waving, statements about how many of the
> difficult problems in AI are to be cured by some as yet unclearly defined
> emergence from complexity.

Richard Loosemore's response:
Oh dear:  yet again I have to turn a blind eye to the ad hominem insults.
--

There were no ad hominem insults in Ed's response.  His comment about Richard's 
ethereal hand waving was clearly and unmistakably within the boundaries that 
Richard has set in his own criticisms again and again.  And Ed specified the 
target of the criticism when he spoke of the "difficult problems in AI 
...[which]... are to be cured by some as yet unclearly defined emergence from 
complexity."  All Richard had to do was to answer the question, and instead he 
ran for cover behind this bogus charge of being the victim of an ad hominem 
insult.

If upon reflection, Richard sincerely believes that Ed's comment was an ad 
hominem insult, then we can take this comment as a basis for detecting the true 
motivation behind those comments of Richard which are so similar in form.

For example, Richard said, "Understanding that they only have the status of 
hacks is a very important sign of maturity as an AI researcher.  There is a 
very deep truth buried in that fact."

While I have some partial agreement with Richard's side on this one particular 
statement, I can only conclude, by using Richard's own measure of "ad 
hominem insults", that Richard must have intended this remark to have that kind 
of effect.  Similarly, I feel comfortable with the conclusion that every time 
Richard uses his "hand waving" argument, there is a good chance that he is 
just using it as an all-purpose ad hominem insult.

It is too bad that Richard cannot discuss his complexity theory without running 
from the fact that his solution to the problem is based on his non-explanation 
that, 
"...in this "emergent" (or, to be precise, "complex system") answer to 
the question, there is no guarantee that binding will happen.  The 
binding problem in effect disappears - it does not need to be explicitly 
solved because it simply never arises.  There is no specific mechanism 
designed to construct bindings (although there are lots of small 
mechanisms that enforce constraints), there is only a general style of 
computation, which is the relaxation-of-constraints style."
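
(For readers unfamiliar with that style, a minimal toy of relaxation -- my
own illustration with invented weights, not Richard's architecture -- in
which the referent of "he was hurt" emerges from settling rather than from
a dedicated binding mechanism:)

```python
# Minimal toy of the relaxation-of-constraints style (not Richard's
# architecture): who does "he was hurt" refer to? Candidates compete
# under soft constraints; the answer emerges from settling, not from a
# dedicated binding mechanism. Weights are invented for illustration.

scores = {"John": 0.0, "Tom": 0.0}
constraints = {"John": 1.0,   # John fell, and falls cause injuries
               "Tom": 0.2}    # Tom only cleaned the floor

for _ in range(20):                      # relax toward the constraints
    for who, target in constraints.items():
        scores[who] += 0.1 * (target - scores[who])

print(max(scores, key=scores.get))  # -> John
```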

From reading Richard's postings I think that Richard does not believe there is 
a problem because the nature of complexity itself will solve the problem - 
once someone is lucky enough to find the right combination of initial rules.

For those who believe that problems are solved through study and 
experimentation, Richard has no response to the most difficult problems in 
contemporary AI research except to cry foul.  He does not even consider such 
questions to be valid.

Jim Bromer



  




Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-11 Thread Richard Loosemore

Jim Bromer wrote:

 > #ED PORTERS CURRENT RESPONSE >
 > Forward and backward chaining are not hacks.  They have been two of the
 > most commonly and often successfully used techniques in AI search for at
 > least 30 years.  They are not some sort of wave of the hand.  They are
 > much more concretely grounded in successful AI experience than many of
 > your much more ethereal, and very arguably hand waving, statements about
 > how many of the difficult problems in AI are to be cured by some as yet
 > unclearly defined emergence from complexity.

Richard Loosemore's response:
Oh dear: yet again I have to turn a blind eye to the ad hominem insults.
--

There were no ad hominem insults in Ed's response.  His comment about 
Richard's ethereal hand waving was clearly and unmistakably within the 
boundaries that Richard has set in his own criticisms again and again.  
And Ed specified the target of the criticism when he spoke of the 
"difficult problems in AI ...[which]... are to be cured by some as yet 
unclearly defined emergence from complexity."  All Richard had to do was 
to answer the question, and instead he ran for cover behind this bogus 
charge of being the victim of an ad hominem insult.


Jim,

Take a more careful look, if you please.

Ed and I were talking about a particular *topic*, but then in the middle 
of the discussion about that topic, he suddenly declared that the 
techniques in question were "much more concretely grounded in successful 
AI experience than many of your much more ethereal, and very arguably 
hand waving, statements about how many of the difficult problems in 
AI are to be cured by some as yet unclearly defined emergence from 
complexity."   Instead of trying to make statements about the topic, he 
tries to denigrate some proposals that I have made.  Whether my 
proposals are or are not worthy of such criticism, that has nothing to 
do with the topic that was under discussion.  He just took a moment out 
to make a quick insult.


To make matters worse, what he actually says about my proposals is also 
a pretty bad misrepresentation of what I have said.  My central claim is 
that there is a problem at the heart of the current AI methodology.  I 
have said that there is a sickness there.  I have also given an outline 
of a possible cure - but I have been quite clear to everyone that this 
is just an outline of the cure, nothing more.  Now, do you really think 
that a physician should be criticised for IDENTIFYING a malady, because 
he did not, in the same breath, also propose a CURE for the malady?


Finally, you yourself say that I "ran for cover behind this bogus
charge of being the victim of an ad hominem insult", but I did 
nothing of the sort.  I went on to ignore the insult, giving as full a 
reply to his point as I would have done if the insult had not been there.


As I said, I turned a blind eye to it, albeit after pointing it out.

Tut tut.



If upon reflection, Richard sincerely believes that Ed's comment was an 
ad hominem insult, then we can take this comment as a basis for 
detecting the true motivation behind those comments of Richard which are 
so similar in form.


For example, Richard said, "Understanding that they only have the 
status of hacks is a very important sign of maturity as an AI 
researcher. There is a very deep truth buried in that fact."


While I have some partial agreement with Richard's side on this one 
particular statement, I can only conclude, by using Richard's own 
measure of "ad hominem insults", that Richard must have intended this 
remark to have that kind of effect.  Similarly, I feel comfortable with 
the conclusion that every time Richard uses his "hand waving" argument, 
there is a good chance that he is just using it as an all-purpose ad 
hominem insult.


Excuse me?  "Ad hominem" means that the remarks were designed to win an 
argument by insulting the other person.  Ed is not an AI researcher; he 
admits himself that he has only an outsider's perspective on this 
field, and that he is learning.  I was mostly directing that comment at 
people who claim to be far more experienced than he is.



It is too bad that Richard cannot discuss his complexity theory without 
running from the fact that his solution to the problem is based on his 
non-explanation that,

"...in this "emergent" (or, to be precise, "complex system") answer to
the question, there is no guarantee that binding will happen. The
binding problem in effect disappears - it does not need to be explicitly
solved because it simply never arises. There is no specific mechanism
designed to construct bindings (although there are lots of small
mechanisms that enforce constraints), there is only a general style of
computation, which is the relaxation-of-constraints style."

 From reading Richard's postings I think that Richard does not believe 
there is a problem because the nature of complexity itself will solve 

Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-11 Thread Richard Loosemore
contrary to your implication, figure out how to make a Shruti-like
system determine which things should talk to which for purposes of binding
in the class of problems it deals with.  The only problem being that in
really large knowledge bases, additional inference control mechanisms would
be required to prune the spreading activation to keep it within a realistic
budget.   But your original problem was not with how well Shruti would
scale, but with how it could learn to work at all, which I think I have
shown is inaccurate.


I am sorry, but you have not shown that.  Perhaps you could give me a hint 
of where it happened?


Solving the above problem is not about just doing the mechanics of the 
inference process, it is about finding a way to model the situation in 
order to map out the kinds of logical statements that might be relevant.


For example, should the system begin by loading all of the statements in 
its knowledge base that use the word [or numeral signified by] "one"? 
How is it going to restrict the set of statements to look at?


The fact is that these choices about how to control the inference 
process ARE the things which determine how well the inference process 
works.  Are you familiar with this idea?
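
(One standard way to restrict that set -- a generic sketch, not a claim
about any particular system -- is an inverted index from symbols to the
statements that mention them:)

```python
# Generic sketch of restricting what an inference engine loads: an
# inverted index from each symbol to the statements that mention it.
from collections import defaultdict

statements = [
    "one plus one is two",
    "two plus one is three",
    "Socrates is a man",
]

index = defaultdict(set)
for i, stmt in enumerate(statements):
    for symbol in stmt.split():
        index[symbol].add(i)

# A query about "one" retrieves two statements, not the whole base.
print(sorted(index["one"]))  # -> [0, 1]
```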




So you are doing something you have done many times before in discussions
with me, which is that when I show that a statement you make is ungrounded,
you pretend your original statement was other than what it was.

Nevertheless, your example of the sentence with 12 "one"s is interesting.

It should be noted that Shruti uses a mix of forward chaining and backward
chaining, with an architecture for controlling when and how each is used.
It is more subtle than you seem willing to acknowledge.  


You are kidding, of course.  I am quite well aware of the mixtures of 
forward and backward chaining that occur in practice (I had to write 
systems like that as student exercises 20 years ago, so the practice is 
a familiar one).


That makes no difference to the argument I set out.



Richard Loosemore



My understanding is that forward reasoning is reasoning from conditions to
consequences, and backward reasoning is the opposite.  But I think what is a
condition and what is a consequence is not always clear, since one can use
if-A-then-B rules to apply to situations where A occurs before B, B occurs
before A, and A and B occur at the same time.  Thus I think the notion of
what is forward and backward chaining might be somewhat arbitrary, and could
be better clarified if it were based on temporal relationships.  I see no
reason that Shruti's "?" activation should not be spread across all
those temporal relationships, and be distinguished from Shruti's "+" and
"-" probabilistic activation by not having a probability, but just a
temporary attentional characteristic.  Additional inference control mechanisms
could then be added to control which directions in time to reason in under
different circumstances, if activation pruning was necessary.

Furthermore, Shruti does not use multi-level compositional hierarchies for
many of its patterns, and it only uses generalizational hierarchies for slot
fillers, not for patterns.  Thus, it does not have many of the general
reasoning capabilities that are necessary for NL understanding, and could not
be expected to deal with your sentence with 12 "one"s.  Much of the spreading
activation in a more general purpose AGI would be up and down compositional
and generalizational hierarchies, which is not necessarily forward or
backward chaining, but which is important in NL understanding.  So I agree
that simple forward and backward chaining are not enough to solve general
inference problems of any considerable complexity.

Ed Porter

-Original Message-
From: Richard Loosemore [mailto:[EMAIL PROTECTED] 
Sent: Thursday, July 10, 2008 8:13 PM

To: agi@v2.listbox.com
Subject: Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE
BINDING PROBLEM"?

Ed Porter wrote:

## RICHARD LOOSEMORE LAST EMAIL #>>
My preliminary response to your suggestion that other Shastri papers do 
describe ways to make binding happen correctly is as follows:  anyone 
can suggest ways that *might* cause correct binding to occur - anyone 
can wave their hands, write a program, and then say "backward chaining" 
- but there is a world of difference between suggesting mechanisms that 
*might* do it, and showing that those mechanisms actually do cause 
correct bindings to be established in practice.


What happens in practice is that the proposed mechanisms work for (a) 
toy cases for which they were specifically designed to work, and/or (b) 
a limited number of the more difficult cases, and that what we also find 
is that they (c) tend to screw up in all kinds of interesting ways when 
the going gets tough.  At the end of the day, these proposals don't 
solve the binding problem, they jus

RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-12 Thread Jim Bromer
Ed Porter said:

It should be noted that Shruti uses a mix of forward chaining and backward
chaining, with an architecture for controlling when and how each is used.
... 

My understanding is that forward reasoning is reasoning from conditions to
consequences, and backward reasoning is the opposite.  But I think what is a
condition and what is a consequence is not always clear, since one can use
if-A-then-B rules to apply to situations where A occurs before B, B occurs
before A, and A and B occur at the same time.  Thus I think the notion of
what is forward and backward chaining might be somewhat arbitrary, and could
be better clarified if it were based on temporal relationships.  I see no
reason that Shruti's "?" activation should not be spread across all
those temporal relationships, and be distinguished from Shruti's "+" and
"-" probabilistic activation by not having a probability, but just a
temporary attentional characteristic.  Additional inference control mechanisms
could then be added to control which directions in time to reason in under
different circumstances, if activation pruning was necessary.

Furthermore, Shruti does not use multi-level compositional hierarchies for
many of its patterns, and it only uses generalizational hierarchies for slot
fillers, not for patterns.  Thus, it does not have many of the general
reasoning capabilities that are necessary for NL understanding ... Much of
the spreading activation in a more general purpose AGI would be up and down
compositional and generalizational hierarchies, which is not necessarily
forward or backward chaining, but which is important in NL understanding.
So I agree that simple forward and backward chaining are not enough to solve
general inference problems of any considerable complexity.

---
Can you describe some of the kinds of systems that you think would be necessary 
for complex inference problems?  Do you feel that all AGI problems (other than 
those technical problems that would be common to a variety of complicated 
programs that use large databases) are essentially inference problems?  Is 
your use of the term inference here intended to be inclusive of the various 
kinds of problems that would have to be dealt with, or are you referring to a 
class of problems which are inferential in the more restricted sense of the 
term?  (I feel that the two senses of the term are both legitimate; I am just a 
little curious about what it was that you were saying.)

I only glanced at a couple of papers about SHRUTI, and I may be looking at a 
different paper than you were talking about, but looking at the website it 
looks like you were talking about a connectionist model.  Do you think a 
connectionist model (probabilistic or not) is necessary for AGI?  In other 
words, I think a lot of us agree that some kind of complex (or complicated) 
system of interrelated data is necessary for AGI, and this does correspond to a 
network of some kind, but these are not necessarily connectionist.

What were you thinking of when you talked about multi-level compositional 
hierarchies that you suggested were necessary for general reasoning?

If I understood what you were saying, you do not think that activation 
synchrony is enough to create insightful binding given the complexities that 
are necessary for higher level (or more sophisticated) reasoning. On the other 
hand you did seem to suggest that temporal synchrony spread across a rhythmic 
flux of relational knowledge might be useful for detecting some significant 
aspects during learning.  What do you think?

I guess what I am getting at is I would like you to make some speculations 
about the kinds of systems that could work with complicated reasoning problems. 
 How would you go about solving the binding problem that you have been talking 
about?  (I haven't read the paper that I think you were referring to and I only 
glanced at one paper on SHRUTI but I am pretty sure that I got enough of what 
was being discussed to talk about it.)

Jim Bromer



  




Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-12 Thread Richard Loosemore

Jim Bromer wrote:

Ed Porter said:

It should be noted that Shruti uses a mix of forward chaining and backward
chaining, with an architecture for controlling when and how each is used.
...

My understanding is that forward reasoning is reasoning from conditions to
consequences, and backward reasoning is the opposite. But I think what is a
condition and what is a consequence is not always clear, since one can use
if-A-then-B rules to apply to situations where A occurs before B, B occurs
before A, and A and B occur at the same time. Thus I think the notion of
what is forward and backward chaining might be somewhat arbitrary, and could
be better clarified if it were based on temporal relationships. I see no
reason that Shruti's "?" activation should not be spread across all
those temporal relationships, and be distinguished from Shruti's "+" and
"-" probabilistic activation by not having a probability, but just a
temporary attentional characteristic. Additional inference control mechanisms
could then be added to control which directions in time to reason in under
different circumstances, if activation pruning was necessary.



This is not correct.

Forward chaining is when the inference engine starts with some facts and 
then uses its knowledge base to explore what consequences can be derived 
from those facts.  Going in this direction the inference engine does not 
know where it will end up.


Backward chaining is when a hypothetical conclusion is given, and the 
engine tries to see what possible deductions might lead to this 
conclusion.  In general, the candidates generated in this first pass are 
not themselves directly known to be true (their antecedents are not 
facts in the knowledge base), so the engine has to repeat the procedure 
to see what possible deductions might lead to the candidates being true. 
 The process is repeated until it bottoms out in known facts that are 
definitely true or false, or until the knowledge base is exhausted, or 
until the end of the universe, or until the engine imposes a cutoff 
(this is one of the most common results).


The two procedures are quite fundamentally different.
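
(A schematic rendering of the backward procedure just described, using the
Socrates example from earlier in the thread; a generic sketch with a depth
cutoff, not any particular engine:)

```python
# Generic backward chaining, matching the description above: recurse
# from a hypothetical conclusion through rules until the search bottoms
# out in known facts, exhausts the rules, or hits an imposed cutoff.

FACTS = {"man(socrates)"}
RULES = [(["man(socrates)"], "mortal(socrates)")]  # (antecedents, consequent)

def prove(goal, depth=0, cutoff=10):
    if depth > cutoff:        # the engine-imposed cutoff
        return False
    if goal in FACTS:         # bottomed out in a known fact
        return True
    return any(all(prove(a, depth + 1, cutoff) for a in ants)
               for ants, cons in RULES if cons == goal)

print(prove("mortal(socrates)"))  # -> True
```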


Richard Loosemore







Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-13 Thread Jim Bromer
I have read about half of Shastri's 1999 paper "Advances in Shruti -- A neurally 
motivated model of relational knowledge representation and rapid inference 
using temporal synchrony" and I see that he is describing a method of 
encoding general information and then using it to do a certain kind of 
reasoning which is usually called inferential, although he seems to have a novel 
way of doing this using what he calls "neural circuits". And he does seem to touch 
on the multiple-level issues that I am interested in.  The problem is that 
these kinds of systems, regardless of how interesting they are, are not able to 
achieve extensibility, because they do not truly describe how the complexities 
of the antecedents would have themselves been achieved (learned) using the 
methodology described. The unspoken assumption behind these kinds of studies 
always seems to be that the one or two systems of reasoning used in the method 
should be sufficient to explain how learning takes place, but the failure to 
achieve intelligent-like behavior (of the kind seen in higher intelligence) 
gives us a lot of evidence that there must be more to it.

But, the real problem is just complexity (or "complicatedity", for Richard's 
sake), isn't it?  Doesn't that seem like it is the real problem?  If the 
program had the ability to try enough possibilities, wouldn't it be likely to 
learn after a while?  Well, another part of the problem is that it would have 
to get a lot of detailed information about how good its efforts were, and this 
information would have to be pretty specific using the methods that are common 
to most current thinking about AI.  So there seem to be two different kinds of 
problems.  But the thing is, I think they are both complexity (or 
complicatedity) problems.  Get a working solution for one, and maybe you'd have 
a working solution for the other.

I think a working solution is possible, once you get beyond the simplistic 
perception of seeing everything as ideologically commensurate just because 
you believe you can understand it.
Jim Bromer



  




RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-13 Thread Ed Porter
g
that can be done implicitly in semantic reasoning in a reasonably efficient
manner, I would be interested in hearing them

 

JIM BROMER WROTE===>

I guess what I am getting at is I would like you to make some speculations
about the kinds of systems that could work with complicated reasoning
problems.  How would you go about solving the binding problem that you have
been talking about?  (I haven't read the paper that I think you were
referring to and I only glanced at one paper on SHRUTI but I am pretty sure
that I got enough of what was being discussed to talk about it.)

 

ED PORTER>

I assume a Novamente system would be able to do most of the types of
implication I am interested in, or at least be modified to do so.  From my
reading six or more months ago of the Novamente literature, I forget how Ben
handled binding, but I am sure he has some way, either implicit or explicit,
because Ben is a smart guy, and binding, either implicit or explicit, is
necessary for any generalized complicated pattern-matching capability.  

 

My Novamente-like approach uses binding numbers to produce something
equivalent to synchrony for the conveying of explicit binding information.
This allows a form of graph matching to take place.  

 

I envision a system where, over the duration of short term memory (~100
seconds), there could be, say, a million different explicit bindings, with,
say, roughly 100 billion remnants of the spreading activation from these
million or so explicit bindings remaining in short term memory at any given
time.  This allows a substantial amount of complex semantic matching to
proceed in parallel within a rich contextual representation.  

 

Even though I have some hacks to speed the communication and matching of
binding information (such as graph matching), it still is expensive, and
therefore I am interested in techniques that reduce the need for it.  

 

For example, one could use traditional bottom-up pattern matching without
any binding to activate a set of best-scoring patterns, and then have
top-down processes from the more activated patterns test whether the
binding required for the matching of each of those patterns actually
exists.  In a recent phone conversation when I described this hack to Dave
Hart, he named it "binding on demand".  Such binding on demand would tend to
substantially limit the need for explicit binding to cases where there is
already good reason to believe a pattern including such binding might be
matched, and it would limit the spreading of such binding information to
the implication paths along which it has been requested.
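
(A toy rendering of that two-phase idea, with invented names throughout --
not Ed's actual design: phase one scores patterns bottom-up with bindings
ignored, and phase two runs the expensive binding check only for the
top-scoring candidates:)

```python
# Toy rendering of "binding on demand" (illustrative names throughout):
# phase 1 scores patterns bottom-up with bindings ignored; phase 2 checks
# the required bindings top-down, but only for the top-scoring patterns.

def bottom_up_score(pattern, features):
    # cheap score: fraction of the pattern's features present at all
    return len(pattern["features"] & features) / len(pattern["features"])

def bindings_ok(pattern, bindings):
    # expensive check, run only on demand for promising patterns
    return all(bindings.get(role) == val
               for role, val in pattern["required_bindings"].items())

patterns = [
    {"name": "gave-book", "features": {"give", "book"},
     "required_bindings": {"recipient": "Mary"}},
    {"name": "sold-book", "features": {"sell", "book"},
     "required_bindings": {"buyer": "Mary"}},
]
features = {"give", "book", "Mary"}
bindings = {"recipient": "Mary"}

ranked = sorted(patterns, key=lambda p: bottom_up_score(p, features),
                reverse=True)
top = [p for p in ranked[:1] if bindings_ok(p, bindings)]  # bind on demand
print([p["name"] for p in top])  # -> ['gave-book']
```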

 

 


RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-13 Thread Ed Porter
Richard,  

I think Wikipedia's definition of forward chaining (copied below) agrees
with my stated understanding as to what forward chaining means, i.e.,
reasoning from the "if" (i.e., conditions) to the "then" (i.e.,
consequences) in if-then statements.  

So, once again there is an indication you have unfairly criticized the
statements of another.

Ed Porter

==Wikipedia defines forward chaining as: ==

Forward chaining is one of the two main methods of reasoning when using
inference rules (in artificial intelligence). The other is backward
chaining.

Forward chaining starts with the available data and uses inference rules to
extract more data (from an end user for example) until an optimal goal is
reached. An inference engine using forward chaining searches the inference
rules until it finds one where the antecedent (If clause) is known to be
true. When found it can conclude, or infer, the consequent (Then clause),
resulting in the addition of new information to its data.

Inference engines will often cycle through this process until an optimal
goal is reached.

For example, suppose that the goal is to conclude the color of my pet Fritz,
given that he croaks and eats flies, and that the rule base contains the
following four rules:

If X croaks and eats flies - Then X is a frog 
If X chirps and sings - Then X is a canary 
If X is a frog - Then X is green 
If X is a canary - Then X is yellow 

This rule base would be searched and the first rule would be selected,
because its antecedent (If Fritz croaks and eats flies) matches our data.
Now the consequent (Then X is a frog) is added to the data. The rule base
is again searched and this time the third rule is selected, because its
antecedent (If Fritz is a frog) matches our data that was just confirmed.
Now the new consequent (Then Fritz is green) is added to our data. Nothing
more can be inferred from this information, but we have now accomplished our
goal of determining the color of Fritz.

Because the data determines which rules are selected and used, this method
is called data-driven, in contrast to goal-driven backward chaining
inference. The forward chaining approach is often employed by expert
systems, such as CLIPS.

One of the advantages of forward-chaining over backward-chaining is that the
reception of new data can trigger new inferences, which makes the engine
better suited to dynamic situations in which conditions are likely to
change.
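
(This procedure is simple enough to state as runnable code.  The following
is a minimal Python sketch of the Fritz example --- my own illustration of
the loop the article describes, not code from the article:

    # Minimal forward chainer over the four "Fritz" rules.
    rules = [
        ({"croaks", "eats flies"}, "is a frog"),
        ({"chirps", "sings"}, "is a canary"),
        ({"is a frog"}, "is green"),
        ({"is a canary"}, "is yellow"),
    ]
    facts = {"croaks", "eats flies"}   # the available data about Fritz

    changed = True
    while changed:             # cycle until no rule adds new information
        changed = False
        for antecedents, consequent in rules:
            if antecedents <= facts and consequent not in facts:
                facts.add(consequent)    # infer the Then clause
                changed = True

    print(facts)  # {'croaks', 'eats flies', 'is a frog', 'is green'}

Note that the data drives everything: the engine never consults a goal.)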


-Original Message-
From: Richard Loosemore [mailto:[EMAIL PROTECTED] 
Sent: Saturday, July 12, 2008 7:42 PM
To: agi@v2.listbox.com
Subject: Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE
BINDING PROBLEM"?

Jim Bromer wrote:
> Ed Porter said:
> 
> It should be noted that Shruti uses a mix of forward chaining and backward
> chaining, with an architecture for controlling when and how each is used.
> ...
> 
> My understanding is that forward reasoning is reasoning from conditions to
> consequences, and backward reasoning is the opposite. But I think what is a
> condition and what is a consequence is not always clear, since one can use
> if A then B rules to apply to situations where A occurs before B, B occurs
> before A, and A and B occur at the same time. Thus I think the notion of
> what is forward and backward chaining might be somewhat arbitrary, and could
> be better clarified if it were based on temporal relationships. I see no
> reason that Shruti's "?" activation should not be spread across all
> those temporal relationships, and be distinguished from Shruti's "+" and
> "-" probabilistic activation by not having a probability, but just a
> temporary attentional characteristic. Additional inference control
> mechanisms could then be added to control which direction in time to
> reason in under different circumstances, if activation pruning were
> necessary.
> 
> 

This is not correct.

Forward chaining is when the inference engine starts with some facts and 
then uses its knowledge base to explore what consequences can be derived 
from those facts.  Going in this direction the inference engine does not 
know where it will end up.

Backward chaining is when a hypothetical conclusion is given, and the 
engine tries to see what possible deductions might lead to this 
conclusion.  In general, the candidates generated in this first pass are 
not themselves directly known to be true (their antecedents are not 
facts in the knowledge base), so the engine has to repeat the procedure 
to see what possible deductions might lead to the candidates being true. 
  The process is repeated until it bottoms out in known facts that are 
definitely true or false, or until the knowledge base is exhausted, or 
until the end of the universe, or until the engine imposes a cutoff 
(this is one of the most common results).
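
(A rough sketch of that goal-driven recursion in Python --- a toy
illustration of the procedure just described, not any particular system's
implementation:

    rules = [   # (antecedents, consequent)
        (["x is a living creature"], "x is mortal"),
        (["x bleeds when pricked"], "x is a living creature"),
        (["x contains blood"], "x bleeds when pricked"),
    ]
    facts = {"x contains blood"}

    def prove(goal, depth=0, max_depth=10):
        if goal in facts:
            return True               # bottomed out in a known fact
        if depth >= max_depth:
            return False              # the engine-imposed cutoff
        for antecedents, consequent in rules:
            if consequent == goal and all(
                prove(a, depth + 1, max_depth) for a in antecedents
            ):
                return True           # every antecedent was provable
        return False                  # knowledge base exhausted

    print(prove("x is mortal"))  # True

The search starts at the hypothetical conclusion and works toward the
facts, not the other way around.)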

The two procedures are quite different.

RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-13 Thread Ed Porter
Jim,

 

In my prior posts I have listed some of the limitations of Shruti.  The
lack of generalized generalization and compositional hierarchies directly
relates to the problem of learning from experience the generalized rules
needed in complex environments, where the surface representations of many
high-level concepts are virtually never the same.
This relates to your issue about failing to model the complexity of
antecedents.

 

But as the Serre paper I have cited multiple times in this thread shows,
the type of gen/comp hierarchies needed is very complex.  His system models
a 160x160-pixel greyscale image patch with 23 million model units, probably
each having something like 256 inputs, for a total of about 6 billion links,
and this is just to do very quick, feedforward, I-think-I-saw-a-lion
uncertain recognition of 1000 objects.  So for a Shruti system to capture
all the complexities involved in human-level perception or semantic
reasoning would require much more in the way of computer resources than
Shastri had.

 

So although Shruti's system is clearly very limited, it is amazing how much
it does considering how simple it is.

 

But the problem is not just complexity.  As I said, Shruti has some severe
architectural limitations.  But again, it was smart for Shastri to get his
simplified system up and running first, before making all the architectural
fixes required to make it capable of more generalized implication and
learning.

 

I have actually spent some time thinking about how to generalize Shruti.
If those ideas, or their equivalent, are not in Ben's new Novamente book, I
may take the trouble to write them up, but I am expecting a lot from Ben's
new book.

 

I did not understand your last sentence.

 

Ed Porter

 

-Original Message-
From: Jim Bromer [mailto:[EMAIL PROTECTED] 
Sent: Sunday, July 13, 2008 3:47 PM
To: agi@v2.listbox.com
Subject: Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE
BINDING PROBLEM"?

 

I have read about half of Shastri's 1999 paper "Advances in Shruti - A
neurally motivated model of relational knowledge representation and rapid
inference using temporal synchrony" and I see that he is describing a
method of encoding general information and then using it to do a certain
kind of reasoning which is usually called inferential, although he seems to
have a novel way to do this using what he calls "neural circuits". And he
does seem to touch on the multiple-level issues that I am interested in.
The problem is that these kinds of systems, regardless of how interesting
they are, are not able to achieve extensibility, because they do not truly
describe how the complexities of the antecedents would themselves have been
achieved (learned) using the methodology described. The unspoken assumption
behind these kinds of studies always seems to be that the one or two systems
of reasoning used in the method should be sufficient to explain how learning
takes place, but the failure to achieve intelligent-like behavior (as is
seen in higher intelligence) gives us a lot of evidence that there must be
more to it.

But the real problem is just complexity (or complicatedity, for Richard's
sake), isn't it?  Doesn't that seem like it is the real problem?  If the
program had the ability to try enough possibilities, wouldn't it be likely to
learn after a while?  Well, another part of the problem is that it would have
to get a lot of detailed information about how good its efforts were, and
this information would have to be pretty specific using the methods that are
common to most current thinking about AI.  So there seem to be two different
kinds of problems.  But the thing is, I think they are both complexity (or
complicatedity) problems.  Get a working solution for one, and maybe you'd
have a working solution for the other.

I think a working solution is possible, once you get beyond the simplistic
perception of seeing everything as ideologically commensurate just because
you believe you can understand it.
Jim Bromer

 



Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-13 Thread Richard Loosemore

Ed Porter wrote:
Richard,  


I think Wikipedia's definition of forward chaining (copied below) agrees
with my stated understanding as to what forward chaining means, i.e.,
reasoning from the "if" (i.e., conditions) to the "then" (i.e.,
consequences) in if-then statements.  


So, once again there is an indication you have unfairly criticized the
statements of another.


But ... nothing in what I said contradicted the Wikipedia 
definition of forward chaining.


Jim's statement was a misunderstanding of the meaning of forward and 
backward chaining because he oversimplified the two ("forward reasoning 
is reasoning from conditions to consequences, and backward reasoning is 
the opposite" ... this is kind of true, if you stretch the word 
"reasoining" a little, but it misses the point), and then he went from 
this oversimplification to come to a completely incorrect conclusion 
("...Thus I think the notion of what is forward and backward chaining 
might be somewhat arbitrary...").


This last conclusion was sufficiently inaccurate that I decided to point 
that out.  It was not a criticism, just a clarification;  a pointer in 
the right direction.



Richard Loosemore








RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-14 Thread Ed Porter
Richard,

You just keep digging yourself in deeper.

Look at the original email in which you said "This is not correct."  The
only quoted text that precedes it is quoted from me.  So why are you saying
"Jim's statement was a misunderstanding"?

Furthermore, I think your criticisms of my statements are generally
unfounded.  

My choice of the word "reasoning" was not "not correct", as you imply, since
the Wikipedia definition says "Forward chaining is one of the two main
methods of REASONING when using inference rules." (Emphasis added.)

My statement made it clear I was describing the forward direction as being
from the if clause to the then clause, which matches the Wikipedia
definition, so what is "not correct" about that?

In addition, you said my statement that in the absence of a temporal
criteria "the notion of what is forward and backward chaining might be
somewhat arbitrary"  was a "completely incorrect conclusion."

Offensively strong language, considering it is unfounded. 

It is unfounded because, in the absence of a temporal distinction, many
if-then rules, particularly if they are probabilistic, can be viewed in a
two-way form, with a probabilistic inference going both ways.  In this case
it becomes unclear which side is the "if" clause and which the "then" clause,
and, thus, unclear which way is forward and which backward by the definition
contained in Wikipedia --- unless there is a temporal criterion.  This issue
becomes even more problematic when dealing with patterns based on temporal
simultaneity, as in much of object recognition, in which even a temporal
distinction does not distinguish between what should be considered the if
clause and what should be considered the then clause. 

Enough of arguing about arguing.  You can have the last say if you want.  I
want to spend what time I have to spend on this list conversing with people
who are more concerned about truth than trying to sound like they know more
than others, particularly when they don't.  

Anyone who reads this thread will know who was being honest and reasonable
and who was not.

Ed Porter 


RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-14 Thread Mark Waser
>> Anyone who reads this thread will know who was being honest and
>> reasonable and who was not.

The question is not honest and reasonable but factually correct . . . .

The following statement of yours

>> In this case it becomes unclear which side is the "if" clause, and which
>> the "then" clause, and, thus, unclear which way is forward and which
>> backward by the definition contained in Wikipedia --- unless there is a
>> temporal criterion.

is simply incorrect.  Temporal criteria are *NOT* necessarily relevant to
forward and backward chaining.

As far as I can tell, Richard is trying to gently correct you and you are 
both incorrect and unwilling to even attempt to interpret his words in the 
way he meant (i.e. an honest and reasonable fashion).



RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-14 Thread Ed Porter
Mark,

Since your attack on my statement below is based on nothing but conclusory
statements and contains neither reasoning nor evidence to support them, there
is little in your below email to respond to other than your personal spleen.


You have said my statement which your email quotes is "simply incorrect"
without giving any justification.  

Your statement that "Temporal criteria are *NOT* relevant to forward and
backward chaining" is itself a conclusory statement.  

Furthermore, this statement about temporal criteria not being relevant is
more incorrect than correct.  If an if-then rule describes a situation where
one thing causes another, or comes before it in time, the thing that comes
first is more commonly the if clause (although one can write the rule in the
reverse order).  The if clause is commonly called a condition, and the then
clause is sometimes called the consequence, implying a causal or temporal
relationship.  The notion that reasoning backward from a goal is backward
chaining normally involves the notion of reasoning back in imagined time
from a desired goal state.  So often TEMPORAL CRITERIA *ARE* RELEVANT TO
WHICH DIRECTION IS FORWARD CHAINING AND WHICH IS BACKWARD.

Even if one were to make a reach, and try to justify your statement that
"Temporal criteria are *NOT* relevant to forward and backward chaining" as
being more than just conclusory by suggesting it was an implicit reference
to statements --- like those in Richard's prior posts in this thread, or the
Wikipedia quote earlier in this thread --- that the definition of forward
and backward chaining depends on whether the reasoning is from if clause to
then clause, or the reverse --- that would still not correct the
groundlessness of your criticism.  

This is because the rule that forward chaining is from if clause to then
clause and the reverse for backward chaining has no applicability to
situations where the implication goes both ways and there is no clear
indication of which pattern should be the if clause and which should be the
then clause --- which is precisely the situation I was describing in the
quote from me you unfairly criticized.  

Neither Richard's prior statement in this thread nor the Wikipedia
definition defines which direction is forward and which is backward in
many such situations.

In my quote which you attacked I was discussing exactly those situations in
which it was not clear which part of an inference pattern should be considered
if clause and which the then clause.  So it appears your criticism either
totally missed, or for other reasons, failed to deal with the issue I was
discussing.

Mark, in general I do not read your posts because, among other things, like
your email below, they are generally poorly reasoned and seem more
concerned with issues of ego and personality than with learning and teaching
truthful information or insights.  I skip many of Richard's for the same
reason, but I do read some of Richard's because, despite all his pompous BS,
he does occasionally say something quite thoughtful and worthwhile.

If you care about improving your reputation on this list, it would make you
seem more like someone who cared about truth and reason, and less like
someone who cared more about petty squabbles and personal ego, if you gave
reasons for your criticisms, and if you took the time to ensure your
criticism actually addressed what you are criticizing.

In your post immediately below you did neither.

Ed Porter


RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-14 Thread Mark Waser

Ed,

   Take the statements

   IF it croaks, THEN it is a frog.
   IF it is a frog, THEN it is green.

   Given an additional statement that it croaks, forward-chaining says that 
it is green.  There is nothing temporal involved.

   - OR -
   Given an additional statement that it is green, backward-chaining says 
that it MAY croak.  Again, nothing temporal involved.
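
   (A quick sketch in Python of both uses of these two rules --- hypothetical
code, just to make the contrast concrete:

    # The two rules, with no temporal content at all.
    rules = [("it croaks", "it is a frog"), ("it is a frog", "it is green")]

    def forward(fact):     # data-driven: chase consequents from a fact
        derived = {fact}
        for if_, then_ in rules:        # a single pass suffices here only
            if if_ in derived:          # because the toy rules happen to
                derived.add(then_)      # be listed in chain order
        return derived

    def backward(goal):    # goal-driven: chase antecedents from a goal
        possible = {goal}
        for if_, then_ in reversed(rules):
            if then_ in possible:
                possible.add(if_)       # the antecedent MAY hold
        return possible

    print(forward("it croaks"))     # croaks -> frog -> green
    print(backward("it is green"))  # green <- frog <- MAY croak

Nothing in either direction consults a clock.)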


   How do you see temporal criteria as being related to my example?

   Mark


Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-14 Thread Abram Demski
It is true that Mark Waser did not provide much justification, but I
think he is right. The if-then rules involved in forward/backward
chaining do not need to be causal, or temporal. A mutual implication
is still treated differently by forward chaining and backward
chaining, so it does not cause ambiguity. For example, if we have "An
alarm sounds if and only if there is a fire", then a forward-chaining
algorithm would (1) conclude that there is an alarm sounding if it
learned that there was a fire, and (2) conclude that  there was a fire
if it learned that there was an alarm. A backwards-chainer would use
the rule differently, so that (1) it might look for a fire if it was
trying to determine if an alarm was sounding, and (2) it might look
for an alarm if it wanted to know about a fire. Even though the
implication goes in both directions, the meaning of forward chaining
and of backward chaining are quite different.
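
(To put the point in code --- a hypothetical sketch, nothing more --- here
is the same biconditional rule used in both regimes:

    # One mutual implication: an alarm sounds if and only if there is a fire.
    biconditional = ("fire", "alarm")

    def forward_chain(known):
        # Data-driven: from whichever side is known, conclude the other.
        a, b = biconditional
        derived = set(known)
        if a in derived:
            derived.add(b)    # learned of a fire -> conclude an alarm
        if b in derived:
            derived.add(a)    # learned of an alarm -> conclude a fire
        return derived

    def backward_chain(goal):
        # Goal-driven: return the subgoal to investigate for this goal.
        a, b = biconditional
        if goal == b:
            return a          # to establish "alarm", go look for a fire
        if goal == a:
            return b          # to establish "fire", go look for an alarm
        return None

    print(forward_chain({"fire"}))  # {'fire', 'alarm'} -- a conclusion
    print(backward_chain("alarm"))  # 'fire' -- a question, not a conclusion

Even with the implication running both ways, the two chaining modes are
doing different jobs.)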


Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-14 Thread Richard Loosemore

Ed Porter wrote:

Richard,

You just keep digging yourself in deeper.

Look at the original email in which you said "This is not correct."  The
only quoted text that precedes it is quoted from me.  So why are you saying
"Jim's statement was a misunderstanding"?


Okay, looks like some confusion here:  the structure of Jim's message 
was such that I thought the relevant comment came from him.  Turns out 
he was just quoting you.  That's fine (sorry Jim):  it just means that 
you made the misleading statement.



Furthermore, I think your criticisms of my statements are generally
unfounded.  


My choice of the word "reasoning" was not "not correct", as you imply, since
the Wikipedia definition says "Forward chaining is one of the two main
methods of REASONING when using inference rules." (Emphasis added.)


That is fair enough.  I think it is a matter of taste, to some extent, 
but I will take the rap for going against the Wikipedia gospel.




My statement made it clear I was describing the forward direction as being
from the if clause to the then clause, which matches the Wikipedia
definition, so what is "not correct" about that?


I did not say that this part of the text was incorrect.



In addition, you said my statement that in the absence of a temporal
criteria "the notion of what is forward and backward chaining might be
somewhat arbitrary"  was a "completely incorrect conclusion."

Offensively strong language, considering it is unfounded. 


Or, if it should turn out that it was well-founded, it would have been 
quite polite and matter-of-fact to say "completely incorrect".





It is unfounded because, in the absence of a temporal distinction, many
if-then rules, particularly if they are probabilistic, can be viewed in a
two-way form, with a probabilistic inference going both ways.  In this case
it becomes unclear which side is the "if" clause and which the "then" clause,
and, thus, unclear which way is forward and which backward by the definition
contained in Wikipedia --- unless there is a temporal criterion.  This issue
becomes even more problematic when dealing with patterns based on temporal
simultaneity, as in much of object recognition, in which even a temporal
distinction does not distinguish between what should be considered the if
clause and what should be considered the then clause. 


Here is an example of backward chaining:

Start with a question:  Is it true that "Socrates is mortal"?

Start by looking for any knowledge that allows us to conclude that 
anything is or is not mortal.  We search the KB and come up with these 
candidates:


"If x is a plant, then x is mortal"
"If x is a rock, then x is not mortal"
"If x is a robot, then x is not mortal"
"If x lives in a post-singularity era, then x is not mortal"
"If x is a slug, then x is mortal"
"If x is a japanese beetle, then x is mortal"
"If x is a side of beef, then x is mortal"
"If x is a screwdriver, then x is not mortal"
"If x is a god, then x is not mortal"
"If x is a living creature, then x is mortal"
"If x is a goat, then x is mortal"
"If x is a parrot in a Dead Parrot Sketch, then x is mortal"

Now, before we go on to look at the second stage of this backward 
chaining example, could you perhaps explain to me how "the absence of a 
temporal distinction" applies or does not apply to any of these?  I do 
not believe that it is possible to reverse any of these rules, with temporal 
distinctions or any other distinctions: you cannot say "if x is 
mortal, then x is a plant", nor "if x is not mortal, then x lives in a 
post-singularity era", etc etc etc.


In the process of backward chaining, the next step is to see if the 
antecedents of any of these might allow us to connect up with Socrates 
in some way, so we start with the first one, "If x is a plant" and try 
to find out if anything allows us to conclude that Socrates is or is not 
a plant.  A search of the KB turns up these statements:


"If x contains chlorophyll, then x is a plant"
"If x is a dandelion, then x is a plant"
.. and on and on and on.

A couple of years later, after going several levels deep in its search, 
the system finally digs deep enough in its knowledge base to come up 
with the following chain of inference:


"Socrates contains blood"
"If x contains blood, then x will bleed when pricked"
"If x bleeds when pricked, then x is a man"
"If x is a man, then x owns footwear"
"If x owns footwear, then x is a living creature"
"If x is a living creature, then x is mortal"


And now, FINALLY, the backward chaining mechanism will be able to 
conclude that "Socrates is mortal".
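
(For concreteness, here is that contrast run on a toy, single-chain version
of this KB --- an illustrative sketch only, in Python:

    rules = [
        ("Socrates contains blood", "Socrates will bleed when pricked"),
        ("Socrates will bleed when pricked", "Socrates is a man"),
        ("Socrates is a man", "Socrates owns footwear"),
        ("Socrates owns footwear", "Socrates is a living creature"),
        ("Socrates is a living creature", "Socrates is mortal"),
    ]
    facts = {"Socrates contains blood"}

    # Forward: derive whatever is reachable from the facts; the engine has
    # no idea in advance whether "Socrates is mortal" will ever turn up.
    derived = set(facts)
    for if_, then_ in rules:    # a single pass works only because the toy
        if if_ in derived:      # rules happen to be listed in chain order
            derived.add(then_)

    # Backward: start at the question and walk antecedents toward the facts.
    goal, chain = "Socrates is mortal", []
    while goal not in facts:
        if_, = [i for i, t in rules if t == goal]   # unique in this toy KB
        chain.append((if_, goal))
        goal = if_

    print("Socrates is mortal" in derived)  # True, but only incidentally
    print(chain[::-1])                      # the directed proof chain

The backward search is aimed at the goal from the start; the forward pass
merely stumbles over it.)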


Please, Ed, could you explain to me how this typical case of backward 
chaining could be reversed so that it becomes just a variation on 
forward chaining?


The two mechanisms simply have different properties.  If you were to try 
to prove "Socrates is Mortal" by forward chaining, what would you do? 
Start from a random point in your KB and start proving facts in random 
order?  How would you


Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-14 Thread Jim Bromer
I started reading a Riesenhuber and Poggio paper and there are some 
similarities to ideas that I have considered, although my ideas were explicitly 
developed for computer programs that would use symbolic information and are 
not neural theories.  It is interesting that Riesenhuber and Poggio argued that 
"the binding problem seems to be a problem for only some models of object 
recognition."  In other words, it seems that they are claiming that the problem 
disappears with their model of neural cognition! 

The study of feature detectors in cats' eyes is old news and I did incorporate 
that information into the development of my own theories.

I have often talked about the use of multi-level complex methods and I see some 
similarity between the ideas that they discussed and my own.  In my model an 
input would be scanned for different features using different kinds of analysis 
on the input.  A configuration of simple features would then be derived from 
the scan, and these could be associated with a number of complex groups of 
objects that have been previously associated with the features.  Because the 
complex groups of objects are complexes (in the general sense), and would be 
learned from previous experience, they are not insipidly modeled on one 
standard model; these complex objects are complex in that they are not all cut 
from one standard.  The older implementations, which used operations taken from 
set theory on groups, were based on object models that were very old-world and 
were not derived from learning.  For example, they were non-experiential.  (I 
cannot remember the term that I am looking for, but experiential is the 
anthropomorphic term.)  All of the 
groupings in old models that looked for intersections were of a few predefined 
kinds, and most significantly they did not recognize that ideologically 
incommensurable references could affect meaning (or effect) even if the 
references were strongly associated and functionally related.  The complex 
groupings of objects that I have in mind would have been derived using 
different methods of analysis and combination, and when a group of them is 
called from an input analysis, their use should tend to narrow the objects that 
might be expected given the detection by the feature detectors.  Although I 
haven't expressed myself very clearly, this is very similar to what Riesenhuber 
and Poggio were suggesting their methods would be capable of.  So, yes, I 
think some similar methods can be used in NLP.

However, my model also includes the recognition that comparing apples and 
oranges is not always straightforward.  This gives you an idea of what I mean 
by ideologically incommensurable associations. If I were to give some examples, 
a reasonable person might simply assume that the problems illustrated by the 
examples could easily be resolved with more information, and that is true.  But 
the point that I am making is that this view of ideologically incommensurable 
references can be helpful in the analysis of the kinds of problems that can be 
expected from more ambitious AI models.

Jim Bromer



  




RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-14 Thread Ed Porter
Mark,

Still fails to deal with what I was discussing.  I will leave it up to you
to figure out why.

Ed Porter


RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-14 Thread Ed Porter

With regard to your comments below, I don't think you have to be too
imaginative to think of how the direction of forward or backward chaining
across at least certain sets of rules could be reversed.  Abram Demski's
recent post gave an example of how both what he considers forward and
backward chaining can be performed in both directions across an inference
pattern.  

Plus, it should be noted that I never said all relationships involve a
before-and-after type of relationship.  In fact, I specifically said some
relationships involve simultaneity.  I do, however, think temporal
relationships are an important thing to keep track of in inference, because
they are such an important part of reality, because predicting what is likely
to come next is such an important part of such reasoning, and because
reasoning backward in imagined time from goals is such an important part of
planning.

BY READING BOTH THE WIKIPEDIA ENTRIES FOR FORWARD AND BACKWARD CHAINING AND
READING ABRAM DEMSKI'S LAST POST, IT SEEMS THAT ONE OF THE DISTINCTIONS
COMMONLY GIVEN BETWEEN FORWARD AND BACKWARD CHAINING, IS WHETHER ONE IS
REASONING FROM DATA (IN THE CASE OF FORWARD CHAINING) OR FROM GOALS OR
HYPOTHESES (IN THE CASE OF BACKWARD CHAINING).   

According to this definition, the distinction between forward and backward
chaining is not about the direction the inference travels through an inference
network --- because, as Abram showed, each can travel in both directions ---
but rather the purpose for which the inference is being performed.  According
to this definition, both bottom-up and top-down inference could each, in
certain cases, be considered both forward and backward chaining.  

This definition is probably more meaningful in an AGI context than having
the direction depend on which is the if clause and which the then clause,
because in an AGI many of the rules would have been learned automatically
from correlations, and there is often no reason to designate one of two
mutually implying patterns as the if clause and the other as the then
clause.

But this definition of the distinction --- as depending on whether one is
reasoning from data on the one hand or from goals and hypotheses on the
other --- is confused by the fact that both Wikipedia articles imply that
forward chaining runs from if clause to then clause, and the reverse for
backward chaining.

It is also confused by the fact that in AGIs the distinction between data,
evidence, probability, attention, and hypothesis is not always clear.  For
example, bottom-up feed-forward inference from sensory input is often
considered to create perception hypotheses up the perception pathway, and
implication could be considered to proceed in a forward-chaining way from
each such hypothesis.

For example, evidence may be derived from sensation, memory, cognition, or
other means that a certain high-level pattern should exist at roughly a
certain time and place, and the resulting top-down implication of what one
should expect to see could be considered forward chaining, but it could also
be considered backward chaining.

So I still find that even this definition of forward and backward chaining
would be less than totally clear when applied in many possible situations in
an AGI.  But many definitions that are used every day are less than totally
clear.

Richard, you said below "If I had a penny for every time you have accused me
of being wrong, when later discussion showed that I was quite correct, I'd
have enough money to build an AGI tomorrow."

Yea, Richard, an AGI about as powerful as the typical Phantom Decoder Ring
you are likely to be able to purchase for one or two cents from the back of a
comic book.

If, however, the same rule were applied to me, I would be able to buy an AGI
as powerful as a Phantom Decoder Ring worth at least a buck.

Ed Porter

-Original Message-
From: Richard Loosemore [mailto:[EMAIL PROTECTED] 
Sent: Monday, July 14, 2008 11:54 AM
To: agi@v2.listbox.com
Subject: Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE
BINDING PROBLEM"?

Ed Porter wrote:
> Richard,
> 
> You just keep digging yourself in deeper.
> 
> Look at the original email in which you said "This is not correct."  The
> only quoted text that precedes it is quoted from me.  So why are you saying
> "Jim's statement was a misunderstanding"?

Okay, looks like some confusion here:  the structure of Jim's message 
was such that I thought the relevant comment came from him.  Turns out 
he was just quoting you.  That's fine (sorry Jim):  it just means that 
you made the misleading statement.

> Furthermore, I think your criticisms of my statements are generally
> unfounded.  
> 
> My choice of the word "reasoning" was not "not correct", as you imply, since
> the Wikipedia definition says "Forward chaining is one of the two main
> methods of REASONING when

Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-14 Thread Abram Demski
ed from sensation, memory, cognition, or
> other means that a certain high-level pattern should exist at roughly a
> certain time and place, and the resulting top-down implication of what one
> should expect to see could be considered forward chaining, but it could also
> be considered backward chaining.
>
> So I still find that even this definition of forward and backward chaining
> would be less than totally clear when applied in many possible situations in
> an AGI.  But many definitions that are used every day are less than totally
> clear.
>
> Richard, you said below "If I had a penny for every time you have accused me
> of being wrong, when later discussion showed that I was quite correct, I'd
> have enough money to build an AGI tomorrow."
>
> Yea, Richard, an AGI about as powerful as the typical Phantom Decoder
> Ring you are likely to be able to purchase for one or two cents from the
> back of a comic book.
>
> If, however, the same rule were applied to me, I would be able to buy an AGI
> as powerful as a Phantom Decoder Ring worth at least a buck.
>
> Ed Porter
>
> -Original Message-
> From: Richard Loosemore [mailto:[EMAIL PROTECTED]
> Sent: Monday, July 14, 2008 11:54 AM
> To: agi@v2.listbox.com
> Subject: Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE
> BINDING PROBLEM"?
>
> Ed Porter wrote:
>> Richard,
>>
>> You just keep digging yourself in deeper.
>>
>> Look at the original email in which you said "This is not correct."  The
>> only quoted text that precedes it is quoted from me.  So why are you saying
>> "Jim's statement was a misunderstanding"?
>
> Okay, looks like some confusion here:  the structure of Jim's message
> was such that I thought the relevant comment came from him.  Turns out
> he was just quoting you.  That's fine (sorry Jim):  it just means that
> you made the misleading statement.
>
>> Furthermore, I think your criticisms of my statements are generally
>> unfounded.
>>
>> My choice of the word "reasoning" was not "not correct", as you imply, since
>> the Wikipedia definition says "Forward chaining is one of the two main
>> methods of REASONING when using inference rules." (Emphasis added.)
>
> That is fair enough.  I think it is a matter of taste, to some extent,
> but I will take the rap for going against the Wikipedia gospel.
>
>
>> My statement made it clear I was describing the forward direction as being
>> from the if clause to the then clause, which matches the Wikipedia
>> definition, so what is "not correct" about that.
>
> I did not say that this part of the text was incorrect.
>
>
>> In addition, you said my statement that in the absence of a temporal
>> criteria "the notion of what is forward and backward chaining might be
>> somewhat arbitrary"  was a "completely incorrect conclusion."
>>
>> Offensively strong language, considering it is unfounded.
>
> Or, if it should turn out that it was well-founded, it would have been
> quite polite and matter-of-fact to say "completely incorrect"
>
>
>>
>> It is unfounded because in the absence of a temporal distinction, many
>> if-then rules, particularly if they are probabilistic, can be viewed in a
>> two-way form, with a probabilistic inference going both ways.  In this case
>> it becomes unclear which side is the "if" clause and which the "then"
>> clause, and, thus, unclear which way is forward and which backward by the
>> definition contained in Wikipedia --- unless there is a temporal criterion.
>> This issue becomes even more problematic when dealing with patterns based
>> on temporal simultaneity, as in much of object recognition, in which even a
>> temporal distinction does not distinguish between what should be considered
>> the if clause and what should be considered the then clause.
>
> Here is an example of backward chaining:
>
> Start with a question:  Is it true that "Socrates is mortal"?
>
> Start by looking for any knowledge that allows us to conclude that
> anything is or is not mortal.  We search the KB and come up with these
> candidates:
>
> "If x is a plant, then x is mortal"
> "If x is a rock, then x is not mortal"
> "If x is a robot, then x is not mortal"
> "If x lives in a post-singularity era, then x is not mortal"
> "If x is a slug, then x is mortal"
> "If x is a japanese beetle, then x is mortal"
> "If x is a side of beef, then x is mortal"

Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-14 Thread Mike Tintner
A tangential comment here. Looking at this and other related threads I can't 
help thinking: jeez, here are you guys still endlessly arguing about the 
simplest of syllogisms, seemingly unable to progress beyond them. (Don't you 
ever have that feeling?) My impression is that the fault lies with logic 
itself - as soon as you start to apply logic to the real world, even only 
tangentially with talk of "forward" and "backward" or "temporal" 
considerations, you fall into a quagmire of ambiguity, and no one is really 
sure what they are talking about. Even the simplest if p then q logical 
proposition is actually infinitely ambiguous. No?  (Is there a Godel's 
Theorem of logic?) 





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=108809214-a0d121
Powered by Listbox: http://www.listbox.com


RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-14 Thread Mark Waser
>> Still fails to deal with what I was discussing.  I will leave it up to
>> you to figure out why.


Last refuge when you realize you're wrong, huh?

I ask a *very* clear question in an attempt to move forward (i.e. How do you 
see temporal criteria as being related to my example?) and I get this "You 
have to guess what I'm thinking" answer.


How can you justify ranting on and on about Richard not being "honest and 
reasonable" when you won't even answer a simple, clear question?




- Original Message - 
From: "Ed Porter" <[EMAIL PROTECTED]>

To: 
Sent: Monday, July 14, 2008 1:43 PM
Subject: **SPAM** RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND 
BY "THE BINDING PROBLEM"?



Mark,

Still fails to deal with what I was discussing.  I will leave it up to you
to figure out why.

Ed Porter

-Original Message-
From: Mark Waser [mailto:[EMAIL PROTECTED]
Sent: Monday, July 14, 2008 10:54 AM
To: agi@v2.listbox.com
Subject: RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE
BINDING PROBLEM"?

Ed,

   Take the statements

   IF it croaks, THEN it is a frog.
   IF it is a frog, THEN it is green.

   Given an additional statement that it croaks, forward-chaining says that
it is green.  There is nothing temporal involved.
   - OR -
   Given an additional statement that it is green, backward-chaining says
that it MAY croak.  Again, nothing temporal involved.

   How do you see temporal criteria as being related to my example?

   Mark
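
A minimal sketch of Mark's example in Python may make the contrast concrete
(the rule encoding and names here are the editor's invention for
illustration, not from any particular system):

    # Two if/then rules, with no temporal information anywhere.
    rules = {"it croaks": "it is a frog", "it is a frog": "it is green"}

    # Forward chaining: from a known fact, follow antecedent -> consequent.
    fact = "it croaks"
    while fact in rules:
        fact = rules[fact]
    print(fact)  # "it is green" -- a certain conclusion, given the rules

    # Backward chaining: from an observation, ask what could explain it.
    observation = "it is green"
    explanations = [a for a, c in rules.items() if c == observation]
    print(explanations)  # ['it is a frog'] -- it MAY be a frog, not must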

- Original Message - 
From: "Ed Porter" <[EMAIL PROTECTED]>

To: 
Sent: Monday, July 14, 2008 10:40 AM
Subject: **SPAM** RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND

BY "THE BINDING PROBLEM"?


Mark,

Since your attack on my statement below is based on nothing but conclusory
statements, and contains neither reasoning nor evidence to support them,
there is little in your below email to respond to other than your personal
spleen.


You have said my statement which your email quotes is "simply incorrect"
without giving any justification.

Your statement that "Temporal criteria are *NOT* relevant to forward and
backward chaining" is itself a conclusory statement.

Furthermore, this statement about temporal criteria not being relevant is
more incorrect than correct.  If an if-then rule describes a situation where
one thing causes another, or comes before it in time, the thing that comes
first is more commonly the if clause (although one can write the rule in the
reverse order).  The if clause is commonly called a condition, and the then
clause is sometimes called the consequence, implying a causal or temporal
relationship.  The notion that reasoning backward from a goal is backward
chaining normally involves the notion of reasoning back in imagined time
from a desired goal state.  So often TEMPORAL CRITERIA *ARE* RELEVANT TO
WHICH DIRECTION IS FORWARD CHAINING AND WHICH IS BACKWARD.

Even if one were to make a reach and try to justify your statement that
"Temporal criteria are *NOT* relevant to forward and backward chaining" as
being more than just conclusory --- by suggesting it was an implicit
reference to statements, like those contained in Richard's prior posts in
this thread or in the Wikipedia quote in one of the posts below, that the
definition of forward and backward chaining depends on whether the reasoning
runs from if clause to then clause, or the reverse --- that would still not
correct the groundlessness of your criticism.

This is because the rule that forward chaining is from if clause to then
clause and the reverse for backward chaining has no applicability to
situations where the implication goes both ways and there is no clear
indication of which pattern should be the if clause and which should be the
then clause --- which is precisely the situation I was describing in the
quote from me you unfairly criticized.

Neither Richard's prior statement in this thread nor the Wikipedia
definition below define which direction is forward and which is backward in
many such situations.

In the quote of mine which you attacked I was discussing exactly this
situation, in which it was not clear which part of an inference pattern
should be considered the if clause and which the then clause.  So it appears
your criticism either missed entirely, or for other reasons failed to
address, the issue I was discussing.

Mark, in general I do not read your posts because, among other things, they
are, like your email below, generally poorly reasoned and seem more concerned
with issues of ego and personality than with learning and teaching truthful
information or insights.  I skip many of Richard's for the same reason, but I
do read some of Richard's because, despite all his pompous BS, he does
occasionally say something quite thoughtful and worthwhile.

If you care about improving your reputation on t

RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-14 Thread Ed Porter
Abram Demski wrote below:  "If the network is passing down an expectation
based on other data, informing the lower network of what to expect, then
this is forward chaining. But if the signal is not an expectation, but more
like a query "pay attention to data that might conform/contradict this
hypothesis, and notify me ASAP" then it is backwards chaining. And it seems
realistic that it can be both of these."

This is interesting.  The type of activation you claim would be backward
chaining in the above quote corresponds to the "?" activation described in
Shastri's Shruti (which I have cited earlier in this thread).  In Shruti,
any node that receives "?" activation spreads similar activation to other
nodes that might supply feedback to it, feedback that might provide evidence
of an increase or decrease in the probability of the asking node.  But
receiving "?" activation by itself does not change a node's probability at
all.  Interestingly, increasing or decreasing a node's activation tends to
spread "?" activation seeking feedback on whether the increase or decrease
in probability is supported or contradicted by other information in the
network.
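
A toy sketch of this attention-only "?" spreading might look like the
following (an editor's illustration of the idea only, not Shastri's actual
SHRUTI mechanism; the node names and evidence graph are invented):

    from collections import deque

    # Edges point from a node to the nodes that could supply evidence to it.
    feeds_from = {
        "mary_owns_book": ["mary_was_given_book"],
        "mary_was_given_book": ["give_event_book17"],
        "give_event_book17": [],
    }

    def spread_query(start, network):
        """Spread '?' activation to every node that might feed the asker.
        Spreading changes only attention, never any node's probability."""
        queried, frontier = {start}, deque([start])
        while frontier:
            for source in network.get(frontier.popleft(), []):
                if source not in queried:
                    queried.add(source)
                    frontier.append(source)
        return sorted(queried)

    print(spread_query("mary_owns_book", feeds_from))
    # ['give_event_book17', 'mary_owns_book', 'mary_was_given_book']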

Ed Porter 

-Original Message-
From: Abram Demski [mailto:[EMAIL PROTECTED] 
Sent: Monday, July 14, 2008 2:29 PM
To: agi@v2.listbox.com
Subject: Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE
BINDING PROBLEM"?

Ed Porter wrote:

"I am I correct that you are implying the distinction is independent
of direction, but instead is something like this: forward chaining
infers from information you have to implications you don't yet have,
and backward chaining infers from patterns you are interested in to
ones that might either imply or negate them, or which they themselves
might imply or negate."

"BY READING BOTH THE WIKIPEDIA ENTRIES FOR FORWARD AND BACKWARD
CHAINING AND READING ABRAM DEMSKI'S LAST POST, IT SEEMS THAT ONE OF
THE DISTINCTIONS COMMONLY GIVEN BETWEEN FORWARD AND BACKWARD CHAINING,
IS WHETHER ONE IS REASONING FROM DATA (IN THE CASE OF FORWARD
CHAINING) OR FROM GOALS OR HYPOTHESES (IN THE CASE OF BACKWARD
CHAINING)."

As I understand it, this is the proper definition. The reason it is
typically stated in terms of direction of inference over if/then
statements is because that is how it is implemented in rule-based
systems. However, reasoning from goals vs reasoning from data is the
more general definition.

Ed Porter also wrote:

"For example, evidence may be derived from sensation, memory,
cognition or other means that a certain high level pattern should
exist in roughly a certain time and place, and the top down levels
implication of what is should expect to see could be considered
forward chaining, but is could also be considered backward chaining."

Perhaps there is some real ambiguity here, arising from the
probabilistic setting. If the network is passing down an expectation
based on other data, informing the lower network of what to expect,
then this is forward chaining. But if the signal is not an
expectation, but more like a query "pay attention to data that might
conform/contradict this hypothesis, and notify me ASAP" then it is
backwards chaining. And it seems realistic that it can be both of
these.


On Mon, Jul 14, 2008 at 1:43 PM, Ed Porter <[EMAIL PROTECTED]> wrote:
>
> With regard to your comments below, I don't think you have to be too
> imaginative to think of how the direction of forward or backward chaining
> across at least certain sets of rules could be reversed.  Abram Demski's
> recent post gave an example of how both what he considers forward and
> backward chaining can be performed in both directions across an inference
> pattern.
>
> Plus, it should be noted that I never said all relationships involve a
> before-and-after type of relationship.  In fact, I specifically said some
> relationships involve simultaneity.  I do, however, think temporal
> relationships are an important thing to keep track of in inference, because
> they are such an important part of reality, because predicting what is
> likely to come next is such an important part of reasoning, and because
> reasoning backward in imagined time from goals is such an important part of
> planning.
>
> BY READING BOTH THE WIKIPEDIA ENTRIES FOR FORWARD AND BACKWARD CHAINING AND
> READING ABRAM DEMSKI'S LAST POST, IT SEEMS THAT ONE OF THE DISTINCTIONS
> COMMONLY GIVEN BETWEEN FORWARD AND BACKWARD CHAINING, IS WHETHER ONE IS
> REASONING FROM DATA (IN THE CASE OF FORWARD CHAINING) OR FROM GOALS OR
> HYPOTHESES (IN THE CASE OF BACKWARD CHAINING).
>
> According to this definition the distinction between forward and backward
> chaining is not about the direction the inference travels through an
> inference network --- because, as Abram shows, each can travel in both dir

RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-14 Thread Ed Porter
Jim,

 

In the Riesenhuber and Poggio paper, the bindings that were handled
implicitly involved spatial relationships, such as an observed roughly
horizontal line substantially touching an observed roughly vertical line at
their respective ends, even though there might be other horizontal and
vertical lines not having this relationship in the input pixel space.  It
achieves such implicit bindings by having enough separate models to be able
to detect, by direct mapping, such a touching relationship between a
horizontal and a vertical line at each of many different locations in the
visual input space.
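
To make the flavor of such implicit binding concrete, here is a toy sketch
(the editor's illustration only: the grid size, feature maps, and detector
wiring are invented, and the real model is far richer):

    import itertools

    SIZE = 8  # a toy 8x8 visual field instead of the paper's 160x160

    def make_corner_detectors():
        """One hard-wired model per location: each fires only if a
        horizontal and a vertical line-ending coincide at its spot, so
        the binding is implicit in the wiring rather than computed."""
        detectors = {}
        for x, y in itertools.product(range(SIZE), repeat=2):
            detectors[(x, y)] = lambda h, v, p=(x, y): p in h and p in v
        return detectors

    horizontal_ends = {(3, 4)}        # where horizontal lines terminate
    vertical_ends = {(3, 4), (6, 1)}  # where vertical lines terminate

    hits = [p for p, d in make_corner_detectors().items()
            if d(horizontal_ends, vertical_ends)]
    print(hits)  # [(3, 4)] -- the one place the two features touch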

 

But the Poggio paper deals with a relatively small number of relationships
in a relatively small (160x160) low dimensional (2d) space using 23 million
models.  You imply you have been able to accomplish a somewhat similar
implicit representation of bindings in a much higher dimensional and
presumably large semantic space.  Unfortunately I was unable to understand
from your description how you claimed to have accomplished this.

 

Could you please clarify your description with regard to this point.

 

Ed Porter 

 

-Original Message-
From: Jim Bromer [mailto:[EMAIL PROTECTED] 
Sent: Monday, July 14, 2008 1:38 PM
To: agi@v2.listbox.com
Subject: Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE
BINDING PROBLEM"?

 

I started reading a Riesenhuber and Poggio paper and there are some
similarities to ideas that I have considered although my ideas were
explicitly developed about computer programs that would use symbolic
information and are not neural theories.  It is interesting that Riesenhuber
and Poggio argued that "the binding problem seems to be a problem for only
some models of object recognition."  In other words, it seems that they are
claiming that the problem disappears with their model of neural cognition! 

The study of feature detectors in cats' eyes is old news and I did
incorporate that information into the development of my own theories.

I have often talked about the use of multi-level complex methods and I see
some similarity to the ideas that they discussed to my ideas.  In my model
an input would be scanned for different features using different kinds of
analysis on the input.  So then a configuration of simple features would be
derived from the scan and these could be associated with a number of complex
groups of objects that have been previously associated with the features.
Because the complex groups of objects are complexes (in the general sense),
and would be learned by previous experience, they are not insipidly modeled
on one standard model. These complex objects are complex in that they are
not all cut from one standard.  The older implementations that used
operations that were taken from set theory on groups were set on object
models that were very old-world and were not derived from learning.  For
example they were non-experiential. (I cannot remember the term that I am
looking for but experiential is the anthropomorphic term).  All of the
groupings in old models that looked for intersections were of a few
predefined kinds, and most significantly they did not recognize that
ideologically incommensurable references could affect meaning (or effect)
even if the references were strongly associated and functionally related.
The complex groupings of objects that I have in mind would have been derived
using different methods of analysis and combination and when a group of them
is called from an input analysis their use should tend to narrow the objects
that might be expected given the detection by the feature detectors.
Although I haven't expressed myself very clearly, this is very similar to
what Riesenhuber and Poggio were suggesting that their methods would be
capable of.  So, yes, I think some similar methods can be used in NLP.

However, my model also includes the recognition that comparing apples and
oranges is not always straightforward.  This gives you an idea of what I
mean by ideologically incommensurable associations. If I were to give some
examples, a reasonable person might simply assume that the problems
illustrated by the examples could easily be resolved with more information,
and that is true.  But the point that I am making is that this view of
ideologically incommensurable references can be helpful in the analysis of
the kinds of problems that can be expected from more ambitious AI models.

Jim Bromer

 


 






Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-14 Thread Richard Loosemore

Mike Tintner wrote:
A tangential comment here. Looking at this and other related threads I 
can't help thinking: jeez, here are you guys still endlessly arguing 
about the simplest of syllogisms, seemingly unable to progress beyond 
them. (Don't you ever have that feeling?) My impression is that the 
fault lies with logic itself - as soon as you start to apply logic to 
the real world, even only tangentially with talk of "forward" and 
"backward" or "temporal" considerations, you fall into a quagmire of 
ambiguity, and no one is really sure what they are talking about. Even 
the simplest if p then q logical proposition is actually infinitely 
ambiguous. No?  (Is there a Godel's Theorem of logic?)


Well, now you have me in a cleft stick, methinks.

I *hate* logic as a way to understand cognition, because I think it is a 
derivative process within a high-functional AGI system, not a foundation 
process that sits underneath everything else.


But, on the other hand, I do understand how it works, and it seems a 
shame for someone to trample on the concept of forward and backward 
chaining when these are really quite clear and simple processes (at 
least conceptually).


You are right that logic is as clear as mud outside the pristine 
conceptual palace within which it was conceived, but if you're gonna 
hang out inside the palace it is a bit of a shame to question its 
elegance...




Richard Loosemore





Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-14 Thread Mike Tintner
I'm not questioning logic's elegance, merely its relevance - the intention 
is at some point to apply it to the real world in your various systems, no? 
Yet there seems to be such a lot of argument and confusion about the most 
basic of terms, when you begin to do that. That elegance seems to come at a 
big price.


RL: Mike Tintner wrote:
A tangential comment here. Looking at this and other related threads I 
can't help thinking: jeez, here are you guys still endlessly arguing 
about the simplest of syllogisms, seemingly unable to progress beyond 
them. (Don't you ever have that feeling?) My impression is that the fault 
lies with logic itself - as soon as you start to apply logic to the real 
world, even only tangentially with talk of "forward" and "backward" or 
"temporal" considerations, you fall into a quagmire of ambiguity, and no 
one is really sure what they are talking about. Even the simplest if p 
then q logical proposition is actually infinitely ambiguous. No?  (Is 
there a Godel's Theorem of logic?)


Well, now you have me in a cleft stick, methinks.

I *hate* logic as a way to understand cognition, because I think it is a 
derivative process within a high-functional AGI system, not a foundation 
process that sits underneath everything else.


But, on the other hand, I do understand how it works, and it seems a shame 
for someone to trample on the concept of forward and backward chaining 
when these are really quite clear and simple processes (at least 
conceptually).


You are right that logic is as clear as mud outside the pristine 
conceptual palace within which it was conceived, but if you're gonna hang 
out inside the palace it is a bit of a shame to question its elegance...









Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-14 Thread Brad Paulsen
I've been following this thread pretty much since the beginning.  I hope I 
didn't miss anything subtle.  You'll let me know if I have, I'm sure. ;=)


It appears the need for temporal dependencies or different levels of reasoning 
has been conflated with the terms "forward-chaining" (FWC) and 
"backward-chaining" (BWC), which are typically used to describe different rule 
base evaluation algorithms used by expert systems.


The terms “forward-chaining” and “backward-chaining” when used to refer to 
reasoning strategies have absolutely nothing to do with temporal dependencies or 
levels of reasoning.  These two terms refer simply, and only, to the algorithms 
used to evaluate “if/then” rules in a rule base (RB).  In the FWC algorithm, the 
“if” part is evaluated and, if TRUE, the “then” part is added to the FWC 
engine's output.  In the BWC algorithm, the “then” part is evaluated and, if 
TRUE, the “if” part is added to the BWC engine's output.  It is rare, but some 
systems use both FWC and BWC.


That's it.  Period.  No other denotations or connotations apply.

To help remove any mystery that may still surround these concepts, here is an 
FWC algorithm in pseudo-code (WARNING: I'm glossing over quite a few details 
here – I'll be happy to answer questions on list or off):


   0. set loop index to 0
   1. got next rule?
 no: goto 5
   2. is rule FIRED?
 yes: goto 1
   3. is key equal to rule's antecedent?
 yes: add consequent to output, mark rule as FIRED,
  output is new key, goto 0
   4. goto 1
   5. more input data?
 yes: input data is new key, goto 0
   6. done.

To turn this into a BWC algorithm, we need only modify Step #3 to read as 
follows:

   3. is key equal to rule's consequent?
 yes: add antecedent to output, mark rule as FIRED,
 output is new key, goto 0
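
As a rough runnable rendering of the two algorithms above (an editor's
sketch: the (antecedent, consequent) tuple encoding is assumed for
illustration, and the key-restart behavior mimics the pseudo-code's "goto 0"):

    def chain(rules, data, backward=False):
        """Evaluate if/then rules, firing each rule at most once.
        Forward: a key matching a rule's antecedent adds the consequent
        to the output and makes it the new key.  Backward swaps the roles."""
        fired, output, pending = set(), [], list(data)
        while pending:
            key = pending.pop(0)
            for i, (antecedent, consequent) in enumerate(rules):
                if i in fired:
                    continue
                src, dst = ((consequent, antecedent) if backward
                            else (antecedent, consequent))
                if key == src:
                    fired.add(i)
                    output.append(dst)
                    pending.insert(0, dst)  # output is new key, "goto 0"
                    break
        return output

    rules = [("A", "B"), ("B", "C")]
    print(chain(rules, ["A"]))                 # forward:  ['B', 'C']
    print(chain(rules, ["C"], backward=True))  # backward: ['B', 'A']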

If you need to represent temporal dependencies in FWC/BWC systems, you have to 
express them using rules.  For example, if washer-x MUST be placed on bolt-y 
before nut-z can be screwed on, the rule base might look something like this:


   1. if installed(washer-x) then install(nut-z)
   2. if installed(bolt-y) then install(washer-x)
   3. if notInstalled(bolt-y) then install(bolt-y)

In this case, rule #1 won't get fired until rule #2 fires (nut-z can't get 
installed until washer-x has been installed).  Rule #2 won't get fired until 
rule #3 has fired (washer-x can't get installed until bolt-y has been 
installed). NUT-Z!  (Sorry, couldn't help it.)


To kick things off, we pass in “bolt-y” as the initial key.  This triggers rule 
#3, which will trigger rule #2, which will trigger rule #1. These temporal 
dependencies result in the following assembly sequence: install bolt-y, then 
install washer-x, and, finally, install nut-z.


A similar thing can be done to implement rule hierarchies.

   1. if levelIs(0) and installed(washer-x) then install(nut-z)
   2. if levelIs(0) and installed(nut-z) goLevel(1)
   3. if levelIs(1) and notInstalled(gadget-xx) then install(gadget-xx)
   4. if levelIs(0) and installed(bolt-y) then install(washer-x)
   5. if levelIs(0) and notInstalled(bolt-y) then install(bolt-y)

Here rule #2 won't fire until rule #1 has fired.  Rule #1 won't fire unless rule 
#4 has fired.  Rule #4 won't fire until rule #5 has fired.  And, finally, Rule 
#3 won't fire until Rule #2 has fired. So, level 0 could represent the reasoning 
required before level 1 rules (rule #3 here) will be of any use. (That's not the 
case here, of course, just stretching my humble example as far as I can.)


Note, again, that the temporal and level references in the rules are NOT used 
by the chaining engine itself.  They probably will be used by the part of the 
program that does something with the engine's output (the install(), 
goLevel(), etc. functions).  And, again, the results should be completely 
unaffected by the order in which the RB rules are evaluated or fired.


I hope this helps.

Cheers,

Brad

Richard Loosemore wrote:

Mike Tintner wrote:
A tangential comment here. Looking at this and other related threads I 
can't help thinking: jeez, here are you guys still endlessly arguing 
about the simplest of syllogisms, seemingly unable to progress beyond 
them. (Don't you ever have that feeling?) My impression is that the 
fault lies with logic itself - as soon as you start to apply logic to 
the real world, even only tangentially with talk of "forward" and 
"backward" or "temporal" considerations, you fall into a quagmire of 
ambiguity, and no one is really sure what they are talking about. Even 
the simplest if p then q logical proposition is actually infinitely 
ambiguous. No?  (Is there a Godel's Theorem of logic?)


Well, now you have me in a cleft stick, methinks.

I *hate* logic as a way to understand cognition, because I think it is a 
derivative process within a high-functional AGI system, not a foundation 
process that sits underneath everything else.


But, on the other hand, I do understand 

Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-15 Thread Lukasz Stafiniak
On Tue, Jul 15, 2008 at 8:01 AM, Brad Paulsen <[EMAIL PROTECTED]> wrote:
>
> The terms "forward-chaining" and "backward-chaining" when used to refer to
> reasoning strategies have absolutely nothing to do with temporal
> dependencies or levels of reasoning.  These two terms refer simply, and
> only, to the algorithms used to evaluate "if/then" rules in a rule base
> (RB).  In the FWC algorithm, the "if" part is evaluated and, if TRUE, the
> "then" part is added to the FWC engine's output.  In the BWC algorithm, the
> "then" part is evaluated and, if TRUE, the "if" part is added to the BWC
> engine's output.  It is rare, but some systems use both FWC and BWC.
>
> That's it.  Period.  No other denotations or connotations apply.
>
Curiously, the definition given by Abram Demski is the only one I've
been aware of until yesterday (I believe it's the one used among
theorem proving people). Let's see what googling says on "forward
chaining":

1. (Wikipedia)

2. http://www.amzi.com/ExpertSystemsInProlog/05forward.htm
"A large number of expert systems require the use of forward chaining,
or data driven inference. [...]
Data driven expert systems are different from the goal driven, or
backward chaining systems seen in the previous chapters.
The goal driven approach is practical when there are a reasonable
number of possible final answers, as in the case of a diagnostic or
identification system. The system methodically tries to prove or
disprove each possible answer, gathering the needed information as it
goes.
The data driven approach is practical when combinatorial explosion
creates a seemingly infinite number of possible right answers, such as
possible configurations of a machine."

3. http://ai.eecs.umich.edu/cogarch0/common/prop/chain.html
"Forward-chaining implies that upon assertion of new knowledge, all
relevant inductive and deductive rules are fired exhaustively,
effectively making all knowledge about the current state explicit
within the state. Forward chaining may be regarded as progress from a
known state (the original knowledge) towards a goal state(s).
Backward-chaining by an architecture means that no rules are fired
upon assertion of new knowledge. When an unknown predicate about a
known piece of knowledge is detected in an operator's condition list,
all rules relevant to the knowledge in question are fired until the
question is answered or until quiescence. Thus, backward chaining
systems normally work from a goal state back to the original state."

4. http://www.ontotext.com/inference/reasoning_strategies.html
"* Forward-chaining: to start from the known facts and to perform
the inference in an inductive fashion. This kind of reasoning can have
diverse objectives, for instance: to compute the inferred closure; to
answer a particular query; to infer a particular sort of knowledge
(e.g. the class taxonomy); etc.
* Backward-chaining: to start from a particular fact or from a
query and by means of using deductive reasoning to try to verify that
fact or to obtain all possible results of the query. Typically, the
reasoner decomposes the fact into simpler facts that can be found in
the knowledge base or transforms it into alternative facts that can be
proven applying further recursive transformations. "




RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-15 Thread Jim Bromer
Ed Porter said:
You imply you have been able to accomplish a somewhat
similar implicit representation of bindings in a much higher
dimensional and presumably large semantic space.  Unfortunately I was
unable to understand from your description how you claimed to have
accomplished this. 
 
-
I never implied that I have been able to accomplish a somewhat similar implicit 
representation of bindings in a much higher dimension and presumably large 
semantic space.

I clearly stated:

"I have often talked about the
use of multi-level complex methods and I see some similarity to the
ideas that they discussed to my ideas."
-and,
"The complex groupings of
objects that I have in mind would have been derived using different
methods of analysis and combination and when a group of them is called
from an input analysis their use should tend to narrow the objects that
might be expected given the detection by the feature detectors.
Although I haven't expressed myself very clearly, this is very similar
to what Riesenhuber and Poggio were suggesting that their methods would
be capable of. So, yes, I think some similar methods can be used in NLP."

I clearly used the expression "in mind" just to avoid the kind of  
misunderstanding that you made. I never made the exaggerated "claim" that I had 
accomplished it.


The difference between having an idea "in mind" and having "claimed to have 
accomplished" a goal, which the majority of participants in the group would 
acknowledge is elusive, should be obvious and easy to understand.


I am not claiming that I have a method that would work in all semantic space.  
I would be happy to claim that I do have a theory which I believe should show 
some limited extensibility in semantic space that goes beyond other current 
theories.  However, I will not know for sure until I test it and right now that 
looks like it would be years off.


I
would be happy to continue the dialog if it can be conducted in a less
confrontational and more genial manner than it has been during
the past week.

Jim Bromer




Jim,
 
In the Riesenhuber and Poggio paper, the bindings that were handled
implicitly involved spatial relationships, such as an observed roughly
horizontal line substantially touching an observed roughly vertical
line at their respective ends, even though there might be other
horizontal and vertical lines not having this relationship in the input
pixel space.  It achieves such implicit bindings by having enough
separate models to be able to detect, by direct mapping, such a
touching relationship between a horizontal and a vertical line at each
of many different locations in the visual input space.
 
But
the Poggio paper deals with a relatively small number of relationships
in a relatively small (160x160) low dimensional (2d) space using 23
million models.  You imply you have been able to accomplish a somewhat
similar implicit representation of bindings in a much higher
dimensional and presumably large semantic space.  Unfortunately I was
unable to understand from your description how you claimed to have
accomplished this.
 
Could you please clarify your description with regard to this point.
 
Ed Porter
 
-Original Message-
From: Jim Bromer [mailto:[EMAIL PROTECTED] 
Sent: Monday, July 14, 2008 1:38 PM
To: agi@v2.listbox.com
Subject: Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE 
BINDING PROBLEM"?
 
I
started reading a Riesenhuber and Poggio paper and there are some
similarities to ideas that I have considered although my ideas were
explicitly developed about computer programs that would use symbolic
information and are not neural theories.  It is interesting that
Riesenhuber and Poggio argued that "the binding problem seems to be a
problem for only some models of object recognition."  In other words,
it seems that they are claiming that the problem disappears with their
model of neural cognition! 

The study of feature detectors in
cats' eyes is old news and I did incorporate that information into the
development of my own theories.

I have often talked about the
use of multi-level complex methods and I see some similarity to the
ideas that they discussed to my ideas.  In my model an input would be
scanned for different features using different kinds of analysis on the
input.  So then a configuration of simple features would be derived
from the scan and these could be associated with a number of complex
groups of objects that have been previously associated with the
features.  Because the complex groups of objects are complexes (in the
general sense), and would be learned by previous experience, they are
not insipidly modeled on one standard model. These complex objects are
complex in that they are not all cut from one standard.  The older
implementations that used operations that were taken from set theory on
groups were set on object models th

Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-15 Thread Richard Loosemore

Brad Paulsen wrote:
I've been following this thread pretty much since the beginning.  I hope 
I didn't miss anything subtle.  You'll let me know if I have, I'm sure. ;=)


It appears the need for temporal dependencies or different levels of 
reasoning has been conflated with the terms "forward-chaining" (FWC) and 
"backward-chaining" (BWC), which are typically used to describe 
different rule base evaluation algorithms used by expert systems.


The terms “forward-chaining” and “backward-chaining” when used to refer 
to reasoning strategies have absolutely nothing to do with temporal 
dependencies or levels of reasoning.  These two terms refer simply, and 
only, to the algorithms used to evaluate “if/then” rules in a rule base 
(RB).  In the FWC algorithm, the “if” part is evaluated and, if TRUE, 
the “then” part is added to the FWC engine's output.  In the BWC 
algorithm, the “then” part is evaluated and, if TRUE, the “if” part is 
added to the BWC engine's output.  It is rare, but some systems use both 
FWC and BWC.


That's it.  Period.  No other denotations or connotations apply.


Whooaa there.  Something not right here.

Backward chaining is about starting with a goal statement that you would 
like to prove, but at the beginning it is just a hypothesis.  In BWC you 
go about proving the statement by trying to find facts that might 
support it.  You would not start from the statement and then add 
knowledge to your knowledgebase that is consistent with it.


So for example, if your goal is to prove that Socrates is mortal, then 
your above desciption of BWC would cause the following to occur


1) Does any rule allow us to conclude that x is/is not mortal?

2) Answer: yes, the following rules allow us to do that:

"If x is a plant, then x is mortal"
"If x is a rock, then x is not mortal"
"If x is a robot, then x is not mortal"
"If x lives in a post-singularity era, then x is not mortal"
"If x is a slug, then x is mortal"
"If x is a japanese beetle, then x is mortal"
"If x is a side of beef, then x is mortal"
"If x is a screwdriver, then x is not mortal"
"If x is a god, then x is not mortal"
"If x is a living creature, then x is mortal"
"If x is a goat, then x is mortal"
"If x is a parrot in a Dead Parrot Sketch, then x is mortal"

3) Ask the knowledge base if Socrates is a plant, if Socrates is a rock, 
etc. etc., working through the above list.


4) [According to your version of BWC, if I understand you aright] Okay, 
if we cannot find any facts in the KB that say that Socrates is known to 
be one of these things, then add the first of these to the KB:


"Socrates is a plant"

[This is the bit that I question:  we don't do the opposite of forward 
chaining at this step].


5) Now repeat to find all rules that allow us to conclude that x is a 
plant".  For this set of " ... then x is a plant" rules, go back and 
repeat the loop from step 2 onwards.  Then if this does not work, ...



Well, you can imagine the rest of the story: keep iterating until you 
can prove or disprove that Socrates is mortal.


I cannot seem to reconcile this with your statement above that backward 
chaining simply involves the opposite of forward chaining, namely adding 
antecedents to the KB and working backwards.
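
The walkthrough above can be sketched as a goal-directed prover that adds
nothing to the KB (an editor's toy illustration: the rule set is abridged
and the string matching is invented):

    # Rules read: "if antecedent then consequent".
    rules = [
        ("x is a living creature", "x is mortal"),
        ("x is a goat", "x is a living creature"),
        ("x is a rock", "x is not mortal"),
    ]
    facts = {"Socrates is a living creature"}

    def prove(goal):
        """True iff `goal` follows from the facts via the rules.  The KB
        is queried at every step, but nothing is ever added to it."""
        if goal in facts:
            return True
        subject = goal.split(" ", 1)[0]
        for antecedent, consequent in rules:
            # Find rules whose consequent matches the goal, then try to
            # establish the corresponding antecedent as a subgoal.
            if consequent.replace("x", subject, 1) == goal:
                if prove(antecedent.replace("x", subject, 1)):
                    return True
        return False

    print(prove("Socrates is mortal"))  # True, via "is a living creature"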







To help remove any mystery that may still surround these concepts, here 
is an FWC algorithm in pseudo-code (WARNING: I'm glossing over quite a 
few details here – I'll be happy to answer questions on list or off):


   0. set loop index to 0
   1. got next rule?
 no: goto 5
   2. is rule FIRED?
 yes: goto 1
   3. is key equal to rule's antecedent?
 yes: add consequent to output, mark rule as FIRED,
  output is new key, goto 0
   4. goto 1
   5. more input data?
 yes: input data is new key, goto 0
   6. done.

To turn this into a BWC algorithm, we need only modify Step #3 to read 
as follows:


   3. is key equal to rule's consequent?
 yes: add antecedent to output, mark rule as FIRED,
 output is new key, goto 0

If you need to represent temporal dependencies in FWC/BWC systems, you 
have to express them using rules.  For example, if washer-x MUST be 
placed on bolt-y before nut-z can be screwed on, the rule base might 
look something like this:


   1. if installed(washer-x) then install(nut-z)
   2. if installed(bolt-y) then install(washer-x)
   3. if notInstalled(bolt-y) then install(bolt-y)

In this case, rule #1 won't get fired until rule #2 fires (nut-z can't 
get installed until washer-x has been installed).  Rule #2 won't get 
fired until rule #3 has fired (washer-x can't get installed until bolt-y 
has been installed). NUT-Z!  (Sorry, couldn't help it.)


To kick things off, we pass in “bolt-y” as the initial key.  This 
triggers rule #3, which will trigger rule #2, which will trigger rule 
#1. These temporal dependencies result in the following assembly 
sequence: install bolt-y, then install washer-x, and, finally, install 
nut-z.


A s

RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-15 Thread Ed Porter
Lukasz,

 

Your post below was great.

 

Your clippings from Google confirm much of the understanding that Abram
Demski was helping me reach yesterday.

 

In one of his posts Abram was discussing my prior statement that top-down
activation could be either forward or backward chaining.  He said "If the
network is passing down an expectation based on other data, informing the
lower network of what to expect, then this is forward chaining. But if the
signal is not an expectation, but more like a query "pay attention to data
that might conform/contradict this hypothesis, and notify me ASAP" then it
is backwards chaining. And it seems realistic that it can be both of these."

 

I am interpreting this quoted statement as implying the purpose of backward
chaining is to search for forward chaining paths that either confirm or
contradict a pattern of interest or that provide a path or plan to a desired
goal.  In this view the backward part of backward chaining provides no
changes in probability, only changes in attention, and it is only the
forward chaining that is found by such backward chaining that changes
probabilities.

 

Am I correct in this interpretation of what Abram said, and is that
interpretation included in what your Google clippings indicate is the
generally understood meaning of the term backward chaining?

 

Ed Porter

 

P.S. I would appreciate answers from Abram or anyone else on this list who
understands the question and has some knowledge on the subject.

 

-Original Message-
From: Lukasz Stafiniak [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, July 15, 2008 3:05 AM
To: agi@v2.listbox.com
Subject: Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE
BINDING PROBLEM"?

 

On Tue, Jul 15, 2008 at 8:01 AM, Brad Paulsen <[EMAIL PROTECTED]> wrote:
>
> The terms "forward-chaining" and "backward-chaining" when used to refer to
> reasoning strategies have absolutely nothing to do with temporal
> dependencies or levels of reasoning.  These two terms refer simply, and
> only, to the algorithms used to evaluate "if/then" rules in a rule base
> (RB).  In the FWC algorithm, the "if" part is evaluated and, if TRUE, the
> "then" part is added to the FWC engine's output.  In the BWC algorithm, the
> "then" part is evaluated and, if TRUE, the "if" part is added to the BWC
> engine's output.  It is rare, but some systems use both FWC and BWC.
>
> That's it.  Period.  No other denotations or connotations apply.

Curiously, the definition given by Abram Demski is the only one I've
been aware of until yesterday (I believe it's the one used among
theorem proving people). Let's see what googling says on "forward
chaining":

1. (Wikipedia)

2. http://www.amzi.com/ExpertSystemsInProlog/05forward.htm
"A large number of expert systems require the use of forward chaining,
or data driven inference. [...]
Data driven expert systems are different from the goal driven, or
backward chaining systems seen in the previous chapters.
The goal driven approach is practical when there are a reasonable
number of possible final answers, as in the case of a diagnostic or
identification system. The system methodically tries to prove or
disprove each possible answer, gathering the needed information as it
goes.
The data driven approach is practical when combinatorial explosion
creates a seemingly infinite number of possible right answers, such as
possible configurations of a machine."

3. http://ai.eecs.umich.edu/cogarch0/common/prop/chain.html
"Forward-chaining implies that upon assertion of new knowledge, all
relevant inductive and deductive rules are fired exhaustively,
effectively making all knowledge about the current state explicit
within the state. Forward chaining may be regarded as progress from a
known state (the original knowledge) towards a goal state(s).
Backward-chaining by an architecture means that no rules are fired
upon assertion of new knowledge. When an unknown predicate about a
known piece of knowledge is detected in an operator's condition list,
all rules relevant to the knowledge in question are fired until the
question is answered or until quiescence. Thus, backward chaining
systems normally work from a goal state back to the original state."

4. http://www.ontotext.com/inference/reasoning_strategies.html
"* Forward-chaining: to start from the known facts and to perform
the inference in an inductive fashion. This kind of reasoning can have
diverse objectives, for instance: to compute the inferred closure; to
answer a particular query; to infer a particular sort of knowledge
(e.g. the class taxonomy); etc.
* Backward-chaining: to start from a particular fact or from a
query and by means of using deductive reasoning to try to ver

RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-15 Thread Ed Porter
Jim, Sorry.  Obviously I did not understand you. Ed Porter

 

-Original Message-
From: Jim Bromer [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, July 15, 2008 9:33 AM
To: agi@v2.listbox.com
Subject: RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE
BINDING PROBLEM"?

 

Ed Porter said:

You imply you have been able to accomplish a somewhat similar implicit
representation of bindings in a much higher dimensional and presumably large
semantic space.  Unfortunately I was unable to understand from your
description how you claimed to have accomplished this. 

 

-

I never implied that I have been able to accomplish a somewhat similar
implicit representation of bindings in a much higher dimension and
presumably large semantic space.

 

I clearly stated:

"I have often talked about the use of multi-level complex methods and I see
some similarity to the ideas that they discussed to my ideas."

-and,

"The complex groupings of objects that I have in mind would have been
derived using different methods of analysis and combination and when a group
of them is called from an input analysis their use should tend to narrow the
objects that might be expected given the detection by the feature detectors.
Although I haven't expressed myself very clearly, this is very similar to
what Riesenhuber and Poggio were suggesting that their methods would be
capable of. So, yes, I think some similar methods can be used in NLP."



I clearly used the expression "in mind" just to avoid the kind of
misunderstanding that you made. I never made the exaggerated "claim" that I
had accomplished it.

The difference between having an idea "in mind" and having "claimed to have
accomplished" a goal, which the majority of participants in the group would
acknowledge is elusive, should be obvious and easy to understand.

 

I am not claiming that I have a method that would work in all semantic
space.  I would be happy to claim that I do have a theory which I believe
should show some limited extensibility in semantic space that goes beyond
other current theories.  However, I will not know for sure until I test it
and right now that looks like it would be years off.

 

I would be happy to continue the dialog if it can be conducted in a less
confrontational and more genial manner than it has been during the past
week.

 

Jim Bromer

 

 

 

Jim,

 

In the Riesenhuber and Poggio paper, the bindings that were handled implicitly
involved spatial relationships, such as an observed roughly horizontal line
substantially touching an observed roughly vertical line at their respective
ends, even though there might be other horizontal and vertical lines not
having this relationship in the input pixel space.  It achieves such
implicit bindings by having enough separate models to be able to detect, by
direct mapping, such a touching relationship between a horizontal and a
vertical line at each of many different locations in the visual input
space.

 

But the Poggio paper deals with a relatively small number of relationships
in a relatively small (160x160) low dimensional (2d) space using 23 million
models.  You imply you have been able to accomplish a somewhat similar
implicit representation of bindings in a much higher dimensional and
presumably large semantic space.  Unfortunately I was unable to understand
from your description how you claimed to have accomplished this.

 

Could you please clarify your description with regard to this point.

 

Ed Porter

 

-Original Message-
From: Jim Bromer [mailto:[EMAIL PROTECTED] 
Sent: Monday, July 14, 2008 1:38 PM
To: agi@v2.listbox.com
Subject: Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE
BINDING PROBLEM"?

 

I started reading a Riesenhuber and Poggio paper and there are some
similarities to ideas that I have considered although my ideas were
explicitly developed about computer programs that would use symbolic
information and are not neural theories.  It is interesting that Riesenhuber
and Poggio argued that "the binding problem seems to be a problem for only
some models of object recognition."  In other words, it seems that they are
claiming that the problem disappears with their model of neural cognition! 

The study of feature detectors in cats' eyes is old news and I did
incorporate that information into the development of my own theories.

I have often talked about the use of multi-level complex methods and I see
some similarity to the ideas that they discussed to my ideas.  In my model
an input would be scanned for different features using different kinds of
analysis on the input.  So then a configuration of simple features would be
derived from the scan and these could be associated with a number of complex
groups of objects that have been previously associated with the features.
Because the complex groups of objects are comple

Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-15 Thread Abram Demski
"Am I correct in this interpretation of what Abram said, and is that
interpretation included in what your Google clippings indicate is the
generally understood meaning of the term backward chaining?
Ed Porter"

It sounds to me like you are interpreting me correctly.

One important note. Lukasz quoted one source that claimed that forward
chaining can help to cut down the combinatorial explosion arising from
the huge search space in backwards-chaining. This is true in some
situations, but the opposite can also be the case; backwards-chaining
can help to focus inferences when it would be impossible to deduce
every fact that would follow by forward-chaining. It depends on the
forward and backwards branching factors. If every fact fires an
average of five rules forwards, but three backwards, then
backwards-chaining will be less expensive; 5^n vs 3^n, where n is the
length of the actual deductive chain being searched for. Simultaneous
backwards/forwards chaining that meets in the middle can be even less
expensive; with a branching factor of 2 in both directions, the search
time goes down from 2^n for forward or backward chaining to 2^(n/2 +
1).
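To make the arithmetic concrete, here is a back-of-the-envelope sketch in
Python (my own toy accounting: it counts node expansions only, ignoring
duplicate detection and constant factors):

    def chain_cost(branching, depth):
        # Nodes expanded by unidirectional chaining to the given depth.
        return branching ** depth

    n = 10                    # length of the deductive chain searched for
    print(chain_cost(5, n))   # forward, 5 rules per fact:  9765625
    print(chain_cost(3, n))   # backward, 3 rules per goal: 59049

    # Meeting in the middle with branching factor 2 both ways: two
    # half-depth searches, 2 * 2**(n//2) = 2**(n//2 + 1) = 64,
    # versus 2**10 = 1024 for one-directional chaining.
    print(2 * chain_cost(2, n // 2))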

On the other hand, what we want the system to do makes a big
difference. If we really do have a single goal-sentence we want to
prove or disprove, the above arguments hold. But if we want to deduce
all consequences of our current knowledge, we should use forward
chaining regardless of branching factors and so on.

Most of this stuff should be in any intro AI textbook.

--Abram

On Tue, Jul 15, 2008 at 11:08 AM, Ed Porter <[EMAIL PROTECTED]> wrote:
> Lukasz,
>
> Your post below was great.
>
> Your clippings from Google confirm much of the understanding that Abram
> Demski was helping me reach yesterday.
>
> In one of his posts Abram was discussing my prior statement that top-down
> activation could be either forward or backward chaining.  He said, "If the
> network is passing down an expectation based on other data, informing the
> lower network of what to expect, then this is forward chaining. But if the
> signal is not an expectation, but more like a query 'pay attention to data
> that might conform/contradict this hypothesis, and notify me ASAP', then it
> is backwards chaining. And it seems realistic that it can be both of these."
>
> I am interpreting this quoted statement as implying that the purpose of
> backward chaining is to search for forward chaining paths that either
> confirm or contradict a pattern of interest, or that provide a path or plan
> to a desired goal.  In this view the backward part of backward chaining
> provides no changes in probability, only changes in attention, and it is
> only the forward chaining that is found by such backward chaining that
> changes probabilities.
>
> Am I correct in this interpretation of what Abram said, and is that
> interpretation included in what your Google clippings indicate is the
> generally understood meaning of the term backward chaining?
>
> Ed Porter
>
> P.S. I would appreciate answers from Abram or anyone else on this list who
> understands the question and has some knowledge of the subject.
>
> -----Original Message-----
> From: Lukasz Stafiniak [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, July 15, 2008 3:05 AM
> To: agi@v2.listbox.com
> Subject: Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE
> BINDING PROBLEM"?
>
> On Tue, Jul 15, 2008 at 8:01 AM, Brad Paulsen <[EMAIL PROTECTED]> wrote:
>
>> The terms "forward-chaining" and "backward-chaining" when used to refer to
>> reasoning strategies have absolutely nothing to do with temporal
>> dependencies or levels of reasoning.  These two terms refer simply, and
>> only, to the algorithms used to evaluate "if/then" rules in a rule base
>> (RB).  In the FWC algorithm, the "if" part is evaluated and, if TRUE, the
>> "then" part is added to the FWC engine's output.  In the BWC algorithm, the
>> "then" part is evaluated and, if TRUE, the "if" part is added to the BWC
>> engine's output.  It is rare, but some systems use both FWC and BWC.
>>
>> That's it.  Period.  No other denotations or connotations apply.
>
> Curiously, the definition put by Abram Demski is the only one I've
> been aware of until yesterday (I believe it's the one used among
> theorem proving people). Let's see what googling says on "forward
> chaining":
>
> 1. (Wikipedia)
>
> 2. http://www.amzi.com/ExpertSystemsInProl

RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-15 Thread Ed Porter
Abram,

Thanks for the info.  The concept that the only purpose of backward
chaining is to find appropriate forward chaining paths is an important
clarification of my understanding.

Ed Porter


Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-15 Thread Mike Archbold
>
> 4. http://www.ontotext.com/inference/reasoning_strategies.html
> "* Forward-chaining: to start from the known facts and to perform
> the inference in an inductive fashion. This kind of reasoning can have
> diverse objectives, for instance: to compute the inferred closure; to
> answer a particular query; to infer a particular sort of knowledge
> (e.g. the class taxonomy); etc.
> * Backward-chaining: to start from a particular fact or from a
> query and by means of using deductive reasoning to try to verify that
> fact or to obtain all possible results of the query. Typically, the
> reasoner decomposes the fact into simpler facts that can be found in
> the knowledge base or transforms it into alternative facts that can be
> proven applying further recursive transformations. "
>


A system like CLIPS is forward chaining, but there is no induction going
on.  Whether forward- or backward-chaining, it is deduction as far as I've
ever heard.  With induction we are implying repeated observations that lead
to some new knowledge (i.e., some new rule in this case).  That was my
understanding anyway, and I'm no PhD scientist.
Mike Archbold




Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-16 Thread Brad Paulsen



Richard Loosemore wrote:
> Brad Paulsen wrote:
>> I've been following this thread pretty much since the beginning.  I
>> hope I didn't miss anything subtle.  You'll let me know if I have, I'm
>> sure. ;=)
>>
>> It appears the need for temporal dependencies or different levels of
>> reasoning has been conflated with the terms "forward-chaining" (FWC)
>> and "backward-chaining" (BWC), which are typically used to describe
>> different rule base evaluation algorithms used by expert systems.
>>
>> The terms “forward-chaining” and “backward-chaining” when used to
>> refer to reasoning strategies have absolutely nothing to do with
>> temporal dependencies or levels of reasoning.  These two terms refer
>> simply, and only, to the algorithms used to evaluate “if/then” rules
>> in a rule base (RB).  In the FWC algorithm, the “if” part is evaluated
>> and, if TRUE, the “then” part is added to the FWC engine's output.  In
>> the BWC algorithm, the “then” part is evaluated and, if TRUE, the “if”
>> part is added to the BWC engine's output.  It is rare, but some
>> systems use both FWC and BWC.
>>
>> That's it.  Period.  No other denotations or connotations apply.
>
> Whooaa there.  Something not right here.
>
> Backward chaining is about starting with a goal statement that you would
> like to prove, but at the beginning it is just a hypothesis.  In BWC you
> go about proving the statement by trying to find facts that might
> support it.  You would not start from the statement and then add
> knowledge to your knowledgebase that is consistent with it.


Richard,

I really don't know where you got the idea my descriptions or algorithm
added “...knowledge to your (the “royal” you, I presume) knowledgebase...”.
Maybe you misunderstood my use of the term “output.”  Another (perhaps
better) word for output would be “result” or “action.”  I've also heard
FWC/BWC engine output referred to as the “blackboard.”

By definition, an expert system rule base contains the total sum of the
knowledge of a human expert(s) in a particular domain at a given point in
time.  When you use it, that's what you expect to get.  You don't expect the
system to modify the rule base at runtime.  If everything you need isn't in
the rule base, you need to talk to the knowledge engineer. I don't know of
any expert system that adds rules to its rule base (i.e., becomes “more
expert”) at runtime.  I'm not saying necessarily that this couldn't be done,
but I've never seen it.

I have more to say about your counterexample below, but I don't want
this thread to devolve into a critique of 1980's classic AI models.

The main purpose I posted to this thread was that I was seeing
inaccurate conclusions being drawn based on a lack of understanding
of how the terms “backward” and “forward” chaining related to temporal
dependencies and hierarchical logic constructs.  There is no relation.
Using forward chaining has nothing to do with “forward in time” or
“down a level in the hierarchy.”  Nor does backward chaining have
anything to do with “backward in time” or “up a level in the hierarchy.”
These terms describe particular search algorithms used in expert system
engines (since, at least, the mid-1980s).  Definitions vary in emphasis,
such as the three someone posted to this thread, but they all refer to
the same critters.

If one wishes to express temporal dependencies or hierarchical levels of
logic in these types of systems, one needs to encode these in the rules.
I believe I even gave an example of a rule base containing temporal and
hierarchical-conditioned rules.
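To make the FWC/BWC evaluation loops described above concrete, here is a
minimal Python sketch (my own toy, not Brad's engine or any production
algorithm; it assumes an acyclic rule base, and borrows rules in the spirit
of the mortality examples quoted below):

    # Rules are (antecedents, consequent) pairs; the "blackboard" is the
    # set of conclusions produced so far.
    RULES = [
        ({"x is a living creature"}, "x is mortal"),
        ({"x is a goat"}, "x is a living creature"),
    ]

    def forward_chain(facts):
        # FWC: fire every rule whose "if" part is satisfied; add the
        # "then" part to the output; repeat until nothing new appears.
        known = set(facts)
        changed = True
        while changed:
            changed = False
            for antecedents, consequent in RULES:
                if antecedents <= known and consequent not in known:
                    known.add(consequent)
                    changed = True
        return known - facts    # the engine's output, not new KB facts

    def backward_chain(goal, facts):
        # BWC: goal-directed; recurse on rules whose "then" part matches.
        # Note that nothing is ever added to `facts`.
        if goal in facts:
            return True
        return any(consequent == goal and
                   all(backward_chain(a, facts) for a in antecedents)
                   for antecedents, consequent in RULES)

    print(forward_chain({"x is a goat"}))
    # -> {'x is a living creature', 'x is mortal'}
    print(backward_chain("x is mortal", {"x is a goat"}))   # -> True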

> So for example, if your goal is to prove that Socrates is mortal, then
> your above description of BWC would cause the following to occur
>
> 1) Does any rule allow us to conclude that x is/is not mortal?
>
> 2) Answer: yes, the following rules allow us to do that:
>
> "If x is a plant, then x is mortal"
> "If x is a rock, then x is not mortal"
> "If x is a robot, then x is not mortal"
> "If x lives in a post-singularity era, then x is not mortal"
> "If x is a slug, then x is mortal"
> "If x is a japanese beetle, then x is mortal"
> "If x is a side of beef, then x is mortal"
> "If x is a screwdriver, then x is not mortal"
> "If x is a god, then x is not mortal"
> "If x is a living creature, then x is mortal"
> "If x is a goat, then x is mortal"
> "If x is a parrot in a Dead Parrot Sketch, then x is mortal"
>
> 3) Ask the knowledge base if Socrates is a plant, if Socrates is a rock,
> etc., etc., working through the above list.
>
> 3) [According to your version of BWC, if I understand you aright] Okay,
> if we cannot find any facts in the KB that say that Socrates is known to
> be one of these things, then add the first of these to the KB:
>
> "Socrates is a plant"
>
> [This is the bit that I question:  we don't do the opposite of forward
> chaining at this step].
>
> 4) Now repeat to find all rules that allow us to conclude that "x is a
> plant".  For this set of "... then x is a plant" rules, go back and
> repeat the loop from step 2 onwards.  Then if this does not work,


Well, yo

Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-16 Thread Mike Tintner
Brad: By definition, an expert system rule base contains the total sum of the
knowledge of a human expert(s) in a particular domain at a given point in
time.  When you use it, that's what you expect to get.  You don't expect the
system to modify the rule base at runtime.  If everything you need isn't in
the rule base, you need to talk to the knowledge engineer. I don't know of
any expert system that adds rules to its rule base (i.e., becomes “more
expert”) at runtime.  I'm not saying necessarily that this couldn't be done,
but I've never seen it.

In which case - (thanks BTW for a v. helpful post) - are we talking entirely
here about narrow AI? Sorry if I've missed this, but has anyone been
discussing how to provide a flexible, evolving set of rules for behaviour?
That's the crux of AGI, isn't it? Something at least as flexible as a
country's Constitution and Body of Laws. What ideas are on offer here?







Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-16 Thread Abram Demski
For what it is worth, I agree with Richard Loosemore in that your
first description was a bit ambiguous, and it sounded like you were
saying that backward chaining would add facts to the knowledge base,
which would be wrong. But you've cleared up the ambiguity.


Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-16 Thread Abram Demski
The way I see it, on the expert systems front, Bayesian networks
replaced the algorithms currently being discussed.  They are more
flexible, since they are probabilistic, and they also have associated
learning algorithms.  For nonprobabilistic systems, the resolution
algorithm is more generally applicable (it deals with any logical
statement it is given, rather than only with if-then rules).
Resolution subsumes both forward and backward chaining: to forward
chain, we simply resolve statements in the database; but to backward
chain, we add the negation of the query to the database and try to
derive a contradiction by resolving statements (thus proving the query
statement by reductio ad absurdum).
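A minimal propositional sketch of that refutation loop (my own toy; real
resolution provers add unification over variables and far better search):

    # Clauses are frozensets of literals; "~p" is the negation of "p".
    def negate(lit):
        return lit[1:] if lit.startswith("~") else "~" + lit

    def resolve(c1, c2):
        # All resolvents of two clauses (cancel one complementary pair).
        return [(c1 - {lit}) | (c2 - {negate(lit)})
                for lit in c1 if negate(lit) in c2]

    def refutes(clauses, query):
        # Add the negated query, then resolve until the empty clause
        # appears (contradiction) or no new clauses can be derived.
        db = set(clauses) | {frozenset({negate(query)})}
        while True:
            new = set()
            for c1 in db:
                for c2 in db:
                    for r in resolve(c1, c2):
                        if not r:
                            return True    # empty clause: query proven
                        new.add(frozenset(r))
            if new <= db:
                return False               # fixpoint reached, no proof
            db |= new

    kb = [frozenset({"~living", "mortal"}),   # if living, then mortal
          frozenset({"living"})]              # the fact
    print(refutes(kb, "mortal"))              # -> True, by reductio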

The most AGI-oriented remnant of the rule-based system period is SOAR
(http://sitemaker.umich.edu/soar).  It does add new rules to its
system, but they are summaries of old rules (to speed later
inference).  SOAR recently added reinforcement learning capability, but
it doesn't use it to generate new rules, as far as I know.

On Wed, Jul 16, 2008 at 7:16 AM, Mike Tintner <[EMAIL PROTECTED]> wrote:
> Brad: By definition, an expert system rule base contains the total sum of the
> knowledge of a human expert(s) in a particular domain at a given point in
> time.  When you use it, that's what you expect to get.  You don't expect the
> system to modify the rule base at runtime.  If everything you need isn't in
> the rule base, you need to talk to the knowledge engineer. I don't know of
> any expert system that adds rules to its rule base (i.e., becomes "more
> expert") at runtime.  I'm not saying necessarily that this couldn't be done,
> but I've never seen it.
>
> In which case - (thanks BTW for a v. helpful post) - are we talking entirely
> here about narrow AI? Sorry if I've missed this, but has anyone been
> discussing how to provide a flexible, evolving set of rules for behaviour?
> That's the crux of AGI, isn't it? Something at least as flexible as a
> country's Constitution and Body of Laws. What ideas are on offer here?


Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-16 Thread Richard Loosemore

Abram Demski wrote:
> For what it is worth, I agree with Richard Loosemore in that your
> first description was a bit ambiguous, and it sounded like you were
> saying that backward chaining would add facts to the knowledge base,
> which would be wrong. But you've cleared up the ambiguity.

I concur:  I was simply trying to clear up an ambiguity in the phrasing.


Richard Loosemore

Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-17 Thread Brad Paulsen

Mike,

If memory serves, this thread started out as a discussion about binding in an
AGI context.  At some point, the terms "forward-chaining" and
"backward-chaining" were brought up and then got used in a weird way (I
thought) as the discussion turned to temporal dependencies and hierarchical
logic constructs.  When it appeared no one else was going to clear up the
ambiguities, I threw in my two cents.


I made a spectacularly good living in the late 1980's building expert system 
engines and knowledge engineering front-ends, so I think I know a thing or two 
about that "narrow AI" technology.  Funny thing, though, at that time, the trade 
press were saying expert systems were no longer "real AI."  They worked so well 
at what they did, the "mystery" wore off.  Ah, the price of success in AI. ;-)


What makes the algorithms used in expert system engines less than suitable for 
AGI is their static ("snapshot") nature and "crispness."  AGI really needs some 
form of dynamic programming, probabilistic (or fuzzy) rules (such as those built 
using Bayes nets or hidden Markov models), and runtime feedback.
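As a minimal illustration of that probabilistic direction (my own sketch
with made-up numbers, not a full Bayes net): a "soft" rule carries
likelihoods instead of firing crisply, and belief is updated by Bayes' rule
at runtime.

    def bayes_update(prior, p_e_given_h, p_e_given_not_h):
        # Posterior P(H|E) from the prior P(H) and the rule's two
        # likelihoods P(E|H) and P(E|~H).
        num = p_e_given_h * prior
        return num / (num + p_e_given_not_h * (1.0 - prior))

    prior = 0.10    # belief in the hypothesis before the evidence
    print(round(bayes_update(prior, 0.80, 0.20), 3))
    # -> 0.308: the evidence strengthens, but does not prove, the conclusion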


Thanks for the kind words.

Cheers,

Brad

Mike Tintner wrote:
> Brad: By definition, an expert system rule base contains the total sum of the
> knowledge of a human expert(s) in a particular domain at a given point in
> time.  When you use it, that's what you expect to get.  You don't expect the
> system to modify the rule base at runtime.  If everything you need isn't in
> the rule base, you need to talk to the knowledge engineer. I don't know of
> any expert system that adds rules to its rule base (i.e., becomes “more
> expert”) at runtime.  I'm not saying necessarily that this couldn't be done,
> but I've never seen it.
>
> In which case - (thanks BTW for a v. helpful post) - are we talking
> entirely here about narrow AI? Sorry if I've missed this, but has anyone
> been discussing how to provide a flexible, evolving set of rules for
> behaviour? That's the crux of AGI, isn't it? Something at least as
> flexible as a country's Constitution and Body of Laws. What ideas are
> on offer here?



