Ed,

I only have time to look at one small part of your post today...


Ed Porter wrote:
The “Does Mary own a book?” example, once the own relationship is activated with Mary in the owner slot and “a book” in the owned-object slot, spreads “?” activation, which asks if there any related relationships or instances or generalization related to them support the statement that Mary owns a book. The activation causes instances of the “give” relationship in which Mary was a recipient and a book was the think given to be activated, since if Mary was given a book that would indicate she owned a book. Such and instance is found, tending to confirm that Mary does own a book, called book-17 in the example, which was given to her by John.

The “John fell in the hallway” example, when told that (1) “John fell in the hallway”, (2) “Tom had cleaned it”, and (3) “He was hurt”, automatically implies that it was John who was hurt, and that the floor in the hallway was probably wet after Tom cleaned it, and that John slipped and fell when walking in the wet hallway. Tell me how you could perform the type of inference and cognition shown in these two Shruti examples without some form of binding?

I for one cannot figure out how to do this with anything like Poggio’s type of binding that would fit into a human brain.

Okay, so the question is what happens if the system is asked "Does Mary own a book?", given that the system does in fact know, as a result of some previous situation, that Mary received a gift which was a book.

How does the system achieve the "binding" that links the books referred to in the two situations, so that the question can be answered? This is what would be called a "binding problem".
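To make the problem concrete, here is a minimal sketch of the two pieces of knowledge involved, using hypothetical frame-like dicts (the name "book-17" comes from the example above; the representation itself is my own illustrative assumption, not anyone's actual system):

```python
# Fact stored from a previous situation: John gave Mary a book.
give_event = {
    "relation": "give",
    "giver": "John",
    "recipient": "Mary",
    "object": "book-17",       # a specific book instance
    "object_type": "book",
}

# The question "Does Mary own a book?" introduces a *new*,
# hypothetical book token, distinct from book-17:
question = {
    "relation": "own",
    "owner": "Mary",
    "object": "?x",            # unbound variable
    "object_type": "book",
}

# The binding problem: nothing in the representation yet links
# the variable "?x" to the stored instance "book-17".
```

The two fragments share nothing but the type "book" and the name "Mary"; the whole question is what makes the system decide that these two tokens should be connected at all.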

First, you have to notice that there are two types of answer to this question. One is (speaking very loosely) "deterministic" and one is (even more loosely) "emergent".

The deterministic answer would find some kind of mechanism that obviously, or clearly, results in a connection being established between the two book instances - the book given as a gift, and the hypothetical book mentioned in the question about whether she owns a book. A deterministic answer would *convince* us that the two instances must become connected, as a result of the semantic (or other) properties of the two pieces of knowledge.

Now I must repeat what I said before about some (perhaps many?) claimed solutions to the binding problem: these claimed solutions often establish the *mechanism* by which a connection could be established IF THE TWO ITEMS WANT TO TALK TO EACH OTHER. In other words, what these people (e.g. Shastri and Ajjanagadde) do is propose a two-step solution: (1) the two instances magically decide that they need to get hooked up, and (2) then, some mechanism must allow these two to make contact and set up a line to one another. Think of it this way: (1) You decide at this moment that you need to call Britney Spears, and (2) You need some mechanism whereby you can actually establish a phone connection that goes from your place to Britney's place.

The crazy part of this "solution" to the binding problem is that people often make the quiet and invisible assumption that (1) is dealt with (the two items KNOW that they need to talk), and then they go on to work out a fabulously powerful way (e.g. using neural synchronisation) to get part (2) to happen. The reason this is crazy is that the first part IS the binding problem, not the second part! The second phase (the practical aspects of making the phone call get through) is just boring machinery. By the time the two parties have decided that they need to hook up, the show is already over... the binding problem has been solved. But if you look at papers describing these so-called solutions to the binding problem you will find that the first part is never talked about.

At least, that was true of the S & A paper, and at least some of the papers that followed it, so I gave up following that thread in utter disgust.

It is very important to break through this confusion and find out exactly why the two relevant entities would decide to talk to each other. Solving any other aspect of the problem is not of any value.

Now, going back to your question about how it would happen: if you look for a deterministic solution to the problem, I am not sure you can come up with a general answer. Whereas there is a nice, obvious solution to the question "Is Socrates mortal?" given the facts "Socrates is a man" and "All men are mortal", it is not at all clear how to do more complex forms of binding without simply doing massive searches. Or rather, it is not clear how you can *guarantee* the finding of a solution.
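The Socrates case really is trivially deterministic. A toy forward-chaining sketch (the (predicate, subject) fact format is my own assumption, chosen only for illustration):

```python
# Toy deterministic inference for the Socrates syllogism.
facts = {("man", "Socrates")}
rules = [(("man", None), ("mortal", None))]   # "All men are mortal"

def forward_chain(facts, rules):
    """Apply every rule to every matching fact until nothing new appears."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (pre_pred, _), (post_pred, _) in rules:
            for pred, subj in list(derived):
                if pred == pre_pred and (post_pred, subj) not in derived:
                    derived.add((post_pred, subj))
                    changed = True
    return derived

all_facts = forward_chain(facts, rules)
print(("mortal", "Socrates") in all_facts)   # True
```

The point is that this guaranteed-to-terminate style does not obviously scale to the Mary/book case, where the relevant rule and the relevant fact first have to find each other among everything else the system knows.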

Basically, I think the best you can do is to use various heuristics to cut down the computational problem of proving that the two books can relate. For example, the system can learn the general rule "If you receive a gift of X, then you subsequently own X", and then it can work backwards from all facts that allow you to conclude ownership, to see if one fulfills the requirement. You then also have to deal with problems such as receiving a gift and then giving it away or losing it. The question is, do you search through all of those subjunctive worlds? A nightmare, in general, to do an exhaustive search.
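That backward-working heuristic can be sketched in a few lines. Everything here is a hypothetical illustration (the fact format, the single gift rule, the names) of working back from rules that conclude ownership:

```python
# Known facts, in a hypothetical (relation, slots) format.
known_facts = [
    ("give", {"giver": "John", "recipient": "Mary", "object": "book-17"}),
    ("clean", {"agent": "Tom", "object": "hallway"}),
]

def supports_ownership(person, facts):
    """Work backwards from the rule 'a gift of X implies owning X':
    return the objects that `person` plausibly owns via that rule."""
    owned = []
    for relation, slots in facts:
        if relation == "give" and slots.get("recipient") == person:
            owned.append(slots["object"])
    return owned

print(supports_ownership("Mary", known_facts))   # ['book-17']
```

Note what the sketch leaves out: the defeaters (she later gave it away, lost it, etc.) would each add another backward rule and another layer of search through subjunctive worlds, which is exactly the explosion described above.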

So if a deterministic answer is not the way to go, what about the alternative, which I have called the "emergent" answer?

This is not so very different from finding a good heuristic, but the philosophy is very very different. If the system is continually building models of the world, using constraints among as many aspects of those models as possible, and applying as much pragmatic, real-world general knowledge as it can, then I believe that such a system would quickly home in on a model in which the question "Does Mary own a book?" was sitting alongside a model describing a recent fact such as "Mary got a book as a gift", and the two would gel.

How *exactly* would they gel? Now that is where the philosophical difference comes in. I cannot give any reason why the two models will find each other and make a strong mutual fit! I do not claim to be able to prove that the binding will take place. Instead, what I claim is that as a matter of pure, empirical fact, a system with a rich enough set of contextual facts, and with a rich enough model-building mechanism, will simply tend to build models in which (most of the time) the bindings will get sorted out.

There is a bit more to the story than that, but you have to understand that in this "emergent" (or, to be precise, "complex system") answer to the question, there is no guarantee that binding will happen. The binding problem in effect disappears - it does not need to be explicitly solved because it simply never arises. There is no specific mechanism designed to construct bindings (although there are lots of small mechanisms that enforce constraints), there is only a general style of computation, which is the relaxation-of-constraints style.
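A toy illustration of what the relaxation-of-constraints style of computation looks like, stripped to the bone. All of the numbers, the candidates, and the constraint values are illustrative assumptions; the only point is that no mechanism explicitly "constructs a binding" - the winner just emerges from repeated small nudges:

```python
# Candidate answers to "which object is the book Mary owns?",
# starting with neutral activation levels.
candidates = {"book-17": 0.5, "unrelated-object": 0.5}

# Soft constraints: net evidence for or against each candidate
# (book-17 is a book and Mary received it; the other has no such links).
constraints = {
    "book-17": +0.2,
    "unrelated-object": -0.2,
}

# Relax: repeatedly nudge each activation by its constraint pressure,
# clamped to [0, 1], until the little network settles.
for _ in range(20):
    for cand in candidates:
        candidates[cand] += 0.5 * constraints[cand]
        candidates[cand] = min(1.0, max(0.0, candidates[cand]))

winner = max(candidates, key=candidates.get)
print(winner)   # book-17
```

In a real system there would be thousands of such soft constraints coming from the surrounding model of the situation, and no guarantee of settling on the right answer - only the empirical tendency to do so.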

Overall, then, I believe that any attempt to find a guaranteed solution, or an explicit mechanism, that causes bindings to be established is actually a folly: guarantees are not possible, and in practice the people who offer this style of explanation never do supply the guarantees anyway, but just solve peripheral problems.



That is my view of the binding problem. It is a variant of the general idea that things happen because of complexity (although that is putting it so crudely as to almost confuse the issue).



Richard Loosemore
-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=106510220-47b225
Powered by Listbox: http://www.listbox.com