Benjamin Goertzel wrote:


On Nov 13, 2007 2:37 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:


    Ben,

    Unfortunately what you say below is tangential to my point, which is
    what happens when you reach the stage where you cannot allow any more
    vagueness or subjective interpretation of the qualifiers, because you
    have to force the system to do its own grounding, and hence its own
    interpretation.



I don't see why you talk about "forcing the system to do its own grounding" --
the probabilities in the system are grounded in the first place, as they
are calculated based on experience.

The system observes, records what it sees, abstracts from it, and chooses
actions that it guesses will fulfill its goals. Its goals are ultimately grounded in in-built feeling-evaluation routines, measuring stuff like "amount of novelty observed",
"amount of food in system", etc.

So, the system sees and then acts ... and the concepts it forms and uses
are created/used based on their utility in deriving appropriate actions.

There is no symbol-grounding problem except in the minds of people who
are trying to interpret what the system does, and get confused.  Any symbol
used within the system, and any probability calculated by the system, are
directly grounded in the system's experience.

There is nothing vague about an observation like "Bob_Yifu was observed
at time-stamp 599933322", or a fact "Command 'wiggle ear' was sent
at time-stamp 544444".  These perceptions and actions are the root of the
probabilities the system calculates, and need no further grounding.
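As a toy sketch of this kind of experiential grounding (purely illustrative, not Novamente code; the names Observation and estimate_probability are made up for this example):

from dataclasses import dataclass

@dataclass
class Observation:
    predicate: str      # e.g. "observed(Bob_Yifu)" or "sent(wiggle_ear)"
    timestamp: int      # e.g. 599933322

def estimate_probability(observations, predicate, context=None):
    """Frequency estimate of `predicate`, optionally restricted to records
    satisfying `context`. The 'grounding' is just the record set itself."""
    pool = observations if context is None else [o for o in observations if context(o)]
    if not pool:
        return None  # no evidence yet
    hits = sum(1 for o in pool if o.predicate == predicate)
    return hits / len(pool)

log = [
    Observation("observed(Bob_Yifu)", 599933322),
    Observation("sent(wiggle_ear)", 544444),
    Observation("observed(Bob_Yifu)", 600000001),
]
print(estimate_probability(log, "observed(Bob_Yifu)"))  # 2/3
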

    What you gave below was a sketch of some more elaborate 'qualifier'
    mechanisms.  But I described the process of generating more and more
    elaborate qualifier mechanisms in the body of the essay, and said why
    this process was of no help in resolving the issue.


So, if a system can achieve its goals by choosing procedures that it thinks
are likely to achieve them, based on the knowledge it has gathered via its perceived experience -- why do you think it has a problem?
I don't really understand your point, I guess.  I thought I did -- I thought
your point was that precisely specifying the nature of a conditional probability
is a rat's nest of complexity.  And my response was basically that in
Novamente we don't need to do that, because we define conditional probabilities
based on the system's own knowledge base, i.e.

Inheritance A B <.8>

means

"If A and B were reasoned about a lot, then A would (as measred by an weighted
average) have 80% of the relationships that B does"
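
As a rough sketch of that reading (illustrative only, not Novamente's actual truth-value code; the function name and the equal-weight default are assumptions):

def inheritance_strength(relations_of_a, relations_of_b, weight=None):
    """Weighted fraction of B's recorded relationships that A also has.
    weight: optional dict saying how much each relationship has been
    'reasoned about'; defaults to equal weights."""
    if not relations_of_b:
        return None  # no evidence about B at all
    w = weight or {r: 1.0 for r in relations_of_b}
    total = sum(w[r] for r in relations_of_b)
    shared = sum(w[r] for r in relations_of_b if r in relations_of_a)
    return shared / total

# e.g. if A shares 4 of B's 5 equally weighted relationships:
print(inheritance_strength({"r1", "r2", "r3", "r4"},
                           {"r1", "r2", "r3", "r4", "r5"}))  # 0.8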

But apparently you were making some other point, which I did not grok, sorry...

Anyway, though, Novamente does NOT require logical relations of escalating
precision and complexity to carry out reasoning, which is one thing you seemed
to be assuming in your post.

You are, in essence, using one of the trivial versions of what symbol grounding is all about.

The complaint is not "your symbols are not connected to experience". Everyone and their mother has an AI system that could be connected to real-world input. The simple act of connecting to the real world is NOT the core problem.

If you have an AGI system in which the system itself is allowed to do all the work of building AND interpreting all of its symbols, I don't have any issues with it.

Where I do have an issue is with a system which is supposed to be doing the above experiential pickup, and where the symbols are ALSO supposed to be interpretable by human programmers who are looking at things like probability values attached to facts. When a programmer looks at a situation like

> ContextLink <.7,.8>
>      home
>      InheritanceLink Bob_Yifu friend

... and then follows this with a comment like:

> which suggests that Bob is less friendly at home than
> in general.

... they have interpreted the meaning of that statement using their human knowledge.

So here I am, looking at this situation, and I see:

  ---- AGI system interpretation (implicit in the system's use of it)
  ---- Human programmer interpretation

and I ask myself: which one of these is the real interpretation?

It matters, because they do not necessarily match up. The human programmer's interpretation has a massive impact on the system, because all the inference and other mechanisms are built around the assumption that the probabilities "mean" a certain set of things. You manipulate those p values, and your manipulations are based on assumptions about what they mean.

But if the system is allowed to pick up its own knowledge from the environment, the implicit "meaning" of those p values will not necessarily match the human interpretation. As I say, the meaning is then implicit in the way the system *uses* those p values (and other stuff).

It is a nontrivial question whether the implicit system interpretation does indeed match the human interpretation built into the inference mechanisms.
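
As a toy sketch of the kind of check that question calls for (all names and numbers are illustrative, reusing the ContextLink example above; this is not a claim about Novamente's internals):

def implicit_strength(experience, concept, context):
    """Value implied by the system's own records: fraction of episodes in
    `context` in which `concept` held."""
    in_context = [e for e in experience if context in e["tags"]]
    if not in_context:
        return None
    return sum(1 for e in in_context if concept in e["tags"]) / len(in_context)

stored_strength = 0.7   # what the programmer reads off ContextLink <.7,.8>

episodes = [
    {"tags": {"home", "friend"}},
    {"tags": {"home"}},
    {"tags": {"home", "friend"}},
    {"tags": {"work", "friend"}},
]

system_value = implicit_strength(episodes, "friend", "home")
print(stored_strength, system_value)   # these need not agree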

In order to completely ground the system, you need to let the system build its own symbols, yes, but that is only half the story: if you still have a large component of the system that follows a programmer-imposed interpretation of things like probability values attached to facts, you have TWO sets of symbol-using mechanisms going on, and the system is not properly grounded (it is using both grounded and ungrounded symbols within one mechanism).



Richard Loosemore
