Thanks for the critique. Replies follow...

On Sat, Sep 20, 2008 at 8:20 PM, Pei Wang <[EMAIL PROTECTED]> wrote:
> On Sat, Sep 20, 2008 at 2:22 PM, Abram Demski <[EMAIL PROTECTED]> wrote:
[...]
> The key, therefore, is whether NARS can be FULLY treated as an
> application of probability theory, by following the probability
> axioms, and only adding justifiable consistent assumptions when
> necessary.

Yes, that's the main question. Also, though, if the answer is no, it is
potentially important to figure out why.

[...]
> I assume by "treat NARS as probability" you mean "to treat the
> Frequency in NARS as a measurement following the axioms of probability
> theory". I mentioned this because there is another measurement in
> NARS, Expectation (which is derived from Frequency and Confidence),
> which is also intuitively similar to probability.

Yes, you are right... at least so far, I've only been looking at
frequency + confidence. Getting expectation from that does not look
like it violates any laws.
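
To spell that out: using the usual NAL definitions (a sketch; w_plus
and w are the positive and total evidence counts, and k is the
evidential horizon constant, commonly 1):

    # Sketch of the standard NAL truth measures in evidence terms.
    # k is the "evidential horizon" personality parameter.
    def truth_from_evidence(w_plus, w, k=1.0):
        f = w_plus / w    # frequency: proportion of positive evidence
        c = w / (w + k)   # confidence: stability of f under new evidence
        return f, c

    def expectation(f, c):
        # Interpolates between f (at full confidence) and the maximally
        # ignorant value 0.5 (at zero confidence).
        return c * (f - 0.5) + 0.5

Since expectation is a deterministic function of frequency and
confidence, it adds no new axioms of its own.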

[...]
>
>> Then, we define "probabilistic inheritance", which carries a
>> probability that a given property of B will be inherited by A and that
>> a given instance of A will be inherited by B.
>
> There is a tricky issue here. When evaluating the truth value of
> A-->B, NARS doesn't only check "properties" and "instances", but also
> checks "supersets" and "subsets", intuitively speaking. For example,
> when the system is told that "Swans are birds" and "Swans fly", it
> derives "Birds fly" by induction. In this process "swan" is counted as
> one piece of evidence, rather than as a set of instances. How many swans
> the system knows doesn't matter in this step. That is why in the
> definitions I use "extension/intension", not "instance/property",
> because the latter are just special cases of the former. Actually, the
> truth value of A-->B measures how often the two terms can be substituted
> for each other (in different ways), not how much one set is included in
> the other, which is the usual probabilistic reading of an inheritance.
>
> This is one reason why NARS does not define "node probability".

Yes, I understand this. I should have worded myself more carefully.
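
To restate it in my own words (a sketch; the set names are mine, and I
am treating extensions and intensions as finite sets purely for
illustration):

    # Sketch of evidence counting for S --> P as I understand the NARS
    # definitions: positive evidence is the shared extension plus the
    # shared intension; total evidence is S's extension plus P's
    # intension.
    def inheritance_truth(S_ext, S_int, P_ext, P_int, k=1.0):
        w_plus = len(S_ext & P_ext) + len(S_int & P_int)
        w = len(S_ext) + len(P_int)
        return w_plus / w, w / (w + k)   # (frequency, confidence)

So in the swan example, "swan" is a single element of the relevant
sets, and it contributes exactly one piece of evidence to "Birds fly"
no matter how many swans the system knows about.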

>
>> Probabilistic
>> inheritance behaves somewhat like the full NARS inheritance: if we
>> reason about likelihoods (the probability of the data assuming (A
>> prob_inh B) = x), the math is actually the same EXCEPT we can only use
>> primitive inheritance as evidence, so we can't spread evidence around
>> the network by (1) treating prob_inh with high evidence as if it were
>> primitive inh or (2) attempting to use deduction to accumulate
>> evidence as we might want to, so that evidence for "A prob_inh B" and
>> evidence for "B prob_inh C" gets combined into evidence for "A prob_inh
>> C".
>
> Besides the problem you mentioned, there are other issues. Let me start
> with the basic ones:
>
> (1) In probability theory, an event E has a constant probability P(E)
> (which can be unknown). Given the assumption of insufficient knowledge
> and resources, in NARS P(A-->B) would change over time, as more and
> more evidence is taken into account. This process cannot be treated as
> conditioning because, among other things, the system can neither
> explicitly list all the evidence as a condition, nor update the
> probability of all statements in the system for each piece of new
> evidence (so as to treat all background knowledge as a default
> condition).
> Consequently, at any moment P(A-->B) and P(B-->C) may be based on
> different, though unspecified, data, so it is invalid to use them in a
> rule to calculate the "probability" of A-->C --- probability theory
> does not allow cross-distribution probability calculation.

This is not a problem the way I set things up. The likelihood of a
statement is welcome to change over time, as the evidence changes.
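
Concretely, the picture I have in mind is something like this (a
sketch; treating the accumulated evidence as binomial is my own
simplification):

    from math import comb

    # Likelihood of the evidence gathered so far, given a hypothesized
    # value x for "A prob_inh B". w_plus and w_minus are the positive
    # and negative evidence counts at the current moment.
    def likelihood(x, w_plus, w_minus):
        n = w_plus + w_minus
        return comb(n, w_plus) * x**w_plus * (1 - x)**w_minus

As new evidence arrives, w_plus and w_minus change and the whole
likelihood function changes with them; nothing here requires a fixed
"true" probability that gets updated by conditioning.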

>
> (2) For the same reason, in NARS a statement might get different
> "probability" attached, when derived from different evidence.
> Probability theory does not have a general rule to handle
> inconsistency within a probability distribution.

The same statement holds for PLN, right?
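
My understanding is that NARS deals with such inconsistency through
its revision rule, which pools the evidence behind the two derivations
rather than trying to reconcile two probabilities. In evidence terms
it is something like this (a sketch, assuming the two premises rest on
disjoint bodies of evidence):

    # Sketch of NARS-style revision in evidence space: merge the
    # evidence counts behind two truth values for the same statement,
    # assuming the two bodies of evidence are disjoint.
    def revise(w_plus_1, w_1, w_plus_2, w_2, k=1.0):
        w_plus = w_plus_1 + w_plus_2
        w = w_1 + w_2
        return w_plus / w, w / (w + k)   # (frequency, confidence)
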
[...]
>
>> My proposal is to add 2 regularity assumptions to this structure.
>> First, we assume that the prior over probability values for prob_inh
>> is even (i.e., uniform). This gives us some permission to act as if
>> the probability and the likelihood are the same thing, which brings
>> the math closer to NARS.
>
> That is intuitively acceptable, if interpreted properly.
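
For the record, here is why I think it is acceptable: under a uniform
(Beta(1,1)) prior over x, the posterior mean after w_plus positive
pieces of evidence out of w total is Laplace's rule of succession,
which coincides with the NARS expectation when k = 2 (a sketch; the
k = 2 identification is just my own observation):

    # Posterior mean of x under a uniform prior: Laplace's rule.
    def laplace_mean(w_plus, w):
        return (w_plus + 1) / (w + 2)

    # NARS expectation rewritten in evidence terms:
    # e = c*(f - 0.5) + 0.5 = (w_plus + k/2) / (w + k),
    # which equals laplace_mean exactly when k = 2.
    def nars_expectation(w_plus, w, k=2.0):
        return (w_plus + k / 2) / (w + k)
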
>
>> Second, assume that a "high" truth value on one level strongly
>> implies a high one on the next level, and similarly that low implies
>> low.
>
> The first half is fine, but the second isn't. As the previous example
> shows, in NARS a high Confidence implies that the Frequency value
> is a good summary of the evidence, but a low Confidence does not imply
> that the Frequency is bad, just that it is not very stable.

But I'm not talking about confidence when I say "high". I'm talking
about the system of levels I defined, for which it is perfectly OK.

Essentially, what I'm claiming here is that the inferences of NARS are
implicitly justified by a tower of higher-order probabilities. These
are never explicitly manipulated, but if they were, they would give us
permission to do some of the apparently sloppy things that NARS does
(specifically, the deduction rule).
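
For reference, the rule I'm trying to justify is (as I understand it)
the NAL deduction truth function:

    # NAL deduction: from A --> B <f1, c1> and B --> C <f2, c2>,
    # derive A --> C. Frequency and confidence both shrink
    # multiplicatively, so long chains of weak links fade out.
    def deduce(f1, c1, f2, c2):
        f = f1 * f2
        c = f1 * f2 * c1 * c2
        return f, c

The tower of higher-order probabilities is what would license
multiplying the two frequencies as if they came from a single
distribution.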

[...]

> If you work out a detailed solution along your path, you will see that
> it will be similar to NARS when both are doing deduction with strong
> evidence. The difference will show up (1) in cases where evidence is
> rare, and (2) in non-deductive inferences, such as induction and
> abduction. I believe this is also where NARS and PLN differ most.

Guilty as charged! I have only tried to justify the deduction rule,
not any of the others. I seriously didn't think about the blind spot
until you mentioned it. I'll have to go back and take a closer look...

Thanks,

Abram

