On Sun, Sep 21, 2008 at 10:43 PM, Abram Demski <[EMAIL PROTECTED]>wrote:

> The calculation in which I sum up a bunch of pairs is equivalent to
> doing NARS induction + abduction with a final big revision at the end
> to combine all the accumulated evidence. But, like I said, I need to
> provide a more explicit justification of that calculation...



As an example inference, consider

Ben is an author of a book on AGI <tv1>
This dude is an author of a book on AGI <tv2>
|-
This dude is Ben <tv3>

versus

Ben is odd <tv1>
This dude is odd <tv2>
|-
This dude is Ben <tv4>

(Here each of the English statements is a shorthand for a logical
relationship that in the AI systems in question is expressed in a formal
structure; and the notations like <tv1> indicate uncertain truth values
attached to logical relationships.  In both NARS and PLN, uncertain truth
values have multiple components, including a "strength" value that denotes a
frequency, and other values denoting confidence measures.  However, the
semantics of the strength values in NARS and PLN are not identical.)
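(To make the shared structure concrete, here is a toy sketch of a two-component truth value -- the field names and the example numbers are invented for illustration; neither NARS nor PLN uses exactly this class:)

```python
from dataclasses import dataclass

@dataclass
class TruthValue:
    strength: float    # frequency-like component in [0, 1]
    confidence: float  # weight-of-evidence component in [0, 1]

# e.g. a statement observed to hold most of the time, with decent evidence:
tv1 = TruthValue(strength=0.9, confidence=0.8)
```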

Doing these two inferences in NARS you will get

tv3.strength = tv4.strength

whereas in PLN you will not; rather, you will get

tv3.strength >> tv4.strength

In the PLN case, the difference between the two inference results stems
from the fact that

P(author of book on AGI) << P(odd)

and the fact that PLN uses Bayes rule as part of its approach to these
inferences.
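(To make the Bayesian intuition concrete -- this is a toy calculation, not actual PLN machinery, and the prior and base rates are made-up numbers: take H = "this dude is Ben", assume Ben definitely has the property, so P(property | H) = 1 and P(property | not H) = the property's base rate. Then a rare property like "author of a book on AGI" yields a much higher posterior than a common one like "odd":)

```python
def posterior_same_person(prior, base_rate):
    # Bayes' rule with P(property | H) = 1 and P(property | not H) = base_rate:
    # P(H | property) = prior / (prior + (1 - prior) * base_rate)
    return prior / (prior + (1 - prior) * base_rate)

prior = 0.0001  # toy prior that an arbitrary "dude" is Ben

tv3_like = posterior_same_person(prior, 0.00001)  # rare: AGI-book author
tv4_like = posterior_same_person(prior, 0.5)      # common: "odd"

print(tv3_like)  # roughly 0.91
print(tv4_like)  # roughly 0.0002
```

So with the same prior, the rare shared property pushes the posterior up by more than three orders of magnitude, which is the qualitative source of tv3.strength >> tv4.strength.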

So, the question is, in your probabilistic variant of NARS, do you get

tv3.strength = tv4.strength

in this case, and if so, why?

thx
ben



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
