Pei,

I downloaded NARS from your website and have been playing with entering
various kinds of information.  I was wondering whether you have an updated
version that you are planning to put on the web.  I think the last version
was from 1999...

--Kevin

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Pei Wang
Sent: Monday, February 02, 2004 3:55 PM
To: [EMAIL PROTECTED]
Subject: Re: [agi] Bayes rule in the brain


> because you have built into NARS a certain inductive assumption about the
> way future experience will be related to past experience.
>
> These inductive assumptions, intuitively, represent an assertion that some
> possible experiences are MORE LIKELY than others.  So they are very closely
> analogous to assuming some possible experiences are more probable than
> others in a probability-theory sense... though I understand that you may
> interpret "more likely" in a different way from standard prob. theory.

Yes, NARS is built according to certain assumptions, and the system does
predict the future according to the past (as all adaptive systems do).
However, these assumptions are not about the probability distribution of
future experience or of the outside world, but about how the system should
behave, given available knowledge and resources.

> We've discussed this before, but I wonder if you could suggest any
> computational experiments suitable for comparing probabilistic inference
> systems to NARS?  Or do you think the only valid comparative experiments
> involve full-on AGI tasks like controlling robots in simulated worlds,
> chatting in human language, etc.?

I've been thinking about it, but haven't come up with any good ideas yet. It's
not easy to find suitable problems for comparing paradigms that are based on
different assumptions.

> PTL doesn't require an assumption of absolute consistency between beliefs

This is a topic to be argued at a future time --- what you said before hasn't
convinced me yet. ;-)

> Sufficiency of knowledge and resources is a matter of
> degree.  I suppose you're intending to say that NARS, as opposed to PTL,
> requires less knowledge and resources to arrive at a given level of
> inferential accuracy.

I'd rather say that "sufficiency" is relative, not absolute --- that is why
I said "... with respect to PT and the problems". What is "sufficient" for
probability theory is not sufficient for binary logic. But for the current
discussion, I don't see it as a matter of degree. When I say that NARS works
with insufficient knowledge, I mean that it doesn't require any specific
knowledge to start with, and the system has to deal with all possible future
situations. It cannot treat a truth-value as fixed, even in the sense of a
probability distribution.
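
To make this concrete, here is a rough sketch of an experience-grounded
truth-value, using the published frequency/confidence definitions (f = w+/w,
c = w/(w+k)); the code, the function name, and the value of k are only an
illustration, not the actual NARS implementation:

    # Sketch of an experience-grounded truth-value (illustration only).
    K = 1.0  # evidential horizon; this value is just for the example

    def truth(w_plus, w):
        """Frequency and confidence from positive evidence w+ and total evidence w."""
        frequency = w_plus / w      # proportion of positive evidence so far
        confidence = w / (w + K)    # approaches, but never reaches, 1
        return frequency, confidence

    print(truth(3.0, 4.0))              # (0.75, 0.8)
    # Revision: merging an independent body of evidence just adds the counts,
    # so the truth-value keeps moving as new experience arrives.
    print(truth(3.0 + 5.0, 4.0 + 6.0))  # (0.8, ~0.909)

Since the confidence only approaches 1 as evidence accumulates, every
truth-value remains open to revision by future experience, which is what I
mean by not treating it as fixed.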

When the term "accuracy" is used, an outside standard is assumed, by which
the system's conclusions can be checked for correctness. I don't judge the
systems in that way.  A "reasonable" conclusion is not necessarily correct,
because the former is judged according to past experience, while the latter
is judged according to future experience.

Pei


