> Ben,
>
> We seem to agree that probability theory can/should be applied in certain
> situations, but not in others. Now the problem is the condition for its
> application.

Not exactly.  I think that probability theory is nearly always useful, but
that in some situations it can be used "straight out of the box" in a
straightforward way, whereas in other situations it needs to be used in
conjunction with a few auxiliary heuristic assumptions.  Much of my work
over the last year has been focused on figuring out exactly what these
auxiliary heuristic assumptions are; and this quest seems to have been quite
successful...

> Clearly, when there isn't much data, probability theory shouldn't be used.
> If the available data is rich, it is usually fine to do probabilistic
> inference among the data. However, to extend the conclusions outside the
> data set, such as in predicting the future, "rich data" is not enough ---
> it needs to be assumed that future situations will be consistent with the
> probability distribution established from the past data (a variation of
> Hume's induction problem).

Sure, but NARS or any other uncertain inference system, when applied to
predicting the future, also falls prey to Hume's induction problem.  There's
no way to avoid it.

Recall how Hume avoided it: he introduced the assumption of "human nature."
In modern terms, he argued that we have some hard-wired biases which end the
theoretical infinite regress of the induction problem.  In probabilistic
terms, this means that the human brain is biased to recognize certain
probability distributions, and these distributions are reasonable ones
for modeling the world in which humans are embedded.  (Note that any finite
system is going to have a bias toward certain probability distributions,
namely those with an algorithmic information content bounded by some
function of the size of the system....  So there's no option of having an
unbiased inference system, it's just a question of what the bias is, and how
well the bias is adapted to the environment.)
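
To make that parenthetical concrete, here is a toy Python sketch (my own
illustration, not anything from the literature): if a system can only store
hypotheses up to some description length, then after normalization all of
its prior mass sits on algorithmically simple hypotheses.  That
concentration of mass IS the bias.

def simplicity_prior(max_bits):
    """Weight each bit-string 'hypothesis' by 2^(-2 * length), truncated
    at the system's storage capacity, then normalize."""
    hyps = [f"{i:0{n}b}" for n in range(1, max_bits + 1)
            for i in range(2 ** n)]
    weights = {h: 2.0 ** (-2 * len(h)) for h in hyps}
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

prior = simplicity_prior(max_bits=8)
# Every hypothesis needing more than 8 bits gets probability zero, and
# among the representable ones, shorter (simpler) descriptions get
# exponentially more mass: an unavoidable bias for any finite system.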

> In my term, it is a "sufficient knowledge" assumption --- "Though the
> occurrence of future events is unknown, their probability distribution is
> known". This assumption is implicitly taken in the Nature paper, in the
> setting of their experiments. Without such an assumption,
> probability theory
> cannot be used. For example, no valid conclusion can be drawn unless there
> is a consistent probability distribution on the belief space at the very
> beginning.

Any uncertain inference system makes SOME kind of Humean-inductive
hypothesis, it's inevitable....

The NARS weight-of-evidence formula c=n/(n+k), for example [and there are
other similar examples in NARS], embodies an assumption about the regularity
of events in the world.  In a random world, this formula doesn't make any
sense, because the past is no guide to the future.
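
To make the formula concrete, here's a quick Python sketch; the choice
k = 1 is just an illustrative default, not anything essential:

def confidence(n, k=1.0):
    """NARS-style confidence: n units of accumulated evidence against a
    constant evidential horizon k."""
    return n / (n + k)

for n in [1, 10, 100, 1000]:
    print(n, confidence(n))   # 0.5, 0.909..., 0.990..., 0.999...

# Confidence grows with accumulated evidence but never reaches 1.  That is
# only a sensible policy in a world regular enough that past frequencies
# carry information about the future; in a maximally random world, the same
# n would merit no confidence at all.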

The kind of assumption in that paper is a very crude form of inductive
assumption.  For more sophisticated applications of probability theory we
can make more abstract assumptions, e.g. assuming a distribution over
probability distributions [a second-order pdf], or a third-order pdf, etc.
One is always making assumptions, but as Hume showed, that problem is not
restricted to probability theory...
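
The classic Beta-Bernoulli model illustrates the second-order case: instead
of assuming the bias p of a coin is known, one keeps a Beta distribution
over p and updates it as data arrives.  A minimal Python sketch (my own
toy example, nothing to do with that paper):

class BetaBernoulli:
    """A distribution over distributions: Beta(a, b) over the Bernoulli
    parameter p."""
    def __init__(self, a=1.0, b=1.0):       # Beta(1,1): uniform over p
        self.a, self.b = a, b

    def update(self, heads, tails):
        self.a += heads                      # conjugate update: add counts
        self.b += tails

    def mean(self):
        return self.a / (self.a + self.b)    # expected value of p

model = BetaBernoulli()
model.update(heads=7, tails=3)
print(model.mean())   # 0.666..., with uncertainty about p itself retained

The inductive assumption has not vanished, of course; it has moved up a
level, into the choice of the Beta prior.  (And one could iterate again,
putting a third-order distribution over the Beta's parameters.)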

> I think you are right to say that, compared to high-level cognition,
> sensorimotor domains are closer to the situation assumed by probability
> theory. However, even in sensorimotor domains, an intelligent system
> should be able to deal with novel situations that are inconsistent with
> previous beliefs --- this is where learning and adaptation become
> necessary. I'm afraid that if we assume the input data to the system
> follow a hidden probability distribution (as in that paper), then the
> resulting system will be more like animals (whose sensorimotor capacity
> is excellent as long as there is nothing novel in the environment) than
> like human beings --- usually (except in laboratory settings) we cannot
> assume that the input data follow a fixed probability distribution.

True, but this is a limitation of their simplistic application of
probability theory, not of probability theory itself.

Our Probabilistic Term Logic framework (with which you're loosely familiar,
though I suppose you haven't carefully followed the many recent
developments) is able to deal with novel situations that are inconsistent
with previous beliefs, although it's also founded on probability theory
(plus some auxiliary heuristics).
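
To give the flavor of how this can work, here's one generic mechanism used
in the NARS/PLN family of systems: confidence-weighted revision, which lets
contradicting evidence pull a belief toward the new data rather than
breaking the system.  The Python sketch below is illustrative only; it is
not a claim about PTL's actual internals, and the numbers are made up.

def revise(f1, c1, f2, c2, k=1.0):
    """Pool two (frequency, confidence) judgments by converting each
    confidence back to an evidence count n = k * c / (1 - c), then
    averaging the frequencies in proportion to their evidence."""
    n1 = k * c1 / (1.0 - c1)
    n2 = k * c2 / (1.0 - c2)
    n = n1 + n2
    return (f1 * n1 + f2 * n2) / n, n / (n + k)

# An entrenched belief (f=0.9, c=0.9) meets a novel, contradicting
# observation stream (f=0.1, c=0.5):
print(revise(0.9, 0.9, 0.1, 0.5))   # (0.82, 0.909...): frequency shifts
# toward the new data, while overall confidence still increases.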

> BTW, to me, the psychological work on human biases, heuristics, and
> fallacies (including the well-known work by Tversky and Kahneman)
> contains many wrong results --- the phenomena are correctly documented,
> but their analysis and conclusions are often based on implicit
> assumptions that are not justified.

Yes, I agree with you there.

-- Ben

