> 1) it has to assume that its *past experience* is a decent predictor of
> its *future experience*

No. An adaptive system behaves according to its experience, because that is
the only guidance the system has --- I know my past experience is not a
decent predictor of my future, but I have no other help. ;-)

> 2) for deduction and revision to work OK on average, it has to assume that
> its future experiences will be chosen from some fairly random distribution
> of possible experiences, based on the assumption that past experience is a
> decent predictor of future experience
>
> These assumptions still exist, whether you call them assumptions about
> "experiences" or assumptions about "worlds".
>
> Similarly, you can cast the "possible worlds" assumptions of probability
> theory as assumptions about "possible experiences" of a system, instead. It
> doesn't change the fact that substantial assumptions are being made, as is
> necessary to circumvent the Humean induction problem.

Your arguments are all based on the claim that probability theory (PT) is
the right normative theory for intelligence/cognition/thinking.

To me, each normative theory has its own assumptions, and when they are not
satisfied, the theory cannot be applied. NARS and PT are based on different
assumptions, but that does not prevent them from reaching the same
conclusions here and there (though not everywhere).

If PT is taken as the right theory, the part of NARS that is consistent with
PT (such as the deduction rule and the revision rule) looks correct, though
the justification seems weird --- it should be treated as PT plus additional
assumptions about the world. On the other hand, the part of NARS that is
inconsistent with PT (such as the induction rule and the abduction rule)
looks simply wrong, and it conflicts with the results of experiments
designed according to PT.
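Since the discussion turns on which NARS rules do or do not track probability
theory, a concrete sketch may help. The functions below are my own
illustration, following the truth-value formulas published for NARS
(confidence c = n/(n+k); deduction f = f1*f2, c = f1*f2*c1*c2; revision by
pooling the evidence weights w = k*c/(1-c)), with the evidential horizon k
set to 1. Treat the exact forms as quoted from the papers, not as either
system's actual code:

```python
# A minimal sketch of the NARS truth-value functions under discussion.
# k is the "evidential horizon" parameter, conventionally set to 1.

K = 1.0

def confidence(n, k=K):
    """Confidence from the amount of evidence n: c = n / (n + k)."""
    return n / (n + k)

def deduction(f1, c1, f2, c2):
    """NARS deduction: f = f1*f2, c = f1*f2*c1*c2.
    The frequency part coincides with multiplying conditional
    probabilities along a chain, which is why this rule looks
    'probabilistically correct' when evidence is plentiful."""
    return f1 * f2, f1 * f2 * c1 * c2

def revision(f1, c1, f2, c2, k=K):
    """NARS revision: pool the evidence behind two estimates of the
    same statement. w = k*c/(1-c) recovers the evidence amount from a
    confidence; the merged frequency is the evidence-weighted average."""
    w1 = k * c1 / (1 - c1)
    w2 = k * c2 / (1 - c2)
    w = w1 + w2
    return (w1 * f1 + w2 * f2) / w, w / (w + k)

# Two independent reports of the same statement reinforce each other:
f, c = revision(0.9, confidence(8), 0.5, confidence(2))
# f = 0.82, c = 10/11: the merged frequency lies between the inputs,
# closer to the better-supported one, and the confidence exceeds both.
```

The revision rule is the part Ben can read probabilistically (pooling
independent evidence); the disagreement below is about whether the
single-premise rules admit any such reading.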

The above inference is valid, but how can PT be justified as the right
normative theory for AI, and as the preferred reference against which other
theories are judged? All the theoretical justifications that I know of are
based on assumptions that are unacceptable in the current situation (such as
the consistency of the beliefs in a system), and all the practical successes
of PT that I know of are in domains where the knowledge and resources are
sufficient with respect to PT and the problems.

Pei

> -- Ben G
>
> > -----Original Message-----
> > From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> > Behalf Of Pei Wang
> > Sent: Sunday, February 01, 2004 8:26 PM
> > To: [EMAIL PROTECTED]
> > Subject: Re: [agi] Bayes rule in the brain
> >
> >
> > Since confidence is defined as a function of the amount of evidence (in
> > past experience), it is based on no assumption about the object world. Of
> > course, I cannot prevent other people from interpreting it in other ways.
> >
> > I've made it clear in several places (such as
> > http://www.cogsci.indiana.edu/farg/peiwang/PUBLICATION/wang.confidence.ps)
> > that the higher the confidence of a belief (in NARS) is, the harder it
> > will be for the frequency to be changed by new evidence, but this does
> > not mean that the belief is "more accurate", according to a "reality".
> >
> > An adaptive system behaves according to its past experience, but it does
> > not have to treat its experience as an approximate description of the
> > "real world".
> >
> > Pei
> >
> > ----- Original Message -----
> > From: "Ben Goertzel" <[EMAIL PROTECTED]>
> > To: <[EMAIL PROTECTED]>
> > Sent: Sunday, February 01, 2004 6:45 PM
> > Subject: RE: [agi] Bayes rule in the brain
> >
> >
> > >
> > >
> > > > According to my "experience-grounded semantics", in NARS truth value
> > > > (the frequency-confidence pair) measures the compatibility between a
> > > > statement and available (past) experience, without assuming anything
> > > > about the "real world" or the future experience of the system.
> > > >
> > > > I know you also accept a version of experience-grounded semantics,
> > > > but it is closer to model-theoretic semantics. In this approach, it
> > > > still makes sense to talk about the "real value" of a truth value,
> > > > and to take the current value in the system as an approximation of
> > > > it. In such a system, you can talk about possible worlds with
> > > > different probability distributions, and use the current knowledge
> > > > to choose among them.
> > > >
> > > > Probability theory is not compatible with the first semantics above,
> > > > but with the second one.
> > > >
> > > > Pei
> > >
> > > Right -- PTL semantics is experience-grounded, but in the derivation of
> > > some of the truth value functions associated with the inference rules
> > > (deduction and revision), we make an implicit assumption that "reality"
> > > is drawn from some probability distribution over "possible worlds."
> > > Among the "heuristic assumptions" we use to make this work well in
> > > practice, are some assumptions about the nature of this distribution
> > > over possible worlds (i.e., we don't assume a uniform distribution; we
> > > assume a bias toward possible worlds that are structured in a certain
> > > sense).  This kind of bias is a more abstract form of Hume's assumption
> > > of a "human nature" providing a bias that stops the infinite regress of
> > > the induction problem.
> > >
> > > However, I disagree that NARS doesn't assume anything about the real
> > > world or the future experience of the system.  In NARS, you weight each
> > > frequency estimate based on its confidence, c = n/(n+k), where n is the
> > > number of observations on which the frequency estimate is based.  This
> > > embodies the assumption that something which has been observed more
> > > times in the past, is more likely to occur in the future.  This
> > > assumption is precisely a bias on the space of possible worlds.  It is
> > > an assumption that possible worlds in which the future resembles the
> > > past, are more likely than possible worlds in which the future is
> > > totally unrelated to the past.  I think this is a very reasonable
> > > assumption to make, and that this assumption is part of the reason why
> > > NARS works (to the extent that it does ;).  However, I think you must
> > > admit that this DOES constitute an inductive assumption, very similar
> > > to the assumption that possible worlds with temporal regularity are
> > > more likely than possible worlds without.
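The weighting Ben describes here can be seen numerically: with c = n/(n+k),
one new observation moves a well-supported frequency estimate far less than
a weakly supported one. A small sketch (my own illustration, assuming the
frequency is simply the ratio of positive observations to total
observations):

```python
# With c = n/(n+k), confidence grows with evidence n, and a single new
# observation shifts the frequency estimate less and less as n grows.
# Sketch only; k = 1 assumed.

def update(f, n, outcome):
    """Fold one new observation (1 = positive, 0 = negative) into a
    frequency estimate currently supported by n observations."""
    return (n * f + outcome) / (n + 1), n + 1

# The same positive observation, applied to f = 0.5 at two evidence levels:
shift_small_n = abs(update(0.5, 2, 1)[0] - 0.5)    # 1/6, a large shift
shift_large_n = abs(update(0.5, 200, 1)[0] - 0.5)  # ~0.0025, a tiny shift
```

This is exactly the "past predicts future" weighting at issue: whether it is
an assumption about possible worlds, or only a description of how the system
uses its experience, is the point under dispute.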
> > >
> > > Also, I think that the reason the NARS deduction truth value formula
> > > works reasonably well is that it resembles somewhat the rule one
> > > obtains if one assumes a biased sum over possible worlds.  In your
> > > derivation of this formula you impose a condition at the endpoints, and
> > > then you choose a relatively simple rule that meets these endpoint
> > > conditions.  However, there are many possible rules that meet these
> > > endpoint conditions.  The reason you chose the one you did is that it
> > > makes intuitive sense.  However, the reason it makes intuitive sense is
> > > that it fairly closely matches what you'd get from probability theory
> > > making a plausible assumption about the probability distribution over
> > > possible worlds.
> > >
> > > Similarly, the NARS revision rule is very close to what you get if you
> > > assume an unbiased sum over all possible worlds (in the form of a
> > > probabilistic independence assumption between the relations being
> > > revised).
> > >
> > > In short, I think that in NARS you secretly smuggle in probability
> > > theory, by
> > >
> > > -- using a confidence estimator based on an assumption about the
> > > probability distribution over possible worlds
> > > -- using "heuristically derived" deduction and revision rules that just
> > > happen to moderately closely coincide with what one obtains by
> > > reasoning in terms of probability distributions over possible worlds
> > >
> > > On the other hand, the NARS induction and abduction rules do NOT
> > > closely correspond to anything obtainable by reasoning about
> > > probabilities and possible worlds.  However, I think these are the
> > > weakest part of NARS; and in playing with NARS in practice, these are
> > > the rules that, when iterated, seem to frequently lead to intuitively
> > > implausible conclusions.
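For contrast with the deduction and revision sketches, here is the
single-premise behavior Ben is objecting to. The exact form of the NARS
induction rule below (frequency inherited from the first premise, evidence
weight w = f2*c1*c2) is my reconstruction from the published papers and
should be treated as an assumption of this sketch; the qualitative point is
that one application of induction carries at most one unit of evidence, so
its confidence is capped at 1/(1+k):

```python
# Sketch of the contrast Ben draws. The exact induction formula is an
# assumption here; what it illustrates is that a single inductive step
# yields an evidence weight w <= 1, hence confidence below 1/(1+k) --
# the conclusion is a tentative hypothesis, and only repeated revision
# of such conclusions can raise its confidence.

K = 1.0

def induction(f1, c1, f2, c2, k=K):
    """From M->P <f1,c1> and M->S <f2,c2>, derive S->P."""
    w = f2 * c1 * c2           # evidence weight carried by the conclusion
    return f1, w / (w + k)     # frequency inherited, confidence from w

f, c = induction(1.0, 0.9, 1.0, 0.9)
# even with strong premises the conclusion stays tentative (c < 0.5)
```

Whether the iterated behavior of such weak conclusions is a bug or the
intended non-axiomatic behavior is precisely the disagreement in this
thread.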
> > >
> > > -- Ben G
> > >
> > >
> > >
> > > -------
> > > To unsubscribe, change your address, or temporarily deactivate your
> > > subscription, please go to http://v2.listbox.com/member/[EMAIL PROTECTED]