Interestingly, in our system we nearly always get an equilibrium even
without any kind of "rate of change decay factor."  It's just that if too
much "conclusion-based premise revision" goes on, then the equilibrium may
reflect an over-revised, illusory world.  Basically, the process of
revising premises based on conclusions is difficult (but possible) to
control, and has a tendency to lead to chaotic inference trajectories if
things aren't set up carefully.
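To make the failure mode concrete, here's a toy sketch in Python (this is
not Novamente code; the chain, the numbers, and the update rule are all
illustrative assumptions).  Conclusions are revised from premises and
premises from conclusions via plain weighted averaging; the loop does reach
an equilibrium, but one in which the directly-observed premise has been
partly washed out:

    # Toy model of iterated "conclusion-based premise revision" on a chain
    # A -> B -> C.  Each pass revises conclusions from premises AND premises
    # from conclusions, using a plain weighted average.  Everything here is
    # an illustrative assumption, not Novamente's actual inference rule.

    def revise(p_old, p_implied, w=0.5):
        """Weighted-average revision: pull p_old toward the implied value."""
        return (1.0 - w) * p_old + w * p_implied

    probs = {"A": 0.9, "B": 0.5, "C": 0.5}      # A is a direct observation
    links = [("A", "B", 0.8), ("B", "C", 0.8)]  # strength ~ P(target|source)

    for step in range(30):
        for src, tgt, s in links:
            probs[tgt] = revise(probs[tgt], probs[src] * s)            # forward
            probs[src] = revise(probs[src], min(1.0, probs[tgt] / s))  # backward

    print({k: round(v, 3) for k, v in probs.items()})
    # The values settle, but A has been pulled away from its observed 0.9
    # by its own conclusions: an equilibrium describing an over-revised world.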

We have a mechanism based on keeping track of "weight of evidence" that
works pretty much like your decay factor; what we find is that it's a bit
fussy to tune, that's all.
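A minimal sketch of that kind of damper, reusing the toy model above (the
evidence counts, the k/(n + k) schedule, and the parameter k are my
assumptions here, not the actual Novamente mechanism):

    # Weight-of-evidence damping: each atom carries an evidence count, and
    # revision strength falls off as evidence accumulates, so heavily-observed
    # premises like A move slowly while weakly-evidenced conclusions adapt.

    probs = {"A": 0.9, "B": 0.5, "C": 0.5}     # reset the toy network
    counts = {"A": 10.0, "B": 1.0, "C": 1.0}   # A is backed by direct observation

    def revise_damped(name, p_implied, probs, counts, k=1.0):
        w = k / (counts[name] + k)         # revision weight decays with evidence
        probs[name] = (1.0 - w) * probs[name] + w * p_implied
        counts[name] += k                  # inferential evidence accrues too

    for step in range(30):
        for src, tgt, s in links:
            revise_damped(tgt, probs[src] * s, probs, counts)
            revise_damped(src, min(1.0, probs[tgt] / s), probs, counts)

With something like this in the loop, the backward revisions move A only
slightly, so the equilibrium stays anchored near the observation.  The fussy
part is choosing k, i.e. deciding how much a unit of inferential evidence
should count against a unit of direct observation.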

The philosophical conclusion, perhaps, is that "deviations from 'seeing is
believing' have to be handled with great care" ...

[I note that this "nearly always get an equilibrium" result holds only
when NOTHING BUT first-order probabilistic inference is going on.  When
other processes are running in the system too, say the nonlinear-dynamical
attention-allocation function that drives the system's focus of attention,
or new-node-formation processes, etc., then convergence does not occur.

You mention new information being added into the system via GoalNodes,
which is one route; but in Novamente there are also other processes besides
GoalNodes and first-order probabilistic inference that may add knowledge to
the system.]

-- Ben

> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]On
> Behalf Of Brad Wyble
> Sent: Thursday, February 20, 2003 3:26 PM
> To: [EMAIL PROTECTED]
> Subject: Re: [agi] A probabilistic/algorithmic puzzle...
>
>
> >
> > But anyway, using the weighted-averaging rule dynamically and
> iteratively
> > can lead to problems in some cases.  Maybe the mechanism you
> suggest -- a
> > nonlinear average of some sort -- would have better behavior, I'll think
> > about it.
>
> The part of the idea that guaranteed an eventual equilibrium was
> to add decay to the variables that can trigger internal
> probability adjustments (in my case, what I called "truth").
> Eventually the system will stop self-modifying when the
> energy (truth) runs out.  The only way to add more truth to the
> system would be to acquire new information by adding goal nodes
> for that purpose.  You could say that the internal consistency
> checker "feeds on" the truth energy introduced into the system by
> the completion of data-acquisition goals (which are capable of
> incrementing truth values).
>
> This should guarantee the prevention of infinite self-modification loops.
>
> -Brad
>
