Ben,
I have no problem with any of the points you made in the following.
However, the axioms of probability theory and interpretations of
probability (frequentist, logical, subjective) all take a consistent
probability distribution as a precondition. Therefore, this assumption
is and will be ...
On Sun, 04 Feb 2007 07:52:02 -0500, Pei Wang [EMAIL PROTECTED] wrote:
However, the axioms of probability theory and interpretations of
probability (frequentist, logical, subjective) all take a consistent
probability distribution as a precondition.
Also I think the meaning of 'probabilistic ...
Ben, this is also why I was wondering why your hypothesis is framed
in terms of both Cox and De Finetti. Unless I misunderstand Cox,
their interpretations are in some ways diametrically opposed. De
Finetti was a radical subjectivist, while Cox is (epistemically) an
ardent objectivist ...
On 2/4/07, gts [EMAIL PROTECTED] wrote:
On Sun, 04 Feb 2007 07:52:02 -0500, Pei Wang [EMAIL PROTECTED] wrote:
However, the axioms of probability theory and interpretations of
probability (frequentist, logical, subjective) all take a consistent
probability distribution as a precondition.
Also ...
The interpretation of probability is a different matter --- we have
been talking about consistency, which is largely independent of which
interpretation you subscribe to.
Correct
In my opinion, in the AGI context, each of the three traditional
interpretations is partly applicable but partly ...
On 2/4/07, Ben Goertzel [EMAIL PROTECTED] wrote:
As you know, I feel differently. I think the traditional subjectivist
interpretation is conceptually well-founded as far as it goes, ...
But it still requires consistency --- actually that is its only
requirement. If this requirement is dropped, then ...
I agree that each school has its own insights, and none has all the
insights...
I would say that the subjectivists are more fully correct and
consistent within the scope of what they say, however.
And their work leads to what I think is the right conclusion about
probability and AGI: ...
A mathematical test for objectivity/subjectivity might be whether
Novamente (or any AGI) could allow, in principle, for the possibility of
different posterior probabilities under Bayes' rule, as can happen under
subjectivism. My thought is that a programmer is essentially forced for
practical ...
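To make this concrete, here is a minimal sketch (my own illustrative
numbers, not anything from Novamente): two agents share the same
likelihood model but hold different subjective priors, so Bayes' rule
gives them different posteriors from the same evidence.

    # Two agents, same likelihoods, different subjective priors.
    def posterior(prior_h, p_e_given_h, p_e_given_not_h):
        # P(H|E) by Bayes' rule, for a binary hypothesis H
        p_e = prior_h * p_e_given_h + (1 - prior_h) * p_e_given_not_h
        return prior_h * p_e_given_h / p_e

    for prior in (0.5, 0.05):                  # the two agents' priors
        print(prior, posterior(prior, 0.9, 0.2))
    # prior 0.50 -> posterior ~0.82
    # prior 0.05 -> posterior ~0.19

Both agents apply Bayes' rule consistently; the divergence comes
entirely from the priors, which is exactly what subjectivism permits.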
The definition of 'probabilistic consistency' that I was using comes
from E.T. Jaynes' book _Probability Theory: The Logic of Science_,
page 114. These are Jaynes' three 'consistency desiderata' for a
probabilistic robot:
1. If a conclusion can be reasoned out in more than one way, then
every possible way must lead to the same result.
2. The robot always takes into account all of the evidence it has
relevant to a question; it does not arbitrarily ignore some of it.
3. The robot always represents equivalent states of knowledge by
equivalent plausibility assignments.
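As a toy illustration of desideratum 1 (my own example, not Jaynes'):
when all the numbers are read off one consistent joint distribution,
P(A,B) comes out the same whichever way it is factored.

    # Joint distribution over two binary propositions A and B.
    P = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4}

    pA = sum(p for (a, b), p in P.items() if a == 1)   # P(A)   = 0.5
    pB = sum(p for (a, b), p in P.items() if b == 1)   # P(B)   = 0.6
    pB_given_A = P[(1, 1)] / pA                        # P(B|A) = 0.8
    pA_given_B = P[(1, 1)] / pB                        # P(A|B) = 2/3

    # Two ways of reasoning to the same conclusion P(A,B):
    assert abs(pA * pB_given_A - pB * pA_given_B) < 1e-12

The desideratum only bites when the numbers are assessed independently
rather than derived from one joint; then nothing forces the two
factorizations to agree, and that disagreement is the inconsistency
at issue.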
On Sun, 04 Feb 2007 11:10:57 -0500, Pei Wang [EMAIL PROTECTED] wrote:
I don't think any intelligent system (human or machine) can achieve
any of the three desiderata, except in trivial cases.
I have no doubt you and Ben are correct on this point. Enormous resources
would be required for an ...
On 2/4/07, gts [EMAIL PROTECTED] wrote:
Personally I would be inclined to allow exceptions to Jaynes' second and
third desiderata.
Exceptions to the first desideratum are known as the reference class
problem --- when an individual is seen as an instance of different
categories, different probability judgments follow ...
On Sun, 04 Feb 2007 13:15:27 -0500, Pei Wang [EMAIL PROTECTED] wrote:
none of the existing AGI projects is designed [according to the tenets of
objective/logical Bayesianism]
Hmm. My impression is that to whatever extent AGI projects use Bayesian
reasoning, they usually do so in a way that ...
Actually, in AI the most influential version of Bayesianism is that of
Judea Pearl, which is subjective, though not in the full historical
sense of the term. All the other, more traditional versions have had
much less impact.
Pei
On Feb 4, 2007, at 2:23 PM, gts wrote:
On Sun, 04 Feb 2007 13:15:27 -0500, Pei Wang [EMAIL PROTECTED]
wrote:
none of the existing AGI projects is designed [according to the
tenets of objective/logical Bayesianism]
Hmm. My impression is that to whatever extent AGI projects use
Bayesian ...
Hi Russell,
OK, I'll try to specify my ideas in this regard more clearly. Bear
in mind though that there are many ways to formalize an intuition,
and the style of formalization I'm suggesting here may or may not be
the right one. With this sort of thing, you only know if the ...
On 2/4/07, Ben Goertzel [EMAIL PROTECTED] wrote:
Hi Russell,
OK, I'll try to specify my ideas in this regard more clearly. Bear
in mind though that there are many ways to formalize an intuition,
and the style of formalization I'm suggesting here may or may not be
the right one. With this ...
On 2/3/07, Ben Goertzel [EMAIL PROTECTED] wrote:
My desire in this context is to show that, for agents that are
optimal or near-optimal at achieving the goal G under resource
restrictions R, the set of important implicit abstract expectations
associated with the agent (in goal-context G as ...
Pei,
First, I agree that proving stuff like this is not the maximally
interesting aspect of AGI. Actually creating a thinking machine is a
lot more exciting, and that is what I'm devoting the bulk of my
attention to!!
However, the question of how much inconsistency is inevitable in an
AGI is an interesting one.
Okay... let's say when an agent exhibits inconsistent implicit
preferences for acting in a particular situation, it is being
suboptimal, and the degree of suboptimality depends on the degree
of inconsistency.
Given:
S = a situation
I = importance (to G) of acting correctly in that situation ...
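One hypothetical way to score this (the reversal-counting rule below is
my own illustration, not Ben's intended formalization): count directly
conflicting pairwise preferences exhibited in situation S and weight
them by the importance I.

    # Hypothetical sketch: importance-weighted preference inconsistency.
    def inconsistency(preferences, importance):
        # preferences: set of (x, y) pairs meaning "x preferred to y";
        # a reversal means both (x, y) and (y, x) are asserted.
        reversals = sum(1 for x, y in preferences
                        if (y, x) in preferences) / 2
        return importance * reversals

    prefs = {("a", "b"), ("b", "a"), ("b", "c")}   # "a>b" conflicts with "b>a"
    print(inconsistency(prefs, importance=0.9))    # 0.9: one weighted reversal

On this reading, the hypothesis would be that an agent near-optimal for
G under R scores low on such a measure in high-importance situations.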
On 2/4/07, Ben Goertzel [EMAIL PROTECTED] wrote:
However, I'm not sure it helps with the quite hard task of coming up
with a proof of the hypothesis, or a fully rigorous formulation ;-)
I guess I'd better leave that to the professionals :)
On 2/3/07, Ben Goertzel [EMAIL PROTECTED] wrote:
However, the question of how much inconsistency is inevitable in an
AGI is an interesting one.
I don't think an AGI system will try to answer this question. Instead,
it will try its best to resolve inconsistency, no matter how much it
runs ...
I must agree with Pei on this. Think of a reasonably large AI,
say, eight light hours across. Any belief frame guaranteed to be
globally consistent must be at least eight hours out of date. So
if you only act on globally consistent knowledge, your reaction
time is never less than your ...
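Spelling out the arithmetic (with the added, arguable assumption that
confirming global agreement needs a round trip rather than a one-way
snapshot):

    # An AI D light-hours across: any globally consistent belief frame
    # reflects far-side state at least D hours old; actively confirming
    # agreement costs a 2*D-hour round trip.
    D = 8                        # system diameter, in light-hours
    snapshot_staleness = D       # hours: one-way light delay
    agreement_latency = 2 * D    # hours: round-trip confirmation
    print(snapshot_staleness, agreement_latency)   # 8 16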
Again, taking consistency as an ultimate goal (which is never fully
achievable) and taking it as a precondition (even an approximate one)
are two very different positions. I hope you are not suggesting the
latter --- though your posting makes me feel that way.
Hi,
In the Novamente system, ...