OK... I am over-busy today, but tomorrow I will try to frame my "betting game" suggestion more clearly.

It is different from Walley's approach, but related, as you note...

ben

On Feb 6, 2007, at 1:55 PM, Pei Wang wrote:

Ben,

I read it again, but still cannot fully understand it.

Since you put it in a betting situation and use two numbers for
probability, maybe you can relate it to Walley's work? He started with
a similar setting.

I hope you are not really using second-order probability, because,
unlike what many people assume, it is not a proper way to represent
ignorance about probability in general (though a binary second-order
statement is OK).

Pei

On 2/5/07, Ben Goertzel <[EMAIL PROTECTED]> wrote:

Pei (and others),

I thought of a way to define two-component truth values in terms of
betting strategies (vaguely in the spirit of de Finetti).

I was originally trying to define two-component truth values a la
Cox, but a betting-strategy approach is what I came up with first;
it then led me to a Cox-type approach as well.

First let's review de Finetti's basic idea. Consider a proposition S.

You must set the price of a promise that pays $1 if S is true and $0
if S is false. You know that your opponent will then be able either
to buy such a promise from you at the price you have set, or to
require you to buy such a promise from him, at the same price.

The price p you set is your operational subjective probability for S.
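
To make this concrete, here is a minimal Python sketch of the pricing
argument (my own illustration; the numbers are arbitrary).  If your
true degree of belief in S is q but you quote a price p, the opponent
chooses whichever side of the bet hurts you, so your worst-case
expected profit is -|p - q|, which is maximized (at zero) only when
p = q:

def worst_case_expected_profit(p: float, q: float) -> float:
    """Worst-case expected profit when you quote price p but believe q."""
    sell = p - q   # opponent buys the promise: you collect p, pay $1 w.p. q
    buy = q - p    # opponent makes you buy: you pay p, collect $1 w.p. q
    return min(sell, buy)  # the opponent picks the side that is worse for you

if __name__ == "__main__":
    q = 0.7  # your actual degree of belief in S
    for p in (0.5, 0.6, 0.7, 0.8):
        print(f"price {p:.1f}: worst-case EV {worst_case_expected_profit(p, q):+.2f}")
    # Only p = q gives worst-case EV 0; any other quote loses on average.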

Now consider an alternative betting scenario.  This scenario involves
two friends and a single opponent common to both (it could equally be
two different opponents; that doesn't really matter, but I'll use one
opponent for simplicity).

At the start of the experiment, Friend 1 and Friend 2 and the
opponent are in the same room.

Friend 1 and the opponent play the above de Finetti betting game,
thus assessing Friend 1's subjective probability for S.

Then, the two friends are moved into two separate rooms.

In his room, Friend 1 is going to play the above de Finetti betting
game N more times, but in between each game, he is going to gather one
more piece of evidence about the statement S (i.e. make one more
relevant observation).
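
(The scenario doesn't fix any particular evidence model; purely for
illustration, here is one concrete instantiation in Python, assuming
each observation is a Bernoulli trial relevant to S and that Friend 1
updates a Beta prior, quoting his posterior mean as his price after
each game.)

import random

def friend1_prices(alpha: float, beta: float, n_obs: int,
                   true_p: float, rng: random.Random):
    """Yield Friend 1's quoted price after each of n_obs observations.

    Assumed model (not part of the scenario itself): Bernoulli evidence
    with a Beta(alpha, beta) prior; the price is the posterior mean.
    """
    for _ in range(n_obs):
        if rng.random() < true_p:  # gather one more piece of evidence
            alpha += 1             # evidence supporting S
        else:
            beta += 1              # evidence against S
        yield alpha / (alpha + beta)

if __name__ == "__main__":
    rng = random.Random(0)
    for i, price in enumerate(friend1_prices(1.0, 1.0, 10, 0.7, rng), 1):
        print(f"after observation {i}: price {price:.3f}")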

In his room, Friend 2 is going to play a different game.  He is asked
to set the price of a promise to pay $1 if Friend 1's subjective
probability for S (after making the N additional observations) is in
the range [L,U], and $0 if it is not.  Incidentally, this kind of bet
(betting that some quantity will lie within a certain range) is
essentially what options traders call a short straddle (or, with two
distinct bounds L and U, a short strangle).

Thus ends the experiment.

The price Friend 2 sets, b, will fulfill the statement

"[L,U] is a credible interval with credibility level b, from Friend
2's subjective perspective, for (Friend 1's operational subjective
probability of S after N more observations have been gathered)."
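
Under the same assumed beta-Bernoulli model (plus the further
assumption, mine not the scenario's, that Friend 2 models Friend 1 as
sharing his prior), Friend 2 could arrive at his price b by Monte
Carlo: simulate Friend 1's final probability many times under his own
beliefs, and quote the fraction of runs landing in [L,U].  A sketch:

import random

def price_interval_bet(L: float, U: float, N: int, alpha: float, beta: float,
                       n_sims: int = 50_000, seed: int = 0) -> float:
    """Friend 2's price b = P(Friend 1's final probability lies in [L, U])."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        # Friend 2's uncertainty about the world: draw the unknown rate
        # of S-supporting evidence from his own Beta(alpha, beta) belief.
        true_p = rng.betavariate(alpha, beta)
        # Simulate Friend 1's N observations and resulting posterior mean
        # (assuming Friend 1 starts from the same Beta(alpha, beta) prior).
        successes = sum(rng.random() < true_p for _ in range(N))
        final_prob = (alpha + successes) / (alpha + beta + N)
        hits += (L <= final_prob <= U)
    return hits / n_sims

if __name__ == "__main__":
    b = price_interval_bet(L=0.4, U=0.8, N=20, alpha=2.0, beta=2.0)
    print(f"Friend 2's price b = {b:.3f} for the interval [0.4, 0.8]")

By construction, the returned b is exactly the credibility level of
[L,U] under Friend 2's subjective distribution, which is the claim
above.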

The use of two bettors in the experiment may seem odd, but in fact
it's only natural, since we are talking about a second-order probability.

This exercise lets us derive a multi-component truth value (an
"indefinite probability", to be precise) via betting-type arguments,
similar to (but more complex than) de Finetti's arguments justifying
ordinary probabilities.

Next, to make a Cox-type justification of indefinite probabilities,
one would instead simply argue that Friend 2's beliefs of the form

"Friend 1's subjective probability for S will be > X, or < Y, after
he makes N more observation"

must have plausibilities assigned via some operation that obeys Cox's
axioms.  Assuming that Friend 2 obeys Cox's axioms when reasoning
about Friend 1, one arrives at the conclusion that Friend 2 must obey
the laws of probability, and hence that the numbers b in indefinite
truth values are actual Bayesian credible intervals.
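
To see the Dutch-book content of this conclusion concretely (my
illustration of the consequence, not Cox's derivation): if Friend 2
quotes a price b1 for "Friend 1's final probability lands in [L,U]"
and b2 for the complementary bet, with b1 + b2 != 1, the opponent can
lock in a riskless profit, since exactly one of the two promises pays $1:

def dutch_book_profit(b1: float, b2: float) -> float:
    """Opponent's guaranteed profit against incoherent complementary prices."""
    # If b1 + b2 > 1, the opponent sells both promises (collects b1 + b2,
    # pays out exactly $1); if b1 + b2 < 1, he buys both (pays b1 + b2,
    # collects exactly $1).  Either way his sure profit is:
    return abs(b1 + b2 - 1.0)

if __name__ == "__main__":
    print(dutch_book_profit(0.7, 0.4))  # 0.10 guaranteed, whatever happens
    print(dutch_book_profit(0.6, 0.4))  # 0.00: coherent prices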

All in all, it seems that the introduction of two bettors or
(equivalently) second-order statements allows the familiar betting-
or Cox-axiom-based justifications of probability to be used to
justify indefinite probabilities.

Thus, it seems that these standard justifications of probability
theory actually *do* lead to a justification for multiple-component
truth values, if one is willing to do a little fiddling.  One simply
has to add an extra level of reflection, and then one can derive
rigorously grounded confidence assessments to go with one's
rigorously grounded probability estimates.

However, none of this solves the problem that finite resources make
true probabilistic accuracy impossible, of course.  AGI systems with
finite resources will in fact not be ideally rational betting
machines: they will not fully obey Cox's axioms, and an ideal
supermind would be able to defeat them via clever betting that
exploits their weaknesses.

-- Ben



-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303

