A twist that interests me more is to broaden Cox's arguments to handle multiple-component probabilities.

I believe one can formulate an addition to Cox's axioms that places strong constraints on any method of measuring plausibility using intervals.

But this is work in progress on my part, so I won't speculate about it in further detail at the moment.

-- Ben

On Feb 2, 2007, at 4:44 PM, Pei Wang wrote:

Yes, that will work, though in the AGI context the condition is almost
never satisfied --- the beliefs cannot be assumed to be based on
"equivalent amounts of evidence", except in special cases.

This is exactly the problem with probability theory, which works only
when all beliefs are evaluated against the same body of evidence.
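
To make this concrete: two beliefs can carry the same probability
while resting on very different amounts of evidence, and they react
very differently to the same new data. A minimal sketch in Python
(illustrative only --- this is not code from NARS or Novamente),
using Beta-distribution bookkeeping:

    # Two beliefs, both with probability 0.8, backed by different
    # amounts of evidence. Represent each as a Beta posterior,
    # i.e. a (positive, negative) observation count.

    def posterior_mean(pos, neg):
        """Mean of a Beta(pos, neg) posterior."""
        return pos / (pos + neg)

    weak = (8, 2)        # 8 of 10 positive observations  -> 0.8
    strong = (800, 200)  # 800 of 1000 positive           -> 0.8

    assert posterior_mean(*weak) == posterior_mean(*strong) == 0.8

    # Both now observe the same 10 new negative examples:
    weak = (weak[0], weak[1] + 10)
    strong = (strong[0], strong[1] + 10)

    print(posterior_mean(*weak))    # 0.4    -- swings wildly
    print(posterior_mean(*strong))  # ~0.79  -- barely moves

A single number per belief cannot distinguish the two cases; the
update behavior depends on the evidence count, which the one-number
representation has already thrown away.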

Pei

On 2/2/07, Ben Goertzel <[EMAIL PROTECTED]> wrote:

Pei,

I wonder if Cox's Assumption 1 could be salvaged by replacing it
with, say, an assumption that

"Among a set of statements supported by equivalent amounts of
evidence, the relative plausibility of an individual
statement may be assessed by a single real number."

Based on this modified assumption, I think a variation on Cox's
arguments could probably be made to work.
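
In code terms the restriction might look like this --- a hypothetical
sketch of the modified assumption, not anything implemented in
Novamente: a single real-valued plausibility, comparable only within
a class of statements backed by the same amount of evidence.

    # Hypothetical sketch: plausibility is one real number, but
    # comparisons are licensed only within an equal-evidence class.

    class Belief:
        def __init__(self, statement, plausibility, evidence_count):
            self.statement = statement
            self.plausibility = plausibility  # single real number
            self.evidence_count = evidence_count

    def more_plausible(a, b):
        if a.evidence_count != b.evidence_count:
            raise ValueError("incomparable: unequal evidence")
        return a.plausibility > b.plausibility

A Cox-style functional-equation argument would then run within each
equal-evidence class, leaving open how the classes relate.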

Would this modification address your objection?

-- Ben

On Feb 2, 2007, at 4:13 PM, Pei Wang wrote:

> Ben,
>
> To me, not only Assumption 3 but also Assumption 1 is too strong,
> since Assumption 1 takes for granted that a single real number is
> enough for the "plausibility of a statement". For this reason, these
> assumptions do not even hold approximately in the AGI context ---
> using one number or two numbers makes a huge difference, which I'm
> sure you know well.
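>
> For instance, a two-number representation in the style of the NARS
> (frequency, confidence) pair --- a minimal sketch following the
> published NARS definitions with evidential horizon k, shown here
> purely as an illustration:
>
>     # w_plus: positive evidence, w: total evidence, k: horizon
>     def truth_value(w_plus, w, k=1):
>         frequency = w_plus / w    # proportion of positive evidence
>         confidence = w / (w + k)  # grows with amount of evidence
>         return frequency, confidence
>
>     print(truth_value(8, 10))      # (0.8, ~0.909)
>     print(truth_value(800, 1000))  # (0.8, ~0.999)
>
> The second number records exactly what a single probability throws
> away: how much evidence stands behind the first.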
>
> The Halpern vs. Snow debate is largely irrelevant to this issue. I
> mentioned them just to show that Cox's work is well known to the UAI
> community.
>
> Pei
>
> On 2/2/07, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>>
>> The paper Pei forwarded claims that Cox's arguments don't work for
>> the discrete case, but the attached paper from Snow in 2002 [which
>> will come through if this listserver allows attachments...] presents
>> a counterargument, suggesting that a variant of Cox's argument does
>> in fact work for the discrete case.
>>
>> However, my contention is that Cox's assumptions, while reasonable,
>> are too strong to be viably assumed for a finite-resources AI system
>> (or a human brain).
>>
>> To see why, look at Assumption 3 in
>>
>> http://en.wikipedia.org/wiki/Cox's_theorem
>>
>> which states basically that
>>
>> "
>> Suppose [A & B] is equivalent to [C & D]. If we acquire new
>> information A and then acquire further new information B, and update
>> all probabilities each time, the updated probabilities will be the
>> same as if we had first acquired new information C and then acquired
>> further new information D.
>> "
>>
>> This is not exactly the case in Novamente, nor in the human brain.
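>>
>> For reference, here is the assumption in the simplest setting where
>> it provably holds: plain Bayesian conditioning on a finite
>> hypothesis space (a minimal sketch, not Novamente's actual update
>> procedure):
>>
>>     # Priors over hypotheses; evidence A and B each supply a
>>     # per-hypothesis likelihood. A Bayesian update multiplies in
>>     # the likelihood and renormalizes, so update order cannot
>>     # matter: the same factors get multiplied either way.
>>     priors = {"h1": 0.5, "h2": 0.3, "h3": 0.2}
>>     lik_A = {"h1": 0.9, "h2": 0.4, "h3": 0.1}
>>     lik_B = {"h1": 0.2, "h2": 0.8, "h3": 0.5}
>>
>>     def update(dist, lik):
>>         unnorm = {h: p * lik[h] for h, p in dist.items()}
>>         z = sum(unnorm.values())
>>         return {h: p / z for h, p in unnorm.items()}
>>
>>     ab = update(update(priors, lik_A), lik_B)
>>     ba = update(update(priors, lik_B), lik_A)
>>     assert all(abs(ab[h] - ba[h]) < 1e-12 for h in priors)
>>
>> Any update rule that is not equivalent to multiplying in a
>> likelihood and renormalizing can break this order-independence,
>> which is exactly the situation in a resource-bounded system.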
>>
>> So one question is: if this assumption holds only approximately in
>> an AI system (or other mind), how inaccurate an approximation of
>> probabilistic correctness do its judgments constitute?  I.e., how
>> wide are the error bars on the conclusion of Cox's Theorem when its
>> assumptions are only approximately satisfied?
>>
>> -- Ben
>>
>>
>>
>>
>>
>> On Feb 2, 2007, at 2:39 PM, Pei Wang wrote:
>>
>> >> > I don't know of any work explicitly addressing this sort of
>> >> > issue, do you?
>> >>
>> >> No, none that address Cox and AI directly, but I suspect one is
>> >> forthcoming perhaps from you. Yes? :)
>> >
>> > There is a literature on Cox and AI. For example,
>> > http://www.cs.cornell.edu/home/halpern/papers/cox1.pdf
>> >
>> > Pei
>> >
>> > -----
>> > This list is sponsored by AGIRI: http://www.agiri.org/email
>> > To unsubscribe or change your options, please go to:
>> > http://v2.listbox.com/member/?list_id=303
>>
>> -----
>> This list is sponsored by AGIRI: http://www.agiri.org/email
>> To unsubscribe or change your options, please go to:
>> http://v2.listbox.com/member/?list_id=303
>>
>>
>>
>>
>
> -----
> This list is sponsored by AGIRI: http://www.agiri.org/email
> To unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/?list_id=303
