Russell,
I'm not suggesting that an omniscient player would not win over time as a result of its superior knowledge.
I am suggesting that a non-omniscient player need not be bilked in the sense meant by de Finetti; that is, it needn't be forced to lose automatically due to
On Tue, 06 Feb 2007 09:56:10 -0500, Russell Wallace [EMAIL PROTECTED] wrote:
I'm not talking about Dutch books; I'm talking about the following (quoted from Ben's original post, emphasis added):
I think Ben is talking about Dutch books, at least implicitly. I think he wants to show that
Ben,
I read it again, but still cannot fully understand it.
Since you put it in a betting situation, and use two numbers for probability, maybe you can relate it to Walley's work? He started with a similar setting.
I hope you are not really using second-order probability, because unlike what
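For concreteness, here is a minimal sketch of what "two numbers for probability" looks like in Walley's interval-valued setting. The class and method names are invented for illustration; they come from neither Walley's book nor Ben's scheme.

from dataclasses import dataclass

@dataclass(frozen=True)
class IntervalProb:
    # Behavioral reading: `lower` is the highest rate at which you would
    # buy a $1 bet on the event; `upper` is the lowest rate at which you
    # would sell one.
    lower: float
    upper: float

    def __post_init__(self):
        if not (0.0 <= self.lower <= self.upper <= 1.0):
            raise ValueError("need 0 <= lower <= upper <= 1")

    def complement(self) -> "IntervalProb":
        # Conjugacy: lower(not A) = 1 - upper(A), upper(not A) = 1 - lower(A).
        return IntervalProb(1.0 - self.upper, 1.0 - self.lower)

# A sharp Bayesian probability is the special case lower == upper.
print(IntervalProb(0.25, 0.75).complement())  # IntervalProb(lower=0.25, upper=0.75)

The width of the interval does the second number's job: it expresses how much evidence backs the estimate, without stacking a probability on top of a probability.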
Eliezer,
The Ellsberg paradox is aimed at decision theory, not probability theory. Unlike psychologists (e.g., Tversky and Kahneman), Ellsberg didn't try to show that human decision making is not optimal, but that decision theory probably ignores a factor which should be considered.
I'm sure that
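To make that concrete, here is the standard two-color Ellsberg setup worked through numerically in Python; the urns and payoffs are the textbook ones, not anything specific from this thread.

# Urn K holds 50 red and 50 black balls; urn A holds 100 balls in an
# unknown red/black mix. A bet pays $100 if your color is drawn.
payoff = 100.0

def expected_value(p_win: float) -> float:
    return p_win * payoff

p_red_K = 0.5   # known by counting the balls
p_red_A = 0.5   # any sharp prior would do; 0.5 shown for concreteness

print(expected_value(p_red_K))      # red from K   -> 50.0
print(expected_value(p_red_A))      # red from A   -> 50.0
print(expected_value(1 - p_red_A))  # black from A -> 50.0

# Typical subjects strictly prefer urn K for *both* colors. Preferring
# K-red to A-red requires p_red_A < 0.5, while preferring K-black to
# A-black requires p_red_A > 0.5; no single number satisfies both.
# That is Ellsberg's point: a factor (ambiguity) that expected-utility
# theory ignores, not a demonstration that people miscalculate.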
Pei Wang wrote:
... in this example, there are arguments supporting the rationality of humans; that is, even if two betting cases correspond to the same expected utility, there are reasons for them to be treated differently in decision making, because the probability in one betting is
OK... I am over-busy today, but tomorrow I will try to frame my betting game suggestion more clearly.
It is different from Walley's approach but is related, as you note...
ben
On Feb 6, 2007, at 1:55 PM, Pei Wang wrote:
Ben,
I read it again, but still cannot fully understand it.
Since
On Tue, 06 Feb 2007 11:18:09 -0500, Ben Goertzel [EMAIL PROTECTED] wrote:
The scenario I described was in fact a Dutch book scenario...
The next step might then be to show how Novamente is constrained from allowing Dutch books to be made against it. This would prove Novamente's
Ben wrote:
Well, in fact, Novamente is **not** constrained from having Dutch
books made against it, because it is not a perfectly consistent
probabilistic reasoner.
It seeks to maintain probabilistic consistency, but balances this
with other virtues...
This is really a necessary
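For anyone following along, the Dutch book construction itself is tiny. The sketch below uses made-up numbers and says nothing about how Novamente actually represents probabilities; it only shows what a bookie does with an agent whose prices for A and not-A sum to more than 1.

p_A, p_not_A = 0.6, 0.6   # incoherent: the prices sum to 1.2
stake = 1.0               # each bet pays `stake` if it wins

# The bookie sells the agent both bets at the agent's own prices.
cost = (p_A + p_not_A) * stake   # the agent pays 1.20 up front

for a_is_true in (True, False):
    payout = stake               # exactly one of the two bets pays off
    print(f"A={a_is_true}: agent nets {payout - cost:+.2f}")

# Both branches print -0.20: a sure loss however A turns out. Coherence
# (prices over a partition summing to 1) is exactly what blocks this.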
gts wrote:
I understand the resources problem, but to be coherent a probabilistic reasoner need only be constrained in very simple ways: for example, it is barred from assigning a higher probability to statement 2 than to statement 1 when statement 2 is contingent on statement 1.
Is such basic
On Tue, 06 Feb 2007 16:27:22 -0500, Jef Allbright [EMAIL PROTECTED] wrote:
You would have to assume that statement 2 is *entirely* contingent on statement 1.
I don't believe so. If statement S is only partially contingent on some other statement, or contingent on any number of other
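A small numeric check of that point, with a joint distribution invented purely for illustration: S2 depends on S1 only partially, and P(S2) > P(S1) is perfectly coherent; only the conjunction is forced below P(S1).

# Joint probabilities over (S1, S2), in percent so the sums stay exact.
joint = {
    (True,  True):  20,
    (True,  False): 10,
    (False, True):  50,   # S2 holds without S1: partial contingency only
    (False, False): 20,
}

p_s1 = sum(p for (s1, _), p in joint.items() if s1)   # 30
p_s2 = sum(p for (_, s2), p in joint.items() if s2)   # 70
p_both = joint[(True, True)]                          # 20

assert p_both <= p_s1   # coherence always forces P(S1 & S2) <= P(S1)
print(p_s1, p_s2)       # 30 70 -- yet P(S2) > P(S1), quite coherently

# Only when S2 *entails* S1 -- probability 0 in the (False, True) cell --
# does coherence force P(S2) <= P(S1).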
gts wrote:
On Tue, 06 Feb 2007 16:27:22 -0500, Jef Allbright [EMAIL PROTECTED] wrote:
You would have to assume that statement 2 is *entirely* contingent on statement 1.
I
Ah, the importance of semantic precision (still context-dependent, of
course). ;-)
- Jef
-----Original Message-----
From: gts [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, February 06, 2007 2:41 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Betting and multiple-component truth values
My last
I said:
If you, the programmer ('you' being an AI, I assume), already have the concept of probability, and you can prove that a possible program will estimate probabilities more accurately than you do, you should be able to prove that it would provide an increase in utility, to a degree depending
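Mitchell's step from "estimates more accurately" to "provides more utility" can at least be illustrated by simulation, if not proved. The toy market below is invented for this post: $1 bets on an event with true probability 0.7 are offered at random prices, and an agent buys whenever its own estimate says the price is favorable.

import random

TRUE_P = 0.7       # true chance the event occurs each round
ROUNDS = 100_000

def profit(estimate: float) -> float:
    rng = random.Random(0)   # identical prices/outcomes for every agent
    total = 0.0
    for _ in range(ROUNDS):
        price = rng.random()          # a $1 bet on the event, offered at `price`
        win = rng.random() < TRUE_P
        if estimate > price:          # buy whenever the estimate says +EV
            total += (1.0 if win else 0.0) - price
    return total

print(f"accurate estimator (0.70): {profit(0.70):+.0f}")
print(f"biased estimator   (0.50): {profit(0.50):+.0f}")

# The accurate agent also takes the favorable bets priced between 0.50
# and 0.70 that the biased agent declines, so it ends up ahead; the size
# of the gap is the 'degree' the utility gain depends on.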
Consistency in the sense of de Finetti or Cox is out of reach for a
modest-resources AGI, in principle...
Sorry to be the one to break the news, but don't blame the messenger. It's a rough universe out there ;-)
Ben G
On Feb 6, 2007, at 4:10 PM, gts wrote:
I understand the resources
--- Mitchell Porter [EMAIL PROTECTED] wrote:
I said:
If you, the programmer ('you' being an AI, I assume), already have the concept of probability, and you can prove that a possible program will estimate probabilities more accurately than you do, you should be able to prove that it