[agi] [EMAIL PROTECTED]: [Comp-neuro] ISIPTA '07 Second Call for Papers]

2007-02-06 Thread Eugen Leitl
- Forwarded message from Alessandro Antonucci [EMAIL PROTECTED] - From: Alessandro Antonucci [EMAIL PROTECTED] Date: Tue, 6 Feb 2007 11:56:58 +0100 To: undisclosed-recipients: ; Subject: [Comp-neuro] ISIPTA '07 Second Call for Papers Reply-To: [EMAIL PROTECTED]

Re: [agi] Betting and multiple-component truth values

2007-02-06 Thread gts
Russell, I'm not suggesting that an omniscient player would not win over time as a result of its superior knowledge. I am suggesting that a non-omniscient player need not necessarily be bilked in the sense meant by De Finetti; that is, it needn't be forced to lose automatically due to
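For concreteness, a minimal sketch of the bilking argument under discussion, in the standard De Finetti setup where an agent posts betting quotients and must accept bets on either side; the event and the numbers here are hypothetical (Python):

    # A minimal Dutch-book illustration (hypothetical numbers).
    # The agent posts betting quotients p(A) = 0.6 and p(not-A) = 0.6,
    # i.e. it will pay q for a ticket worth 1 if the event occurs.
    # Because the quotients sum to more than 1, an opponent can sell
    # the agent both tickets and lock in a sure profit.

    quotients = {"A": 0.6, "not-A": 0.6}

    cost = sum(quotients.values())      # agent pays 1.2 for both tickets
    for outcome in ("A happens", "A fails"):
        payoff = 1.0                    # exactly one ticket pays off
        print(f"{outcome}: agent nets {payoff - cost:+.2f}")
    # Either way the agent loses 0.2: it is "bilked" in De Finetti's sense.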

Re: [agi] Betting and multiple-component truth values

2007-02-06 Thread gts
On Tue, 06 Feb 2007 09:56:10 -0500, Russell Wallace [EMAIL PROTECTED] wrote: I'm not talking about Dutch books, I'm talking about the following (quoted from Ben's original post, emphasis added): I think Ben is talking about Dutch books, at least implicitly. I think he wants to show that

Re: [agi] Betting and multiple-component truth values

2007-02-06 Thread Pei Wang
Ben, I read it again, but still cannot fully understand it. Since you put it in a betting situation and use two numbers for probability, maybe you can relate it to Walley's work? He started with a similar setting. I hope you are not really using second-order probability, because unlike what
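For readers unfamiliar with Walley's setting: he replaces a single probability with a lower/upper pair, read as the agent's maximum buying price and minimum selling price for a unit bet on the event. A rough sketch, with hypothetical numbers:

    # Imprecise (interval) probability in Walley's betting reading:
    # lower = highest price the agent will pay for a ticket worth 1 if A,
    # upper = lowest price at which it will sell that ticket.
    # A sharp Bayesian has lower == upper; the gap encodes indeterminacy.

    lower, upper = 0.3, 0.7   # hypothetical assessment of event A

    def act_on_offer(price):
        """Decide what to do when offered the ticket at a given price."""
        if price < lower:
            return "buy"      # cheaper than the agent's minimum valuation
        if price > upper:
            return "sell"     # dearer than the agent's maximum valuation
        return "abstain"      # inside the interval: no bet is compelled

    for price in (0.2, 0.5, 0.8):
        print(price, act_on_offer(price))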

Re: [agi] Betting and multiple-component truth values

2007-02-06 Thread Pei Wang
Eliezer, the Ellsberg paradox is aimed at decision theory, not probability theory. Unlike psychologists (e.g., Tversky and Kahneman), Ellsberg didn't try to show that human decision making is not optimal, but that decision theory probably ignores a factor which should be considered. I'm sure that
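The canonical Ellsberg urn makes the point concrete: 90 balls, 30 known to be red, the other 60 black or yellow in unknown proportion. The typical preference pattern (red over black, but black-or-yellow over red-or-yellow) cannot be reproduced by any single probability assignment, as this sketch checks:

    # Ellsberg's single-urn comparison: 30 red balls; the other 60 split
    # between black and yellow in an unknown proportion.
    # Bet I:   win 100 if red          Bet II: win 100 if black
    # Bet III: win 100 if red/yellow   Bet IV: win 100 if black/yellow
    # Typical choices are I over II and IV over III.
    #
    # If one subjective p_black explained the choices:
    # I > II   requires  P(red) > P(black),  i.e.  1/3 > p_black
    # IV > III requires  P(black) + P(yellow) > P(red) + P(yellow),
    #          i.e.  p_black > 1/3 -- a contradiction.

    for p_black in (0.2, 1/3, 0.5):
        pref_1 = 1/3 > p_black        # prefers Bet I to Bet II
        pref_2 = p_black > 1/3        # prefers Bet IV to Bet III
        print(f"p_black={p_black:.3f}: I>II={pref_1}, IV>III={pref_2}")
    # No value of p_black makes both True: the common pattern is
    # inconsistent with expected utility under any one probability.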

RE: [agi] Betting and multiple-component truth values

2007-02-06 Thread Jef Allbright
Pei Wang wrote: ... in this example, there are arguments supporting the rationality of humans; that is, even if two betting cases correspond to the same expected utility, there are reasons to treat them differently in decision making, because the probability in one bet is

Re: [agi] Betting and multiple-component truth values

2007-02-06 Thread Ben Goertzel
OK... I am over-busy today, but tomorrow I will try to frame my betting game suggestion more clearly. It is different from Walley's approach, but related, as you note... ben On Feb 6, 2007, at 1:55 PM, Pei Wang wrote: Ben, I read it again, but still cannot fully understand it. Since

Re: [agi] Betting and multiple-component truth values

2007-02-06 Thread gts
On Tue, 06 Feb 2007 11:18:09 -0500, Ben Goertzel [EMAIL PROTECTED] wrote: The scenario I described was in fact a Dutch book scenario... The next step might then be to show how Novamente is constrained from allowing Dutch books to be made against it. This would prove Novamente's

RE: [agi] Consistency: Values versus goals

2007-02-06 Thread Jef Allbright
Ben wrote: Well, in fact, Novamente is **not** constrained from having Dutch books made against it, because it is not a perfectly consistent probabilistic reasoner. It seeks to maintain probabilistic consistency, but balances this with other virtues... This is really a necessary

RE: [agi] Betting and multiple-component truth values

2007-02-06 Thread Jef Allbright
gts wrote: I understand the resources problem, but to be coherent a probabilistic reasoner need only be constrained in very simple ways, for example from assigning a higher probability to statement 2 than to statement 1 when statement 2 is contingent on statement 1. Is such basic
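The constraint gts describes is the monotonicity rule of probability: if statement 2 cannot be true unless statement 1 is, coherence requires P(S2) <= P(S1). A minimal checker, with hypothetical statements and numbers:

    # Monotonicity check: if s2 entails s1 (s2 is "contingent on" s1 in
    # the strong sense that s2 cannot hold without s1), coherence
    # requires P(s2) <= P(s1). Names and numbers are hypothetical.

    beliefs = {
        "it rains tomorrow": 0.30,
        "it rains tomorrow and the game is cancelled": 0.45,  # incoherent
    }
    entailments = [("it rains tomorrow and the game is cancelled",
                    "it rains tomorrow")]

    for s2, s1 in entailments:
        if beliefs[s2] > beliefs[s1]:
            print(f"incoherent: P({s2!r}) = {beliefs[s2]} "
                  f"> P({s1!r}) = {beliefs[s1]}")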

Re: [agi] Betting and multiple-component truth values

2007-02-06 Thread gts
On Tue, 06 Feb 2007 16:27:22 -0500, Jef Allbright [EMAIL PROTECTED] wrote: You would have to assume that statement 2 is *entirely* contingent on statement 1. I don't believe so. If statement S is only partially contingent on some other statement, or contingent on any number of other
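gts's objection can be made concrete: when S2 depends on S1 only partially, say S2 = (S1 and A) or B, a coherent assignment can put P(S2) above P(S1). A worked example under a hypothetical joint distribution with S1, A, B independent:

    from itertools import product

    # S2 = (S1 and A) or B: S2 is only *partially* contingent on S1,
    # because the B route makes S2 true without S1. Take S1, A, B
    # independent with P(S1)=0.2, P(A)=0.5, P(B)=0.6 (hypothetical):

    p = {"S1": 0.2, "A": 0.5, "B": 0.6}

    def prob(event):
        """Sum the joint probability of the worlds where `event` holds."""
        total = 0.0
        for s1, a, b in product([True, False], repeat=3):
            w = ((p["S1"] if s1 else 1 - p["S1"])
                 * (p["A"] if a else 1 - p["A"])
                 * (p["B"] if b else 1 - p["B"]))
            if event(s1, a, b):
                total += w
        return total

    p_s1 = prob(lambda s1, a, b: s1)
    p_s2 = prob(lambda s1, a, b: (s1 and a) or b)
    print(f"P(S1) = {p_s1:.2f}, P(S2) = {p_s2:.2f}")   # 0.20 vs 0.64
    # P(S2) > P(S1), yet the assignment is perfectly coherent.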

RE: [agi] Betting and multiple-component truth values

2007-02-06 Thread Jef Allbright
gts wrote: On Tue, 06 Feb 2007 16:27:22 -0500, Jef Allbright [EMAIL PROTECTED] wrote: You would have to assume that statement 2 is *entirely* contingent on statement 1. I

RE: [agi] Betting and multiple-component truth values

2007-02-06 Thread Jef Allbright
Ah, the importance of semantic precision (still context-dependent, of course). ;-) - Jef

[agi] Re: Optimality of using probability

2007-02-06 Thread Mitchell Porter
I said: If you, the programmer ('you' being an AI, I assume), already have the concept of probability, and you can prove that a possible program will estimate probabilities more accurately than you do, you should be able to prove that it would provide an increase in utility, to a degree depending
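One way to cash out this claim: under a proper scoring rule, expected score is maximized by reporting the true chance, so a program whose estimates are closer to the truth has provably higher expected score, and score can stand in for utility. A sketch using the (negated) Brier score; all numbers are illustrative:

    # If a candidate program's probability estimate is closer to the true
    # chance than your own, its expected Brier score (negated here, so
    # higher is better) is at least as good -- so delegating estimation
    # to it weakly increases expected utility.

    true_p = 0.7                          # true chance of the event
    my_estimate, program_estimate = 0.5, 0.68

    def expected_brier(estimate, p):
        """Expected negated Brier score of `estimate` when P(event) = p."""
        return -(p * (1 - estimate) ** 2 + (1 - p) * estimate ** 2)

    print("me:     ", expected_brier(my_estimate, true_p))
    print("program:", expected_brier(program_estimate, true_p))
    # The program's score dominates whenever |estimate - p| is smaller,
    # since the expected score is maximized at estimate == p.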

Re: [agi] Betting and multiple-component truth values

2007-02-06 Thread Ben Goertzel
Consistency in the sense of de Finetti or Cox is out of reach for a modest-resources AGI, in principle... Sorry to be the one to break the news. But, don't blame the messenger. It's a rough universe out there ;-) Ben G On Feb 6, 2007, at 4:10 PM, gts wrote: I understand the resources

[agi] Re: Optimality of using probability

2007-02-06 Thread Tom McCabe
--- Mitchell Porter [EMAIL PROTECTED] wrote: I said: If you, the programmer ('you' being an AI, I assume), already have the concept of probability, and you can prove that a possible program will estimate probabilities more accurately than you do, you should be able to prove that it