Re: [agi] Optimality of probabilistic consistency

2007-02-04 Thread Pei Wang
Ben, I have no problem with any of the points you made in the following. However, the axioms of probability theory and interpretations of probability (frequentist, logical, subjective) all take a consistent probability distribution as a precondition. Therefore, this assumption is and will be

Re: [agi] Optimality of probabilistic consistency

2007-02-04 Thread gts
On Sun, 04 Feb 2007 07:52:02 -0500, Pei Wang [EMAIL PROTECTED] wrote: However, the axioms of probability theory and interpretations of probability (frequentist, logical, subjective) all take a consistent probability distribution as a precondition. Also I think the meaning of 'probabilistic

Re: [agi] Optimality of probabilistic consistency

2007-02-04 Thread Ben Goertzel
Ben, this is also why I was wondering why your hypothesis is framed in terms of both Cox and De Finetti. Unless I misunderstand Cox, their interpretations are in some ways diametrically opposed. De Finetti was a radical subjectivist while Cox is (epistemically) an ardent

Re: [agi] Optimality of probabilistic consistency

2007-02-04 Thread Pei Wang
On 2/4/07, gts [EMAIL PROTECTED] wrote: On Sun, 04 Feb 2007 07:52:02 -0500, Pei Wang [EMAIL PROTECTED] wrote: However, the axioms of probability theory and interpretations of probability (frequentist, logical, subjective) all take a consistent probability distribution as a precondition. Also

Re: [agi] Optimality of probabilistic consistency

2007-02-04 Thread Ben Goertzel
The interpretation of probability is a different matter --- we have been talking about consistency, which is largely independent of which interpretation you subscribe to. Correct. In my opinion, in the AGI context, each of the three traditional interpretations is partly applicable but partly

Re: [agi] Optimality of probabilistic consistency

2007-02-04 Thread Pei Wang
On 2/4/07, Ben Goertzel [EMAIL PROTECTED] wrote: As you know I feel differently. I think the traditional subjectivist interpretation is conceptually well-founded as far as it goes. But it still requires consistency --- actually, consistency is its only requirement. If this requirement is dropped, then
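The consistency requirement Pei points to is exactly what de Finetti's Dutch-book argument enforces: an agent whose degrees of belief are inconsistent can be sold a set of bets that loses for certain. A minimal sketch, with invented numbers:

```python
# Dutch-book sketch (numbers are invented for illustration): an agent whose
# degrees of belief in A and not-A sum to more than 1 overpays for the pair
# of bets no matter which event occurs.
p_A, p_not_A = 0.6, 0.6          # inconsistent: P(A) + P(not A) should equal 1
stake = 1.0                      # each ticket pays `stake` if its event occurs
cost = (p_A + p_not_A) * stake   # agent pays 1.2 for the two tickets
payout = stake                   # exactly one of A, not-A occurs, so payout is 1
sure_loss = cost - payout        # the agent loses 0.2 in every outcome
print(round(sure_loss, 2))
```

With consistent beliefs (p_A + p_not_A = 1) the sure loss vanishes, which is why consistency is the subjectivist's one non-negotiable requirement.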

Re: [agi] Optimality of probabilistic consistency

2007-02-04 Thread Ben Goertzel
I agree that each school has its own insights, and none has all the insights... I would say that the subjectivists are more fully correct and consistent within the scope of what they say, however. And their work leads to what I think is the right conclusion about probability and AGI:

Re: [agi] Optimality of probabilistic consistency

2007-02-04 Thread gts
A mathematical test for objectivity/subjectivity might be whether Novamente (or any AGI) could allow, in principle, for the possibility of different posterior probabilities under Bayes' rule, as can happen under subjectivism. My thought is that a programmer is essentially forced for practical
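The subjectivist possibility gts describes can be sketched numerically: two agents apply Bayes' rule to the same evidence but start from different subjective priors, and so reach different posteriors. All numbers here are hypothetical.

```python
def posterior(prior, lik_h, lik_not_h):
    # Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]
    num = lik_h * prior
    return num / (num + lik_not_h * (1 - prior))

# Both agents agree on the evidence model: P(E|H) = 0.9, P(E|~H) = 0.2,
# but hold different subjective priors for H:
agent_a = posterior(0.5, 0.9, 0.2)
agent_b = posterior(0.1, 0.9, 0.2)
print(round(agent_a, 3), round(agent_b, 3))  # → 0.818 0.333
```

Under objective/logical Bayesianism the prior itself would be fixed by the evidence, eliminating the divergence; a subjectivist system only demands that each agent update consistently.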

Re: [agi] Optimality of probabilistic consistency

2007-02-04 Thread Ben Goertzel
The definition of 'probabilistic consistency' that I was using comes from ET Jaynes' book _Probability Theory - The Logic of Science_, page 114. These are Jaynes' three 'consistency desiderata' for a probabilistic robot: 1. If a conclusion can be reasoned out in more than one way, then
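Jaynes' first desideratum (every route to a conclusion must give the same answer) can be illustrated with a toy Bayesian update: incorporating two conditionally independent pieces of evidence in either order must yield the same posterior. The likelihood values below are invented for illustration.

```python
def update(prior, lik_h, lik_not_h):
    # One Bayesian update step: P(H|E) from P(H) and the likelihoods of E.
    num = lik_h * prior
    return num / (num + lik_not_h * (1 - prior))

prior = 0.3
# Evidence E1: P(E1|H) = 0.8, P(E1|~H) = 0.3
# Evidence E2: P(E2|H) = 0.6, P(E2|~H) = 0.4
path_a = update(update(prior, 0.8, 0.3), 0.6, 0.4)  # update on E1, then E2
path_b = update(update(prior, 0.6, 0.4), 0.8, 0.3)  # update on E2, then E1
assert abs(path_a - path_b) < 1e-12  # both routes of reasoning agree
print(round(path_a, 4))
```

Probability theory satisfies the desideratum by construction here; Jaynes' point is the converse, that demanding such path-independence (plus his other desiderata) forces the product and sum rules of probability.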

Re: [agi] Optimality of probabilistic consistency

2007-02-04 Thread gts
On Sun, 04 Feb 2007 11:10:57 -0500, Pei Wang [EMAIL PROTECTED] wrote: I don't think any intelligent system (human or machine) can achieve any of the three desiderata, except in trivial cases. I have no doubt you and Ben are correct on this point. Enormous resources would be required for an

Re: [agi] Optimality of probabilistic consistency

2007-02-04 Thread Pei Wang
On 2/4/07, gts [EMAIL PROTECTED] wrote: Personally I would be inclined to allow exceptions to Jaynes' second and third desiderata. An exception to the first desideratum is known as the reference class problem --- when an individual is seen as an instance of different categories, different
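The reference class problem Pei mentions can be made concrete with a toy example (all frequencies are invented): the same individual belongs to several categories, and each category's observed frequency suggests a different probability, with no purely frequentist rule for choosing among them.

```python
# Hypothetical frequencies for illustration only: the probability assigned to
# one individual depends on which reference class we place them in.
survival_rate = {
    "smokers": 0.70,
    "marathon_runners": 0.95,
}

# An individual who is both a smoker and a marathon runner fits both classes;
# each class yields a different "probability" for the same conclusion.
for ref_class, p in survival_rate.items():
    print(f"as a member of {ref_class}: p = {p}")
```

This is precisely a case where one conclusion can be "reasoned out in more than one way" with different results, violating Jaynes' first desideratum unless extra principles pick out a unique reference class.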

Re: [agi] Optimality of probabilistic consistency

2007-02-04 Thread gts
On Sun, 04 Feb 2007 13:15:27 -0500, Pei Wang [EMAIL PROTECTED] wrote: none of the existing AGI projects is designed [according to the tenets of objective/logical Bayesianism] Hmm. My impression is that to whatever extent AGI projects use Bayesian reasoning, they usually do so in a way that

Re: [agi] Optimality of probabilistic consistency

2007-02-04 Thread Pei Wang
Actually, in AI the most influential version of Bayesianism is that of Judea Pearl, which is subjective, though not in the full historical sense of the term. All the other, more traditional versions have had much less impact. Pei On 2/4/07, gts [EMAIL PROTECTED] wrote: On Sun, 04 Feb 2007

Re: [agi] Optimality of probabilistic consistency

2007-02-04 Thread Ben Goertzel
On Feb 4, 2007, at 2:23 PM, gts wrote: On Sun, 04 Feb 2007 13:15:27 -0500, Pei Wang [EMAIL PROTECTED] wrote: none of the existing AGI projects is designed [according to the tenets of objective/logical Bayesianism] Hmm. My impression is that to whatever extent AGI projects use Bayesian

[agi] Optimality of probabilistic consistency

2007-02-03 Thread Ben Goertzel
Hi Russell, OK, I'll try to specify my ideas in this regard more clearly. Bear in mind though that there are many ways to formalize an intuition, and the style of formalization I'm suggesting here may or may not be the right one. With this sort of thing, you only know if the

Re: [agi] Optimality of probabilistic consistency

2007-02-03 Thread Russell Wallace
On 2/4/07, Ben Goertzel [EMAIL PROTECTED] wrote: Hi Russell, OK, I'll try to specify my ideas in this regard more clearly. Bear in mind though that there are many ways to formalize an intuition, and the style of formalization I'm suggesting here may or may not be the right one. With this

Re: [agi] Optimality of probabilistic consistency

2007-02-03 Thread Pei Wang
On 2/3/07, Ben Goertzel [EMAIL PROTECTED] wrote: My desire in this context is to show that, for agents that are optimal or near-optimal at achieving the goal G under resource restrictions R, the set of important implicit abstract expectations associated with the agent (in goal-context G as

Re: [agi] Optimality of probabilistic consistency

2007-02-03 Thread Ben Goertzel
Pei, First, I agree that proving stuff like this is not the maximally interesting aspect of AGI. Actually creating a thinking machine is a lot more exciting, and that is what I'm devoting the bulk of my attention to!! However, the question of how much inconsistency is inevitable in an

Re: [agi] Optimality of probabilistic consistency

2007-02-03 Thread Ben Goertzel
Okay... let's say when an agent exhibits inconsistent implicit preferences for acting in a particular situation, it is being suboptimal, and the degree of suboptimality depends on the degree of inconsistency. Given: S = a situation; I = importance (to G) of acting correctly in that

Re: [agi] Optimality of probabilistic consistency

2007-02-03 Thread Russell Wallace
On 2/4/07, Ben Goertzel [EMAIL PROTECTED] wrote: However, I'm not sure it helps with the quite hard task of coming up with a proof of the hypothesis, or a fully rigorous formulation ;-) I guess I'd better leave that to the professionals :) - This list is sponsored by AGIRI:

Re: [agi] Optimality of probabilistic consistency

2007-02-03 Thread Pei Wang
On 2/3/07, Ben Goertzel [EMAIL PROTECTED] wrote: However, the question of how much inconsistency is inevitable in an AGI is an interesting one. I don't think an AGI system will try to answer this question. Instead, it will try its best to resolve inconsistency, no matter how much it runs

Re: [agi] Optimality of probabilistic consistency

2007-02-03 Thread Ben Goertzel
I must agree with Pei on this. Think of a reasonably large AI, say, eight light hours across. Any belief frame guaranteed to be globally consistent must be at least eight hours out of date. So if you only act on globally consistent knowledge, your reaction time is never less than your

Re: [agi] Optimality of probabilistic consistency

2007-02-03 Thread Ben Goertzel
Again, to take consistency as an ultimate goal (which is never fully achievable) and as a precondition (even an approximate one) are two very different positions. I hope you are not suggesting the latter --- at least your posting makes me feel that way. Hi, In the Novamente system,