Re: [agi] My proposal for an AGI agenda

2007-03-20 Thread gts

On Tue, 20 Mar 2007 15:29:06 -0400, Ben Goertzel [EMAIL PROTECTED] wrote:

Well, **anything** can be dealt with in C++, it's just a matter of how  
awkward it is.


nod :-)

I don't want to become deeply involved in these language wars, because I  
cannot say honestly that my very limited experience in AI or AGI gives me  
qualification to speak authoritatively on this subject, but I'd guess  
C++ still rocks in this context.


-gts


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Priors and indefinite probabilities

2007-02-15 Thread gts

On Wed, 14 Feb 2007 18:03:41 -0500, Ben Goertzel [EMAIL PROTECTED] wrote:

Indeed, that is a cleaner and simpler argument than the various more  
concrete PI paradoxes... (wine/water, etc.)


Yes.

It seems to show convincingly that the PI cannot be consistently applied  
across the board, though it can be applied heuristically to certain cases  
and not others, as judged contextually appropriate.


Cox addresses exactly the sort of cases in which it might legitimately be  
applied, and in his view they are rare and exceptional.


Such cases exist for example in certain games of chance in which the  
necessary conditions for applying the PI are prescribed by the rules of  
the game or result from the design of the equipment.


Those necessary conditions are in fact what the PI asks us to assume: not  
only must the possibilities be mutually exclusive and exhaustive, but they  
must also be *known a priori to be equiprobable*.


We can say with confidence, for example, that each card in a shuffled deck  
is equally likely to be drawn, but this is because in this trivial case  
equiprobability is prescribed by the rules of the game or results from the  
design of the equipment. The rest of the world is seldom so accommodating.


The principle asks us to assume equiprobability when we have no a priori  
evidence of equiprobability -- that is its very function. So one might  
ask: what good is the PI if it can be invoked only when the possibilities  
are known a priori to be equiprobable?


Cox writes of it only in a rhetorical sense, as if to say, "You can invoke  
the PI, but only if you already know that what it prescribes is true."


-gts





Re: [agi] Priors and indefinite probabilities

2007-02-15 Thread gts

LEADING TO THE ONLY THING REALLY INTERESTING ABOUT THIS DISCUSSION:


What interests me is that the Principle of Indifference is taken for  
granted by so many people as a logical truth when in reality it is  
fraught with logical difficulties.


Gillies (2000) makes an analogy between the situation in probability  
theory concerning the Principle of Indifference and the situation that  
once existed in set theory concerning the Axiom of Comprehension.


Like the Principle of Indifference, the Axiom of Comprehension seemed  
logical and intuitively obvious. That axiom states that all things which  
share a property form a set. What could be more logical and intuitively  
obvious? But the Axiom of Comprehension led to the Russell Paradox, and a  
crisis in set theory.


Similarly, the Principle of Indifference (and its predecessor, the Principle  
of Insufficient Reason) led to numerous difficulties (e.g., the Bertrand  
Paradoxes, and arguments such as Cox's). Subsequently we saw a schism in
probability theory. The classical theory was discredited, including the  
classical interpretation of Bayes' Theorem, and replaced with at least  
four different alternative interpretations.


Among bayesians, one might say De Finetti and Ramsey and the subjectivists  
helped rescue bayesianism from the jaws of (philosophical) death, by  
separating bayesianism from that albatross around its neck which is the  
Principle of Indifference.


-gts



Re: [agi] Priors and indefinite probabilities

2007-02-15 Thread gts

On Thu, 15 Feb 2007 11:21:25 -0500, Ben Goertzel [EMAIL PROTECTED] wrote:


I think it's been a pretty long time since the PI was taken by any
serious thinkers as a logical truth, though...


Objective bayesianism stands or falls (vs subjective bayesianism) on this  
question of whether the PI is a valid logical principle. And as far as I  
can tell objective bayesians certainly try to defend it as such. The PI is  
a main tenet of objective bayesianism; perhaps even its defining  
characteristic.


Concerning physical entropy, the PI works well as a heuristic in certain  
applications in the physical sciences, which is why some physicists, such  
as Jaynes, were so fond of it. (Interestingly, though, Cox is a physicist  
and he is apparently not so fond of it.)


Jaynes points out, accurately, that physicists have used the PI on numerous  
occasions to make accurate predictions, but Gillies points out that this  
heuristic success in no way establishes the PI as a logical principle; if  
the PI were a logical truth, then no empirical measurements would be needed  
to establish the veracity of the related hypotheses.


One might ask why objective bayesianism is still attractive to many. This  
I think is a very interesting question. I believe it has something to do  
with the sociology of science, where pragmatic considerations often take  
precedence over philosophy. Scientists, especially natural scientists,  
have a strong need to communicate mathematical ideas in an objective  
manner. Objective bayesianism offers the hope that a scientist can show  
his colleagues that a hypothesis is true at some *objective* level of  
credibility. That hope of objectivity is not present under subjective  
bayesianism, even if subjective bayesianism might have a more solid  
philosophical footing.


For the same reason I think it's still true that most natural scientists  
eschew bayesianism whenever possible, preferring to think and communicate  
in terms of objectivist interpretations.


-gts



Re: [agi] Priors and indefinite probabilities

2007-02-15 Thread gts

On Thu, 15 Feb 2007 12:21:22 -0500, Ben Goertzel [EMAIL PROTECTED] wrote:

As I see it, science is about building **collective** subjective  
understandings among a group of rational individuals coping with a  
shared environment


That is consistent with the views of de Finetti and other subjectivists.  
In their view our posteriors all converge in the end anyway, so it  
shouldn't matter if there are no 'objective' probabilities.



However, my view is not the most common one, I would suppose...


I'm quite sure you're correct about that.

A minority subjectivist, attempting to communicate his bayesian  
conclusions to a non-subjectivist colleague in the majority, could be met  
with the disconcerting response that his numbers are mere statements about  
his psychology. :/ Thus there exists a strong disincentive to be  
subjectivist in the natural sciences, no matter the philosophical  
consequences.


-gts



Re: [agi] Priors and indefinite probabilities

2007-02-15 Thread gts

So none of this is very new ;-)


No. :)

Also your idea of collective subjective understandings sounds similar to  
something I read about an 'inter-subjective' interpretation of probability  
theory, which purports to stand somewhere between objective bayesianism  
and subjective bayesianism. Lots of people with different ideas...


By the way, did Lakatos take a stand on these questions? I.e., did he  
endorse any particular interpretation separate from any observations he  
may have made about their development?


PS I've been getting multiple copies of your posts. Not sure if the  
problem is here or there but thought I would bring it to your attention.


-gts



Re: [agi] Enumeration of useful genetic biases for AGI

2007-02-14 Thread gts

On Tue, 13 Feb 2007 21:28:53 -0500, Ben Goertzel [EMAIL PROTECTED] wrote:

Toward that end, it would be interesting to have a systematic list  
somewhere of the genetic biases that are thought to be most important for  
structuring human cognition.


Does anyone know of a well-thought-out list of this sort? Of course I  
could make one by surveying the cognitive psych literature, but why  
reinvent the wheel?


Your email acquaintance mentioned Kant. You may want to look at Kant's  
categories, in his Critique of Pure Reason.


These are the 'Categories of the Understanding' by which Kant thought the  
mind structures cognition:


Quantity
*Unity
*Plurality
*Totality

Quality
*Reality
*Negation
*Limitation

Relation
*Inherence and Subsistence (substance and accident)
*Causality and Dependence (cause and effect)
*Community (reciprocity)

Modality
*Possibility
*Existence
*Necessity

-gts



Re: [agi] Priors and indefinite probabilities

2007-02-14 Thread gts

Tying together recent threads on indefinite probabilities and prior
distributions (PI, maxent, Occam)...


For those who might not know, the PI (the principle of indifference)  
advises us, when confronted with n mutually exclusive and exhaustive  
possibilities, to assign probabilities of 1/n to each of them.
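
For concreteness, the PI's prescription is purely mechanical; here is a  
minimal sketch (the function name is mine, not from the thread):

```python
# Minimal sketch of the principle of indifference: given n mutually
# exclusive and exhaustive possibilities, assign probability 1/n to each.
def indifference_prior(possibilities):
    n = len(possibilities)
    return {p: 1.0 / n for p in possibilities}

# e.g. two possibilities each receive 1/2, and the masses sum to 1
prior = indifference_prior(["red", "black"])
```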


In his book _The Algebra of Probable Inference_, R.T. Cox presents a  
convincing disproof of the PI when n = 2. I'm confident his argument  
applies for greater values of n, though of course the formalism would be  
more complicated.


His argument is by reductio ad absurdum; Cox shows that the PI leads to an  
absurdity. (Not just an absurdity in his view, but a monstrous absurdity  
:-)


The following quote is verbatim from his book, except that in the interest  
of clarity I have used the symbol & to mean 'and' instead of the dot  
used by Cox. The symbol v means 'or' in the sense of and/or.


Also there is an axiom used in the argument, referred to as Eq. (2.8 I).  
That axiom is


(a v ~a) & b = b.

Cox writes, concerning two mutually exclusive and exhaustive propositions  
a and b...

==
...it is supposed that

a | a v ~a = 1/2

for arbitrary meanings of a.

In disproof of this supposition, let us consider the probability of the  
conjunction a & b on each of the two hypotheses, a v ~a and b v ~b. We have


a & b | a v ~a = (a | a v ~a)[b | (a v ~a) & a]

By Eq (2.8 I), (a v ~a) & a = a and therefore

a & b | a v ~a = (a | a v ~a) (b | a)

Similarly

a & b | b v ~b = (b | b v ~b) (a | b)

But, also by Eq. (2.8 I), a v ~a and b v ~b are each equal to  
(a v ~a) & (b v ~b) and each is therefore equal to the other.


Thus

a & b | b v ~b = a & b | a v ~a

and hence

(a | a v ~a) (b | a) = (b | b v ~b) (a | b)

If then a | a v ~a and b | b v ~b were each equal to 1/2, it would follow  
that b | a = a | b for arbitrary meanings of a and b.


This would be a monstrous conclusion, because b | a and a | b can have any  
ratio from zero to infinity.


Instead of supposing that a | a v ~a = 1/2, we may more reasonably  
conclude, when the hypothesis is the truism, that all probabilities are  
entirely undefined except those of the truism itself and its  
contradictory, the absurdity.


This conclusion agrees with common sense and might perhaps have been  
reached without formal argument, because the knowledge of a probability,  
though it is knowledge of a particular and limited kind, is still  
knowledge, and it would be surprising if it could be derived from the  
truism, which is the expression of complete ignorance, asserting nothing.

===
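
Cox's "monstrous conclusion" is easy to check with toy numbers (mine,  
purely illustrative). The product rule gives P(a)P(b|a) = P(b)P(a|b) =  
P(a & b), so if the PI forced a | a v ~a and b | b v ~b each to equal 1/2  
for arbitrary a and b, it would force b | a = a | b:

```python
# Toy joint probabilities, chosen only for illustration.
p_a, p_b, p_a_and_b = 0.8, 0.2, 0.1

# The two conditionals, from the product rule P(a & b) = P(a) P(b|a):
p_b_given_a = p_a_and_b / p_a   # 0.125
p_a_given_b = p_a_and_b / p_b   # 0.5

# They differ by a factor of 4; by the product rule they could be equal
# only if P(a) = P(b), which no a priori principle can guarantee.
ratio = p_a_given_b / p_b_given_a
```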

-gts






Re: [agi] Betting and multiple-component truth values

2007-02-12 Thread gts
On Sat, 10 Feb 2007 21:40:28 -0500, Benjamin Goertzel [EMAIL PROTECTED]  
wrote:


Sorry it took me so long to get to this message...


About the Principle of Indifference and probability theory...

The question is what an AGI system should do when the data available
to it appears to support multiple contradictory conclusions.

It has to decide, somehow.


Yes, I understand.


The PI is one way to decide...


Yes, and there is nothing particularly wrong with the PI, I think,  
provided that one understands it is not some kind of a priori 'logical  
truth' handed down to us from the heavens. That is, I think there is no  
logical sin in using some other method, just as subjective bayesians have  
been telling the world since about 1926, much to the chagrin of objective  
bayesians.



I note that the Occam prior connects more closely to neuroscience than
the PI, in that there are plausible arguments the brain uses an
energy minimization heuristic in some cases.  Read Montague makes an
argument in this direction in:


I would need to learn more about Montague's idea to understand what he  
means about the neuroscience connection, but it sounds reasonable.



However, when multiple choices seem to have roughly equivalent
complexity, then the Occam prior basically degenerates to the PI.


This goes back to my earlier idea that equivalent complexity (or  
equivalent information) takes on a different practical meaning in the  
special case in which there is no information at all, i.e., when one is in  
a state of total ignorance, which is the case when the PI might be  
invoked. Under such special circumstances I think one might say, "All bets  
are off; think as thou wilt, within the bounds of reason." This at least
seems to me a reasonable position for humans to take here (and it is  
consistent with Cox, I think). What this idea might mean for AGI is a  
different question, of course, and I understand that is the question on  
your mind.


I'll need also to read the paper by Zurek and others... thanks.


And, just as with the PI, these more sophisticated approaches must be
applied correctly and intelligently to be useful.


Yes.

-gts



Re: [agi] conjunction fallacy

2007-02-12 Thread gts
On Sun, 11 Feb 2007 11:41:31 -0500, Richard Loosemore [EMAIL PROTECTED]  
wrote:


P.S.  This isn't the first time this topic has come up.  For a now  
famous example, see my essay at http://sl4.org/archive/0605/14748.html  
and the follow-up at http://sl4.org/archive/0605/14773.html.


The link to your essay didn't work.

I read the thread at the second link. In general I found myself in  
agreement with your detractors, for example Eliezer and Ben.


-gts



Re: [agi] Betting and multiple-component truth values

2007-02-11 Thread gts
Yesterday I received from amazon.com a copy of Cox's book _The Algebra of  
Probable Inference_. (Thanks for the recommendation, Ben.)


In his preface Cox expresses his indebtedness to Keynes, and Keynes'  
influence is obvious throughout. For this reason I was expecting to find  
somewhere within the text a Keynesian-like attempt to rehabilitate the  
Principle of Indifference.


However in this respect Cox breaks clearly from Keynes. Cox offers a  
strong and clear argument against the principle, starting at the bottom of  
page 31 and extending to about the middle of page 33 (in my paperback  
edition).


Briefly, his argument is that the conditions necessary for applying the  
principle of indifference are exceptional and rare. They are present,  
for example, only in such trivial cases as certain games of chance in which  
the necessary conditions are prescribed by the rules of the game or  
result from the design of the equipment.


Cox offers a formal disproof of the principle in the case in which there  
exist two mutually exclusive outcomes and nothing else is known. In such  
situations the principle prescribes that we assign prior probabilities of  
.5 to each outcome. Cox shows this to be absurd and unfounded, and writes  
this about his own conclusion:


This conclusion agrees with common sense and might perhaps have been  
reached without formal argument, because the knowledge of a probability,  
though it is knowledge of a particular and limited kind, is still  
knowledge, and it would be surprising if it could be derived from the  
truism, which is the expression of complete ignorance, asserting nothing.


Indeed!

-gts




Re: [agi] conjunction fallacy

2007-02-10 Thread gts

Eliezer offered this apparent example of the fallacy:

**
Two independent sets of professional analysts at the Second International  
Congress on Forecasting were asked to rate, respectively, the probability  
of "A complete suspension of diplomatic relations between the USA and the  
Soviet Union, sometime in 1983" or "A Russian invasion of Poland, and a  
complete suspension of diplomatic relations between the USA and the Soviet  
Union, sometime in 1983."  The second set of analysts responded with  
significantly higher probabilities.

**

I think it's worth noting here that while this example suggests very  
strongly that humans are susceptible to the conjunction fallacy (and I  
agree that they are) no individual analyst in this example can be *proved*  
to have actually committed the fallacy. This is another way of saying that  
no analyst was incoherent in the De Finetti sense, i.e., no analyst made  
himself susceptible to a dutch book.


I think it was Pei who pointed out that the situation would be much  
different if the analysts in either group were asked to assign  
probabilities to *both* hypotheses.


In that case some fraction of the analysts would likely have committed the  
fallacy in a provable way; some of them would have failed to satisfy the  
coherency constraint and thus made themselves vulnerable to a dutch book.


I fail to see why an AGI must be so vulnerable, even with modest  
resources. An AGI could atomize the second hypothesis into its two  
constituent hypotheses:


A: suspension of relations

and

B: invasion of Poland

and then apply coherent probabilistic reasoning in such a way that the  
constraint


P(A) >= P(A & B) <= P(B)

is not violated.
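
The coherency check itself is trivial to state in code (a sketch; the  
function name is mine):

```python
def violates_conjunction_rule(p_a, p_b, p_a_and_b):
    """True if the assignment breaks P(A & B) <= min(P(A), P(B))."""
    return p_a_and_b > min(p_a, p_b)

# The analysts' pattern: pricing the conjunction above one of its conjuncts.
fallacy = violates_conjunction_rule(p_a=0.10, p_b=0.30, p_a_and_b=0.25)

# A coherent assignment keeps the conjunction at or below both conjuncts.
coherent = not violates_conjunction_rule(p_a=0.10, p_b=0.30, p_a_and_b=0.05)
```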

-gts




Re: [agi] conjunction fallacy

2007-02-10 Thread gts
On Sat, 10 Feb 2007 13:15:19 -0500, Benjamin Goertzel [EMAIL PROTECTED]  
wrote:



Look, the susceptibility of humans to dutch books is clear...


I was not arguing to the contrary, Ben.

As I wrote below: I think it's worth noting here that while this example  
suggests very strongly that humans are susceptible to the conjunction  
fallacy (and I agree that they are)...


I agree humans are susceptible to the conjunction fallacy.

-gts



You are correct that the specific protocol underlying the psych
experiment Eli cited did not show individuals being personally
incoherent, but there are many other psych experiments that do.  I
don't feel like digging up the references, but they are there in the
heuristics and biases literature.

-- Ben








Re: [agi] conjunction fallacy

2007-02-10 Thread gts
On Sat, 10 Feb 2007 13:41:33 -0500, Richard Loosemore [EMAIL PROTECTED]  
wrote:


The meat of this argument is all in what exact type of AGI you claim is  
the best, of the two suggested above.


The best AGI in this context would be one capable of avoiding the  
conjunction fallacy, of course, but neither of those you described even  
addressed the question of whether the two outcomes together have a  
greater, lesser, or equal probability than either of them separately.


The conjunction fallacy is a sort of mental illusion, brought about by our  
mistaken use of certain heuristics. Heuristics are all very well and good,  
but I should think any sophisticated AGI would not take them as gospel in  
situations in which they contradict the axioms of probability.



-gts



Re: [agi] Betting and multiple-component truth values

2007-02-10 Thread gts
On Sat, 10 Feb 2007 13:59:27 -0500, Jef Allbright [EMAIL PROTECTED]  
wrote:



gts wrote:


I'm not expecting essentially perfect coherency in AGI.
I understand perfection is out of reach.


Thanks for quoting me here, Jef. I think Ben may have thought I believe  
something differently.


I understand and agree with everyone here that perfect coherency is not  
feasible in AGI.



My question to you was whether, as a professed C++ developer, you are
familiar with the well-known impracticality of certifying a non-trivial
software product to be essentially free of unexpected failure modes, and
if so, do you see a similarity to your question of coherent reasoning by
machine intelligence?


Sure, an analogy can be made.


In a similar vein, do you think you understand Ben's comment about the
problem being NP-hard?


Sure...

...our differences here seem to be a matter of degree.

I am less optimistic about the possibility of developing a smart,  
accurate, probabilistic AGI than I am about developing one that totally  
*smokes* humanity in measures of probabilistic (De Finetti) coherency.



By the way, De Finetti used the word coherent in the very standard
sense meaning that all the pieces must fit together from all possible
points of view (within all possible contexts.)


I was explaining that here yesterday.

This same concept of coherence is the basis of the axioms of  
probability...


Yes.


... and the principle of indifference.


No.


Understand this underlying concept and you may understand the others.


I understand it, Jef. But do you? The principle of indifference is not  
derived from or implied in any way by De Finetti coherency. De Finetti had  
no use for the idea. Neither do I.


-gts



Re: [agi] Correction: Betting and multiple-component truth values

2007-02-10 Thread gts

Correction: Needed to add [the idea that] below.
- Jef


Got it.


I understand it, Jef. But do you? The principle of indifference
is not derived from or implied in any way by De Finetti
coherency. De Finetti had no use for the idea. Neither do I.


That's like saying you have no use for [the idea that] a balance scale
reads zero when both pans are empty.


Your beef is not just with me; it is with Bruno De Finetti and Frank P.  
Ramsey and their modern followers in the subjectivist school of  
probability theory, most of whom call themselves subjective bayesians.


At the risk of mixing metaphors:

A subjectivist has no use for the idea of an empty scale no matter how it  
might balance, because after all there is nothing there to weigh.


-gts



Re: [agi] Correction: Betting and multiple-component truth values

2007-02-10 Thread gts
On Sat, 10 Feb 2007 15:26:01 -0500, Benjamin Goertzel [EMAIL PROTECTED]  
wrote:



This is true, but, subjective Bayesianism does not give you any
suggestion as to what prior distribution to use in place of the
maximum-entropy prior.

It just says that you can use any prior you want so long as you use it
consistently...


Yes.


So, for AGI purposes, the subjective Bayesian approach is not enough...


Seems that way, but on the other hand, subjective bayesianism seems to me  
to be closer to the way humans actually think.


Subjective bayesians are not constrained under some supposed force of  
logic to make their probabilistic judgements conform to an idealized  
objective standard.


-gts






Re: [agi] Betting and multiple-component truth values

2007-02-10 Thread gts
On Sat, 10 Feb 2007 15:27:23 -0500, Jef Allbright [EMAIL PROTECTED]  
wrote:



On the contrary, a subjectivist understands that even to pose a
question, one must have some prior.


That observation does not speak to the question at hand concerning the  
principle of indifference.


The principle of indifference is seen as a 'logical principle' only under  
objective bayesianism. Under subjective bayesianism it is at most a  
heuristic device.


Subjectivists know better than to believe they are bound by some  
'universal principle of logic' (to use your term) to invoke the principle  
of indifference under conditions of total ignorance about the true state  
of nature, which is of course the only condition under which it can be  
invoked.


You were wrong to suggest earlier that the principle of indifference can  
be derived from De Finetti coherence. The axioms of probability can be  
derived from coherence but the principle of indifference is certainly not  
one of them.



You're also confusing zero with nothing.


Nope.

-gts



Re: [agi] Betting and multiple-component truth values

2007-02-10 Thread gts

Jef,


I understand it, Jef. But do you? The principle of indifference
is not derived from or implied in any way by De Finetti
coherency. De Finetti had no use for the idea. Neither do I.


That's like saying you have no use for [the idea that] a balance scale
reads zero when both pans are empty.


Here is something Frank P. Ramsey wrote about the principle of  
indifference after discovering (independently of Bruno De Finetti) that  
coherence was sufficient to derive the axioms of probability:


Secondly, the Principle of Indifference can now be altogether dispensed  
with; we do not regard it as belonging to formal logic to say what should  
be a man's expectation of drawing a white or a black ball from an urn; his  
original expectations may within the limits of consistency be any he  
likes; all we have to point out is that if he has certain expectations he  
is bound in consistency to have certain others. This is simply bringing  
probability into line with ordinary formal logic, which does not criticize  
premisses but merely declares that certain conclusions are the only ones  
consistent with them. To be able to turn the Principle of Indifference out  
of formal logic is a great advantage; for it is fairly clearly impossible  
to lay down purely logical conditions for its validity, as is attempted by  
Mr Keynes.

-F.P. Ramsey (1926) Truth and Probability

Ramsey was a great genius, in my opinion. I suggest you read his paper  
above. It's available on the net if you look for it. I provided a link in  
some earlier message here.


-gts



Re: [agi] Betting and multiple-component truth values

2007-02-09 Thread gts

On Wed, 07 Feb 2007 18:37:52 -0500, Ben Goertzel [EMAIL PROTECTED] wrote:

This is simply a re-post of my prior post, with corrected terminology,  
but unchanged substance:


Thanks! Very helpful.

Now that you have a better understanding of dutch books, I wonder if you  
still feel the De Finetti coherence constraint is as formidable as you may  
have first thought. I haven't seen your code but I would be surprised if  
Novamente is really incoherent.


Probably you can show that the prices of the bets set by Gambler and  
Meta-gambler respectively are consistent and related in such a way that  
the House cannot make a dutch book against the Gambler and Meta-Gambler  
seen as a team; that is, that the House cannot force Novamente to lose  
automatically no matter what is true.


The House might attempt, for example, to buy the operational subjective  
probability bet (p) from Gambler while simultaneously selling the g bet to  
Meta-Gambler, or vice versa, in such a way as to force the team of  
Gamblers to lose money. These transactions could perhaps take place at  
separate times. For example, the House might attempt to buy one bet after n  
observations of S and sell the other after n+x observations of S.


This doesn't really add anything practical to the indefinite  
probabilities framework as already formulated, it
just makes clearer the interpretation of the indefinite probabilities in  
terms of de Finetti style betting games.


Yes, thanks for the illustration.

Note that coherency does not constrain one to be especially accurate in  
one's judgemental probabilities. A coherent entity needn't be very smart  
about the true state of nature. The coherency constraint merely defines  
the outer limits of what one may rationally consider possible.


-gts



Re: [agi] Betting and multiple-component truth values

2007-02-09 Thread gts

On Fri, 09 Feb 2007 11:19:52 -0500, Ben Goertzel [EMAIL PROTECTED] wrote:

Note that coherency does not constrain one to be especially accurate in  
one's judgemental probabilities. A coherent entity needn't be very smart  
about the true state of nature. The coherency constraint merely defines  
the outer limits of what one may rationally consider possible.


This is incorrect, I believe.  Coherency requires one to be reasonably  
consistent in one's assignment of probabilities to various interdependent 
outcomes, otherwise a dutch book can be made against one.


That would depend on the meaning of reasonably consistent but in any  
case I believe this is at the root of our differences of opinion about De  
Finetti coherence.


You may mean something else by coherence, but as I understand De Finetti  
it does not entail anything like in-depth knowledge or omniscience about  
the world of complex interdependences. To be coherent one need only avoid  
self-contradiction.


Here is a quote from a source I've found very helpful in understanding De  
Finetti coherence:


Naturally, coherence does not determine a single degree of rational  
belief but leaves open a wide variety of choices... The idea here is that  
we have to make sure our various degrees of belief fit together so to  
avoid the 'contradiction' of a Dutch book being made against us. The term  
'coherence' is now generally preferred... [1]


Thus, to be coherent, we need to ensure that our beliefs fit together  
(logically). This is separate from considerations about whether those  
beliefs are actually true.


This coherency constraint is entirely subjective, a sort of first order  
rational constraint which comes before other logical constraints which  
might be related to what is actually true 'out there' in the world of  
complex interdependencies, which I certainly do not deny exists.


Guaranteed losses to dutch books in De Finetti-style arguments are not  
evidence of a lack of knowledge about the complex interdependencies in the  
world --- they are evidence of self-contradiction, evidence of incoherent  
thinking on the part of the bettor, no matter his degree of knowledge about  
the world.


To avoid a dutch book, an entity need only check first before acting to  
make sure its relevant assumptions are logically compatible. And in the  
case where it has no relevant assumptions then no book can be made against  
it.


[Concerning the interesting conjunction fallacy post by Eliezer, I should  
read it again but under the assumptions given, (concerning Kolmogorov  
complexity and so forth), it seemed to me that the example as stated was  
not actually an example of fallacious reasoning.]


1. D. Gillies (2000), _Philosophical Theories of Probability_, p. 59

-gts


Re: [agi] Betting and multiple-component truth values

2007-02-09 Thread gts
Well, although I am not an AI developer, I am a C++ application developer  
and I know I or any reasonably skilled developer could write task-specific  
applications that would be extremely coherent in the De Finetti sense  
(applicable to making probabilistic judgements in horse-racing, casinos,  
the stockmarket, whatever). These applications would make mincemeat of  
humans in any test of coherence. Such applications already exist, come to  
think of it.


So I think people should be optimistic about coherence in AGI, not  
pessimistic.


-gts




Re: [agi] Betting and multiple-component truth values

2007-02-08 Thread gts
On Wed, 07 Feb 2007 20:40:27 -0500, Charles D Hixson  
[EMAIL PROTECTED] wrote:


I suspect you of mis-analyzing the goals and rewards of casino  
gamblers... I'm not sure whether or not this speaks to the points that  
you are attempting to raise, but it certainly calls into question  
comments about stupid bets. ... Well, the lottery isn't a casino, so  
perhaps you are correct, but I would be suspicious about calculating  
values based solely on the money.


The point I was making, and it applies equally well to lottery bets as it  
does to casino bets, is that such bets are not evidence of incoherence  
where incoherence is defined (by De Finetti) as vulnerability to dutch  
books.


A dutch book occurs when an incoherent thinker is forced to lose as a  
result of his inconsistent judgmental probabilities, no matter the  
outcome. Such bets are worse than stupid. :)


I gave an example of a Dutch book in a post to Russell in which an  
incoherent thinker assigns a higher probability to intelligent life on  
Mars than to mere life on Mars. Since the first hypothesis can be true  
only if the second is true, it is incoherent to assign a higher  
probability to the first than to the second.


Coherence is basically just common sense applied to probabilistic  
reasoning. I'm dismayed to learn from Ben that coherence is so difficult  
to achieve in AGI.


-gts



Re: [agi] Betting and multiple-component truth values

2007-02-08 Thread gts

On Wed, 07 Feb 2007 16:51:18 -0500, Ben Goertzel [EMAIL PROTECTED] wrote:

In fact I have been thinking about how one might attempt a dutch book  
against Novamente involving your multiple component values, but I do  
not yet fully understand b. My impression at the moment is that b is  
similar to 'power' in conventional statistics -- a real number from 0  
to 1 that roughly speaking acts as a measure of the robustness of the  
analysis. Fair comparison?




The power of a statistical hypothesis test measures the test's ability  
to reject the null hypothesis when it is actually false ... this has  
very little to do with indefinite or imprecise probabilities...


Let me ask you in a different way:

Can b be regarded as a measure of Novamente's confidence in p?

All other things being equal, does b increase with N?

-gts




Re: [agi] Betting and multiple-component truth values

2007-02-08 Thread gts

On Thu, 08 Feb 2007 09:26:28 -0500, Pei Wang [EMAIL PROTECTED] wrote:


In simple cases like the above one, an AGI should achieve coherence
with little difficulty. What an AGI cannot do is to guarantee
coherence in all situations, which is impossible for human beings,
neither --- think about situations where the incoherence of a bet
setting needs many steps of inference, as well as necessary domain
knowledge, to reveal.


Yes, but as I wrote to Ben yesterday, it is not possible to make a dutch  
book against an AGI that does not pretend to have knowledge it does not  
have.


So an AGI can be perfectly coherent, to *some* degree of knowledge,  
provided it knows its own bounds. And such a modest AGI would certainly be  
more trustworthy, especially if it were employed in such fields as  
national defense, where incoherent reasoning could lead to disaster.


-gts






Re: [agi] Betting and multiple-component truth values

2007-02-08 Thread gts

On Thu, 08 Feb 2007 10:22:19 -0500, Ben Goertzel [EMAIL PROTECTED] wrote:

Well, if the scope of a mind is narrowed enough, then it can be more  
coherent.


Right, I understand there is a definite trade-off here between knowledge  
or scope and coherency, due mainly to resource limitations. The best we  
can hope for is that an AGI might be more coherent than us, but this is by  
no means assured.


On a slightly different but closely related subject...

Last night I was out having pizza with some others, trying to pretend to  
be interested in the conversation, while actually thinking about the posts  
we had exchanged earlier in the day. :) While munching on onions and  
pepperoni it occurred to me that the problem of achieving complete or  
near-complete coherency in AGI is closely related to the epistemological  
problem of obtaining knowledge where knowledge is defined as 'justified  
true belief'. Karl Popper's arguments against that possibility strike me  
as similar to and closely related to your arguments against the  
possibility of complete probabilistic coherency in AGI: any such attempt  
must lead to an infinite regress.


So then I wondered to myself how Popper's alternative,  
non-justificationist epistemology might be applicable to AGI. Any thoughts  
on that subject? (I won't presume to educate you about Popper; I recall  
that you studied Philosophy of Science and so should know all about him.)


-gts


Re: [agi] Betting and multiple-component truth values

2007-02-08 Thread gts

re: the right order of definition

De Finetti's (and Ramsey's) main contribution was in showing that the  
formal axioms of probability can be derived entirely from considerations  
about people betting on their subjective beliefs under the relatively  
simple constraint of coherency. No other rational/logical constraints are  
needed, which is contrary to the suppositions of for example Keynes. This  
was I think a pretty remarkable discovery!


I'm not yet sure what it means to derive the 'axioms of Novamente' in the  
same way, but I think it's pretty cool that Ben is attempting it. :)


-gts



Re: [agi] Betting and multiple-component truth values

2007-02-07 Thread gts

On Tue, 06 Feb 2007 20:02:11 -0500, Ben Goertzel [EMAIL PROTECTED] wrote:

Consistency in the sense of de Finetti or Cox is out of reach for a  
modest-resources AGI, in principle...


Sorry to be the one to break the news...


You used the word consistency instead of the word coherency that I was  
using, but assuming you mean them as synonyms, and assuming you're  
correct, then I think that really is terrible news for AGI and I wonder  
why you're even bothering with it.


Coherency in the De Finetti sense is not very much different from  
coherency as the word is used in normal conversation, as when evaluating  
the words and mental states of people. Incoherent people are in worse  
shape than stupid. We put incoherent people in psychiatric facilities.


-gts




Re: [agi] Betting and multiple-component truth values

2007-02-07 Thread gts

On Wed, 07 Feb 2007 10:57:04 -0500, Ben Goertzel [EMAIL PROTECTED] wrote:

The dramatic probabilistic incoherency of humans is demonstrated by  
human behavior in casinos.


You mean something more stringent than I do by the word incoherency, then.  
Human betting behavior in casinos is stupid but it is not incoherent in  
the De Finetti sense as I understand it.


It's easy to prove incoherence: one need only show how a dutch book can be  
made against the allegedly incoherent person. Vulnerability to dutch books  
is how incoherence is defined under the theory.


Casino gamblers are stupid insofar as they place bets with unfavorable  
odds, but they do not by virtue of those stupid bets make themselves  
vulnerable to dutch books. One sometimes wins against unfavorable odds but  
it is never possible to beat a dutch book. In fact casinos do not even  
offer such betting situations.


-gts




Re: [agi] Betting and multiple-component truth values

2007-02-07 Thread gts

Ben,

Of course the world is an enormously complex relation of interdependencies  
between many causes and effects. I do not dispute that fact.


I question however whether this should really be an important  
consideration in developing AGI.


One's probabilistic judgements should always be justified, yes? And when a  
probabilistic judgement P(A) is justified only by one or more other  
probabilistic judgements [P(Q), P(R), and P(S), say] then one is not  
justified in assuming P(A) should have a value greater than [P(Q) * P(R) *  
P(S)]. Yes?


If that coherency condition is not true for an AGI then I might have  
trouble trusting its probabilistic judgements. I do not much care in this  
case whether our AGI is correct in its probabilistic judgement about A (it  
may be ignorant about many facts of the world including many facts related  
to judgements about Q, R and S) but I do care whether our AGI is  
*justified* in its appraisal of P(A).


Note that dutch books cannot be made against an AGI that does not claim to  
have knowledge it does not have.


-gts





Re: [agi] Betting and multiple-component truth values

2007-02-07 Thread gts

On Wed, 07 Feb 2007 16:07:13 -0500, Ben Goertzel [EMAIL PROTECTED] wrote:


only under an independence assumption.


True, I did not make the independence assumption explicit.

Note that dutch books cannot be made against an AGI that does not claim  
to have knowledge it does not have.


That is true and important, and is why Pei and I and others use  
multiple-component truth values in our systems -- we explicitly track  
the weight of evidence associated with uncertainty estimates.


I don't see how multiple-component truth values might block a fully  
developed Novamente from being vulnerable to dutch books, if that is what  
you are saying here.


In fact I have been thinking about how one might attempt a dutch book  
against Novamente involving your multiple component values, but I do not  
yet fully understand b. My impression at the moment is that b is similar  
to 'power' in conventional statistics -- a real number from 0 to 1 that  
roughly speaking acts as a measure of the robustness of the analysis. Fair  
comparison?


-gts




Re: [agi] Betting and multiple-component truth values

2007-02-06 Thread gts

Russell,

I'm not suggesting that an omniscient player would not win over time as a  
result of its superior knowledge.


I am suggesting that a non-omniscient player need not necessarily be  
bilked in the sense meant by De Finetti; that is, it needn't be forced to  
lose automatically due to dutch books made against it.


To illustrate a dutch book:

Say you believe in life on Mars with p=.1 and in intelligent life on Mars  
with p=.01.


To De Finetti (and Ramsey), this is the same as saying you would pay 10  
cents for a ticket worth $1 if there is life on Mars, and 1 cent for a  
ticket worth $1 if there is intelligent life on Mars. Also you believe  
these are fair bets such that you would be willing to take either side of  
either transaction.


You are coherent here in the De Finetti sense no matter how right or wrong  
you may be about the probabilities of life on Mars. No dutch books can be  
made against you. No bookie can bilk you.


Would you consider instead valuing the tickets such that the first is  
worth 1 cent and the second is worth 10 cents? No, you would not, because  
in that case you would be incoherent: someone (an omniscient bookie or  
otherwise) could exploit your incoherency by buying from you the first  
ticket and selling you the second, locking in a profit of at least 9 cents  
at your expense no matter what is true about life on Mars.


That is a dutch book.
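The arithmetic of that book can be checked mechanically. Below is a toy Python sketch of the scenario just described; the ticket prices and $1 stake come from the example above, while the function and variable names are my own scaffolding:

```python
# Toy illustration of the dutch book described above (prices in dollars).
# The incoherent bettor prices "life on Mars" at $0.01 but
# "intelligent life on Mars" at $0.10, though the second entails the first.
price_life, price_intel = 0.01, 0.10

def bookie_profit(life: bool, intelligent: bool) -> float:
    # The bookie buys the $1 "life" ticket at 0.01 and sells the
    # $1 "intelligent life" ticket at 0.10.
    payoff = (1.0 if life else 0.0) - (1.0 if intelligent else 0.0)
    return -price_life + price_intel + payoff

# Every logically possible state of the world:
worlds = [(False, False), (True, False), (True, True)]
profits = [bookie_profit(l, i) for l, i in worlds]
assert min(profits) >= 0.09  # the bookie wins at least 9 cents regardless
```

Note the bettor loses in every row: no possible fact about Mars rescues him, which is exactly what distinguishes a dutch book from an ordinary bad bet.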

Can entities with limited knowledge and resources be coherent in the sense  
described, thus avoiding being bilked by omniscient bookies seeking to  
make dutch books? I don't see why not.


-gts





Re: [agi] Betting and multiple-component truth values

2007-02-06 Thread gts
On Tue, 06 Feb 2007 09:56:10 -0500, Russell Wallace  
[EMAIL PROTECTED] wrote:


I'm not talking about dutch book, I'm talking about the following quoted  
from Ben's original post, emphasis added):


I think Ben is talking about dutch books, at least implicitly. I think he  
wants to show that multiple-component truth values are consistent with a  
De Finetti-like subjectivist interpretation of probability. Dutch book  
considerations are central to that interpretation.


-gts





Re: [agi] Betting and multiple-component truth values

2007-02-06 Thread gts

On Tue, 06 Feb 2007 11:18:09 -0500, Ben Goertzel [EMAIL PROTECTED] wrote:


The scenario I described was in fact a dutch book scenario...


The next step might then be to show how Novamente is constrained from  
allowing dutch books to be made against it. This would prove Novamente's  
probabilistic reasoning to be coherent in the sense meant by De Finetti.


-gts



Re: [agi] Betting and multiple-component truth values

2007-02-06 Thread gts
On Tue, 06 Feb 2007 16:27:22 -0500, Jef Allbright [EMAIL PROTECTED]  
wrote:



You would have to assume that statement 2 is *entirely* contingent on
statement 1.


I don't believe so. If statement S is only partially contingent on some  
other statement, or contingent on any number of other statements, then  
simple coherency demands only that we assign the p of S to be no greater  
than the p of any of those other statements on which S is contingent. It makes  
no difference for the sake of coherency how many of those other statements  
are known or in memory, nor does it matter whether our assigned  
probabilities match reality.
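A minimal sketch of that coherency check in Python (the function name and the list representation of the contingent statements are my own illustration, not anything from De Finetti):

```python
# Coherency condition sketched above: S may not be assigned a probability
# above that of any statement on which it is contingent.
def coherent(p_s, parent_probs):
    return all(p_s <= p for p in parent_probs)

# "Intelligent life on Mars" is contingent on "life on Mars":
assert coherent(0.01, [0.10])       # coherent, however inaccurate it may be
assert not coherent(0.10, [0.01])   # incoherent: open to a dutch book
```

As the surrounding text says, the check is purely internal: it never consults the actual facts about Mars.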


I think coherency is probably a necessary but not a sufficient condition  
for intelligence. I hope it is not really outside the range of what is  
possible in AI.


-gts



Re: [agi] Probabilistic consistency

2007-02-05 Thread gts

Inconsistency, though annoying, is a major driving force for learning
and creativity.


Along these lines I was reading an old research paper about subjective  
notions of randomness a few weeks back (sorry I don't have a reference).  
It seems back in the 30's, a radio station sponsored a series of  
experiments in ESP in which someone at the station would attempt to  
mentally broadcast to the listening audience a random number from 0 to 9.  
The audience then wrote to the station with their mental impressions of  
the random numbers. Their votes were tabulated. This experiment was  
repeated numerous times. The experiment failed -- no correlation was found  
to support the ESP hypothesis.


So these other researchers used the data to analyze the subjective meaning  
of 'random number'. As it turned out, the number 7 was predicted about  
twice as often as any other number. The data was statistically significant  
with n something like 1800. From this one can infer that the typical human  
mind regards the number 7 as the 'most random' digit.


These researchers posited a theory to explain the human penchant for 7 as  
most random (it was not clear if the theory was ad hoc or not, but I think  
it's interesting regardless):


According to the theory, the numbers 2, 4, 6 and 8 are multiples of 2,  
which one might say makes them less random than 7 which is not a multiple  
of any other digit. 0 and 9 are endpoints on the 0-9 scale, which also  
makes them less random than 7. The number 5 is in the middle, which is  
non-random, etc. It seems that less can be said about 7 than about any  
other digit, and that the human mind considers this to be evidence that 7  
is the most random.


These considerations may not be exactly rational but apparently the human  
mind sees them as rational at some unconscious level.


One might ask what this means in terms of AGI. Should an AGI also regard 7  
as about twice as random as any other digit? Or would that be irrational  
and inconsistent with probability theory? I would suppose little  
considerations like these would make the difference between 'robot-like'  
and 'human-like'...


-gts





Re: [agi] Betting and multiple-component truth values

2007-02-05 Thread gts

On Mon, 05 Feb 2007 02:03:21 -0500, Ben Goertzel [EMAIL PROTECTED] wrote:

I thought of a way to define two-component truth values in terms of  
betting strategies (vaguely in the spirit of de Finetti).


I think your thought-experiment here is ingenious! I'm not yet totally  
sure whether I agree with your set-up or your conclusions (I need to think  
about this more) but in general I applaud your effort to make subjectivist  
sense of multiple-component truth values. Kudos. :)


-gts





Re: [agi] Optimality of probabilistic consistency

2007-02-04 Thread gts

On Sun, 04 Feb 2007 07:52:02 -0500, Pei Wang [EMAIL PROTECTED] wrote:


However, the axioms of probability theory and interpretations of
probability (frequentist, logical, subjective) all take a consistent
probability distribution as precondition.


Also I think the meaning of 'probabilistic consistency' might change  
according to the interpretation of probability. For example two  
subjectivist-like AGI's might arrive at different conclusions, at least  
early in the learning process, without probabilistic inconsistency. Such  
apparent inconsistencies are however prohibited under the logical  
interpretation.


This I think may also go to the question of resources. I'm thinking a  
subjectivist (De Finetti-Ramsey inspired) AGI should require a different  
amount of resources than a logical (Keynes-Jaynes-Cox inspired) AGI. At  
the moment my conjecture is that implementations of the logical  
interpretation would require the greater resources in that it imposes more  
restraints, but I can also see some possible rationale for the converse.


Ben, this is also why I was wondering why your hypothesis is framed in  
terms of both Cox and De Finetti. Unless I misunderstand Cox, their  
interpretations are in some ways diametrically opposed. De Finetti was a  
radical subjectivist while Cox is (epistemically) an ardent  
logical/objectivist (or so I gather). Apparently you see their ideas as  
complementary rather than mutually exclusive, which is interesting... is  
it because De Finetti's subjective interpretation gives a theoretical  
foundation to your use of [U,L] ranges in your quadruples?


Another question on my mind is if and how it might be possible to design  
an AGI based entirely on the subjectivist ideas of De Finetti, an idea  
that I find very attractive. However I am at the moment stumped on that  
question; it may be true that no matter the philosophy of the programmer,  
he must for practical reasons implement something like a logical/objective  
interpretation of bayes' rule. Comments?


-gts




Re: [agi] Optimality of probabilistic consistency

2007-02-04 Thread gts
A mathematical test for objectivity/subjectivity might be whether  
Novamente (or any AGI) could allow, in principle, for the possibility of  
different posterior probabilities on bayes rule as can happen under  
subjectivism. My thought is that a programmer is essentially forced for  
practical reasons to disallow that sort of inconsistency -- that he must  
implement an objective interpretation.


The definition of 'probabilistic consistency' that I was using comes from  
ET Jaynes' book _Probability Theory - The Logic of Science_, page 114.


These are Jaynes' three 'consistency desiderata' for a probabilistic robot:

1. If a conclusion can be reasoned out in more than one way, then every  
possible way must lead to the same result.


2. The robot takes into account all information relevant to the question.

3. The robot always represents equivalent states of information with  
equivalent plausibility assignments.
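The first desideratum can be illustrated with a toy joint distribution: factoring a joint probability through either conditional must give the same answer. The distribution and its numbers below are invented purely for illustration:

```python
# Desideratum 1: two different reasoning routes to P(raven, black)
# must agree. Toy joint distribution over (kind, color); numbers made up.
joint = {("raven", "black"): 0.08, ("raven", "white"): 0.02,
         ("other", "black"): 0.30, ("other", "white"): 0.60}

def marginal(i, value):
    return sum(p for k, p in joint.items() if k[i] == value)

def conditional(i, vi, j, vj):
    return sum(p for k, p in joint.items()
               if k[i] == vi and k[j] == vj) / marginal(j, vj)

# Route 1: P(raven | black) * P(black); Route 2: P(black | raven) * P(raven)
path1 = conditional(0, "raven", 1, "black") * marginal(1, "black")
path2 = conditional(1, "black", 0, "raven") * marginal(0, "raven")
assert abs(path1 - path2) < 1e-12  # both recover the joint entry 0.08
```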


Seems to me that strict enforcement of these desiderata (especially #3)  
would make the robot an objective bayesian as opposed to a subjective  
bayesian in the De Finetti sense.


-gts



Re: [agi] Optimality of probabilistic consistency

2007-02-04 Thread gts

On Sun, 04 Feb 2007 11:10:57 -0500, Pei Wang [EMAIL PROTECTED] wrote:


I don't think any intelligent system (human or machine) can achieve
any of the three desiderata, except in trivial cases.


I have no doubt you and Ben are correct on this point. Enormous resources  
would be required for an ideal version of Jaynes' objective bayesian model  
of the probabilistic robot, which is one reason why I think it might be  
important to consider which philosophical interpretation to emulate.


Personally I would be inclined to allow exceptions to Jaynes' second and  
third desiderata. The reason for compromising the second is easy enough to  
see: it is simply not always feasible to have and consider all the  
relevant information before making a decision. Any compromise of the third  
desideratum (that our AGI must by some supposed force of objective logic  
always represent equivalent states of information with equivalent  
plausibility assignments) is more controversial.


People of Keynesian/logical persuasion might cry heresy, but I would  
respond that all is not lost; that these apparent sacrifices still leave  
us with the perfectly reasonable and coherent subjectivist account of De  
Finetti. The question then would be how to go about implementing it. I'm a  
bit skeptical that it can be done, but, unlike you and Ben, I am by no  
means an expert in the field of AI. Is it possible to program AGI without  
forcing it to abide by the tenets of objective/logical bayesianism?


Subjectivists like De Finetti and Ramsey define probability as degree of  
belief but unlike the objective/logical bayesians they measure it  
according to an agent's *willingness to act* on said degrees of belief,  
(as opposed to some supposed calculable mental barometer of rationally  
determined belief separate from the will). Even though I might support the  
subjectivist programme philosophically, I'm not sure if or how a  
programmer might get a handle on this subjective 'willingness to act', as  
distinct from the logical restraints that objective bayesians would  
already seek to impose.



-gts



Re: [agi] Optimality of probabilistic consistency

2007-02-04 Thread gts

On Sun, 04 Feb 2007 13:15:27 -0500, Pei Wang [EMAIL PROTECTED] wrote:

none of the existing AGI project is designed [according to the tenets of  
objective/logical bayesianism]


Hmm. My impression is that to whatever extent AGI projects use bayesian  
reasoning, they usually do so in a way that satisfies the tenets of  
objective/logical bayesianism. I hope you understand I mean objective in  
the epistemic and not the physical sense.


I see objective/logical bayesianism embodied in Jaynes' third desideratum  
of probabilistic consistency, a principle that I doubt all AGI projects  
reject, assuming any do.  Those projects which do allow for any compromise  
of that principle, if they exist, would I think be better described as  
implementations of subjective rather than objective bayesianism.


Of course this is only according to my understanding of these two schools  
of bayesian thought and their differences, which may be different from  
yours.


-gts



Re: [agi] Relevance of Probability

2007-02-04 Thread gts
On Sun, 04 Feb 2007 12:46:06 -0500, Richard Loosemore [EMAIL PROTECTED]  
wrote:


If we knew for sure that the human mind was using something like a  
formalized system (and not the messy nonlinear stuff I described), then  
we could quite comfortably say Hey, let's do the same, but simpler and  
maybe even better.  My problem is, of course, that the human mind may  
well not be doing it that way...


I'm somewhat sympathetic to that point of view, Richard, in case it's any  
consolation to you. :)


Your words remind me of the criticism that the subjectivist theorist F.P.  
Ramsey had of the logical theories of J.M. Keynes, which I mentioned here  
yesterday or the day before and which I find very persuasive.


Keynes argued for the existence of something he called probability  
relations. These relationships were supposed to be perceivable by the  
human mind in the same manner in which it sees logical relationships. For  
Keynes, probability theory was in fact a sort of extension of deductive  
logic in which probable conclusions were partially entailed by their  
premises. The degree of partial entailment was supposed to be equal to the  
probability.


So for example on Keynes' view the statement Ten black ravens exist  
partially entails the statement All ravens are black and the degree of  
entailment = P(All ravens are black).


On this view all rational minds should assign exactly the same value to:

P(All ravens are black|Ten black ravens exist)

Keynes was influenced heavily by Bertrand Russell and Alfred North  
Whitehead who had together attempted to do something similar with their  
*Principia Mathematica*. It's doubtful that Russell and Whitehead  
succeeded, and I think the same can be said of Keynes.


Ramsey's most pointed criticism was that these Keynesian probability  
relationships, if they exist, certainly are not perceived by the mind as  
Keynes claimed. And who here can argue with Ramsey's criticism? If these  
probability relationships were perceivable in the same way as ordinary  
logical relations then there would be hardly any question about the  
correct way to do probabilistic reasoning in AGI -- we'd all immediately  
recognize the correct algorithms and agree.


-gts

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] foundations of probability theory

2007-02-03 Thread gts

On Fri, 02 Feb 2007 22:01:34 -0500, Ben Goertzel [EMAIL PROTECTED] wrote:

In Novamente, we use entities called indefinite probabilities, which  
are described in a paper to appear in the AGIRI Workshop Proceedings  
later this year...


Roughly speaking an indefinite probability is a quadruple (L,U,b,N) with  
interpretation


The probability is b that after I make N more observations, my  
estimated mean for the probability distribution attached to statement S  
will be in the interval (L,U)


Where statement S might be some general hypothesis, e.g., "All ravens are  
black" -- is that right? And then b increases as N increases -- as Novamente  
sees more black ravens. Yes? Does the confidence interval also change?
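If I'm reading the quadruple correctly, one could sketch its semantics with
a beta-binomial Monte Carlo. This is purely my guess at an operational
reading, not Novamente's actual machinery; the prior, the raven numbers,
and the "true" rate are all assumptions of mine:

```python
import random

def indefinite_b(successes, trials, n_more, lo, hi, true_p, sims=2000, seed=0):
    """Estimate b = P(posterior mean lies in (lo, hi) after n_more more
    observations), under a uniform Beta(1,1) prior and a hypothetical
    true success rate true_p."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(sims):
        # Simulate n_more further observations of statement S.
        k = sum(rng.random() < true_p for _ in range(n_more))
        # Posterior mean of Beta(1 + total successes, 1 + total failures):
        mean = (1 + successes + k) / (2 + trials + n_more)
        hits += lo < mean < hi
    return hits / sims

# E.g.: after 8 black ravens in 10 sightings, with N = 100 more observations,
# how often does the estimated mean land in (L, U) = (0.7, 0.9)?
b = indefinite_b(8, 10, 100, 0.7, 0.9, true_p=0.8)
```

On this reading b does indeed climb toward 1 as N grows, since the
posterior mean concentrates; whether (L,U) also narrows is exactly the
question above.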


-gts




Re: [agi] Optimality of using probability

2007-02-03 Thread gts

On Sat, 03 Feb 2007 07:29:26 -0500, Ben Goertzel [EMAIL PROTECTED] wrote:


Can't we just speak about this in terms of optimization?

...
The subtlety comes in the definition of what it means to use an  
approximation to probability theory.


The cleanest definition would be: To act in such a way that its  
behaviors are approximately consistent with probability theory


Now, how can we define this?


It seems to me you're just describing decision theory, which might be  
defined as the science of acting in such a way that one's goal-seeking  
behaviors are optimized and approximately consistent with probability  
theory. Decision theory is the handmaiden of probability theory, and of  
course there is already a huge body of knowledge on the subject.


Or do you mean something that a decision theorist would not consider part  
of his domain?


-gts









Re: [agi] foundations of probability theory

2007-02-02 Thread gts

On Thu, 01 Feb 2007 14:00:06 -0500, Ben Goertzel [EMAIL PROTECTED] wrote:


Discussing Cox's work is on-topic for this list...


Okay, I'll get a copy and read it.

Let me tell you one research project that interests me re Cox and  
subjective probability:



Justifying Probability Theory as a Foundation for Cognition.

Cox's axioms and de Finetti's subjective probability approach, developed  
in the first part of the last century, give mathematical arguments as to  
why probability theory is the optimal way to reason under conditions of  
uncertainty.


What are you quoting here, if I may ask? I'm surprised to see Cox  
mentioned this way in the same sentence with de Finetti, because it's my  
impression that Cox's views are similar to those of Jaynes, who was a  
pretty sharp critic of de Finetti.


I was under the impression that Cox, like Jaynes, rejected the extreme  
subjectivist views of de Finetti in favor of a more objective/logical  
interpretation. But this is admittedly based only on my very scant  
knowledge of Cox.


I don't know of any work explicitly addressing this sort of issue, do  
you?


No, none that address Cox and AI directly, but I suspect one is  
forthcoming perhaps from you. Yes? :)


The only work I know of that addresses both AI and probability theory is  
one currently on my reading list by Professor Donald Gillies of King's  
College, London (not to be confused with some Canadian character named  
Donald B. Gillies, whose name comes up in a Google search). Gillies earned  
his PhD under your own favorite Lakatos, with a dissertation in  
probability theory (I think), and wrote a book about AI and the scientific  
method which I believe also deals at least tangentially with probability  
theory. Maybe you've already read it. It was published a while ago and you  
probably stay on the leading edge of AI.


Artificial Intelligence and Scientific Method (Paperback)
http://www.amazon.com/Artificial-Intelligence-Scientific-Method-Gillies/dp/0198751591/sr=8-2/qid=1170441700/ref=sr_1_2/103-6974055-7831844?ie=UTF8s=books

I should mention here that although I am certified with Microsoft as a  
C++ application developer, I claim no special knowledge of AI programming  
techniques. I expect this may change soon, however.


-gts






Re: [agi] foundations of probability theory

2007-02-02 Thread gts

On Fri, 02 Feb 2007 15:57:24 -0500, Ben Goertzel [EMAIL PROTECTED] wrote:

Interpretation-wise, Cox followed Keynes pretty closely.  Keynes had his  
own eccentric view of probability...


Although I don't yet know much about Cox (Amazon is shipping his book to  
me), I have studied a bit about Keynes, and yes, "eccentric" is in my view  
an understatement!


I assume you are familiar with F.P. Ramsey? (If not, he was one of the  
founders/discoverers of the subjective theory along with de Finetti, but  
separately.) I read Ramsey's classic paper "Truth and Probability" and  
found his arguments very convincing, including his criticisms of Keynes.  
For example:


But let us now return to a more fundamental criticism of Mr Keynes'  
views, which is the obvious one that there really do not seem to be any  
such things as the probability relations he describes. He supposes that,  
at any rate in certain cases, they can be perceived; but speaking for  
myself I feel confident that this is not true. I do not perceive them,  
and if I am to be persuaded that they exist it must be by argument;  
moreover I shrewdly suspect that others do not perceive them either,  
because they are able to come to so very little agreement as to which of  
them relates any two given propositions. [1]


I agree with Ramsey that Keynes' supposed probability relations do not  
seem to exist and that in any case they cannot be perceived in the way  
Keynes claimed. I echo Ramsey here in saying, "I do not perceive them,  
and if I am to be persuaded that they exist it must be by argument."


I suspect that if Ramsey were alive today, he would shudder at the thought  
of programming Keynesian-like probability relations in AGI. Are you  
attempting something like this in Novamente? (Please forgive my ignorance  
of your Novamente project. I'm still learning about it.)


-gts

1. Truth and Probability by Frank P. Ramsey
cepa.newschool.edu/het/texts/ramsey/ramsess.pdf



Re: [agi] foundations of probability theory

2007-02-01 Thread gts

Hi Ben,

Well, Jaynes showed that the PI can be derived from another assumption,  
right?: That equivalent states of information yield equivalent  
probabilities


Yes, as I understand it the principle of indifference is a special case of  
Jaynes' principle of maximum entropy.


I have no problem with the principle of maximum entropy except in this  
special case in which we have *zero* information relevant to the true  
probabilities of outcomes. The ramifications of mathematical concepts can  
sometimes change radically when considering zero quantities, and I  
strongly suspect this is an example.
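For concreteness: with no constraints at all (beyond normalization), the
maximum-entropy solution formally collapses to the uniform distribution --
that is, to the PI itself. A quick check in Python; the six-outcome example
and the skewed comparison distribution are my own:

```python
from math import log2

def entropy(p):
    """Shannon entropy, in bits, of a discrete distribution."""
    return -sum(x * log2(x) for x in p if x > 0)

uniform = [1/6] * 6  # what maximum entropy returns given zero constraints
skewed  = [0.5, 0.1, 0.1, 0.1, 0.1, 0.1]

# The uniform distribution maximizes entropy over 6 outcomes (log2(6) bits):
print(entropy(uniform))   # ~2.585
print(entropy(skewed))    # ~2.161
```

The math is not in dispute; my question is whether, with literally zero
information, entropy maximization is a *justified* inference rather than
just a tidy convention.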


It seems to me that in this special case of the maximum entropy principle,  
known as the indifference principle, we are not actually considering  
'equivalent states of information' which might in theory yield equivalent  
probabilities. We are not at all considering information. We have no  
pieces of information to analyze, test, compare, or otherwise consider.  
Instead we are considering non-information (whatever that means).


How can we justify probabilistic inferences from non-information? How can  
we justify a decision to infer something from nothing?


I use the word "justify" here in the formal epistemological/logical sense.  
This goes to the question of whether the principle of indifference is  
truly a valid logical principle, in the formal sense of that word, as is  
maintained by certain people loyal to certain logical interpretations of  
probability theory.


As you know I believe the PI falls short of that definition -- that it is  
instead merely a heuristic device -- a bit of semi-religious quasi-logic  
left over from the essentially defunct classical theory of Laplace.  
Perhaps someone can convince me otherwise (you came very close, when you  
answered the wine/water paradox!)



This seems to also be dealt with at the end of Cox's book...


Interesting. I'm tempted to read Cox's book so that you and I can discuss  
his ideas in more detail here on your list. (I worry that my enthusiasm  
for this subject is only annoying people on that other discussion list.)  
Is that something you would like to do? Please let me know!


I'm copying Jef and Stu here, as this is not the first time the principle  
of maximum entropy has come up in the dialogue. (I don't want you guys to  
think your thoughts on this subject went ignored or unanswered.)


-gts



Re: [agi] Chaitin randomness

2007-01-20 Thread gts
On Sat, 20 Jan 2007 00:32:18 -0500, Benjamin Goertzel [EMAIL PROTECTED]  
wrote:



I'm not sure exchangeability implies Chaitin randomness.


Yeah, you're right, this statement needs qualification -- it wasn't  
quite right as stated.  You're right that a binary series formed by  
tossing a weighted coin is exchangeable but not Chaitin random.


Okay, then this observation leads me back to the same puzzlement I  
expressed on extropy-chat, which I will re-state here in slightly  
different language:


Very improbable-appearing subsequences can and inevitably do appear in  
long random sequences. Flip a fair coin a few thousand times, for example,  
and there is a very good chance you'll see some extraordinarily long runs  
of heads and tails along with other very non-random-appearing  
subsequences. In binary terms, you'll see many runs like 111  
and 101010101010101.
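For concreteness: the longest run of identical outcomes in n fair flips
grows roughly like log2(n), so such "orderly" stretches are all but
guaranteed in long sequences. A quick simulation (the seed and sequence
length are arbitrary choices of mine):

```python
import random

def longest_run(bits):
    """Length of the longest run of identical symbols in a sequence."""
    best = cur = 1
    for a, b in zip(bits, bits[1:]):
        cur = cur + 1 if a == b else 1
        best = max(best, cur)
    return best

rng = random.Random(42)
flips = [rng.randint(0, 1) for _ in range(10_000)]

# For n = 10,000 the longest run of one symbol is typically around
# log2(10_000) ~ 13, i.e. thirteen-plus heads (or tails) in a row.
print(longest_run(flips))
```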


We can imagine ourselves parsing the sequence, dividing it into two  
groups: 1) complex/disorderly subsequences not amenable to simple  
algorithmic derivation and 2) simple/orderly subsequences such as those  
above that are so amenable.


Now, if I understand Chaitin's information-theoretic compressibility  
definition of randomness correctly (and I very likely do not), the  
simple/orderly subsequences in group 2) are compressible and so would  
count against the larger sequence in any compressibility measure of its  
randomness. If that is so then a maximally random sequence might be best  
considered as one that is at least slightly compressible. But this  
definition would be contrary to Chaitin's idea that maximally random  
sequences are incompressible!


I have to conclude that either a) my understanding of the  
information-theoretic incompressibility definition of randomness is  
deficient, or b) incompressible Chaitin-random numbers are in some sense  
'artificial'.


Probably a) is true, but at the moment I don't see why.

-gts





Re: [agi] Chaitin randomness

2007-01-20 Thread gts
This author makes a distinction, similar to the one in my mind, between  
algorithmic and intuitive randomness.


===
We can say that a sequence is algorithmically random if it has an amount  
of algorithmic information approximately equal to its length. Note that  
this is related to, but not exactly the same as our intuitive conception  
of randomness. Intuitively, we apply the term to processes (like coin  
tossing) rather than the results of such processes (like the resulting  
sequence). We would naturally call the process random even if it  
(freakishly) ended up producing a long string of heads.


http://www.amirrorclear.net/academic/research-topics/algorithmic-randomness.html
===

-gts




Re: [agi] Chaitin randomness

2007-01-20 Thread gts
On Sat, 20 Jan 2007 20:41:55 -0500, Matt Mahoney [EMAIL PROTECTED]  
wrote:



Any information you save by compressing the compressible bits of a
random sequence is lost because you also have to specify the location of  
those bits.  (You can use the counting argument to prove this).


Ah, yes... Thank you. Your (and others') mention of the counting argument  
reminded me of something I once considered in the past, and led me to this  
comp.compression FAQ, which explains it all very nicely:


Compression of random data (WEB, Gilbert and others)
http://www.faqs.org/faqs/compression-faq/part1/section-8.html
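The arithmetic behind the counting argument is worth spelling out: there
are 2^n binary strings of length n but only 2^n - 1 binary strings that are
strictly shorter, so no lossless scheme can map every length-n string to a
shorter one. A toy illustration of the count:

```python
def shorter_descriptions(n):
    """Number of binary strings strictly shorter than n bits:
    2^0 + 2^1 + ... + 2^(n-1) = 2^n - 1."""
    return sum(2 ** k for k in range(n))

n = 20
print(2 ** n)                   # 1048576 strings of length n
print(shorter_descriptions(n))  # 1048575 -- one fewer, so by pigeonhole
                                # at least one string cannot be compressed
```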

-gts



[agi] Chaitin randomness

2007-01-19 Thread gts

On Fri, 19 Jan 2007 18:32:54 -0500, Benjamin Goertzel [EMAIL PROTECTED]
wrote:


I think this topic is more appropriate for

agi@v2.listbox.com


Sorry, I thought that was where I was! :) Sending there now...


Anyway, to respond to your point: Yep, I agree that exchangeability is
different from, but closely related to Chaitin randomness, in the
sense that for finite series it seems to be the case that

* Chaitin randomness almost always implies exchangeability
* Exchangeability almost always implies Chaitin randomness


I'm not sure exchangeability implies Chaitin randomness. Exchangeability
is the subjective correlate to independence and it's my understanding that
independence does not imply Chaitin randomness.

Consider for example a finite sequence of independent trials of a heavily
weighted coin that turns up heads 99% of the time. Am I wrong to think
this sequence would be highly compressible and thus not Chaitin-random?

My thinking here is that the number of bits required to encode the sequence
would be much fewer than the bits in the sequence, and that, following
Chaitin, a series of numbers is random in the Chaitin sense iff the
smallest algorithm capable of specifying it has about the same number of
bits of information as the series itself.

(This is my understanding of Chaitin randomness, gleaned from
http://www.cs.auckland.ac.nz/CDMTCS/chaitin/sciamer.html)
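A crude empirical check of the weighted-coin intuition, using zlib as a
stand-in for algorithmic information (a compressor only gives an upper
bound on it, and the sequence lengths, bias, and seed here are arbitrary
choices of mine):

```python
import random
import zlib

rng = random.Random(7)
n = 50_000
# 99%-heads weighted coin vs. a fair coin, one byte per flip:
biased = bytes(1 if rng.random() < 0.99 else 0 for _ in range(n))
fair   = bytes(rng.randint(0, 1) for _ in range(n))

# The biased sequence carries ~0.08 bits of entropy per flip, the fair one
# a full 1 bit per flip, and zlib reflects the gap: the biased sequence
# compresses to a small fraction of the fair one's compressed size.
print(len(zlib.compress(biased, 9)))
print(len(zlib.compress(fair, 9)))
```

So the independent-but-weighted sequence is indeed highly compressible,
hence not Chaitin-random, which is the point above.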

-gts
