Re: [agi] Enumeration of useful genetic biases for AGI

2007-02-14 Thread Richard Loosemore

Ben,

I am not sure the question has been stated clearly enough to be answered 
meaningfully, yet.


The list given by your correspondent was extremely vague:  what does it 
mean to talk about "an implicit set of constraints on ontologies that 
can be discovered by systematic 'scientific' investigation"?  For 
example, there are things I can only perceive *directly* (whatever that 
means) if they are in 3-D space, but systematic scientific 
investigation allows me to think about spaces with other numbers of 
dimensions, in all kinds of ways.  The same goes for causality.


Having said that, I know what you mean at an intuitive level (and I do 
believe there are built-in biases), but I think the problem is deeply 
tangled up with what you think the machinery is that is getting 
biased.  I am not even convinced that the question can be properly 
asked unless you can talk in terms of that machinery.


And what is the boundary between an ontological bias and a lesser 
tendency to learn a certain kind of thing, which can nevertheless be 
overridden through experience?



Richard Loosemore.


Ben Goertzel wrote:

Hi,

In a recent offlist email dialogue with an AI researcher, he made the
following suggestion regarding the inductive bias that DNA supplies
to the human brain to aid it in learning:

*
What is encoded in the DNA may include a starting ontology (as proposed,
with exasperating vagueness, by developmental psychologists, though much
more complex than anything they have thought of) but the more important
thing is an implicit set of constraints on ontologies that can be
discovered by systematic 'scientific' investigation. So it might not
work in an arbitrary universe, including some simulated universes, e.g.
'tileworld' universes.

One such constraint (as Kant pointed out in 1780) is the
assumption that everything physical happens in 3-D space and
time. Another is the requirement for causal determinism (for most
processes).

There may also be constraints on kinds of information-processing
entities that can be learnt about in the environment, e.g. other humans,
other animals, dead-ancestors, gods, spirits, computer games, 

The major, substantive, ontology extensions have to happen in (partially
ordered) stages, each stage building on previous stages, and brain
development is staggered accordingly.
**


My response to him was that these genetic biases are indeed encoded
in the Novamente design, but in a somewhat unsystematic and scattered way.


For instance, in the Novamente system,

-- the restriction to 3D space is implicit in the set of elementary 
predicates and procedures supplied
to the system for preprocessing perceptual data on its way to abstract 
cognition


-- the bias toward causal determinism is implicit in an inference 
control mechanism that specifically
tries to build PredictiveAttractionLink relationships that embody 
likely causal relationships


etc.
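
To make the flavor of "implicit encoding" concrete, here is a minimal,
purely hypothetical Python sketch (the names and predicates are invented
for illustration; none of this is actual Novamente code).  The 3D bias
lives entirely in the predicate vocabulary that perception exposes, so
everything downstream inherits it:

# Hypothetical illustration: an ontological bias can enter simply through
# which elementary predicates exist at all.
from dataclasses import dataclass
from itertools import combinations
from math import dist

@dataclass(frozen=True)
class Percept:
    x: float
    y: float
    z: float          # 3-D position is baked into the representation
    kind: str

def near(p: Percept, q: Percept, radius: float = 1.0) -> bool:
    # Elementary spatial predicate: proximity is only ever computed in
    # three dimensions, so every pattern learned over it inherits the bias.
    return dist((p.x, p.y, p.z), (q.x, q.y, q.z)) < radius

def candidate_patterns(percepts):
    # A toy "pattern miner" that can only propose relations the predicate
    # vocabulary can express -- the bias IS the vocabulary.
    return [(p.kind, q.kind) for p, q in combinations(percepts, 2)
            if near(p, q)]

percepts = [Percept(0.0, 0.0, 0.0, "hand"), Percept(0.5, 0.0, 0.0, "cup"),
            Percept(9.0, 9.0, 9.0, "door")]
print(candidate_patterns(percepts))   # [('hand', 'cup')]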

I have actually never gone through the design with an eye towards 
identifying exactly how each important genetic bias of cognition is 
encoded in the system.  However, this would be an interesting and 
worthwhile thing to do.

Toward that end, it would be interesting to have a systematic list 
somewhere of the genetic biases that are thought to be most important 
for structuring human cognition.

Does anyone know of a well-thought-out list of this sort?  Of course I 
could make one by surveying the cognitive psych literature, but why 
reinvent the wheel?

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Enumeration of useful genetic biases for AGI

2007-02-14 Thread William Pearson

On 14/02/07, Ben Goertzel [EMAIL PROTECTED] wrote:


Does anyone know of a well-thought-out list of this sort?  Of course I
could make one by surveying the cognitive psych literature, but why
reinvent the wheel?


None that I have come across. The biases I have come across are things
like paying attention to face-like objects (1) and, per the ongoing
debate over language centres, a bias to expect language of some variety.

These two biases, I think, are part of the very important general bias
to expect other intelligent agents that we can learn from. Without
that starting bias, or the ability to have the general form of that
bias (the ability to learn almost arbitrary facts/skills/biases from
other agents), I think an AGI is going to be very slow at learning
about the world, even if its powers of inference are orders of
magnitude above a human's.


 Will Pearson
1. 
http://info.anu.edu.au/mac/Media/Research_Review/_articles/_2005/_researchreviewmckone.asp

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Enumeration of useful genetic biases for AGI

2007-02-14 Thread gts

On Tue, 13 Feb 2007 21:28:53 -0500, Ben Goertzel [EMAIL PROTECTED] wrote:

Toward that end, it would be interesting to have a systematic list  
somewhere of the genetic biases that are thought to be most important for  
structuring human cognition.


Does anyone know of a well-thought-out list of this sort?  Of course I  
could make one by surveying the cognitive psych literature, but why  
reinvent the wheel?


Your email acquaintance mentioned Kant. You may want to look at Kant's  
categories, in his Critique of Pure Reason.


These are the 'Categories of the Understanding' by which Kant thought the  
mind structures cognition:


Quantity
*Unity
*Plurality
*Totality

Quality
*Reality
*Negation
*Limitation

Relation
*Inherence and Subsistence (substance and accident)
*Causality and Dependence (cause and effect)
*Community (reciprocity)

Modality
*Possibility
*Existence
*Necessity

-gts

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Enumeration of useful genetic biases for AGI

2007-02-14 Thread Eric Baum

Ben Matt Mahoney wrote:
 I don't think there is a simple answer to this problem.  We observe
 very complex behavior in much simpler organisms that lack long term
 memory or the ability to learn.  For example, bees are born knowing
 how to fly, build hives, gather food, and communicate its location.
 

Ben Indeed, and we observe complex behaviors in turbulent fluid flow,
Ben plasmas, and other nonliving self-organizing systems as well

Ben But I don't see any of this as terribly relevant to the question
Ben I was asking ;-)

Ben Bees are born knowing how to build hives, but are children born
Ben knowing how to build houses?  I have a feeling a human's
Ben cognitive architecture and dynamics are quite different from
Ben those of a bee...

If a bee is born knowing how to build a hive, that implies, I expect,
that it is born with a program library containing many objects,
classes, methods, etc. that would be extremely useful for constructing
a program to build houses. Building houses no doubt requires adding a
bunch more well-organized code, but that code is likely to be a lot
easier to write starting with the bee's library. I expect that code
discovery is only possible when it can be broken down into steps, each
of which is not too large, and starting with the bee library it may be
that relatively small steps can take you a long way.

So I expect that there are biases that are a lot like a killer
object-oriented code library.
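
To put rough numbers on the "small steps" intuition, here is a toy Python
sketch (my construction for illustration -- the routine names are invented
and nothing here models real bee behavior):

# Program discovery by enumeration: the target behavior is only a few
# composition steps away when expressed over an innate routine library.
from itertools import product

BEE_LIBRARY = ("fetch_material", "form_cell", "attach_cell")
TARGET = ("fetch_material", "form_cell", "attach_cell")  # "build one cell"

def shortest_expression(primitives, target, max_depth=6):
    # Enumerate all sequences of primitives up to max_depth; return the
    # depth at which the target is found and how many candidates were tried.
    tried = 0
    for depth in range(1, max_depth + 1):
        for seq in product(primitives, repeat=depth):
            tried += 1
            if seq == target:
                return depth, tried
    return None

print(shortest_expression(BEE_LIBRARY, TARGET))   # (3, 18)
# Over raw motor primitives (move, turn, grasp, ...) the same behavior
# would be many more steps long, and the space grows as k**depth --
# quickly beyond the reach of stepwise discovery.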

 The complexity of inductive bias is bounded by the complexity of
 your DNA, about 6 x 10^9 bits.  This is probably too high by a few
 orders of magnitude, just as the number of synapses overestimates
 the complexity of AGI.  Nevertheless, we risk repeating the error
 of GOFAI.  Early AI researchers were led astray by the successes of
 explicitly coding knowledge into toy systems.  Now we know to use
 statistical and machine learning techniques, but we may still be
 led astray by oversimplified models of inductive bias.  Certain
 aspects of the cerebral cortex are highly uniform, which suggests a
 simple model.  But the rest of the brain has a complex structure
 that is poorly understood.
 
 
Ben I'm not thinking that a systematic list of known human inductive
Ben biases could be derived from genetics/neuroscience (in the near
Ben term), but rather from cognitive psychology.

In the near term, I am trying introspection. To actually build these
biases into a system will, I expect, involve a collaboration of human 
programmers and evolutionary programming. We also have some windows on
these biases from ethology (e.g. bees, see above), and imaging, etc.
But working out the genomics could turn out to be the way that gets
the most data the soonest.

Ben And, I'm not thinking to use such a list as the basis for
Ben creating an AGI, but simply as a tool for assisting in thinking
Ben about an already-existing AGI design that was created based on
Ben other principles.  My suspicion is that all the known and
Ben powerful human inductive biases are already built into Novamente
Ben in various ways, 

I much doubt Novamente has the library of procedures that a bee is 
born with.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Enumeration of useful genetic biases for AGI

2007-02-14 Thread Ben Goertzel


Eric Baum wrote:


Ben And, I'm not thinking to use such a list as the basis for
Ben creating an AGI, but simply as a tool for assisting in thinking
Ben about an already-existing AGI design that was created based on
Ben other principles.  My suspicion is that all the known and
Ben powerful human inductive biases are already built into Novamente
Ben in various ways, 

I much doubt Novamente has the library of procedures that a bee is 
born with.


Correct.  This is an apparent point of disagreement between us. 

My own working hypothesis is that the hard-coded inductive biases needed 
for achieving AGI are at a higher level than, say, specific navigation 
routines. 

As a few semi-random examples: we do build in a bias to look for patterns 
among percepts that appear to originate from physically nearby 
locations.  And we build in a bias toward imitative behavior.  And our 
program-learning component has a bias for hierarchical learning, which 
will bias the system toward, e.g., learning recursive 
dynamic-programming-like algorithms for navigation (but that is still 
different from supplying the system with navigation algorithms).
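
As a minimal sketch of that last, hierarchical bias (a toy construction
of mine, not Novamente's actual program-learning code): score candidate
programs by a description length that charges for each repeated block
only once, so hierarchical solutions win against flat ones.

def description_length(main, subroutines):
    # Each symbol costs 1; each subroutine body is paid for once, and a
    # call site costs 1 regardless of how long the body is.
    return len(main) + sum(len(body) for body in subroutines.values())

# Two equivalent plans for walking a square: flat, and factored into a
# reusable "side" routine.
flat = ["F", "F", "L"] * 4                      # 12 symbols
main = ["side"] * 4
subs = {"side": ["F", "F", "L"]}

print(description_length(flat, {}))             # 12
print(description_length(main, subs))           # 7
# A search biased toward shorter descriptions will prefer the factored,
# recursion-friendly plan -- without ever being handed a navigation routine.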

I think that the old book Rethinking Innateness

http://crl.ucsd.edu/~elman/Papers/book/index.shtml

got a lot of things right about the nature/nurture controversy.  The 
genome definitely encodes a lot of biases that direct learning in 
appropriate directions, but my suspicion is that you overestimate the 
specificity and concreteness of the genetically inbuilt biases.

However, the Novamente architecture does in fact support import of more 
specific biases and code routines as you suggest.  So, if you create 
them, we can plug them in and see if the Novamente learning mechanisms 
are able to take them as building-blocks and utilize them effectively.

Thus, I believe the Novamente architecture is going to be suitable for 
experimenting with both of our working hypotheses.

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


[agi] hard-coded inductive biases

2007-02-14 Thread Peter Voss
... various comments ...

It's more fundamental than that: the design of your 'senses' - what feature
extraction, sampling and encoding you provide - lays a primary foundation
for induction.
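
A minimal sketch of the point (my example, not Peter's): feed the same raw
signals through two different encodings, and even a trivial learner is
pushed toward different generalizations.

# 1-nearest-neighbour over whatever features the encoding provides; the
# "sense design" does the inductive work before learning even starts.
def nearest_label(query, examples):
    return min(examples,
               key=lambda e: sum((a - b) ** 2
                                 for a, b in zip(e[0], query)))[1]

train = [((2.0, 2.1), "blob"), ((1.0, 4.0), "stick")]   # (width, height)
query = (2.0, 4.2)                                      # wide AND tall

encode_raw   = lambda s: s                   # keep raw width and height
encode_shape = lambda s: (s[1] / s[0],)      # keep only the aspect ratio

print(nearest_label(encode_raw(query),
                    [(encode_raw(s), l) for s, l in train]))    # 'stick'
print(nearest_label(encode_shape(query),
                    [(encode_shape(s), l) for s, l in train]))  # 'blob'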

Peter


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] hard-coded inductive biases

2007-02-14 Thread Ben Goertzel

Peter Voss wrote:

... various comments ...
  


It's more fundamental than that: the design of your 'senses' - what feature
extraction, sampling and encoding you provide - lays a primary foundation
for induction.

Peter


That is definitely true, and is PART of what I meant by saying that the 
inductive biases of the human mind are largely inbuilt **implicitly** in 
Novamente.

Some are inbuilt implicitly in the design of the perception module...

-- Ben
-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


[agi] the birth of the mind

2007-02-14 Thread Eugen Leitl

http://www.amazon.com/Birth-Mind-Creates-Complexities-Thought/dp/0465044069/sr=8-1/qid=1171483943/ref=pd_bbs_sr_1/105-4534151-3528451?ie=UTF8s=books

A good, easy account of the developing brain, which describes where the 
(many) bits missing from the genome come from.

Might be of interest to some AGI folks.

-- 
Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
__
ICBM: 48.07100, 11.36820    http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303




Re: [agi] Enumeration of useful genetic biases for AGI

2007-02-14 Thread Eric Baum

Ben Eric Baum wrote:



Ben I think that the old book Rethinking Innateness

Ben http://crl.ucsd.edu/~elman/Papers/book/index.shtml

Ben got a lot of things right about the nature/nurture controversy.
Ben The genome definitely encodes a lot of biases that direct
Ben learning in appropriate directions, but my suspicion is that you
Ben overestimate the specificity and concreteness of the genetically
Ben inbuilt biases.

There are monkeys that are born pre-programmed to learn, from a single
episode of seeing a video of another monkey shrieking at a snake,
to fear snakes. The monkey will not learn from seeing another monkey
shriek at a flower to fear the flower. Nor will the monkey fear snakes 
if it hasn't previously seen another monkey do so. 

It also seems clear that monkeys are programmed to learn social 
interaction routines, because if they miss social input during a 
critical period in their development, they never develop them.
Just as humans are programmed to learn language.

Navigation is pretty important to creatures, and it's not likely to 
be easy to build those programs unless there's a lot built in,
so you might see evolution having incentive to build in routines.
Even very simple creatures do some navigation, so evolution has been
perfecting these routines for a long time. Evolution had way more
computational power than a creature does during life, so if one thinks
the creature could learn it during life, I can't see why one wouldn't
think evolution could build it in, which would likely be fitter. 
I guess you'd credit that birds, which never learn to navigate by the
stars if they don't see the heavens during the critical period in their
development, and which do learn if they do see them, are programmed to
develop a navigation instinct. I don't see why it's surprising if the
kind of navigation routines that are useful for playing Sokoban are
programmed in as well. 

But I should clarify-- I don't mean the final routines are explicitly
coded in exactly. The genomic code runs, interacts with data in 
the sensory stream, and produces the mental structures reflecting
the routines. That's how it evolves, because as the genome is being
mutated, what survives is what works in development which takes place
in contact with the sensory stream. If the
monkey doesn't see the other monkey shrieking, it won't build the
snake fear routine. There will thus be a sense in which what is 
genomically coded is a bias to develop routines, rather than explicit
routines in final form. But as I try to think what kinds of bias I can
write down that will be useful, and what kinds accord with
introspection, big chunks of code like scaffolds come to mind.

Ben However, the Novamente architecture does in fact support import
Ben of more specific biases and code routines as you suggest.  So, if
Ben you create them, we can plug them in and see if the Novamente
Ben learning mechanisms are able to take them as building-blocks and
Ben utilize them effectively.

Ben Thus, I believe the Novamente architecture is going to be
Ben suitable for experimenting with both of our working hypotheses.

That's good.

Ben -- Ben

Ben - This list is sponsored by AGIRI: http://www.agiri.org/email
Ben To unsubscribe or change your options, please go to:
Ben http://v2.listbox.com/member/?list_id=303

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Priors and indefinite probabilities

2007-02-14 Thread gts

Tying together recent threads on indefinite probabilities and prior
distributions (PI, maxent, Occam)...


For those who might not know, the PI (the principle of indifference)  
advises us, when confronted with n mutually exclusive and exhaustive  
possibilities, to assign probabilities of 1/n to each of them.


In his book _The Algebra of Probable Inference_, R.T. Cox presents a  
convincing disproof of the PI when n = 2. I'm confident his argument  
applies for greater values of n, though of course the formalism would be  
more complicated.


His argument is by reductio ad absurdum; Cox shows that the PI leads to an  
absurdity. (Not just an absurdity in his view, but a monstrous absurdity  
:-)


The following quote is verbatim from his book, except that in the interest  
of clarity I have used the symbol "&" to mean "and" instead of the dot  
used by Cox. The symbol "v" means "or" in the sense of "and/or".


Also there is an axiom used in the argument, referred to as Eq. (2.8 I).  
That axiom is

(a v ~a) & b = b.

Cox writes, concerning two arbitrary propositions a and b...

==
...it is supposed that

a | a v ~a = 1/2

for arbitrary meanings of a.

In disproof of this supposition, let us consider the probability of the  
conjunction a & b on each of the two hypotheses, a v ~a and b v ~b. We have

a & b | a v ~a = (a | a v ~a)[b | (a v ~a) & a]

By Eq. (2.8 I), (a v ~a) & a = a and therefore

a & b | a v ~a = (a | a v ~a) (b | a)

Similarly

a & b | b v ~b = (b | b v ~b) (a | b)

But, also by Eq. (2.8 I), a v ~a and b v ~b are each equal to (a v ~a) &  
(b v ~b) and each is therefore equal to the other.

Thus

a & b | b v ~b = a & b | a v ~a

and hence

(a | a v ~a) (b | a) = (b | b v ~b) (a | b)

If then a | a v ~a and b | b v ~b were each equal to 1/2, it would follow  
that b | a = a | b for arbitrary meanings of a and b.

This would be a monstrous conclusion, because b | a and a | b can have any  
ratio from zero to infinity.

Instead of supposing that a | a v ~a = 1/2, we may more reasonably  
conclude, when the hypothesis is the truism, that all probabilities are  
entirely undefined except those of the truism itself and its  
contradictory, the absurdity.

This conclusion agrees with common sense and might perhaps have been  
reached without formal argument, because the knowledge of a probability,  
though it is knowledge of a particular and limited kind, is still  
knowledge, and it would be surprising if it could be derived from the  
truism, which is the expression of complete ignorance, asserting nothing.
===
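
For anyone who wants to see the force of this numerically, here is a quick
sanity check in Python (my sketch, using an explicit finite distribution
rather than Cox's abstract calculus):

# With any concrete joint distribution the product rule gives
# P(a & b | T) = P(a | T) P(b | a) = P(b | T) P(a | b), where T is the
# truism.  Forcing P(a | T) = P(b | T) = 1/2 for all a, b would therefore
# force P(b | a) = P(a | b), which a generic distribution refutes.
P = {(True, True): 0.10, (True, False): 0.05,
     (False, True): 0.60, (False, False): 0.25}   # arbitrary joint over (a, b)

p_a  = sum(p for (a, b), p in P.items() if a)     # P(a | T) = 0.15
p_b  = sum(p for (a, b), p in P.items() if b)     # P(b | T) = 0.70
p_ab = P[(True, True)]                            # P(a & b | T)

p_b_given_a = p_ab / p_a
p_a_given_b = p_ab / p_b

# The product rule holds both ways round:
assert abs(p_ab - p_a * p_b_given_a) < 1e-12
assert abs(p_ab - p_b * p_a_given_b) < 1e-12

# ...but the two conditionals are wildly unequal, as Cox says they may be:
print(p_b_given_a, p_a_given_b)   # 0.666...  0.142...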

-gts




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Enumeration of useful genetic biases for AGI

2007-02-14 Thread Ben Goertzel


 


But I should clarify-- I don't mean the final routines are explicitly
coded in exactly. The genomic code runs, interacts with data in 
the sensory stream, and produces the mental structures reflecting

the routines. That's how it evolves, because as the genome is being
mutated, what survives is what works in development which takes place
in contact with the sensory stream. If the
monkey doesn't see the other monkey shrieking, it won't build the
snake fear routine. There will thus be a sense in which what is 
genomically coded is a bias to develop routines, rather than explicit
routines in final form. 


Agreed, yes.  This is the main point Elman et al make in their book as 
well, as you know.



But as I try to think what kinds of bias I can
write down that will be useful, and what kinds accord with
introspection, big chunks of code like scaffolds come to mind.

This is where I'm not sure you're right ... I'm not sure the relevant 
biases are best provided to an AGI system as big chunks of code.

For each of your big chunks of code, I might be able to figure out a way 
to achieve the same bias -- in a more flexible and learning-friendly way 
-- via subtler mechanisms within the Novamente system (a few parameter 
tweaks, a few small in-built procedures, etc.)


-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Priors and indefinite probabilities

2007-02-14 Thread Ben Goertzel


Indeed, that is a cleaner and simpler argument than the various more 
concrete PI paradoxes... (wine/water, etc.)


It seems to show convincingly that the PI cannot be consistently applied 
across the board, but can only be applied heuristically, to certain cases 
and not others, as judged contextually appropriate.


This of course is one of the historical arguments for the subjective, 
Bayesian view of statistics; and also for the interval representation of 
probabilities (so when you don't know what P(A) is, you can just assign 
it the interval [0,1]).
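
A toy version of the interval idea (my illustration, using the standard
Frechet bounds rather than Novamente's indefinite-probabilities
formalism):

from dataclasses import dataclass

@dataclass
class IntervalProb:
    lo: float
    hi: float

    def AND(self, other):
        # Frechet bounds: max(0, p + q - 1) <= P(A & B) <= min(p, q),
        # evaluated at the interval endpoints.
        return IntervalProb(max(0.0, self.lo + other.lo - 1.0),
                            min(self.hi, other.hi))

unknown = IntervalProb(0.0, 1.0)   # total ignorance about P(A)
known   = IntervalProb(0.6, 0.7)   # partial knowledge about P(B)

print(unknown.AND(known))          # IntervalProb(lo=0.0, hi=0.7)
# Unlike the PI, ignorance stays ignorance: the result spans everything
# consistent with what IS known, instead of collapsing to "0.5".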


Ben

gts wrote:

 [snip]



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


RE: [agi] Priors and indefinite probabilities

2007-02-14 Thread Jef Allbright
Chuckling that this is still going on, and top posting based on Ben's
prior example...

Cox's proof is all well and good, but I think gts still misses the
point:

The principle of indifference is still the *best* one can do under
conditions of total ignorance.
Any other distribution would imply some latent knowledge.

The subtle and deeper point missed by gts, and unacknowledged by Cox, is
that while it is logically true you can't get knowledge from ignorance,
as a subjective agent within a consistent reality, sometimes you just
gotta choose anyway, or you don't get to play the game.

LEADING TO THE ONLY THING REALLY INTERESTING ABOUT THIS DISCUSSION:
The deeper philosophical point that, as subjective agents, we can't
actually ask a fully specified question without having a prior of some
kind, and that by playing the game we tend always to move toward a state
of less ignorance.

The principle of indifference, or as Jaynes put it, "equal information
yields equal probabilities", is beautiful in its insistence on
consistency, and there's an even greater beauty in what it says about
our place in the universe.
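
(For concreteness, a small numeric sketch of Jaynes's slogan -- my toy,
nothing deep: with no constraints beyond normalization, maximum entropy
lands exactly on the PI's uniform 1/n.)

import random
from math import log

def entropy(p):
    return -sum(x * log(x) for x in p if x > 0)

def random_dist(n):
    w = [random.random() for _ in range(n)]
    s = sum(w)
    return [x / s for x in w]

n = 4
uniform = [1.0 / n] * n
best = max((random_dist(n) for _ in range(100_000)), key=entropy)

print(entropy(uniform))   # log(4) ~= 1.3863, the ceiling
print(entropy(best))      # gets close to, but never exceeds, the uniform
# Add a constraint (i.e., actual information) and the maxent solution
# moves off uniform: equal information yields equal probabilities.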

Ben, thanks for your diplomatic acknowledgement of both sides, below.

- Jef



Ben Goertzel wrote:

 [snip]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303