Re: [FRIAM] Model of induction

2016-12-14 Thread Russell Standish
On Tue, Dec 13, 2016 at 08:41:12PM -0700, Nick Thompson wrote:
> Hi, Russell S., 
> 
> It's a long time since the old days of the Three Russell's, isn't it?  Where 
> have all the Russell's gone?  Good to hear from you. 
> 
> This has been a humbling experience.  My brother was a mathematician and he 
> used to frown every time I asked him what I thought was a simple mathematical 
> question.  
> 
> So ... with my heart in my hands ... please tell me, why a string of 100 
> ones, followed by a string of 100 twos, ..., followed by a string of 100 
> zeros wouldn’t be regarded as random.  There must be something more than 
> uniform distribution, eh?
> 

Yes - the modern notion of a random string is that it is
incompressible: no Turing machine (program) shorter than the string
itself can generate it.

Obviously, you can exploit nonuniformity to provide a compression - eg
the way Morse code represents the frequent letters 'e' and 't' by a
single . and - respectively provides a compression of typical English
phrases. Hence uniformity is one test of randomness.

That is why non-uniform randomness, whilst a thing, must be defined by
an algorithmic transformation of a uniformly random source (the
algorithmically incompressible strings mentioned above).
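
As a crude, concrete illustration (a sketch only - a general-purpose
compressor like zlib gives an upper bound on algorithmic complexity, not a
definition of it), compare Nick's structured string with a uniformly random
digit string:

import zlib, random

# Nick's string: 100 ones, 100 twos, ..., 100 zeros (1000 digits in all)
structured = "".join(str(d) * 100 for d in [1, 2, 3, 4, 5, 6, 7, 8, 9, 0])

# 1000 digits drawn uniformly at random
random.seed(0)
uniform = "".join(random.choice("0123456789") for _ in range(1000))

for name, s in [("structured", structured), ("uniform", uniform)]:
    print(name, len(s), "chars ->", len(zlib.compress(s.encode())), "bytes compressed")

# The structured string shrinks to a few dozen bytes; the random digit string
# stays near the ~3.3 bits per digit it actually carries.  Both have a
# perfectly flat digit histogram, so uniformity alone doesn't capture randomness.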

> Is there a halting problem lurking here?  
> 

Absolutely.

-- 


Dr Russell Standish                    Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Senior Research Fellow        hpco...@hpcoders.com.au
Economics, Kingston University         http://www.hpcoders.com.au



FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove

Re: [FRIAM] Model of induction

2016-12-14 Thread Owen Densmore
All three (Aaron Clauset and Cosma R. Shalizi and Mark E. J. Newman) have
given great courses at the SFI summer school.

On Tue, Dec 13, 2016 at 8:41 PM, Nick Thompson 
wrote:

> Hi, Russell S.,
>
> It's a long time since the old days of the Three Russell's, isn't it?
> Where have all the Russell's gone?  Good to hear from you.
>
> This has been a humbling experience.  My brother was a mathematician and
> he used to frown every time I asked him what I thought was a simple
> mathematical question.
>
> So ... with my heart in my hands ... please tell me, why a string of 100
> ones, followed by a string of 100 twos, ..., followed by a string of 100
> zeros wouldn’t be regarded as random.  There must be something more than
> uniform distribution, eh?
>
> Is there a halting problem lurking here?
>
> Nick
>
> Nicholas S. Thompson
> Emeritus Professor of Psychology and Biology
> Clark University
> http://home.earthlink.net/~nickthompson/naturaldesigns/
>
> -Original Message-
> From: Friam [mailto:friam-boun...@redfish.com] On Behalf Of Russell
> Standish
> Sent: Tuesday, December 13, 2016 7:59 PM
> To: 'The Friday Morning Applied Complexity Coffee Group' <
> friam@redfish.com>
> Subject: Re: [FRIAM] Model of induction
>
> On Mon, Dec 12, 2016 at 02:45:11PM -0700, Nick Thompson wrote:
> >
> >
> > Let’s take out all the colorful stuff and try again.  Imagine a thousand
> computers, each generating a list of random numbers.  Now imagine that for
> some small quantity of these computers, the numbers generated are in a
> normal (Poisson?) distribution with mean mu and standard deviation s.  Now,
> the problem is how to detect these non-random computers and estimate the
> values of mu and s.
> >
>
> Your question comes down to: given a set of statistical distributions (ie
> models), which model best fits a given data source. In your case,
> presumably you have two models - a uniform distribution and a normal (or
> Poisson - they're two different distributions resulting from additive versus
> multiplicative processes respectively) distribution.
>
> The paper to read on this topic is
>
> @Article{Clauset-etal07,
>   author =   {Aaron Clauset and Cosma R. Shalizi and Mark E. J.
> Newman},
>   title ={Power-law Distributions in Empirical Data},
>   journal =  {SIAM Review},
>   volume = 51,
>   pages = {661-703},
>   year = 2009,
>   note = {arXiv:0706.1062}
> }
>
> Almost everyone doing work in Complex Systems theory with power laws has
> been doing it wrong! The way it should be done is to compare a metric
> called "likelihood" calculated over the data and a model, for the different
> models in question.
>
> I was scheduled to give a talk "Perils of Power Laws" at a local Complex
> Systems conference in 2007. Originally, when I proposed the topic, I
> planned to synthesise and collect some of my war stories relating to power
> law problems - but a couple of months before the conference, someone showed
> me Clauset's paper. I was so impressed by it - not only did it supersede
> anything I could do on that timescale, but I also felt it was so important
> for my colleagues to know about - that I took the unprecedented step of
> presenting someone else's paper at the conference. With full attribution, of
> course. I still feel it was the most important paper in my field from 2007,
> and one of the most important papers of this century, even though it didn't
> officially get published until 2009 :).
>
> Nick's question is unrelated to the question of how to detect whether a
> source is random or not. A non-uniform random source is one that can be
> transformed into a uniform random source by a computable transformation, so
> uniformity is not really a test of randomness.
>
> Detecting whether a source is random or not is not a computationally
> feasible task. All one can do is prove that a given source is non-random
> (by providing an effective generator of the data), but you can never prove
> a source is truly random, except by exhaustive testing of all Turing
> machines shorter than the data's complexity, which suffers from
> combinatorial explosion in computational cost.
>
> Cheers
>
> --
>
> 
> 
> Dr Russell Standish                    Phone 0425 253119 (mobile)
> Principal, High Performance Coders
> Visiting Senior Research Fellow        hpco...@hpcoders.com.au
> Economics, Kingston University         http://www.hpcoders.com.au
> 
> 
>
> 

Re: [FRIAM] Model of induction

2016-12-13 Thread Nick Thompson
Hi, Russell S., 

It's a long time since the old days of the Three Russell's, isn't it?  Where 
have all the Russell's gone?  Good to hear from you. 

This has been a humbling experience.  My brother was a mathematician and he 
used to frown every time I asked him what I thought was a simple mathematical 
question.  

So ... with my heart in my hands ... please tell me, why a string of 100 
ones, followed by a string of 100 twos, ..., followed by a string of 100 
zeros wouldn’t be regarded as random.  There must be something more than 
uniform distribution, eh?

Is there a halting problem lurking here?  

Nick 

Nicholas S. Thompson
Emeritus Professor of Psychology and Biology
Clark University
http://home.earthlink.net/~nickthompson/naturaldesigns/

-Original Message-
From: Friam [mailto:friam-boun...@redfish.com] On Behalf Of Russell Standish
Sent: Tuesday, December 13, 2016 7:59 PM
To: 'The Friday Morning Applied Complexity Coffee Group' 
Subject: Re: [FRIAM] Model of induction

On Mon, Dec 12, 2016 at 02:45:11PM -0700, Nick Thompson wrote:
>  
> 
> Let’s take out all the colorful stuff and try again.  Imagine a thousand 
> computers, each generating a list of random numbers.  Now imagine that for 
> some small quantity of these computers, the numbers generated are in a 
> normal (Poisson?) distribution with mean mu and standard deviation s.  Now, 
> the problem is how to detect these non-random computers and estimate the 
> values of mu and s.  
> 

Your question comes down to: given a set of statistical distributions (ie 
models), which model best fits a given data source. In your case, presumably 
you have two models - a uniform distribution and a normal (or Poisson - they're 
two different distributions resulting from additive versus multiplicative 
processes respectively) distribution.

The paper to read on this topic is

@Article{Clauset-etal07,
  author =   {Aaron Clauset and Cosma R. Shalizi and Mark E. J. Newman},
  title ={Power-law Distributions in Empirical Data},
  journal =  {SIAM Review},
  volume = 51,
  pages = {661-703},
  year = 2009,
  note = {arXiv:0706.1062}
}

Almost everyone doing work in Complex Systems theory with power laws has been 
doing it wrong! The way it should be done is to compare a metric called 
"likelihood" calculated over the data and a model, for the different models in 
question.

I was scheduled to give a talk "Perils of Power Laws" at a local Complex 
Systems conference in 2007. Originally, when I proposed the topic, I planned to 
synthesise and collect some of my war stories relating to power law problems - 
but a couple of months before the conference, someone showed me Clauset's 
paper. I was so impressed by it - not only did it supersede anything I could do 
on that timescale, but I also felt it was so important for my colleagues to know 
about - that I took the unprecedented step of presenting someone else's paper at 
the conference. With full attribution, of course. I still feel it was the most 
important paper in my field from 2007, and one of the most important papers of 
this century, even though it didn't officially get published until 2009 :).

Nick's question is unrelated to the question of how to detect whether a source 
is random or not. A non-uniform random source is one that can be transformed 
into a uniform random source by a computable transformation, so uniformity is 
not really a test of randomness.

> Detecting whether a source is random or not is not a computationally feasible 
> task. All one can do is prove that a given source is non-random (by providing 
> an effective generator of the data), but you can never prove a source is truly 
> random, except by exhaustive testing of all Turing machines shorter than the 
> data's complexity, which suffers from combinatorial explosion in computational 
> cost.

Cheers

-- 


Dr Russell Standish                    Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Senior Research Fellow        hpco...@hpcoders.com.au
Economics, Kingston University         http://www.hpcoders.com.au



FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College to unsubscribe 
http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove



FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove

Re: [FRIAM] Model of induction

2016-12-13 Thread Russell Standish
On Mon, Dec 12, 2016 at 02:45:11PM -0700, Nick Thompson wrote:
>  
> 
> Let’s take out all the colorful stuff and try again.  Imagine a thousand 
> computers, each generating a list of random numbers.  Now imagine that for 
> some small quantity of these computers, the numbers generated are in a 
> normal (Poisson?) distribution with mean mu and standard deviation s.  Now, 
> the problem is how to detect these non-random computers and estimate the 
> values of mu and s.  
> 

Your question comes down to: given a set of statistical distributions
(ie models), which model best fits a given data source. In your case,
presumably you have two models - a uniform distribution and a normal
(or Poisson - they're two different distributions resulting from
additive versus multiplicative processes respectively) distribution.

The paper to read on this topic is

@Article{Clauset-etal07,
  author =   {Aaron Clauset and Cosma R. Shalizi and Mark E. J. Newman},
  title ={Power-law Distributions in Empirical Data},
  journal =  {SIAM Review},
  volume = 51,
  pages = {661-703},
  year = 2009,
  note = {arXiv:0706.1062}
}

Almost everyone doing work in Complex Systems theory with power laws
has been doing it wrong! The way it should be done is to compare a
metric called the "likelihood" - the probability of the observed data
under a given model - across the different models in question.
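
To make that concrete for the thousand-computer question, here is a minimal
sketch (my own illustration, not the paper's method): compute the
log-likelihood of each stream under a Uniform(0,1) model and under a Normal
model with maximum-likelihood mu and s, and flag the streams where the normal
model wins by a wide margin.

import numpy as np

def loglik_uniform(x, lo=0.0, hi=1.0):
    # log-likelihood under Uniform(lo, hi); -inf if any point falls outside
    if np.any((x < lo) | (x > hi)):
        return -np.inf
    return -len(x) * np.log(hi - lo)

def loglik_normal(x):
    # maximum-likelihood mu and s, and the resulting log-likelihood
    mu, s = x.mean(), x.std()
    ll = np.sum(-0.5 * np.log(2 * np.pi * s**2) - (x - mu)**2 / (2 * s**2))
    return ll, mu, s

rng = np.random.default_rng(0)
streams = [rng.uniform(0, 1, 1000) for _ in range(995)]                    # the "random" computers
streams += [np.clip(rng.normal(0.5, 0.1, 1000), 0, 1) for _ in range(5)]   # the odd ones out

for i, x in enumerate(streams):
    ll_n, mu, s = loglik_normal(x)
    if ll_n - loglik_uniform(x) > 10:   # crude threshold on the log-likelihood ratio
        print(f"stream {i}: normal model wins, mu ~ {mu:.3f}, s ~ {s:.3f}")

A proper treatment would calibrate that threshold (e.g. with a likelihood-ratio
test or cross-validation), which is exactly the sort of care the Clauset et al.
paper argues for.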

I was scheduled to give a talk "Perils of Power Laws" at a local
Complex Systems conference in 2007. Originally, when I proposed the
topic, I planned to synthesise and collect some of my war stories
relating to power law problems - but a couple of months before the
conference, someone showed me Clauset's paper. I was so impressed by
it - not only did it supersede anything I could do on that timescale,
but I also felt it was so important for my colleagues to know about -
that I took the unprecedented step of presenting someone else's paper
at the conference. With full attribution, of course. I still feel it
was the most important paper in my field from 2007, and one of the
most important papers of this century, even though it didn't
officially get published until 2009 :).

Nick's question is unrelated to the question of how to detect whether
a source is random or not. A non-uniform random source is one that can
be transformed into a uniform random source by a computable
transformation, so uniformity is not really a test of randomness.
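
That computable transformation is easy to see concretely with the probability
integral transform: pushing a non-uniform source through its own (computable)
CDF yields a uniform one. A minimal sketch, assuming scipy is available:

import numpy as np
from scipy.stats import norm, kstest

rng = np.random.default_rng(2)

# A non-uniform random source: normal samples with mean 3 and sd 2
x = rng.normal(loc=3.0, scale=2.0, size=10_000)

# The computable transformation: push each sample through its own CDF
u = norm.cdf(x, loc=3.0, scale=2.0)

# u is now statistically uniform on [0, 1]
print(kstest(u, "uniform"))   # a large p-value is consistent with uniformity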

Detecting whether a source is random or not is not a computationally
feasible task. All one can do is prove that a given source is
non-random (by providing an effective generator of the data), but you
can never prove a source is truly random, except by exhaustive testing
of all Turing machines shorter than the data's complexity, which
suffers from combinatorial explosion in computational cost.

Cheers

-- 


Dr Russell Standish                    Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Senior Research Fellow        hpco...@hpcoders.com.au
Economics, Kingston University         http://www.hpcoders.com.au



FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove

Re: [FRIAM] Model of induction

2016-12-13 Thread Marcus Daniels
If you can write down a Hamiltonian for your domain-specific problem, the 
D-Wave could sample from that Boltzmann distribution.
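
As a purely classical illustration of what sampling from a Boltzmann
distribution for a given Hamiltonian means (a sketch only - nothing to do with
how the D-Wave hardware actually does it), here is a minimal Metropolis sampler
for a small Ising-style Hamiltonian:

import numpy as np

rng = np.random.default_rng(1)

# A small random Ising-style Hamiltonian: H(s) = -0.5 * s.J.s, with s_i in {-1, +1}
n = 10
J = rng.normal(0, 1, (n, n))
J = (J + J.T) / 2
np.fill_diagonal(J, 0)

def energy(s):
    return -0.5 * s @ J @ s

def metropolis(beta=1.0, steps=20000):
    # Sample spin configurations with probability proportional to exp(-beta * H(s))
    s = rng.choice([-1, 1], size=n)
    samples = []
    for t in range(steps):
        i = rng.integers(n)
        dE = 2 * s[i] * (J[i] @ s)     # energy change from flipping spin i
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i] = -s[i]
        if t % 10 == 0:
            samples.append(s.copy())
    return samples

samples = metropolis()
print("mean energy of samples:", np.mean([energy(s) for s in samples]))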

From: Friam [mailto:friam-boun...@redfish.com] On Behalf Of Owen Densmore
Sent: Tuesday, December 13, 2016 11:18 AM
To: The Friday Morning Applied Complexity Coffee Group 
Subject: Re: [FRIAM] Model of induction

Domain Specific Random Number Generators? Kinda interesting idea.

   -- Owen


FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove

Re: [FRIAM] Model of induction

2016-12-13 Thread Owen Densmore
Domain Specific Random Number Generators? Kinda interesting idea.

   -- Owen

FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove

Re: [FRIAM] Model of induction

2016-12-13 Thread Nick Thompson
Oh c__p, Roger. Even I should have seen that coming.  

 

Yes, Nick, what ever do you MEAN by a GENERATED RANDOM number?  

 

Seems like an oxymoron, doesn’t it?

 

Ok.  Can’t I just ask that we stipulate that the stream of numbers on the 
screen of the computer is random and let it go at that?  

 

Nick 

 

PS  Roger, I hear that the high temp in Boston will be 19 degrees?  How is it 
in the bubble?  

 

Nicholas S. Thompson

Emeritus Professor of Psychology and Biology

Clark University

http://home.earthlink.net/~nickthompson/naturaldesigns/

 

From: Friam [mailto:friam-boun...@redfish.com] On Behalf Of Eric Charles
Sent: Tuesday, December 13, 2016 6:50 AM
To: The Friday Morning Applied Complexity Coffee Group 
Subject: Re: [FRIAM] Model of induction

 

Roger, this seems to get to the heart of the matter! I think we must wonder 
whether your final sentence is not begging the question: "This was discovered 
because the random numbers were used in simulations which failed to simulate 
the random processes they were designed to simulate." 

 

I'm not saying that it is begging the question, I'm just saying it seems to me 
like we are peering deep into the rabbit hole. Presumably, we must have rather 
extreme confidence that the process we are trying to simulate is, in fact, 
"truly random", AND rather extreme confidence that our simulation is not 
simply having a "bad run", as one would expect any random system to have every 
so often.  Maybe our simulation is doing great, but the process we are trying 
to simulate is not random in several subtle ways we have not anticipated. How 
would we know? 

 

(P.S. In hindsight, this is either right at the heart of the matter, or a 
complete tangent, and I'm not as confident which it is as I was when I started 
replying.) 

 

 





---
Eric P. Charles, Ph.D.
Supervisory Survey Statistician

U.S. Marine Corps

 

On Tue, Dec 13, 2016 at 8:24 AM, Roger Critchlow <r...@elf.org> wrote:

You have left the model for the untainted computers unspecified, but let's say 
that they are producing uniform pseudo-random numbers over some interval, like 
0 .. 1.  Then your question becomes how do we distinguish the tainted 
computers, which are only simulating a uniform distribution?

 

This problem encapsulates the history of pseudo-random number generation 
algorithms.  A researcher named George Marsaglia spent a good part of his 
career developing algorithms which detected flaws in pseudo-random number 
generators.  The battery of tests is described here, 
https://en.wikipedia.org/wiki/Diehard_tests, so I won't go over them, but it's 
a good list.

 

But, as Marsaglia reported in 
http://www.ics.uci.edu/~fowlkes/class/cs177/marsaglia.pdf, we don't even know 
all the ways a pseudo-random number generator can go wrong, we discover the 
catalog of faults as we go merrily assuming that the algorithm is producing 
numbers with the properties of our ideal distribution.  This was discovered 
because the random numbers were used in simulations which failed to simulate 
the random processes they were designed to simulate.

 

-- rec --

 

 

On Mon, Dec 12, 2016 at 4:45 PM, Nick Thompson <nickthomp...@earthlink.net> wrote:

Everybody, 

 

As usual, when we “citizens” ask mathematical questions, we throw in WAY too 
much surplus meaning.  

 

Thanks for all your fine-tuned efforts to straighten me out.  

 

Let’s take out all the colorful stuff and try again.  Imagine a thousand 
computers, each generating a list of random numbers.  Now imagine that for some 
small quantity of these computers, the numbers generated are in a normal 
(Poisson?) distribution with mean mu and standard deviation s.  Now, the 
problem is how to detect these non-random computers and estimate the values of 
mu and s.  

 

Let’s leave aside for the moment what kind of –duction that is.  I shouldn’t 
have thrown that in.  And  besides, I’ve had enough humiliation for one day.  

 

 

Nick 

 

Nicholas S. Thompson

Emeritus Professor of Psychology and Biology

Clark University

http://home.earthlink.net/~nickthompson/naturaldesigns/

 

From: Friam [mailto:friam-boun...@redfish.com] On Behalf Of Frank Wimberly
Sent: Monday, December 12, 2016 12:06 PM
To: The Friday Morning Applied Complexity Coffee Group <friam@redfish.com>
Subject: Re: [FRIAM] Model of induction

 

Mathematical induction is a method for proving theorems.  "Scientific 
induction" is a method for accumulating evidence to support one hypothesis or 
another; no proof involved, or possible.

 

Frank

Frank Wimberly
Phone (505) 670-9918  

 

On Dec 12, 2016 11:44 AM, "Owen Densmore" <o...@backspaces.net> wrote

Re: [FRIAM] Model of induction

2016-12-13 Thread Eric Charles
Roger, this seems to get to the heart of the matter! I think we must
wonder whether your final sentence is not begging the question: "This was
discovered because the random numbers were used in simulations which failed
to simulate the random processes they were designed to simulate."

I'm not saying that it is begging the question, I'm just saying it seems to
me like we are peering deep into the rabbit hole. Presumably, we must have
rather extreme confidence that the process we are trying to simulate is, in
fact, "truly random", AND rather extreme confidence that our simulation
is not simply having a "bad run", as one would expect any random system to
have every so often.  Maybe our simulation is doing great, but the process
we are trying to simulate is not random in several subtle ways we have not
anticipated. How would we know?

(P.S. In hindsight, this is either right at the heart of the matter, or a
complete tangent, and I'm not as confident which it is as I was when I
started replying.)




---
Eric P. Charles, Ph.D.
Supervisory Survey Statistician
U.S. Marine Corps


On Tue, Dec 13, 2016 at 8:24 AM, Roger Critchlow  wrote:

> You have left the model for the untainted computers unspecified, but let's
> say that they are producing uniform pseudo-random numbers over some
> interval, like 0 .. 1.  Then your question becomes how do we distinguish
> the tainted computers, which are only simulating a uniform distribution?
>
> This problem encapsulates the history of pseudo-random number generation
> algorithms.  A researcher named George Marsaglia spent a good part of his
> career developing algorithms which detected flaws in pseudo-random number
> generators.  The battery of tests is described here, https://en.wikipedia.
> org/wiki/Diehard_tests, so I won't go over them, but it's a good list.
>
> But, as Marsaglia reported in http://www.ics.uci.edu/~
> fowlkes/class/cs177/marsaglia.pdf, we don't even know all the ways a
> pseudo-random number generator can go wrong, we discover the catalog of
> faults as we go merrily assuming that the algorithm is producing numbers
> with the properties of our ideal distribution.  This was discovered because
> the random numbers were used in simulations which failed to simulate the
> random processes they were designed to simulate.
>
> -- rec --
>
>
> On Mon, Dec 12, 2016 at 4:45 PM, Nick Thompson wrote:
>
>> Everybody,
>>
>>
>>
>> As usual, when we “citizens” ask mathematical questions, we throw in WAY
>> too much surplus meaning.
>>
>>
>>
>> Thanks for all your fine-tuned efforts to straighten me out.
>>
>>
>>
>> Let’s take out all the colorful stuff and try again.  Imagine a thousand
>> computers, each generating a list of random numbers.  Now imagine that for
>> some small quantity of these computers, the numbers generated are in a
>> normal (Poisson?) distribution with mean mu and standard deviation s.  Now,
>> the problem is how to detect these non-random computers and estimate the
>> values of mu and s.
>>
>>
>>
>> Let’s leave aside for the moment what kind of –duction that is.  I
>> shouldn’t have thrown that in.  And  besides, I’ve had enough humiliation
>> for one day.
>>
>>
>>
>>
>>
>> Nick
>>
>>
>>
>> Nicholas S. Thompson
>>
>> Emeritus Professor of Psychology and Biology
>>
>> Clark University
>>
>> http://home.earthlink.net/~nickthompson/naturaldesigns/
>>
>>
>>
>> *From:* Friam [mailto:friam-boun...@redfish.com] *On Behalf Of *Frank
>> Wimberly
>> *Sent:* Monday, December 12, 2016 12:06 PM
>> *To:* The Friday Morning Applied Complexity Coffee Group <
>> friam@redfish.com>
>> *Subject:* Re: [FRIAM] Model of induction
>>
>>
>>
>> Mathematical induction is a method for proving theorems.  "Scientific
>> induction" is a method for accumulating evidence to support one hypothesis
>> or another; no proof involved, or possible.
>>
>>
>>
>> Frank
>>
>> Frank Wimberly
>> Phone (505) 670-9918
>>
>>
>>
>> On Dec 12, 2016 11:44 AM, "Owen Densmore"  wrote:
>>
>> What's the difference between mathematical induction and scientific?
>>
>>   https://en.wikipedia.org/wiki/Mathematical_induction
>>
>>
>>
>>-- Owen
>>
>>
>>
>> On Mon, Dec 12, 2016 at 10:44 AM, Robert J. Cordingley <
>> rob...@cirrillian.com> wrote:
>>
>> Based on https://plato.stanford.edu/entries/peirce/#dia - it looks like
>> abduction (A

Re: [FRIAM] Model of induction

2016-12-13 Thread Roger Critchlow
You have left the model for the untainted computers unspecified, but let's
say that they are producing uniform pseudo-random numbers over some
interval, like 0 .. 1.  Then your question becomes how do we distinguish
the tainted computers, which are only simulating a uniform distribution?

This problem encapsulates the history of pseudo-random number generation
algorithms.  A researcher named George Marsaglia spent a good part of his
career developing algorithms which detected flaws in pseudo-random number
generators.  The battery of tests is described here,
https://en.wikipedia.org/wiki/Diehard_tests, so I won't go over them, but
it's a good list.
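
None of this is a substitute for the Diehard battery, but as a flavour of the
genre, here is a sketch of the most basic frequency check - a chi-square test
that the digits 0-9 come up equally often - which a badly flawed generator can
still pass (scipy assumed available for the p-value):

import random
from collections import Counter

from scipy.stats import chi2

def chi_square_uniformity(digits, k=10):
    # Pearson chi-square test that each of the k symbols is equally likely
    n = len(digits)
    counts = Counter(digits)
    expected = n / k
    stat = sum((counts.get(d, 0) - expected) ** 2 / expected for d in range(k))
    p_value = chi2.sf(stat, df=k - 1)
    return stat, p_value

random.seed(42)
sample = [random.randrange(10) for _ in range(100_000)]
stat, p = chi_square_uniformity(sample)
print(f"chi-square = {stat:.1f}, p = {p:.3f}")
# A tiny p-value would be evidence against uniformity; a comfortable p-value
# only says this one weak property looks fine, not that the generator is good.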

But, as Marsaglia reported in
http://www.ics.uci.edu/~fowlkes/class/cs177/marsaglia.pdf, we don't even
know all the ways a pseudo-random number generator can go wrong; we
discover the catalog of faults as we go, merrily assuming that the algorithm
is producing numbers with the properties of our ideal distribution.  These
faults were discovered because the random numbers were used in simulations
which failed to simulate the random processes they were designed to simulate.

-- rec --


On Mon, Dec 12, 2016 at 4:45 PM, Nick Thompson 
wrote:

> Everybody,
>
>
>
> As usual, when we “citizens” ask mathematical questions, we throw in WAY
> too much surplus meaning.
>
>
>
> Thanks for all your fine-tuned efforts to straighten me out.
>
>
>
> Let’s take out all the colorful stuff and try again.  Imagine a thousand
> computers, each generating a list of random numbers.  Now imagine that for
> some small quantity of these computers, the numbers generated are in a
> normal (Poisson?) distribution with mean mu and standard deviation s.  Now,
> the problem is how to detect these non-random computers and estimate the
> values of mu and s.
>
>
>
> Let’s leave aside for the moment what kind of –duction that is.  I
> shouldn’t have thrown that in.  And  besides, I’ve had enough humiliation
> for one day.
>
>
>
>
>
> Nick
>
>
>
> Nicholas S. Thompson
>
> Emeritus Professor of Psychology and Biology
>
> Clark University
>
> http://home.earthlink.net/~nickthompson/naturaldesigns/
>
>
>
> *From:* Friam [mailto:friam-boun...@redfish.com] *On Behalf Of *Frank
> Wimberly
> *Sent:* Monday, December 12, 2016 12:06 PM
> *To:* The Friday Morning Applied Complexity Coffee Group <
> friam@redfish.com>
> *Subject:* Re: [FRIAM] Model of induction
>
>
>
> Mathematical induction is a method for proving theorems.  "Scientific
> induction" is a method for accumulating evidence to support one hypothesis
> or another; no proof involved, or possible.
>
>
>
> Frank
>
> Frank Wimberly
> Phone (505) 670-9918
>
>
>
> On Dec 12, 2016 11:44 AM, "Owen Densmore"  wrote:
>
> What's the difference between mathematical induction and scientific?
>
>   https://en.wikipedia.org/wiki/Mathematical_induction
>
>
>
>-- Owen
>
>
>
> On Mon, Dec 12, 2016 at 10:44 AM, Robert J. Cordingley <
> rob...@cirrillian.com> wrote:
>
> Based on https://plato.stanford.edu/entries/peirce/#dia - it looks like
> abduction (AAA-2) to me - ie developing an educated guess as to which might
> be the winning wheel. Enough funds should find it with some degree of
> certainty but that may be a different question and should use different
> statistics because the 'longest run' is a poor metric compared to say net
> winnings or average rate of winning. A long run is itself a data point and
> the premise in red (below) is false.
>
> Waiting for wisdom to kick in. R
>
> PS FWIW the article does not contain the phrase 'scientific induction' R
>
>
>
> On 12/12/16 12:31 AM, Nick Thompson wrote:
>
> Dear Wise Persons,
>
>
>
> Would the following work?
>
>
>
> *Imagine you enter a casino that has a thousand roulette tables.  The
> rumor circulates around the casino that one of the wheels is loaded.  So,
> you call up a thousand of your friends and you all work together to find
> the loaded wheel.  Why, because if you use your knowledge to play that
> wheel you will make a LOT of money.  Now the problem you all face, of
> course, is that a run of successes is not an infallible sign of a loaded
> wheel.  In fact, given randomness, it is assured that with a thousand
> players playing a thousand wheels as fast as they can, there will be random
> long runs of successes.  But **the longer a run of success continues, the
> greater is the probability that the wheel that produces those successes is
> biased.**  So, your team of players would be paid, on this account, for
> beginning to focus its play on those wheels with the longest runs

Re: [FRIAM] Model of induction

2016-12-12 Thread Nick Thompson
Everybody, 

 

As usual, when we “citizens” ask mathematical questions, we throw in WAY too 
much surplus meaning.  

 

Thanks for all your fine-tuned efforts to straighten me out.  

 

Let’s take out all the colorful stuff and try again.  Imagine a thousand 
computers, each generating a list of random numbers.  Now imagine that for some 
small quantity of these computers, the numbers generated are in a normal 
(Poisson?) distribution with mean mu and standard deviation s.  Now, the 
problem is how to detect these non-random computers and estimate the values of 
mu and s.  

 

Let’s leave aside for the moment what kind of –duction that is.  I shouldn’t 
have thrown that in.  And  besides, I’ve had enough humiliation for one day.  

 

 

Nick 

 

Nicholas S. Thompson

Emeritus Professor of Psychology and Biology

Clark University

http://home.earthlink.net/~nickthompson/naturaldesigns/

 

From: Friam [mailto:friam-boun...@redfish.com] On Behalf Of Frank Wimberly
Sent: Monday, December 12, 2016 12:06 PM
To: The Friday Morning Applied Complexity Coffee Group 
Subject: Re: [FRIAM] Model of induction

 

Mathematical induction is a method for proving theorems.  "Scientific 
induction" is a method for accumulating evidence to support one hypothesis or 
another; no proof involved, or possible.

 

Frank

Frank Wimberly
Phone (505) 670-9918

 

On Dec 12, 2016 11:44 AM, "Owen Densmore" <o...@backspaces.net> wrote:

What's the difference between mathematical induction and scientific?

  https://en.wikipedia.org/wiki/Mathematical_induction

 

   -- Owen

 

On Mon, Dec 12, 2016 at 10:44 AM, Robert J. Cordingley <rob...@cirrillian.com> wrote:

Based on https://plato.stanford.edu/entries/peirce/#dia - it looks like 
abduction (AAA-2) to me - ie developing an educated guess as to which might be 
the winning wheel. Enough funds should find it with some degree of certainty 
but that may be a different question and should use different statistics 
because the 'longest run' is a poor metric compared to say net winnings or 
average rate of winning. A long run is itself a data point and the premise in 
red (below) is false.

Waiting for wisdom to kick in. R

PS FWIW the article does not contain the phrase 'scientific induction' R

 

On 12/12/16 12:31 AM, Nick Thompson wrote:

Dear Wise Persons, 

 

Would the following work?  

 

Imagine you enter a casino that has a thousand roulette tables.  The rumor 
circulates around the casino that one of the wheels is loaded.  So, you call up 
a thousand of your friends and you all work together to find the loaded wheel.  
Why, because if you use your knowledge to play that wheel you will make a LOT 
of money.  Now the problem you all face, of course, is that a run of successes 
is not an infallible sign of a loaded wheel.  In fact, given randomness, it is 
assured that with a thousand players playing a thousand wheels as fast as they 
can, there will be random long runs of successes.  But the longer a run of 
success continues, the greater is the probability that the wheel that produces 
those successes is biased.  So, your team of players would be paid, on this 
account, for beginning to focus its play on those wheels with the longest runs. 

 

FWIW, this, I think, is Peirce’s model of scientific induction.  

 

Nick

 

Nicholas S. Thompson

Emeritus Professor of Psychology and Biology

Clark University

http://home.earthlink.net/~nickthompson/naturaldesigns/

 

 


FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove





-- 
Cirrillian 
Web Design & Development
Santa Fe, NM
http://cirrillian.com
281-989-6272   (cell)
Member Design Corps of Santa Fe



FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove

 



FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove

 


FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove

Re: [FRIAM] Model of induction

2016-12-12 Thread Nick Thompson
Robert,  

I want to get back to you eventually concerning what kind of -duction we are
talking about here.  

But before that, I want to clear up any other confusions I may have.  

Let's take your coin example; it's all my poor civilian brain can really
handle.  

You are quite right that if the coin is a balanced coin, a run of 100 heads is
no reason to believe that the next flip will be heads.  On the other hand,
after 100 heads, what is the probability that the coin is balanced?  I used
to play a game in my freshman class in which I would bring in "my own
special" coin, and flip it for them.   After each flip, I would ask them
whether they still believed that the coin was fair.  What amazed me was the
consistency with which people fell off the wagon between .10 and .05.  That
would help to answer the question, later on, of why psychologists tended to
use the .05 level of significance.  
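
For what it's worth, the classroom question has a clean Bayesian answer once
you commit to a prior and an alternative; a toy sketch with made-up numbers
(prior 0.99 that the coin is fair, the only alternative a two-headed coin):

def prob_fair_after_heads(k, prior_fair=0.99):
    # Posterior probability the coin is fair after k heads in a row,
    # assuming the only alternative is a two-headed coin (always heads)
    num = prior_fair * 0.5 ** k          # P(fair) * P(k heads | fair)
    return num / (num + (1 - prior_fair) * 1.0)

for k in [1, 3, 5, 7, 10, 20]:
    print(k, round(prob_fair_after_heads(k), 4))

# With these made-up numbers the posterior drops below one half at about
# 7 heads; how fast it drops depends entirely on the prior and on the
# alternative allowed.  That is a different quantity from the p-value
# (1/2)**k, which crosses .05 between 4 and 5 heads.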

N

Nicholas S. Thompson
Emeritus Professor of Psychology and Biology
Clark University
http://home.earthlink.net/~nickthompson/naturaldesigns/

-Original Message-
From: Friam [mailto:friam-boun...@redfish.com] On Behalf Of Robert J.
Cordingley
Sent: Monday, December 12, 2016 3:21 PM
To: The Friday Morning Applied Complexity Coffee Group 
Subject: Re: [FRIAM] Model of induction

Hi Eric

I was remembering that if you tossed a perfectly balanced coin and got
10 or 100 heads in a row it says absolutely nothing about the future coin
tosses nor undermines the initial condition of a perfectly balanced coin.
Bayesian or not the next head has a 50:50 probability of occurring. If you
saw a player get a long winning streak would you really place your bet in
the same way on the next spin? I would need to see lots of long runs (data
points) to make a choice on which tables to focus my efforts and we can then
employ Bayesian or formal statistics to the problem.

I think your excellent analysis was founded on 'relative wins' which is fine
by me in identifying a winning wheel, as against 'the longer a run of
success' finding one which I'd consider very 'dodgy'.

Thanks Robert


On 12/12/16 1:56 PM, Eric Smith wrote:
> Hi Robert,
>
> I worry about mixing technical and informal claims, and making it hard for
people with different backgrounds to track which level the conversation is
operating at.
>
> You said:
>
>> A long run is itself a data point and the premise in red (below) is
false.
> and the premise in red (I am not using an RTF sender) from Nick was:
>
>>> But the longer a run of success continues, the greater is the
probability that the wheel that produces those successes is biased.
> Whether or not it is false actually depends on what "probability" one 
> means to be referring to.  (I am ending many sentences with 
> prepositions; apologies.)
>
> It is hard to say that any "probability" inherently is "the" probability
that the wheel produces those successes.  A wheel is just a wheel (Freud or
no Freud); to assign it a probability requires choosing a set and measure
within which to embed it, and that always involves other assumptions by
whoever is making the assertion.
>
> Under typical usages, yes, there could be some kind of "a priori" (or, in
Bayesian-inference language, "prior") probability that the wheel has a
property, and yes, that probability would not be changed by testing how many
wins it produces.
>
> On the other hand, the Bayesian posterior probability, obtained from the
prior (however arrived-at) and the likelihood function, would indeed put
greater weight on the wheel that is loaded, (under yet more assumptions of
independence etc. to account for Roger's comment that long runs are not the
only possible signature of loading, and your own comments as well), the more
wins one had seen from it relatively.
>
> I _assume_ that this intuition for how one updates Bayesian posteriors is
behind Nick's common-language premise that "the longer a run of success
continues, the greater is the probability that the wheel that produces those
successes is biased".  That would certainly have been what I meant in a
short-hand for the more laborious Bayesian formula.
>
>
> For completeness, the Bayesian way of choosing a meaning for probabilities
updated by observations is the following.
>
> Assume two random variables, M and D, which take values respectively
standing for a Model or hypothesis, and an observed-value or Datum.  So:
hypothesis: this wheel and not that one is loaded.  datum: this wheel has
produced relatively more wins.
>
> Then, by some means, commit to what probability you assign to each value
of M before you make an observation.  Call it P(M).  This is your Bayesian
prior (for whether or not a certain wheel is loaded).  Maybe you admit the
possibility that some wheel is loaded because you hav

Re: [FRIAM] Model of induction

2016-12-12 Thread Nick Thompson
O.  Everything.  Mathematical induction is a form of Deduction.  Alas.  N

 

Nicholas S. Thompson

Emeritus Professor of Psychology and Biology

Clark University

http://home.earthlink.net/~nickthompson/naturaldesigns/

 

From: Friam [mailto:friam-boun...@redfish.com] On Behalf Of Owen Densmore
Sent: Monday, December 12, 2016 11:45 AM
To: The Friday Morning Applied Complexity Coffee Group 
Subject: Re: [FRIAM] Model of induction

 

What's the difference between mathematical induction and scientific?

  https://en.wikipedia.org/wiki/Mathematical_induction

 

   -- Owen

 

On Mon, Dec 12, 2016 at 10:44 AM, Robert J. Cordingley <rob...@cirrillian.com> wrote:

Based on https://plato.stanford.edu/entries/peirce/#dia - it looks like 
abduction (AAA-2) to me - ie developing an educated guess as to which might be 
the winning wheel. Enough funds should find it with some degree of certainty 
but that may be a different question and should use different statistics 
because the 'longest run' is a poor metric compared to say net winnings or 
average rate of winning. A long run is itself a data point and the premise in 
red (below) is false.

Waiting for wisdom to kick in. R

PS FWIW the article does not contain the phrase 'scientific induction' R

 

On 12/12/16 12:31 AM, Nick Thompson wrote:

Dear Wise Persons, 

 

Would the following work?  

 

Imagine you enter a casino that has a thousand roulette tables.  The rumor 
circulates around the casino that one of the wheels is loaded.  So, you call up 
a thousand of your friends and you all work together to find the loaded wheel.  
Why, because if you use your knowledge to play that wheel you will make a LOT 
of money.  Now the problem you all face, of course, is that a run of successes 
is not an infallible sign of a loaded wheel.  In fact, given randomness, it is 
assured that with a thousand players playing a thousand wheels as fast as they 
can, there will be random long runs of successes.  But the longer a run of 
success continues, the greater is the probability that the wheel that produces 
those successes is biased.  So, your team of players would be paid, on this 
account, for beginning to focus its play on those wheels with the longest runs. 

 

FWIW, this, I think, is Peirce’s model of scientific induction.  

 

Nick

 

Nicholas S. Thompson

Emeritus Professor of Psychology and Biology

Clark University

http://home.earthlink.net/~nickthompson/naturaldesigns/

 

 


FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove





-- 
Cirrillian 
Web Design & Development
Santa Fe, NM
http://cirrillian.com
281-989-6272   (cell)
Member Design Corps of Santa Fe



FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove

 


FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove

Re: [FRIAM] Model of induction

2016-12-12 Thread Robert Wall
Eric,

(I am ending many sentences with prepositions; apologies.)


Modern language usage manuals, for example *Garner's Modern American Usage*
[2009: 3rd Edition, page 654], advise that you no longer have to worry
about ending a sentence with a preposition. As Winston Churchill once
quipped when criticized for occasionally ending a sentence with a
preposition, "This is the type of errant pedantry up with which I will not
put." 🤐

Cheers,

Robert W.

On Mon, Dec 12, 2016 at 3:21 PM, Robert J. Cordingley  wrote:

> Hi Eric
>
> I was remembering that if you tossed a perfectly balanced coin and got 10
> or 100 heads in a row it says absolutely nothing about the future coin
> tosses nor undermines the initial condition of a perfectly balanced coin.
> Bayesian or not the next head has a 50:50 probability of occurring. If you
> saw a player get a long winning streak would you really place your bet in
> the same way on the next spin? I would need to see lots of long runs (data
> points) to make a choice on which tables to focus my efforts and we can
> then employ Bayesian or formal statistics to the problem.
>
> I think your excellent analysis was founded on 'relative wins' which is
> fine by me in identifying a winning wheel, as against 'the longer a run of
> success' finding one which I'd consider very 'dodgy'.
>
> Thanks Robert
>
>
>
> On 12/12/16 1:56 PM, Eric Smith wrote:
>
>> Hi Robert,
>>
>> I worry about mixing technical and informal claims, and making it hard
>> for people with different backgrounds to track which level the conversation
>> is operating at.
>>
>> You said:
>>
>>> A long run is itself a data point and the premise in red (below) is false.
>>
>> and the premise in red (I am not using an RTF sender) from Nick was:
>>
>>> But the longer a run of success continues, the greater is the probability
>>> that the wheel that produces those successes is biased.
>>
>> Whether or not it is false actually depends on what “probability” one
>> means to be referring to.  (I am ending many sentences with prepositions;
>> apologies.)
>>
>> It is hard to say that any “probability” inherently is “the” probability
>> that the wheel produces those successes.  A wheel is just a wheel (Freud or
>> no Freud); to assign it a probability requires choosing a set and measure
>> within which to embed it, and that always involves other assumptions by
>> whoever is making the assertion.
>>
>> Under typical usages, yes, there could be some kind of “a priori” (or, in
>> Bayesian-inference language, “prior”) probability that the wheel has a
>> property, and yes, that probability would not be changed by testing how
>> many wins it produces.
>>
>> On the other hand, the Bayesian posterior probability, obtained from the
>> prior (however arrived-at) and the likelihood function, would indeed put
>> greater weight on the wheel that is loaded, (under yet more assumptions of
>> independence etc. to account for Roger’s comment that long runs are not the
>> only possible signature of loading, and your own comments as well), the
>> more wins one had seen from it relatively.
>>
>> I _assume_ that this intuition for how one updates Bayesian posteriors is
>> behind Nick’s common-language premise that “the longer a run of success
>> continues, the greater is the probability that the wheel that produces
>> those successes is biased”.  That would certainly have been what I meant in
>> a short-hand for the more laborious Bayesian formula.
>>
>>
>> For completeness, the Bayesian way of choosing a meaning for
>> probabilities updated by observations is the following.
>>
>> Assume two random variables, M and D, which take values respectively
>> standing for a Model or hypothesis, and an observed-value or Datum.  So:
>> hypothesis: this wheel and not that one is loaded.  datum: this wheel has
>> produced relatively more wins.
>>
>> Then, by some means, commit to what probability you assign to each value
>> of M before you make an observation.  Call it P(M).  This is your Bayesian
>> prior (for whether or not a certain wheel is loaded).  Maybe you admit the
>> possibility that some wheel is loaded because you have heard it said, and
>> maybe you even assume that precisely one wheel in the house is loaded, only
>> you don’t know which one.  Lots of forms could be adopted.
>>
>> Next, we assume a true, physical property of the wheel is the probability
>> distribution with which it produces wins, given whether it is or is not
>> loaded.  Notation is P(D|M).  This is called the _likelihood function_ for
>> data given a model.
>>
>> The Bayes construction is to say that the structure of unconditioned and
>> conditioned probabilities requires that the same joint probability be
>> arrivable-at in either of two ways:
>> P(D,M) = P(D|M)P(M) = P(M|D)P(D).
>>
>> We have had to introduce a new “conditioned” probability, called the
>> Bayesian Posterior, P(M|D), which treats the model as if it depended on the
>> data.  But this is just chopping a joint spac

Re: [FRIAM] Model of induction

2016-12-12 Thread Robert J. Cordingley

Hi Eric

I was remembering that if you tossed a perfectly balanced coin and got 
10 or 100 heads in a row it says absolutely nothing about the future 
coin tosses nor undermines the initial condition of a perfectly balanced 
coin. Bayesian or not the next head has a 50:50 probability of 
occurring. If you saw a player get a long winning streak would you 
really place your bet in the same way on the next spin? I would need to 
see lots of long runs (data points) to make a choice on which tables to 
focus my efforts and we can then employ Bayesian or formal statistics to 
the problem.


I think your excellent analysis was founded on 'relative wins' which is 
fine by me in identifying a winning wheel, as against 'the longer a run 
of success' finding one which I'd consider very 'dodgy'.


Thanks Robert


On 12/12/16 1:56 PM, Eric Smith wrote:

Hi Robert,

I worry about mixing technical and informal claims, and making it hard for 
people with different backgrounds to track which level the conversation is 
operating at.

You said:


A long run is itself a data point and the premise in red (below) is false.

and the premise in red (I am not using an RTF sender) from Nick was:


But the longer a run of success continues, the greater is the probability that 
the wheel that produces those successes is biased.

Whether or not it is false actually depends on what “probability” one means to 
be referring to.  (I am ending many sentences with prepositions; apologies.)

It is hard to say that any “probability” inherently is “the” probability that 
the wheel produces those successes.  A wheel is just a wheel (Freud or no 
Freud); to assign it a probability requires choosing a set and measure within 
which to embed it, and that always involves other assumptions by whoever is 
making the assertion.

Under typical usages, yes, there could be some kind of “a priori” (or, in 
Bayesian-inference language, “prior”) probability that the wheel has a 
property, and yes, that probability would not be changed by testing how many 
wins it produces.

On the other hand, the Bayesian posterior probability, obtained from the prior 
(however arrived-at) and the likelihood function, would indeed put greater 
weight on the wheel that is loaded, (under yet more assumptions of independence 
etc. to account for Roger’s comment that long runs are not the only possible 
signature of loading, and your own comments as well), the more wins one had 
seen from it relatively.

I _assume_ that this intuition for how one updates Bayesian posteriors is 
behind Nick’s common-language premise that “the longer a run of success 
continues, the greater is the probability that the wheel that produces those 
successes is biased”.  That would certainly have been what I meant in a 
short-hand for the more laborious Bayesian formula.


For completeness, the Bayesian way of choosing a meaning for probabilities 
updated by observations is the following.

Assume two random variables, M and D, which take values respectively standing 
for a Model or hypothesis, and an observed-value or Datum.  So: hypothesis: 
this wheel and not that one is loaded.  datum: this wheel has produced 
relatively more wins.

Then, by some means, commit to what probability you assign to each value of M 
before you make an observation.  Call it P(M).  This is your Bayesian prior 
(for whether or not a certain wheel is loaded).  Maybe you admit the 
possibility that some wheel is loaded because you have heard it said, and maybe 
you even assume that precisely one wheel in the house is loaded, only you don’t 
know which one.  Lots of forms could be adopted.

Next, we assume a true, physical property of the wheel is the probability 
distribution with which it produces wins, given whether it is or is not loaded. 
 Notation is P(D|M).  This is called the _likelihood function_ for data given a 
model.

The Bayes construction is to say that the structure of unconditioned and 
conditioned probabilities requires that the same joint probability be 
arrivable-at in either of two ways:
P(D,M) = P(D|M)P(M) = P(M|D)P(D).

We have had to introduce a new “conditioned” probability, called the Bayesian 
Posterior, P(M|D), which treats the model as if it depended on the data.  But 
this is just chopping a joint space of models and data two ways, and we are 
always allowed to do that.  The unconditioned probability for data values, 
P(D), is usually expressed as the sum of P(D|M)P(M) over all values that M can 
take.  That is the probability to see that datum any way it can be produced, if 
the prior describes that world correctly.  In any case, if the prior P(M) was 
the best you can do, then P(D) is the best you can produce from it within this 
system.

Bayesian updating says we can consistently assign this posterior probability 
as: P(M|D) = P(D|M) P(M) / P(D).

P(M|D) obeys the axioms of a probability, and so is eligible to be the referent 
of Nick’s informal claim, and it would have the property he asserts,

Re: [FRIAM] Model of induction

2016-12-12 Thread Eric Smith
Hi Robert,

I worry about mixing technical and informal claims, and making it hard for 
people with different backgrounds to track which level the conversation is 
operating at.

You said: 

> A long run is itself a data point and the premise in red (below) is false.

and the premise in red (I am not using an RTF sender) from Nick was:

>> But the longer a run of success continues, the greater is the probability 
>> that the wheel that produces those successes is biased.

Whether or not it is false actually depends on what “probability” one means to 
be referring to.  (I am ending many sentences with prepositions; apologies.) 

It is hard to say that any “probability” inherently is “the” probability that 
the wheel produces those successes.  A wheel is just a wheel (Freud or no 
Freud); to assign it a probability requires choosing a set and measure within 
which to embed it, and that always involves other assumptions by whoever is 
making the assertion.  

Under typical usages, yes, there could be some kind of “a priori” (or, in 
Bayesian-inference language, “prior”) probability that the wheel has a 
property, and yes, that probability would not be changed by testing how many 
wins it produces.

On the other hand, the Bayesian posterior probability, obtained from the prior 
(however arrived-at) and the likelihood function, would indeed put greater 
weight on the wheel that is loaded, (under yet more assumptions of independence 
etc. to account for Roger’s comment that long runs are not the only possible 
signature of loading, and your own comments as well), the more wins one had 
seen from it relatively.  

I _assume_ that this intuition for how one updates Bayesian posteriors is 
behind Nick’s common-language premise that “the longer a run of success 
continues, the greater is the probability that the wheel that produces those 
successes is biased”.  That would certainly have been what I meant in a 
short-hand for the more laborious Bayesian formula.


For completeness, the Bayesian way of choosing a meaning for probabilities 
updated by observations is the following.

Assume two random variables, M and D, which take values respectively standing 
for a Model or hypothesis, and an observed-value or Datum.  So: hypothesis: 
this wheel and not that one is loaded.  datum: this wheel has produced 
relatively more wins.

Then, by some means, commit to what probability you assign to each value of M 
before you make an observation.  Call it P(M).  This is your Bayesian prior 
(for whether or not a certain wheel is loaded).  Maybe you admit the 
possibility that some wheel is loaded because you have heard it said, and maybe 
you even assume that precisely one wheel in the house is loaded, only you don’t 
know which one.  Lots of forms could be adopted.

Next, we assume a true, physical property of the wheel is the probability 
distribution with which it produces wins, given whether it is or is not loaded. 
 Notation is P(D|M).  This is called the _likelihood function_ for data given a 
model.

The Bayes construction is to say that the structure of unconditioned and 
conditioned probabilities requires that the same joint probability be 
arrivable-at in either of two ways:
P(D,M) = P(D|M)P(M) = P(M|D)P(D).

We have had to introduce a new “conditioned” probability, called the Bayesian 
Posterior, P(M|D), which treats the model as if it depended on the data.  But 
this is just chopping a joint space of models and data two ways, and we are 
always allowed to do that.  The unconditioned probability for data values, 
P(D), is usually expressed as the sum of P(D|M)P(M) over all values that M can 
take.  That is the probability to see that datum any way it can be produced, if 
the prior describes that world correctly.  In any case, if the prior P(M) was 
the best you can do, then P(D) is the best you can produce from it within this 
system. 

Bayesian updating says we can consistently assign this posterior probability 
as: P(M|D) = P(D|M) P(M) / P(D).

P(M|D) obeys the axioms of a probability, and so is eligible to be the referent 
of Nick’s informal claim, and it would have the property he asserts, relative 
to P(M).
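
To put toy numbers on that update (all of them made up for illustration):
suppose a 1-in-1000 prior that this particular wheel is loaded, a fair
single-number bet that wins with probability 1/38, and a loaded wheel that
wins with probability 2/38; then the posterior after some number of wins in
200 spins is:

from math import comb

def posterior_loaded(wins, spins=200, prior_loaded=0.001,
                     p_fair=1/38, p_loaded=2/38):
    # P(M|D) = P(D|M) P(M) / P(D), with D = (wins out of spins)
    # and M in {loaded, fair}; binomial likelihoods for each model
    def lik(p):
        return comb(spins, wins) * p**wins * (1 - p)**(spins - wins)
    num = lik(p_loaded) * prior_loaded
    evidence = num + lik(p_fair) * (1 - prior_loaded)   # this is P(D)
    return num / evidence

for wins in [5, 10, 15, 20]:
    print(wins, "wins in 200 spins ->", round(posterior_loaded(wins), 4))

# More wins push the posterior up, exactly as in the premise; with a
# 1-in-1000 prior it takes quite a lot of evidence before the loaded
# hypothesis dominates.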

Of course, none of this ensures that any of these probabilities is empirically 
accurate; that requires efforts at calibrating your whole system.  Cosma 
Shalizi and Andrew Gelman have some lovely write-up of this somewhere, which 
should be easy enough to find (about standard fallacies in use of Bayesian 
updating, and what one can do to avoid committing them naively).   Nonetheless, 
Bayesian updating does have many very desirable properties of converging on 
consistent answers in the limit of long observations, and making you less 
sensitive to mistakes in your original premises (at least under many 
circumstances, including roulette wheels) than you were originally. 

To my mind, none of this grants probabilities from God, which then end 
discussions.  (So no buying into “objective Bayesianism”.)  What thi

Re: [FRIAM] Model of induction

2016-12-12 Thread Steven A Smith

Eudaemonic Pie anyone?

https://en.wikipedia.org/wiki/The_Eudaemonic_Pie

It seems that (some) roulette wheels (being imperfect, analog devices) 
can be, and have been, predicted statistically by NM boys born and 
bred...   This was all still a fresh story when I met Doyne (and 
eventually Norm) back in the early 80s.



FRIAM Applied Complexity Group listserv Meets Fridays
9a-11:30 at cafe at St. John's Coll

Re: [FRIAM] Model of induction

2016-12-12 Thread Frank Wimberly
Example of mathematical induction:

Theorem: The sum of the first n positive integers is n(n+1)/2.

Proof: If n=1, the claim holds, since 1 = 1(2)/2.  Assume 1+2+...+m = m(m+1)/2;
we need to show that for n = m+1, 1+2+...+n = n(n+1)/2.  But that sum is
m(m+1)/2 + n, and since m = n-1 this equals
(n-1)n/2 + n = (n^2 - n)/2 + n = n^2/2 - n/2 + n = n^2/2 + n/2 =
n(n+1)/2.  QED
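
The same algebra for the inductive step, typeset for readability (with
n = m+1 throughout):

\[
  1 + 2 + \cdots + n
  = \frac{m(m+1)}{2} + n
  = \frac{(n-1)n}{2} + n
  = \frac{n^2 - n + 2n}{2}
  = \frac{n(n+1)}{2}.
\]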



Frank Wimberly
Phone (505) 670-9918


FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove

Re: [FRIAM] Model of induction

2016-12-12 Thread Roger Critchlow
Seems like the abduction step would be assuming that there are loaded
wheels before you have any empirical evidence.

A wheel could be fat-tailed, tending to longer runs, without being biased
toward any particular numbers.  There would be an incentive to bet on a run
continuing, but no particular number would be more likely to have long
runs.  That wouldn't be a loaded wheel in the usual understanding of
crooked gambling devices.  But it would be the sort of device to encourage
gamblers to believe they have a hot hand.

-- rec --
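
One toy way to get longer runs without any number bias (a construction for
illustration only, not anything specified above): let each spin repeat the
previous outcome with some small "stickiness" probability and otherwise be
uniform over the slots.  Every number stays equally likely in the long run,
yet the longest runs stretch out.

import random
from itertools import groupby

def spin_sticky(n_spins, n_slots=38, stickiness=0.0):
    """Simulate a wheel that repeats its last outcome with probability
    `stickiness` and is otherwise uniform over all slots; the marginal
    distribution over numbers stays uniform for any stickiness."""
    out = [random.randrange(n_slots)]
    for _ in range(n_spins - 1):
        if random.random() < stickiness:
            out.append(out[-1])
        else:
            out.append(random.randrange(n_slots))
    return out

def longest_run(outcomes):
    """Length of the longest streak of identical consecutive outcomes."""
    return max(len(list(g)) for _, g in groupby(outcomes))

print(longest_run(spin_sticky(100_000)))                  # ordinary wheel
print(longest_run(spin_sticky(100_000, stickiness=0.3)))  # longer runs, no favored number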



FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove

Re: [FRIAM] Model of induction

2016-12-12 Thread Frank Wimberly
Mathematical induction is a method for proving theorems.  "Scientific
induction" is a method for accumulating evidence to support one hypothesis
or another; no proof involved, or possible.

Frank

Frank Wimberly
Phone (505) 670-9918


FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove

Re: [FRIAM] Model of induction

2016-12-12 Thread Owen Densmore
What's the difference between mathematical induction and scientific?
  https://en.wikipedia.org/wiki/Mathematical_induction

   -- Owen


FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove

Re: [FRIAM] Model of induction

2016-12-12 Thread Eric Charles
I'll assume you meant something generic like: "*focus its play on those
wheels with the longest runs [of unusual results, whatever form that might
take]."*

With that in mind, your test better work. If it doesn't, then casinos and
players have wasted a lot of time worrying about loaded equipment.

Or, to phrase it differently, if I'm the casino manager, and you tell me we
have a problem with a loaded wheel, you'll have my attention. However, if
you also tell me that someone following the proposed plan couldn't possibly
detect the difference between the loaded wheel and a non-loaded one, then
you'll lose my attention, because apparently you don't know what the word
"loaded" means.

Now, you (Nick) might be pointing out that if we spun each wheel a million
times, then concluded which one was *the* loaded one, *and* made a fortune
betting smartly for the next million spins... it is still the case that
our conclusion may be drawn into suspicion during the third million spins.
That strikes me as a different problem... That is, the question of the best
way to make money off of a loaded wheel shouldn't be held up by generic
reference to the problem of induction.


---
Eric P. Charles, Ph.D.
Supervisory Survey Statistician
U.S. Marine Corps



FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove

Re: [FRIAM] Model of induction

2016-12-12 Thread Robert J. Cordingley
Based on https://plato.stanford.edu/entries/peirce/#dia - it looks like 
abduction (AAA-2) to me - ie developing an educated guess as to which 
might be the winning wheel. Enough funds should find it with some degree 
of certainty but that may be a different question and should use 
different statistics because the 'longest run' is a poor metric compared 
to say net winnings or average rate of winning. A long run is itself a 
data point and the premise in red (below) is false.


Waiting for wisdom to kick in. R

PS FWIW the article does not contain the phrase 'scientific induction' R
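
A sketch of the kind of statistic suggested above, ranking wheels by their
win rate rather than by their longest run.  The fair-wheel win probability
and the normal approximation to the binomial are assumptions for illustration
only.

from math import sqrt

def win_rate_z(wins, spins, p_fair=1/38):
    """Standardized excess win rate for one wheel (normal approximation
    to the binomial); larger values are more suspicious."""
    return (wins - spins * p_fair) / sqrt(spins * p_fair * (1 - p_fair))

def most_suspect_wheel(wins_per_wheel, spins_per_wheel):
    """Index of the wheel whose observed win rate is most unusually high."""
    return max(range(len(wins_per_wheel)),
               key=lambda i: win_rate_z(wins_per_wheel[i], spins_per_wheel[i]))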


On 12/12/16 12:31 AM, Nick Thompson wrote:


Dear Wise Persons,

Would the following work?

Imagine you enter a casino that has a thousand roulette tables.  The 
rumor circulates around the casino that one of the wheels is loaded.  
So, you call up a thousand of your friends and you all work together 
to find the loaded wheel.  Why, because if you use your knowledge to 
play that wheel you will make a LOT of money.  Now the problem you all 
face, of course, is that a run of successes is not an infallible sign 
of a loaded wheel.  In fact, given randomness, it is assured that with 
a thousand players playing a thousand wheels as fast as they can, 
there will be random long runs of successes.  But the longer a run of 
success continues, the greater is the probability that the wheel that 
produces those successes is biased.  So, your team of players would be 
paid, on this account, for beginning to focus its play on those wheels 
with the longest runs.


FWIW, this, I think, is Peirce’s model of scientific induction.

Nick

Nicholas S. Thompson

Emeritus Professor of Psychology and Biology

Clark University

http://home.earthlink.net/~nickthompson/naturaldesigns/ 








--
Cirrillian
Web Design & Development
Santa Fe, NM
http://cirrillian.com
281-989-6272 (cell)
Member Design Corps of Santa Fe


FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove