Re: [FRIAM] probability vs. statistics (was Re: Model of induction)

2016-12-13 Thread Eric Charles
I don't have an answer per se, but I have some relevant information:

Back in the early days of statistics, one could become a pariah in the eyes
of the field if it became suspected one had surreptitiously used Bayes'
Theorem in a proof. This was because the early statisticians believed
future events were probable. They really, deeply believed it. They were
defining a new world view, to be contrasted with the deterministic world
view. If you smoked, there was a probability that in the future you might
get cancer; it was not certain, nothing was predetermined. In such a
context, any talk of backwards-probability is nonsensical. After you have
lung cancer, there is not "a probability" that you smoked. Either you did
or you did not; it already happened! Thus, at least for the early
statisticians, people like Fisher, time was inherent to claims about
probability.

Now, it is worth noting that one can wager on past events of any kind,
given someone willing to take the bet. And in such a context, Bayes'
Theorem can be mighty useful. The Theorem is thus quite popular these days,
but that is a different matter. Whatever the results of such equations are
--- between 1 and 0, having certain properties, etc. --- so long as the
results refer to past events, Fisher and many others would have insisted
that the result is not "a probability" that said event occurred.
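For what it's worth, the backwards wager itself is mechanically simple. A minimal sketch of such a calculation; the smoking and cancer rates below are purely illustrative assumptions, not real epidemiology:

```python
# A sketch of wagering "backwards" with Bayes' Theorem.  All rates here
# are invented for illustration.

def bayes_posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H | E) = P(E | H) P(H) / [P(E | H) P(H) + P(E | ~H) P(~H)]."""
    numerator = p_e_given_h * prior
    evidence = numerator + p_e_given_not_h * (1.0 - prior)
    return numerator / evidence

# Assumed: 20% of the population smoked; lung cancer occurs in 15% of
# smokers and 1% of non-smokers -- hypothetical figures.
p_smoked_given_cancer = bayes_posterior(prior=0.20,
                                        p_e_given_h=0.15,
                                        p_e_given_not_h=0.01)
print(round(p_smoked_given_cancer, 3))  # 0.789
```

Whether that 0.789 deserves the name "a probability" is, of course, exactly the point Fisher and company would have disputed.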

Also, from what I can tell, as mathematicians became more prevalent in
statistics, as opposed to the grand tradition of scientist-philosophers who
happened to be highly proficient in mathematics, such
ontological/metaphysical points seem to have become much less important.





---
Eric P. Charles, Ph.D.
Supervisory Survey Statistician
U.S. Marine Corps


On Mon, Dec 12, 2016 at 6:47 PM, glen ☣  wrote:

>
> I have a large stash of nonsense I could write that might be on topic.
> But the topic coincides with an argument I had about 2 weeks ago.  My
> opponent said something generalizing about the use of statistics and I made
> a comment (I thought was funny, but apparently not) that I don't really
> know what statistics _is_.  I also made the mistake of claiming that I _do_
> know what probability theory is. [sigh]  Fast forward through lots of
> nonsense to the gist:
>
> My opponent claims that time (the experience of, the passage of, etc.) is
> required by probability theory.  He seemed to hinge his entire argument on
> the vernacular concept of an "event".  My argument was that, akin to the
> idea that we discover (rather than invent) math theorems, probability
> theory was all about counting -- or measurement.  So, it's all already
> there, including things like power sets.  There's no need for time to pass
> in order to measure the size of any given subset of the possibility space.
>
> In any case, I'm a bit of a jerk, obviously.  So, I just assumed I was
> right and didn't look anything up.  But after this conversation here, I
> decided to spend lunch doing so.  And ran across the idea that probability
> is the forward map (given the generator, what phenomena will emerge?) and
> statistics is the inverse map (given the phenomena you see, what's the
> generator?).  And although neither of these really require time, per se,
> there is a definite role for [ir]reversibility or at least asymmetry.
>
> So, does anyone here have an opinion on the ontological status of one or
> both probability and/or statistics?  Am I demonstrating my ignorance by
> suggesting the "events" we study in probability are not (identical to) the
> events we experience in space & time?
>
>
> On 12/11/2016 11:31 PM, Nick Thompson wrote:
> > Would the following work?
> >
> > */Imagine you enter a casino that has a thousand roulette tables.  The
> rumor circulates around the casino that one of the wheels is loaded.  So,
> you call up a thousand of your friends and you all work together to find
> the loaded wheel.  Why, because if you use your knowledge to play that
> wheel you will make a LOT of money.  Now the problem you all face, of
> course, is that a run of successes is not an infallible sign of a loaded
> wheel.  In fact, given randomness, it is assured that with a thousand
> players playing a thousand wheels as fast as they can, there will be random
> long runs of successes.  But the longer a run of success continues, the
> greater is the probability that the wheel that produces those successes is
> biased.  So, your team of players would be paid, on this account, for
> beginning to focus its play on those wheels with the longest runs. /*
> >
> >
> >
> > FWIW, this, I think, is Peirce’s model of scientific induction.
>
> --
> ☣ glen
>
> 
> FRIAM Applied Complexity Group listserv
> Meets Fridays 9a-11:30 at cafe at St. John's College
> to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
> FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove

Re: [FRIAM] probability vs. statistics (was Re: Model of induction)

2016-12-13 Thread ┣glen┫

Excellent!  My opponent will be very happy when I make that concession.  It's 
interesting that, for this argument, I've adopted the Platonic perspective 
despite being a constructivist myself.  And it's interesting that my current 
position (that the math world is extant and static) seems to rely a bit on 
viewing probability theory as a special subset of math overall.  But that 
perspective seems to encourage me to think about the ontological/metaphysical 
aspects.  Perhaps it's only because I'm not a mathematician.

Thanks!

On 12/13/2016 05:00 AM, Eric Charles wrote:
> I don't have an answer per se, but I have some relevant information:
> 
> Back in the early days of statistics, one could become a pariah in the eyes
> of the field if it became suspected one had surreptitiously used Bayes'
> Theorem in a proof. This was because the early statisticians believed
> future events were probable. They really, deeply believed it. They were
> defining a new world view, to be contrasted with the deterministic world
> view. If you smoked, there was a probability that in the future you might
> get cancer; it was not certain, nothing was predetermined. In such a
> context, any talk of backwards-probability is nonsensical. After you have
> lung cancer, there is not "a probability" that you smoked. Either you did
> or you did not; it already happened! Thus, at least for the early
> statisticians, people like Fisher, time was inherent to claims about
> probability.
> 
> Now, it is worth noting that one can wager on past events of any kind,
> given someone willing to take the bet. And in such a context, Bayes'
> Theorem can be mighty useful. The Theorem is thus quite popular these days,
> but that is a different matter. Whatever the results of such equations are
> --- between 1 and 0, having certain properties, etc. --- so long as the
> results refer to past events, Fisher and many others would have insisted
> that the result is not "a probability" that said event occurred.
> 
> Also, from what I can tell, as mathematicians became more prevalent in
> statistics, as opposed to the grand tradition of scientist-philosophers who
> happened to be highly proficient in mathematics, such
> ontological/metaphysical points seem to have become much less important.


-- 
☣ glen



Re: [FRIAM] probability vs. statistics (was Re: Model of induction)

2016-12-13 Thread Nick Thompson
Glen and Eric, In my role as the Fool Who Rushes In, let me just say that 
according to an experience monist, past experience, present experience, and 
future experience are all on the same footing.  We come to know them as 
different because they prove out in different ways.  This should fit nicely 
with your constructivism, Glen, although you may see it as too much of a good 
thing.  We can have expectations about the past, just as well as we can have 
expectations of the future, and those expectations can prove out or not in 
subsequent experience. 

Nick 

Nicholas S. Thompson
Emeritus Professor of Psychology and Biology
Clark University
http://home.earthlink.net/~nickthompson/naturaldesigns/


-Original Message-
From: Friam [mailto:friam-boun...@redfish.com] On Behalf Of glen ☣
Sent: Tuesday, December 13, 2016 8:37 AM
To: The Friday Morning Applied Complexity Coffee Group 
Subject: Re: [FRIAM] probability vs. statistics (was Re: Model of induction)


Excellent!  My opponent will be very happy when I make that concession.  It's 
interesting that, for this argument, I've adopted the Platonic perspective 
despite being a constructivist myself.  And it's interesting that my current 
position (that the math world is extant and static) seems to rely a bit on 
viewing probability theory as a special subset of math overall.  But that 
perspective seems to encourage me to think about the ontological/metaphysical 
aspects.  Perhaps it's only because I'm not a mathematician.

Thanks!

On 12/13/2016 05:00 AM, Eric Charles wrote:
> I don't have an answer per se, but I have some relevant information:
> 
> Back in the early days of statistics, one could become a pariah in the 
> eyes of the field if it became suspected one had surreptitiously used Bayes'
> Theorem in a proof. This was because the early statisticians believed 
> future events were probable. They really, deeply believed it. They 
> were defining a new world view, to be contrasted with the 
> deterministic world view. If you smoked, there was a probability that 
> in the future you might get cancer; it was not certain, nothing was 
> predetermined. In such a context, any talk of backwards-probability is 
> nonsensical. After you have lung cancer, there is not "a probability" 
> that you smoked. Either you did or you did not; it already happened! 
> Thus, at least for the early statisticians, people like Fisher, time 
> was inherent to claims about probability.
> 
> Now, it is worth noting that one can wager on past events of any kind, 
> given someone willing to take the bet. And in such a context, Bayes'
> Theorem can be mighty useful. The Theorem is thus quite popular these 
> days, but that is a different matter. Whatever the results of such 
> equations are
> --- between 1 and 0, having certain properties, etc. --- so long as 
> the results refer to past events, Fisher and many others would have 
> insisted that the result is not "a probability" that said event occurred.
> 
> Also, from what I can tell, as mathematicians became more prevalent in 
> statistics, as opposed to the grand tradition of 
> scientist-philosophers who happened to be highly proficient in 
> mathematics, such ontological/metaphysical points seem to have become much 
> less important.


--
☣ glen






Re: [FRIAM] probability vs. statistics (was Re: Model of induction)

2016-12-13 Thread Grant Holland

Glen,

This topic was well-developed in the last century. The probabilists 
argued the issues thoroughly. But I find what the philosophers of 
science have to say about the subject a little more pertinent to what 
you are asking, since your discussion seems to be somewhat ontological. 
In particular I'm thinking of Peirce, Popper and especially Mario Bunge. 
The latter two had to account for quantum theory, so are a little more 
pertinent - and interesting. I can give you more specific references if 
you are interested.


Take care,

Grant


On 12/12/16 4:47 PM, glen ☣ wrote:

I have a large stash of nonsense I could write that might be on topic.  But the 
topic coincides with an argument I had about 2 weeks ago.  My opponent said 
something generalizing about the use of statistics and I made a comment (I 
thought was funny, but apparently not) that I don't really know what statistics 
_is_.  I also made the mistake of claiming that I _do_ know what probability 
theory is. [sigh]  Fast forward through lots of nonsense to the gist:

My opponent claims that time (the experience of, the passage of, etc.) is required by 
probability theory.  He seemed to hinge his entire argument on the vernacular concept of 
an "event".  My argument was that, akin to the idea that we discover (rather 
than invent) math theorems, probability theory was all about counting -- or measurement.  
So, it's all already there, including things like power sets.  There's no need for time 
to pass in order to measure the size of any given subset of the possibility space.

In any case, I'm a bit of a jerk, obviously.  So, I just assumed I was right 
and didn't look anything up.  But after this conversation here, I decided to 
spend lunch doing so.  And ran across the idea that probability is the forward 
map (given the generator, what phenomena will emerge?) and statistics is the 
inverse map (given the phenomena you see, what's the generator?).  And although 
neither of these really require time, per se, there is a definite role for 
[ir]reversibility or at least asymmetry.

So, does anyone here have an opinion on the ontological status of one or both probability 
and/or statistics?  Am I demonstrating my ignorance by suggesting the "events" we 
study in probability are not (identical to) the events we experience in space & time?


On 12/11/2016 11:31 PM, Nick Thompson wrote:

Would the following work?

*/Imagine you enter a casino that has a thousand roulette tables.  The rumor 
circulates around the casino that one of the wheels is loaded.  So, you call up 
a thousand of your friends and you all work together to find the loaded wheel.  
Why, because if you use your knowledge to play that wheel you will make a LOT 
of money.  Now the problem you all face, of course, is that a run of successes 
is not an infallible sign of a loaded wheel.  In fact, given randomness, it is 
assured that with a thousand players playing a thousand wheels as fast as they 
can, there will be random long runs of successes.  But the longer a run of 
success continues, the greater is the probability that the wheel that produces 
those successes is biased.  So, your team of players would be paid, on this 
account, for beginning to focus its play on those wheels with the longest runs. 
/*

  


FWIW, this, I think, is Peirce’s model of scientific induction.





Re: [FRIAM] probability vs. statistics (was Re: Model of induction)

2016-12-13 Thread glen ☣

Yes, definitely.  I intend to bring up deterministic stochasticity >8^D the 
next time I see him.  So a discussion of it in the context of QM would be helpful.

On 12/13/2016 10:54 AM, Grant Holland wrote:
> This topic was well-developed in the last century. The probabilists argued 
> the issues thoroughly. But I find what the philosophers of science have to 
> say about the subject a little more pertinent to what you are asking, since 
> your discussion seems to be somewhat ontological. In particular I'm thinking 
> of Peirce, Popper and especially Mario Bunge. The latter two had to account 
> for quantum theory, so are a little more pertinent - and interesting. I can 
> give you more specific references if you are interested.

-- 
☣ glen



Re: [FRIAM] probability vs. statistics (was Re: Model of induction)

2016-12-13 Thread Grant Holland

Glen,

On closer reading of the issue you are interested in, and upon 
re-consulting the sources I was thinking of (Bunge and Popper), I can 
see that neither of those sources directly address the question of 
whether time must be involved in order for probability theory to come 
into play. Nevertheless, I  think you may be interested in these two 
sources anyway.


The works that I've been reading from these two folks are: /Causality 
and Modern Science/ by Mario Bunge and /The Logic of Scientific 
Discovery/ by Karl Popper. Bunge takes (positive) probability to 
essentially be the complement of causation. Thus his book ends up being 
very much about probability. Popper has an eighty page section on 
probability and is well worth reading from a philosophy of science 
perspective. I recommend both of these sources.


While I'm at it, let me add my two cents' worth to the question 
concerning the difference between probability and statistics. In my 
view, Probability Theory /should be/ defined as "the study of 
probability spaces". It's not often defined that way - usually something 
about "random variables" appears in the definition. But the subject of 
probability spaces is more inclusive, so I prefer it.


Secondly, it's reasonable to say that a probability space defines 
"events" (at least in the finite case) as essentially subsets of the 
sample space (with a few more specifications). 
Nothing is said in this definition that requires that "the event must 
occur in the future". But it seems that many people (students) insist 
that it has to - or else they can't seem to wrap their minds around it. 
I usually just let them believe that "the event has to be in the future" 
and let it go at that. But there is nothing in the definition of an 
event in a probability space that requires anything about time.
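A minimal sketch of that time-free notion of an event, assuming a fair die for the finite sample space (the die and the uniform measure are illustrative choices): an "event" is just a subset, and nothing about the measure refers to time.

```python
from fractions import Fraction

# A finite probability space: a sample space omega and a uniform
# measure on its subsets.  "Occurring" plays no role here.

omega = frozenset(range(1, 7))          # a fair die's six outcomes

def prob(event):
    """Uniform measure on subsets of omega: P(A) = |A| / |omega|."""
    assert event <= omega, "an event must be a subset of the sample space"
    return Fraction(len(event), len(omega))

evens = frozenset({2, 4, 6})            # the event "an even face"
print(prob(evens))              # 1/2
print(prob(frozenset()))        # 0 - the impossible event
print(prob(omega))              # 1 - the sure event
```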


I regard the discipline of statistics (of the Fisher/Neyman type) as the 
study of a particular class of problems pertaining to probability 
distributions and joint distributions: for example, tests of hypotheses, 
analysis of variance, and other problems. Statistics makes some very 
specific assumptions that probability theory does not always make: such 
as that there is an underlying theoretical distribution that exhibits 
"parameters" against which are compared "sample distributions" that 
exhibit corresponding "statistics". Moreover, the sweet spot of 
statistics, as I see it, is the moment and central moment functionals 
that, essentially, measure chance variation of random variables.
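That parameters-versus-statistics distinction can be sketched in a few lines; the normal distribution, its parameter values, the seed, and the sample size are all arbitrary illustrative assumptions.

```python
import random
import statistics

# A theoretical distribution with fixed "parameters"...
random.seed(42)
MU, SIGMA = 10.0, 2.0

# ...versus "statistics" computed from a sample drawn from it: the
# first moment and the (square root of the) second central moment.
sample = [random.gauss(MU, SIGMA) for _ in range(10_000)]
sample_mean = statistics.fmean(sample)
sample_sd = statistics.stdev(sample)

print(round(sample_mean, 1), round(sample_sd, 1))  # both land near 10.0 and 2.0
```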


I admit that some folks would say that probability theory is no more 
inclusive than I described statistics as being. But I think that it is. 
Admittedly, what I have just said is more along the lines of "what it is 
to me" - a statement of preference, rather than an ontic argument that 
"this is what it is".


As long as we're all having a good time...

Grant

On 12/13/16 12:03 PM, glen ☣ wrote:

Yes, definitely.  I intend to bring up deterministic stochasticity >8^D the 
next time I see him.  So a discussion of it in the context of QM would be helpful.

On 12/13/2016 10:54 AM, Grant Holland wrote:

This topic was well-developed in the last century. The probabilists argued the 
issues thoroughly. But I find what the philosophers of science have to say 
about the subject a little more pertinent to what you are asking, since your 
discussion seems to be somewhat ontological. In particular I'm thinking of 
Peirce, Popper and especially Mario Bunge. The latter two had to account for 
quantum theory, so are a little more pertinent - and interesting. I can give 
you more specific references if you are interested.




Re: [FRIAM] probability vs. statistics (was Re: Model of induction)

2016-12-13 Thread Robert Wall
Hi Glen,

I feel a bit like Nick says he feels when immersed in the stream of such
erudite responses to each of your seemingly related, but thread-separated
questions.  As always, though, when reading the posted responses in this
forum, I learn a lot from the various and remarkable ways questions can be
interpreted based on individual experiences.  Perhaps this props up the
idea of social constructivism more than Platonism.  So, if you can bear
with me, my response here is more of a summary of my takeaways from the
variety of responses to your two respective questions, with my own
interpretations thrown in and based on my own experiences.

Taking each question separately ...

> Imagine a thousand computers, each generating a list of random numbers.
> Now imagine that for some small quantity of these computers, the numbers
> generated are in a normal (Poisson?) distribution with mean mu and
> standard deviation s.  Now, the problem is how to detect these non-random
> computers and estimate the values of mu and s.


Nick's question seems to be about how to detect non-random event
generators among independent streams of reportedly random processes.  This
is not really difficult to do and doesn't require any assumptions about
underlying probability distributions other than that each number in the
stream is equally likely as any other number in the stream [i.e., uniformly
distributed in probability space] and that the cumulative probability over
all possible outcomes sums to unity: the very definition of a random
variable ... a non-deterministic event--an observation--mapped to a number
line or a categorical bin.  A random variable has both mathematical and
philosophical properties, as we have heard in this thread.

For Nick's question, I think that Roger has provided the most practical
answer with Marsaglia's Die Hard battery of tests for randomness.  In my
professional life, I used these tests to prepare, for example, a QC
procedure for ensuring our hashing algorithms remained random allocators
after each new build of our software suite.  For example, a simple test
called the "poker test" using the Chi-squared distribution could be used to
satisfy Nick's question with the power of the test (i.e., reducing the
probability of rejecting the null hypothesis of randomness when it is true;
thus perhaps finding more non-random processes than really exist)
increasing with larger sample sizes ... longer runs.
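A rough sketch of such a chi-squared uniformity check; the streams, the form of the bias, and the 0.05 critical value (16.92 for 9 degrees of freedom) are illustrative assumptions, not Marsaglia's actual battery:

```python
import random

def chi_squared_uniform(stream, bins=10):
    """Chi-squared statistic of observed bin counts vs a uniform expectation."""
    expected = len(stream) / bins
    counts = [0] * bins
    for x in stream:
        counts[x] += 1
    return sum((c - expected) ** 2 / expected for c in counts)

random.seed(1)
# One honest computer's stream, and one "loaded" stream in which the
# digit 7 comes up about half the time.
uniform_stream = [random.randrange(10) for _ in range(1000)]
biased_stream = [7 if random.random() < 0.5 else random.randrange(10)
                 for _ in range(1000)]

CRITICAL = 16.92  # chi-squared critical value, 9 df, alpha = 0.05
print("uniform flagged:", chi_squared_uniform(uniform_stream) > CRITICAL)
print("biased flagged:", chi_squared_uniform(biased_stream) > CRITICAL)
```

With longer streams the biased statistic grows roughly linearly while the honest one hovers near its degrees of freedom, which is why longer runs make detection easier.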

So, does anyone here have an opinion on the ontological status of one or
> both probability and/or statistics?  Am I demonstrating my ignorance by
> suggesting the "events" we study in probability are not (identical to) the
> events we experience in space & time?


At the risk of exposing my own ignorance, I'll also say your question has
to do with the ontological status of any random "event" when treated in any
estimation experiments or likelihood computation; that is, are proposed
probability events or measured statistical events real?

For example--examples are always good to help clarify the question--is the
likelihood of a lung cancer event given a history of smoking pointing to
some reality that will actually occur with a certain amount of uncertainty?
In a population of smokers, yes.  For an individual smoker, no. In the
language of probability and statistics, we say that in a population of
smokers we *expect *this reality to be observed with a certain amount of
certainty (probability). To be sure, these tests would likely involve
several levels of contingencies to tame troublesome confounding variables
(e.g., age, length of time, smoking rate). Don't want to get into
multi-variate statistics, though.

Obviously, time is involved here but doesn't have to be (e.g., the
probability of drawing four aces from a trial of five random draws). An
event is an observation in, say, a nonparametric Fisher exact test of
significance against the null hypothesis that, say, a person who smokes will
contract lung cancer, which we can make contingent on, say, the number of
years of smoking. Epidemiological studies can be very complex, so maybe not
the best of examples ...
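For the record, the four-aces example works out as pure counting, with no clock anywhere:

```python
from fractions import Fraction
from math import comb

# Probability of holding all four aces in a five-card hand, computed as
# a ratio of set sizes: choose the 4 aces and 1 of the remaining 48
# cards, out of all C(52, 5) possible hands.
p = Fraction(comb(4, 4) * comb(48, 1), comb(52, 5))
print(p)         # 1/54145
print(float(p))  # about 1.85e-05
```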

So, since probability and statistics both deal with the idea of an
event--as your "opponent" insists--events are just observations that the
event of interest [e.g., four of a kind] occurred; so I would say
epistemologically they are real experiences with a potential (probability)
based on either controlled randomized experiments or observational
experience.  But is a potential ontologically real?  🤔

Asking if those events come with ontologically real probabilistic
properties is another, perhaps, different question?  This gets into
worldview notions of determinism and randomness. We tend to say that if a
human cannot predict the event in advance, it is random ... enough. If it
can be predicted based, say, on known initial conditions, then using
probability theory here is misplaced. Still, there are chaotic non-random
events that are not practically predictable ... they seem random ...
enough.  Santa Fe science writer and book author George Johnson gets into
this in his book /Fire in the Mind/.

Re: [FRIAM] probability vs. statistics (was Re: Model of induction)

2016-12-14 Thread glen ☣

Thanks!  Everything you say seems to land squarely in my opponent's camp, with 
the focus on the concept of an action or event, requiring some sort of 
partially ordered index (like time).  But you included the clause "but doesn't 
have to be".  I'd like to hear more about what you conceive probability theory 
to be without events, actions, time, etc.

For the sake of this argument, anyway, my concept is affine to Grant's: "the 
study of probability spaces".  Probability, to me, is just the study of the 
sizes of sets where all the sizes are normalized to the [0,1] interval.  We 
talk of "selecting" or "choosing" subsets or elements from larger sets.  But 
such "selection" isn't an action in time.  Such "selection" is an already 
extant property of that organization of sets.  Likewise, the "events" of 
probability are merely analogous to the events we experience in subjective 
time.  Those "events" are (various) properties or predicates that hold over 
whatever set of sets is under consideration.  Those "events" don't _happen_.  
They simply _are_.
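That static reading can be made concrete; the five-element base set below is an arbitrary illustration. A fixed family of sets, predicates playing the role of "events", and a measure that is nothing but a normalized count:

```python
from itertools import chain, combinations
from fractions import Fraction

# The fixed structure: all 32 subsets of a five-element set.
base = range(5)
power_set = [frozenset(c) for c in chain.from_iterable(
    combinations(base, r) for r in range(len(base) + 1))]

def measure(predicate):
    """Fraction of the family where the predicate holds, normalized to [0,1]."""
    return Fraction(sum(1 for s in power_set if predicate(s)), len(power_set))

# The "event" that a subset has at least three elements - a static property.
print(measure(lambda s: len(s) >= 3))   # 1/2
```

No draw ever happens; the number 1/2 is a property the organization of sets already has.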

Since your language seems to depend on the idea that those predicates must 
_happen_ (i.e. at one point, they are potential or imaginary, and the next they 
are actual or factual), yet you say they don't have to, I'd like to hear you 
explain how "they don't have to".  What are these "events" absent time (or 
another such partially ordered index)?

p.s. FWIW, I have the same problem with the concept of "function" and 
asymmetric transformations.  I accept the idea of a non-invertible function.  
But by accepting that, am I forced to admit something like time?  Or, asked 
another way: As all the no-go theorem provers keep telling us (Tarski, Gödel, 
Wolpert, Arrow, ...), are we doomed to a "turtles all the way down" perspective?


On 12/13/2016 05:03 PM, Robert Wall wrote:
> At the risk of exposing my own ignorance, I'll also say your question has to 
> do with the ontological status of any random "event" when treated in any 
> estimation experiments or likelihood computation; that is, are proposed 
> probability events or measured statistical events real? 
> 
> For example--examples are always good to help clarify the question--is the 
> likelihood of a lung cancer event given a history of smoking pointing to some 
> reality that will actually occur with a certain amount of uncertainty? In a 
> population of smokers, yes.  For an individual smoker, no. In the language of 
> probability and statistics, we say that in a population of smokers we /expect 
> /this reality to be observed with a certain amount of certainty 
> (probability). To be sure, these tests would likely involve several levels of 
> contingencies to tame troublesome confounding variables (e.g., age, length of 
> time, smoking rate). Don't want to get into multi-variate statistics, though. 
> 
> Obviously, time is involved here but doesn't have to be (e.g., the 
> probability of drawing four aces from a trial of five random draws). An event 
> is an observation in, say, a nonparametric Fisher exact test of significance 
> against the null hypothesis that, say, a person who smokes will contract lung 
> cancer, which we can make contingent on, say, the number of years of smoking. 
> Epidemiological studies can be very complex, so maybe not the best of 
> examples ...
> 
> So, since probability and statistics both deal with the idea of an event--as 
> your "opponent" insists--events are just observations that the event of 
> interest [e.g., four of a kind] occurred; so I would say epistemologically 
> they are real experiences with a potential (probability) based on either 
> controlled randomized experiments or observational experience.  But is a 
> potential ontologically real?  🤔
> 
> Asking if those events come with ontologically real probabilistic properties 
> is another, perhaps, different question?  This gets into worldview notions of 
> determinism and randomness. We tend to say that if a human cannot predict the 
> event in advance, it is random ... enough. If it can be predicted based, say, 
> on known initial conditions, then using probability theory here is misplaced. 
> Still, there are chaotic non-random events that are not practically 
> predictable ... they seem random ... enough.  Santa Fe science writer and 
> book author George Johnson gets into this in his book /Fire in the Mind/.

-- 
☣ glen



Re: [FRIAM] probability vs. statistics (was Re: Model of induction)

2016-12-14 Thread Eric Charles
Ack! Well... I guess now we're in the muck of what the heck probability and
statistics are for mathematicians vs. scientists. Of note, my understanding
is that statistics was a field for at least a few decades before it was
specified in a formal enough way to be invited into the hallowed halls of
mathematics departments, and that it is still frequently viewed with
suspicion there.

Glen states: *We talk of "selecting" or "choosing" subsets or elements from
larger sets.  But such "selection" isn't an action in time.  Such
"selection" is an already extant property of that organization of sets.*

I find such talk quite baffling. When I talk about selecting or choosing or
assigning, I am talking about an action in time. Often I'm talking about an
action that I personally performed. "You are in condition A. You are in
condition B. You are in condition A." etc. Maybe I flip a coin when you
walk into my lab room, maybe I pre-generated some random numbers, maybe I
look at the second hand of my watch as soon as you walk in, maybe I write
down a number "arbitrarily", etc. At any rate, you are not in a condition
before I put you in one, and whatever it is I want to measure about you
hasn't happened yet.

I fully admit that we can model the system without reference to time, if we
want to. Such efforts might yield keen insights. If Glen had said that we
can usefully model what we are interested in as an organized set with
such-and-such properties, and time nowhere to be found, that might seem
pretty reasonable. But that would be a formal model produced for specific
purposes, not the actual phenomenon of interest. Everything interesting
that we want to describe as "probable" and all the conclusions we want to
come to "statistically" are, for the lab scientist, time-dependent
phenomena. (I assert.)



---
Eric P. Charles, Ph.D.
Supervisory Survey Statistician
U.S. Marine Corps


On Wed, Dec 14, 2016 at 12:16 PM, glen ☣  wrote:

>
> Thanks!  Everything you say seems to land squarely in my opponent's camp,
> with the focus on the concept of an action or event, requiring some sort of
> partially ordered index (like time).  But you included the clause "but
> doesn't have to be".  I'd like to hear more about what you conceive
> probability theory to be without events, actions, time, etc.
>
> For the sake of this argument, anyway, my concept is affine to Grant's:
> "the study of probability spaces".  Probability, to me, is just the study
> of the sizes of sets where all the sizes are normalized to the [0,1]
> interval.  We talk of "selecting" or "choosing" subsets or elements from
> larger sets.  But such "selection" isn't an action in time.  Such
> "selection" is an already extant property of that organization of sets.
> Likewise, the "events" of probability are merely analogous to the events we
> experience in subjective time.  Those "events" are (various) properties or
> predicates that hold over whatever set of sets is under consideration.
> Those "events" don't _happen_.  They simply _are_.
>
> Since your language seems to depend on the idea that those predicates must
> _happen_ (i.e. at one point, they are potential or imaginary, and the next
> they are actual or factual), yet you say they don't have to, I'd like to
> hear you explain how "they don't have to".  What are these "events" absent
> time (or another such partially ordered index)?
>
> p.s. FWIW, I have the same problem with the concept of "function" and
> asymmetric transformations.  I accept the idea of a non-invertible
> function.  But by accepting that, am I forced to admit something like
> time?  Or, asked another way: As all the no-go theorem provers keep telling
> us (Tarski, Gödel, Wolpert, Arrow, ...), are we doomed to a "turtles all
> the way down" perspective?
>
>
> On 12/13/2016 05:03 PM, Robert Wall wrote:
> > At the risk of exposing my own ignorance, I'll also say your question
> has to do with the ontological status of any random "event" when treated in
> any estimation experiments or likelihood computation; that is, are proposed
> probability events or measured statistical events real?
> >
> > For example--examples are always good to help clarify the question--is
> the likelihood of a lung cancer event given a history of smoking pointing
> to some reality that will actually occur with a certain amount of
> uncertainty? In a population of smokers, yes.  For an individual smoker,
> no. In the language of probability and statistics, we say that in a
> population of smokers we /expect /this reality to be observed with a
> certain amount of certainty (probability). To be sure, these tests would
> likely involve several levels of contingencies to tame troublesome
> confounding variables (e.g., age, length of time, smoking rate). Don't want
> to get into multi-variate statistics, though.
> >
> > Obviously, time is involved here but doesn't have to be (e.g., the
> probability of drawing four aces from a trial of five random draws). An
> event is an obs
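[Ed.: Robert's card example is easy to make concrete. A hedged sketch of the four-aces computation (pure combinatorics; no time appears anywhere):]

```python
from math import comb

# Probability that a random 5-card hand from a standard 52-card deck
# contains all four aces: choose all 4 aces, then any 1 of the other 48.
hands_with_four_aces = comb(4, 4) * comb(48, 1)
total_hands = comb(52, 5)
p = hands_with_four_aces / total_hands
print(p)  # 48 / 2598960, roughly 1.85e-05
```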

Re: [FRIAM] probability vs. statistics (was Re: Model of induction)

2016-12-14 Thread glen ☣

Ha!  Yay!  Yes, now I feel like we're discussing the radicality (radicalness?) 
of Platonic math ... and how weird mathematicians sound (to me) when they say 
we're discovering theorems rather than constructing them. 8^)

Perhaps it's helpful to think about the "axiom of choice"?  Is a "choosable" 
element somehow distinct from a "chosen" element?  Does the act of choosing 
change the element in some way I'm unaware of?  Does choosability require an 
agent exist and (eventually) _do_ the choosing?



On 12/14/2016 10:24 AM, Eric Charles wrote:
> Ack! Well... I guess now we're in the muck of what the heck probability and 
> statistics are for mathematicians vs. scientists. Of note, my understanding 
> is that statistics was a field for at least a few decades before it was 
> specified in a formal enough way to be invited into the hallows of 
> mathematics departments, and that it is still frequently viewed with 
> suspicion there.
> 
> Glen states: /We talk of "selecting" or "choosing" subsets or elements from 
> larger sets.  But such "selection" isn't an action in time.  Such "selection" 
> is an already extant property of that organization of sets./
> 
> I find such talk quite baffling. When I talk about selecting or choosing or 
> assigning, I am talking about an action in time. Often I'm talking about an 
> action that I personally performed. "You are in condition A. You are in 
> condition B. You are in condition A." etc. Maybe I flip a coin when you walk 
> into my lab room, maybe I pre-generated some random numbers, maybe I look at 
> the second hand of my watch as soon as you walk in, maybe I write down a 
> number "arbitrarily", etc. At any rate, you are not in a condition before I 
> put you in one, and whatever it is I want to measure about you hasn't 
> happened yet.
> 
> I fully admit that we can model the system without reference to time, if we 
> want to. Such efforts might yield keen insights. If Glen had said that we can 
> usefully model what we are interested in as an organized set with 
> such-and-such properties, and time no where to be found, that might seem 
> pretty reasonable. But that would be a formal model produced for specific 
> purposes, not the actual phenomenon of interest. Everything interesting that 
> we want to describe as "probable" and all the conclusions we want to come to 
> "statistically" are, for the lab scientist, time dependent phenomena. (I 
> assert.)

-- 
☣ glen


FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove

Re: [FRIAM] probability vs. statistics (was Re: Model of induction)

2016-12-14 Thread Grant Holland
And I completely agree with Eric. But we can language it real simply and 
intuitively by just looking at what a probability space is. For further 
simplicity, let's keep it to a finite probability space. (Neither a finite 
nor an infinite one says anything about "time".)


A finite probability space has 3 elements: 1) a set of sample points 
called "the sample space", 2) a set of events, and 3) a set of 
probabilities /for the events/. (An infinite probability space is 
strongly similar.)


But what is this "set of events"? That's the question being discussed on 
this thread. It turns out that the event set for a finite space is nothing 
more than /the set of all possible combinations of the sample points/. 
(Formally the event set is something called a "sigma algebra", but no 
matter.) So an event can be thought of as simply /a combination of the 
sample points/.


Notice that it is the events that have probabilities - not the sample 
points. Of course it turns out that each of the sample points happens to 
be a  (trivial) combination of the sample space - therefore it has a 
probability too!


So, the events already /have/ probabilities by virtue of just being in a 
probability space. They don't have to be "selected", "chosen" or any 
such thing. They "just sit there" and have probabilities - all of them. 
The notion of time is never mentioned or required.
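[Ed.: Grant's finite probability space can be written out directly. A minimal sketch, assuming a two-point sample space (a fair coin); all names are illustrative.]

```python
from itertools import chain, combinations
from fractions import Fraction

# Sample space and per-point probabilities for a fair coin.
sample_space = ("H", "T")
point_prob = {pt: Fraction(1, 2) for pt in sample_space}

def all_events(space):
    """Every subset of the sample space -- the finite sigma algebra."""
    return chain.from_iterable(
        combinations(space, r) for r in range(len(space) + 1))

# Each event already *has* a probability; nothing is chosen in time.
event_prob = {ev: sum(point_prob[pt] for pt in ev)
              for ev in all_events(sample_space)}
print(event_prob)  # the empty event gets 0, the whole space gets 1
```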


Admittedly, this formal (mathematical) definition of "event" is not 
equivalent to the one that you will find in everyday usage. The everyday 
one /does/ involve time. So you could say that everyday usage of "event" 
is "an application" of the formal "event" used in probability theory. 
This confusion between the everyday "event" and the formal "event" may 
be the root of the issue.


Jus' sayin'.

Grant


On 12/14/16 11:36 AM, glen ☣ wrote:

Ha!  Yay!  Yes, now I feel like we're discussing the radicality (radicalness?) 
of Platonic math ... and how weird mathematicians sound (to me) when they say 
we're discovering theorems rather than constructing them. 8^)

Perhaps it's helpful to think about the "axiom of choice"?  Is a "choosable" element 
somehow distinct from a "chosen" element?  Does the act of choosing change the element in some way 
I'm unaware of?  Does choosability require an agent exist and (eventually) _do_ the choosing?



On 12/14/2016 10:24 AM, Eric Charles wrote:

Ack! Well... I guess now we're in the muck of what the heck probability and 
statistics are for mathematicians vs. scientists. Of note, my understanding is 
that statistics was a field for at least a few decades before it was specified 
in a formal enough way to be invited into the hallows of mathematics 
departments, and that it is still frequently viewed with suspicion there.

Glen states: /We talk of "selecting" or "choosing" subsets or elements from larger sets.  But such 
"selection" isn't an action in time.  Such "selection" is an already extant property of that 
organization of sets./

I find such talk quite baffling. When I talk about selecting or choosing or assigning, I am talking 
about an action in time. Often I'm talking about an action that I personally performed. "You 
are in condition A. You are in condition B. You are in condition A." etc. Maybe I flip a coin 
when you walk into my lab room, maybe I pre-generated some random numbers, maybe I look at the 
second hand of my watch as soon as you walk in, maybe I write down a number 
"arbitrarily", etc. At any rate, you are not in a condition before I put you in one, and 
whatever it is I want to measure about you hasn't happened yet.

I fully admit that we can model the system without reference to time, if we want to. Such efforts 
might yield keen insights. If Glen had said that we can usefully model what we are interested in as 
an organized set with such-and-such properties, and time no where to be found, that might seem 
pretty reasonable. But that would be a formal model produced for specific purposes, not the actual 
phenomenon of interest. Everything interesting that we want to describe as "probable" and 
all the conclusions we want to come to "statistically" are, for the lab scientist, time 
dependent phenomena. (I assert.)




Re: [FRIAM] probability vs. statistics (was Re: Model of induction)

2016-12-14 Thread Frank Wimberly
Don't think about choosing.  The axiom of choice says that there is a function 
from each set (subset) to an element of itself, as I recall.

Frank


Frank C. Wimberly
140 Calle Ojo Feliz
Santa Fe, NM 87505

wimber...@gmail.com wimbe...@cal.berkeley.edu
Phone:  (505) 995-8715  Cell:  (505) 670-9918

-Original Message-
From: Friam [mailto:friam-boun...@redfish.com] On Behalf Of glen ☣
Sent: Wednesday, December 14, 2016 11:36 AM
To: The Friday Morning Applied Complexity Coffee Group
Subject: Re: [FRIAM] probability vs. statistics (was Re: Model of induction)


Ha!  Yay!  Yes, now I feel like we're discussing the radicality (radicalness?) 
of Platonic math ... and how weird mathematicians sound (to me) when they say 
we're discovering theorems rather than constructing them. 8^)

Perhaps it's helpful to think about the "axiom of choice"?  Is a "choosable" 
element somehow distinct from a "chosen" element?  Does the act of choosing 
change the element in some way I'm unaware of?  Does choosability require an 
agent exist and (eventually) _do_ the choosing?



On 12/14/2016 10:24 AM, Eric Charles wrote:
> Ack! Well... I guess now we're in the muck of what the heck probability and 
> statistics are for mathematicians vs. scientists. Of note, my understanding 
> is that statistics was a field for at least a few decades before it was 
> specified in a formal enough way to be invited into the hallows of 
> mathematics departments, and that it is still frequently viewed with 
> suspicion there.
> 
> Glen states: /We talk of "selecting" or "choosing" subsets or elements 
> from larger sets.  But such "selection" isn't an action in time.  Such 
> "selection" is an already extant property of that organization of 
> sets./
> 
> I find such talk quite baffling. When I talk about selecting or choosing or 
> assigning, I am talking about an action in time. Often I'm talking about an 
> action that I personally performed. "You are in condition A. You are in 
> condition B. You are in condition A." etc. Maybe I flip a coin when you walk 
> into my lab room, maybe I pre-generated some random numbers, maybe I look at 
> the second hand of my watch as soon as you walk in, maybe I write down a 
> number "arbitrarily", etc. At any rate, you are not in a condition before I 
> put you in one, and whatever it is I want to measure about you hasn't 
> happened yet.
> 
> I fully admit that we can model the system without reference to time, 
> if we want to. Such efforts might yield keen insights. If Glen had 
> said that we can usefully model what we are interested in as an 
> organized set with such-and-such properties, and time no where to be 
> found, that might seem pretty reasonable. But that would be a formal 
> model produced for specific purposes, not the actual phenomenon of 
> interest. Everything interesting that we want to describe as 
> "probable" and all the conclusions we want to come to "statistically" 
> are, for the lab scientist, time dependent phenomena. (I assert.)

--
☣ glen






Re: [FRIAM] probability vs. statistics (was Re: Model of induction)

2016-12-14 Thread Robert Wall
Hi Glen, et al,

Thanks for cashing my $0.02 check. :-)

When I wrote that "but it doesn't have to be" I wasn't asserting that
probability theory is devoid of events.  Events are fundamental to
probability theory.  They are the outcomes to which probability is
assigned.  In a nutshell, the practice of probability theory is the mapping
of the events--outcomes-- from random processes to numbers, thus making the
practice purposefully mathematical.  And in this regard, we speak of a
mathematical entity dubbed a random variable in order to carry out the
calculus of probability and statistics.

A random variable is like any other variable in mathematics, but with
specific properties concerning the values it can take on.  A random
variable is considered "discrete" if it can take on only countable,
distinct, or separate values [e.g., the sum of the values of a roll of
seven dice].  Otherwise, a random variable is considered "continuous" if
it can take on any value in an interval [e.g., the mass of an animal].
Either way, a random variable is a real-valued function--a mapping from
the outcomes of a random process to the number line.
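[Ed.: A hedged illustration of a random variable as a real-valued function on a sample space -- two dice instead of seven, to keep the enumeration small.]

```python
from itertools import product

# Sample space: all ordered outcomes of two fair dice.
sample_space = list(product(range(1, 7), repeat=2))

def X(outcome):
    """The random variable: map an outcome to a real number (the sum)."""
    return sum(outcome)

# Distribution of X: P(X = k) for each attainable value k.
dist = {}
for outcome in sample_space:
    dist[X(outcome)] = dist.get(X(outcome), 0) + 1 / len(sample_space)

print(dist[7])  # 6/36, the most likely sum
```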

This is arguably a long-about way of explaining [muck?!] why I said "but
it doesn't have to be ... time." Time doesn't have to be involved: the
random variable need not be distributed in time, though it often is, as in
reliability theory--for example, the probability that a device will
survive for at least T cycles or months.

Yes to your and Grant's notion that thinking in terms of probability spaces
is a good way of thinking of probability and statistics and this mapping, as
mathematically we are doing convolutions of distributions [spaces?] when
modeling independent, usually identically distributed random trials
[activities]. But, let's not confuse the mathematical modeling with the
selection process of, say, picking four of a kind from a deck of 52 cards.
All we are interested in doing is mapping the outcomes--events--to
possibilities over which the probabilities all sum or integrate to no more
than unity. The activity gets captured in the treatment of the random
variable [e.g., the number of trials]. So, for example,
rolling 6s six times in a row is not a function of time, but of six
discrete, independent and identically distributed trials. For the computed
probability, in this case, it doesn't matter how long it took to roll the
dice six times.
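[Ed.: The dice computation in the paragraph above, made concrete -- nothing about elapsed time enters.]

```python
# Six independent rolls of a fair die, all showing 6: each trial
# contributes a factor of 1/6, however long the rolls take.
p_six_sixes = (1 / 6) ** 6
print(p_six_sixes)  # 1/46656, roughly 2.14e-05
```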

I am thinking that this is the way your "opponent" is thinking about the
problem and suspect that he has been formally trained to see it this way.
Not the only way but a classical way.

When Eric talks about the historic difference between scientists,
mathematicians, and statisticians practicing probability theory and
statistics, these differences quickly disappeared when the idea of
*uncertainty* bubbled up into the models found in the fields of physics,
economics,
measurement theory, decision theory, etc.  No longer could the world be
completely described by the classical system dynamics models.  Maybe even
before Gauss (the late 1700s), who was a polymath to be sure, error terms
were being added to equations and had to be estimated.

As to my language of "when" an event occurs with some calculated
likelihood, it can be a description or a prediction. The researcher may be
asking like Nick is [kind of?] asking in the other thread, what is the
likelihood of my getting this many 1s in a row if the process is supposedly
generating discrete random numbers between, say, one and five? In this
case, a *psychologically* unexpected event has happened. Or, in planning
his experiment in advance, he may just want to set a halting threshold for
flagging as suspect any machine that gives him the same number N times in
a row. In that case, the event hasn't happened but has a finite potential
for happening, and we want to detect it if it happens ... too much.
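[Ed.: The halting-threshold idea can be sketched, assuming a uniform generator on {1,...,5} and a fixed window of N draws; alpha and the function name are assumptions for illustration.]

```python
def p_run(n, k=5):
    """Probability that n consecutive uniform draws on k symbols coincide:
    the first draw is free, each later draw must match it."""
    return (1 / k) ** (n - 1)

# Smallest run length whose probability falls below a chosen threshold.
alpha = 1e-4
N = 2
while p_run(N) >= alpha:
    N += 1
print(N)  # 7, since (1/5)**6 = 6.4e-05 < 1e-4 <= (1/5)**5
```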

Those "events" don't _happen_.  They simply _are_.


This bit seems more philosophical than something a statistician would
likely [no pun intended] worry about. Admittedly, my choice of
words--throughout my post--could have been more precise, but I would not
have said that "events simply are."  When discussing the nature of time in
a "block universe," maybe that could be said, but I would have been in
Henri Bergson's corner [to my peril, of course] in the 1922 debate between
Bergson and Albert Einstein on the subject of time. :-) Curiously,
Bergson's idea of time is coming back--see *Time Reborn* (2013) by Lee
Smolin.  But this is likely not what you meant. However, you are an
out-of-the-closet Platonist by your own admission. No worries; I have
friends who are Platonists, most of them being mathematicians or
philosophers or believe the brain to be a computer, but not typically
computational scientists and certainly not cognitive scientists. :-) No
such thing as computational philosophy ... yet. Hmmm.

BTW, a Random Variable--continuous or discrete-

Re: [FRIAM] probability vs. statistics (was Re: Model of induction)

2016-12-14 Thread Robert Wall
ld. So
I don't think this is helpful to your cause. But I would be more than
curious to see how you think it might be. I am more an applied
mathematician|statistician than anything like a theoretical mathematician;
though, I have happily worked with many of the latter ... and hopefully the
reverse was true. :-)

Okay, back to your observation: the fact that it is possible to choose a
particular event from the set of all possible events in the event space is
a trivial requirement.  I cannot, for example, pick a black ball--an
impossible event--from the previous urn of only red and white balls.  So
being able to choose three red balls from that urn makes the event
"choosable."  Is that event then distinct from that same event that has
been "chosen?"  At the classical level--as opposed to the quantum level--I
cannot see any meaningful distinction EXCEPT to say that the former event
is a possibility and the second event is a realization ... and that is the
way such events get discussed in practical probability and statistics. There is
no spooky agent that needs to get factored into the calculus, IMHO.
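[Ed.: Robert's urn point in code. The composition below (5 red, 5 white, draw 3) is an assumption, since the original urn's makeup appears in an earlier, truncated message.]

```python
from math import comb

red, white, draws = 5, 5, 3         # assumed composition
total = comb(red + white, draws)

p_three_red = comb(red, 3) / total  # a "choosable" event: positive probability
p_black = 0.0                       # no black balls: the impossible event
print(p_three_red, p_black)  # 10/120 and 0.0
```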

Somehow, I still feel I am missing something. Maybe you can figure it out,
but it may not be all that important, and your question may have already
been addressed satisfactorily by the other responses posted to the thread.

Cheers

On Wed, Dec 14, 2016 at 2:41 PM, Frank Wimberly  wrote:

> Don't think about choosing.  The axiom of choice says that there is a
> function from each set (subset) to an element of itself, as I recall.
>
> Frank
>
>
> Frank C. Wimberly
> 140 Calle Ojo Feliz
> Santa Fe, NM 87505
>
> wimber...@gmail.com wimbe...@cal.berkeley.edu
> Phone:  (505) 995-8715  Cell:  (505) 670-9918
>
> -Original Message-----
> From: Friam [mailto:friam-boun...@redfish.com] On Behalf Of glen ☣
> Sent: Wednesday, December 14, 2016 11:36 AM
> To: The Friday Morning Applied Complexity Coffee Group
> Subject: Re: [FRIAM] probability vs. statistics (was Re: Model of
> induction)
>
>
> Ha!  Yay!  Yes, now I feel like we're discussing the radicality
> (radicalness?) of Platonic math ... and how weird mathematicians sound (to
> me) when they say we're discovering theorems rather than constructing them.
> 8^)
>
> Perhaps it's helpful to think about the "axiom of choice"?  Is a
> "choosable" element somehow distinct from a "chosen" element?  Does the act
> of choosing change the element in some way I'm unaware of?  Does
> choosability require an agent exist and (eventually) _do_ the choosing?
>
>
>
> On 12/14/2016 10:24 AM, Eric Charles wrote:
> > Ack! Well... I guess now we're in the muck of what the heck probability
> and statistics are for mathematicians vs. scientists. Of note, my
> understanding is that statistics was a field for at least a few decades
> before it was specified in a formal enough way to be invited into the
> hallows of mathematics departments, and that it is still frequently viewed
> with suspicion there.
> >
> > Glen states: /We talk of "selecting" or "choosing" subsets or elements
> > from larger sets.  But such "selection" isn't an action in time.  Such
> > "selection" is an already extant property of that organization of
> > sets./
> >
> > I find such talk quite baffling. When I talk about selecting or choosing
> or assigning, I am talking about an action in time. Often I'm talking about
> an action that I personally performed. "You are in condition A. You are in
> condition B. You are in condition A." etc. Maybe I flip a coin when you
> walk into my lab room, maybe I pre-generated some random numbers, maybe I
> look at the second hand of my watch as soon as you walk in, maybe I write
> down a number "arbitrarily", etc. At any rate, you are not in a condition
> before I put you in one, and whatever it is I want to measure about you
> hasn't happened yet.
> >
> > I fully admit that we can model the system without reference to time,
> > if we want to. Such efforts might yield keen insights. If Glen had
> > said that we can usefully model what we are interested in as an
> > organized set with such-and-such properties, and time no where to be
> > found, that might seem pretty reasonable. But that would be a formal
> > model produced for specific purposes, not the actual phenomenon of
> > interest. Everything interesting that we want to describe as
> > "probable" and all the conclusions we want to come to "statistically"
> > are, for the lab scientist, time dependent phenomena. (I assert.)
>
> --
> ☣ glen
>
>

Re: [FRIAM] probability vs. statistics (was Re: Model of induction)

2016-12-14 Thread glen ☣

Well, sure.  But the point is that the axiom of choice asserts, merely, the 
existence of the ability to choose a subset.  They call them "choice 
functions", as if there exists some "chooser".  But there's no sense of time 
(before the choice function is applied versus after it's applied).  The name 
"choice" is a misnomer.

And that's my point.  Probability theory is a special case of measure theory.  
Calling the set measures "probabilities" is an antiquated, misleading, and 
unfortunate name.
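[Ed.: glen's measure-theory point in miniature -- a hedged sketch with a toy counting measure, normalized so the whole set measures 1; all names are illustrative.]

```python
from fractions import Fraction

omega = frozenset({"a", "b", "c", "d"})  # the whole set being measured

def measure(subset):
    """A plain (counting) measure: just the size of the subset."""
    return len(subset)

def prob(subset):
    """The same measure, rescaled so that prob(omega) == 1."""
    return Fraction(measure(subset), measure(omega))

print(prob(frozenset({"a", "b"})))  # 1/2 -- a normalized set size
```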

On 12/14/2016 01:41 PM, Frank Wimberly wrote:
> Don't think about choosing.  The axiom of choice says that there is a 
> function from each set (subset) to an element of itself, as I recall.

-- 
☣ glen



Re: [FRIAM] probability vs. statistics (was Re: Model of induction)

2016-12-14 Thread glen ☣

Well, my question hasn't been addressed satisfactorily.  But I sincerely 
appreciate all the different ways everyone has tried to talk about it.  My 
question is about language, not math or statistics.  I'm adept enough at those. 
 What I'm having trouble with in the argument (the guy's name is Steve, btw) is 
my inability to communicate the measure theory conception of probability theory 
in plain English.  (He's not a mathematician, either.)

I'm especially appreciative of what you, Eric, and Grant have laid out from the 
practical "just get 'er done" perspective.  The reason my initial (failed) joke 
about not understanding what statistics _is_, but claiming to understand what 
probability theory _is_, was a joke, is because both are so heavily applied and 
so lightly ontological.  Were I able to tell the joke so that Steve saw the 
Platonic vs. constructivist, noun vs. verb, (false) dichotomy implied, then I 
wouldn't find myself having to explain it.  I would have avoided the need to 
make the Platonic view explicit ... which would have been good because I'm not 
a Platonist.

On 12/14/2016 05:05 PM, Robert Wall wrote:
> Somehow, I still feel I am missing something. Maybe you can figure it out, 
> but it may not be all that important, and your question may have already been 
> addressed satisfactorily by the other responses posted to the thread. 


-- 
☣ glen



Re: [FRIAM] probability vs. statistics (was Re: Model of induction)

2016-12-14 Thread Robert Wall
Hey Glen,

Yes, on the first issue with respect to the Axiom of Choice, I think the
word "choice" there does not map one-for-one to the same word used in
probability theory. I think the two concepts are mutually exclusive, but
this may be beyond my "pay grade" to worry or talk about. 🤐

However, I can most certainly see your point about the beneficial
relationship between measure theory and probability theory. The notion of
a sigma algebra is spot on, especially for the mathematics of theoretical
probability. Even though I may be considered an old dog professionally, I
can still resonate with Grant's notion of probability spaces as well.  It's
all good!

You know, I can still have fun while simultaneously being lost in the
forest. This has been fun!  Thanks for letting me play in the sandbox ... 😊

Cheers

On Wed, Dec 14, 2016 at 6:50 PM, glen ☣  wrote:

>
> Well, sure.  But the point is that the axiom of choice asserts, merely,
> the existence of the ability to choose a subset.  They call them "choice
> functions", as if there exists some "chooser".  But there's no sense of
> time (before the choice function is applied versus after it's applied).
> The name "choice" is a misleading misnomer.
>
> And that's my point.  Probability theory is a special case of measure
> theory.  Calling the set measures "probabilities" is an antiquated,
> misleading, and unfortunate name.
>
> On 12/14/2016 01:41 PM, Frank Wimberly wrote:
> > Don't think about choosing.  The axiom of choice says that there is a
> function from each set (subset) to an element of itself, as I recall.
>
> --
> ☣ glen
>
>
