On Aug 3, 10:38 pm, [EMAIL PROTECTED] wrote:
> I'm a Python newbie and certainly no expert on statistics, but my wife
> was taking a statistics course this summer. To illustrate that sampling
> random numbers from a distribution and averaging the samples gives you
> a random number as the result (bigger sample -> smaller variance in the
> calculated mean, converging on the mean of the original distribution),
> I threw together this program:
>
...
>
> I added the lo and high stuff to my test program out of fear that I
> was running into something funky in adding up 100 floating-point
> numbers. That would be more of a worry if the sample size were much
> bigger, but lo and high showed apparent bias quite apart from the
> calculation of the mean.
>
> Am I committing some other obvious statistical or Python blunder?
> E.g., am I misunderstanding what random.normalvariate is supposed to
> do?

Doing some testing with mu=0, sigma=1, and n=1000000 gives me means of

-0.00096407536711885962
-0.0015179019121429708
+6.9223244807378563e-05
+0.0017483897464631625
-0.0011148444018505548
+0.0015367250480148183

There appears to be no consistent bias.
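
For reference, a test along those lines can be as simple as the sketch
below (mean_of_samples and the repeat count are just illustrative; the
original program was snipped above):

import random

def mean_of_samples(mu=0.0, sigma=1.0, n=1000000):
    # Draw n values from random.normalvariate and return their average.
    total = 0.0
    for _ in range(n):
        total += random.normalvariate(mu, sigma)
    return total / n

# Repeat the experiment several times; each result should land near mu,
# and the spread around mu shrinks as n grows.
for _ in range(6):
    print(mean_of_samples(0.0, 1.0, 1000000))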
