Raymond Hettinger added the comment:

With anything less than full seeding of the 19937-bit state space, we lose 
all the guarantees (proved properties) documented at 
http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/ARTICLES/mt.pdf .  It is very 
nice to be able to rely on those properties and I don't think we should abandon 
them because glibc currently has lower standards (the trend is towards making 
seeding stronger rather than weaker).

"Good enough" for random number generators is in the eye of the beholder.  If 
your agenda is to reduce the number of bytes extracted from urandom(), it is 
easy to not care how well the MT is seeded, but there are people who do care 
(or should care).

Note that sampling and shuffling both eat entropy like crazy.  Currently, we 
have enough state to cover every permutation of a 2000-element shuffle.  With 
only 128 bits of seed, we run out at 34 elements, not even enough to shuffle a 
52-card deck and still have every permutation as a possible outcome.
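To make that arithmetic concrete, here is a small sketch (the function name is 
mine) that finds the largest n for which log2(n!) still fits within a given 
number of seed bits:

    from math import log2

    def max_full_shuffle(seed_bits):
        # Largest n such that log2(n!) <= seed_bits, i.e. every
        # permutation of n elements remains a possible outcome.
        bits, n = 0.0, 1
        while bits + log2(n + 1) <= seed_bits:
            n += 1
            bits += log2(n)
        return n

    print(max_full_shuffle(128))     # 34, short of a 52-card deck
    print(max_full_shuffle(19937))   # roughly 2080 with the full state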

Over time, the random module has evolved away from merely "good enough" and 
traded away speed in return for equidistribution (i.e. we use _randbelow to 
get an unbiased choice over ranges that aren't an exact power of two).
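The idea behind _randbelow is plain rejection sampling; roughly (a sketch of 
the technique, not the exact CPython source):

    from random import getrandbits

    def randbelow(n):
        # Draw k = n.bit_length() random bits and reject anything >= n,
        # so every value in [0, n) is equally likely (no modulo bias).
        k = n.bit_length()
        r = getrandbits(k)
        while r >= n:
            r = getrandbits(k)
        return r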

We should not allow a regression in quality.  In particular, I object to the 
cavalier assertion, "random.Random doesn't need a very high quality entropy."  
If you're running multiple simulations, this is something you should care 
about.  And to defend against criticism of random samples, it is nice to be 
able to say you were properly seeded (see RFC 1750) and that you have some 
basis for believing that every outcome was equally likely.

----------
assignee:  -> rhettinger
nosy: +rhettinger

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue27272>
_______________________________________