On Tue, 15 Nov 2022 at 22:41, Chris Angelico <ros...@gmail.com> wrote:
>
> (I'm assuming that you sent this personally by mistake, and am
> redirecting back to the list. My apologies if you specifically didn't
> want this to be public.)
>
> On Tue, 15 Nov 2022 at 22:34, James Johnson <jj126...@gmail.com> wrote:
> >
> > It’s been a couple of years ago, but as I recall the duplicates seemed to 
> > be of two or three responses, not randomly distributed.
> >
> > I looked at my code, and I DID salt the hash at every update.
> >
> > At this point, my curiosity is engaged to know if this s/w solution is as 
> > good as others. I don’t have the training to test how often 9 follows 5, 
> > for example, but I am competitive enough to ask how it holds up against 
> > MTprng. I think it’s possibly very good, for s/w, and I’m emboldened to ask 
> > you to modify the code (it requires you data enter the numbers back to the 
> > machine, allowing time to pass;) to accumulate the results for 2 or 3 
> > million, and see how it holds up. I don’t think the numbers track the bell 
> > curve on distribution . I speculate it’s more square. I suppose this is 
> > desirable in a PRNG?
> >
>
> I'll get you to do the first step of the modification. Turn your code
> into a module that has a randbelow() function which will return a
> random integer from 0 up to the provided argument. (This is equivalent
> to the standard library's random.randrange() function when given just
> one argument.) If you like, provide several of them, as randbelow1,
> randbelow2, etc.
>
> Post that code, and then I'll post a test harness that can do some
> analysis for you.

Here's a simple test harness. There are other tests you could use, but
this one is pretty straightforward.

https://github.com/Rosuav/shed/blob/master/howrandom.py

For each test, it counts up how many times each possible sequence
shows up, then displays the most and least common, rating them
according to how close they came to a theoretically perfect
distribution. A good random number generator should produce results
close to 100% on all of these tests, but the definition of "close"
depends on the pool size used (larger means closer, but also means
more CPU time) and the level of analysis done. In my testing, all of
the coin-flip data stayed within +/- 1%, and the other tests never
strayed beyond 10%.
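The core of that kind of analysis fits in a few lines. This is my own
simplified sketch, not the linked harness itself - the function name and
defaults are made up for illustration:

```python
from collections import Counter
import random

def rate_distribution(gen, n, pool=100_000, seqlen=2):
    """Count every length-`seqlen` sequence of outputs in range(n),
    then rate each count against the theoretically perfect (uniform)
    count, expressed as a percentage (100% = exactly uniform)."""
    values = [gen(n) for _ in range(pool + seqlen - 1)]
    counts = Counter(tuple(values[i:i + seqlen]) for i in range(pool))
    expected = pool / n ** seqlen  # perfect count for each sequence
    rated = {seq: 100 * c / expected for seq, c in counts.items()}
    best = max(rated, key=rated.get)
    worst = min(rated, key=rated.get)
    print(f"Most common:  {best} at {rated[best]:.1f}%")
    print(f"Least common: {worst} at {rated[worst]:.1f}%")
    return rated

# Example: pairs of coin flips from the stdlib's PRNG
rate_distribution(random.randrange, 2)
```

Swap random.randrange for your own randbelow and compare the percentages;
a biased generator shows up as sequences drifting well away from 100%.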

This is the same kind of analysis that was used in the page that I
linked to earlier, so you can play with it interactively there if
you're curious. It also has better explanations than I would give.

Again, there are plenty of other types of tests you could use, but
this is a pretty easy one. True randomness should show no biases in
any of these results, though there will always be some variance.
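One classic example of those other tests is a chi-square goodness-of-fit
check on the raw counts. A stdlib-only sketch (the helper name is mine):

```python
import random

def chi_square(counts, expected):
    """Pearson chi-square statistic: sum of (observed - expected)^2
    over expected, across all buckets."""
    return sum((obs - expected) ** 2 / expected for obs in counts)

# Example: 100,000 digits in range(10); a fair generator should put
# roughly 10,000 in each bucket.
draws = [random.randrange(10) for _ in range(100_000)]
counts = [draws.count(v) for v in range(10)]
stat = chi_square(counts, expected=10_000)
# With 10 buckets (9 degrees of freedom), a fair generator lands below
# about 16.9 around 95% of the time; values far above that suggest bias.
print(f"chi-square statistic: {stat:.2f}")
```

The variance mentioned above is exactly why this is a statistical
threshold rather than a hard pass/fail: even a perfect source will
occasionally produce a lopsided sample.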

ChrisA
_______________________________________________
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/5OWDW5XUE5JYAW3QKMNZYKXH3NNBPNNW/
Code of Conduct: http://python.org/psf/codeofconduct/
