google/paranoid_crypto has a number of randomness tests in Python, IIRC.

From `grep '^#'` of
https://github.com/google/paranoid_crypto/blob/main/docs/randomness_tests.md:

```md
# Randomness tests
## Goal of the tests
## Non-goals
## Usage
## Tests
### NIST SP 800-22
#### Frequency (Monobits) Test
#### Frequency Test within a Block
#### Runs Test
#### Test for the Longest Run of Ones in a Block
#### Binary Matrix Rank Test
#### Discrete Fourier Transform (Spectral) Test
#### Non-Overlapping Template Matching Test
#### Overlapping Template Matching Test.
#### Maurer’s “Universal Statistical” Test
#### Linear Complexity Test
#### Serial Test.
#### Approximate Entropy Test
#### Cumulative Sums (Cusum) Test.
#### Random Excursions Test.
#### Random Excursions Variant Test
### Additional tests
#### FindBias
#### LargeBinaryMatrixRank
#### LinearComplexityScatter
## Interface
### Repeating tests
## Testing
### Pseudorandom number generators for testing
#### urandom
#### mt19937
#### gmp_n
#### mwc_n
#### java
#### lcgnist
#### xorshift128+
#### xorshift*
#### xorwow
#### pcg64, philox, sfc64
#### jsf32, jsf64
## Design decisions
```
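
To make the first entry in that list concrete, here is a minimal sketch of the Frequency (Monobits) test from NIST SP 800-22: map the bits to ±1, sum them, and compare the normalized sum against the half-normal distribution via erfc. This is an independent illustration, not paranoid_crypto's implementation, which may differ in interface and details.

```python
import math
import secrets


def monobits_p_value(bits: str) -> float:
    """NIST SP 800-22 Frequency (Monobits) test on a string of '0'/'1' chars."""
    n = len(bits)
    # Map 0 -> -1 and 1 -> +1, then sum.
    s_n = sum(1 if b == "1" else -1 for b in bits)
    s_obs = abs(s_n) / math.sqrt(n)
    # For unbiased bits the p-value is erfc(s_obs / sqrt(2)); reject below ~0.01.
    return math.erfc(s_obs / math.sqrt(2))


if __name__ == "__main__":
    n = 10**6
    sample = format(secrets.randbits(n), f"0{n}b")
    print(monobits_p_value(sample))  # should comfortably exceed 0.01 for a good RNG
```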

From `grep '^\s*class'` of
https://github.com/google/paranoid_crypto/blob/main/paranoid_crypto/lib/randomness_tests/rng.py:

```python
class Rng:
class Urandom(Rng):
class Shake128(Rng):
class Mt19937(Rng):
class GmpRand(Rng):
class XorShift128plus(Rng):
class XorShiftStar(Rng):
class Xorwow(Rng):
class JavaRandom(Rng):
class LcgNist(Rng):
class Mwc(Rng):
class NumpyRng(Rng):
class Lehmer(Rng):
class Pcg64(NumpyRng):
class Philox(NumpyRng):
class Sfc64(NumpyRng):
class SubsetSum(Rng):
```
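
The JavaRandom entry above presumably models java.util.Random's 48-bit LCG. As an illustration of what one of these test generators looks like, here is a toy re-implementation of that textbook algorithm (multiplier 0x5DEECE66D, increment 11, modulus 2^48); it deliberately does not subclass the repo's Rng base class, since that interface isn't shown here, and it is not the repo's code.

```python
MASK48 = (1 << 48) - 1
MULTIPLIER = 0x5DEECE66D  # 25214903917
ADDEND = 0xB


class ToyJavaRandom:
    """Toy version of java.util.Random's LCG, for illustration only."""

    def __init__(self, seed: int):
        # java.util.Random scrambles the seed with the multiplier on construction.
        self.seed = (seed ^ MULTIPLIER) & MASK48

    def _next(self, bits: int) -> int:
        self.seed = (self.seed * MULTIPLIER + ADDEND) & MASK48
        return self.seed >> (48 - bits)

    def next_uint32(self) -> int:
        return self._next(32)


rng = ToyJavaRandom(42)
print([rng.next_uint32() for _ in range(3)])
```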

From
https://github.com/google/paranoid_crypto/blob/16e5f47fcc11f51d3fb58b50adddd075f4373bbc/paranoid_crypto/lib/randomness_tests/random_test_suite.py#L42-L80:

```python
NIST_TESTS = [
    (nist_suite.Frequency, []),
    (nist_suite.BlockFrequency, []),
    (nist_suite.Runs, []),
    (nist_suite.LongestRuns, []),
    (nist_suite.BinaryMatrixRank, []),
    (nist_suite.Spectral, []),
    (nist_suite.NonOverlappingTemplateMatching, []),
    (nist_suite.OverlappingTemplateMatching, []),
    (nist_suite.Universal, []),
    (nist_suite.LinearComplexity, [512]),
    (nist_suite.LinearComplexity, [1024]),
    (nist_suite.LinearComplexity, [2048]),
    (nist_suite.LinearComplexity, [4096]),
    (nist_suite.Serial, []),
    (nist_suite.ApproximateEntropy, []),
    (nist_suite.RandomWalk, []),
]


EXTENDED_NIST_TESTS = [
    (extended_nist_suite.LargeBinaryMatrixRank, []),
    # Computing the linear complexity has quadratic complexity.
    # A consequence of this is that LinearComplexityScatter only
    # uses a fraction of the input. A parameter [n, m] means
    # that n m-bit sequences are tested, where the i-th sequence
    # consists of the bits i, i + n, ..., i + (m-1) * m.
    (extended_nist_suite.LinearComplexityScatter, [32, 100000]),
    (extended_nist_suite.LinearComplexityScatter, [64, 50000]),
    (extended_nist_suite.LinearComplexityScatter, [128, 40000]),
]


LATTICE_TESTS = [
    (lattice_suite.FindBias, [256]),
    (lattice_suite.FindBias, [384]),
    (lattice_suite.FindBias, [512]),
    (lattice_suite.FindBias, [1024]),
]


TESTS = NIST_TESTS + EXTENDED_NIST_TESTS + LATTICE_TESTS
```
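
The LinearComplexityScatter comment above can be made concrete with a small slicing sketch: a parameter pair [n, m] means n interleaved subsequences of m bits each are tested, so only n * m bits of the input are touched. (The comment's final term "i + (m-1) * m" is read here as "i + (m-1) * n", i.e. a stride of n; that reading is my assumption.)

```python
def scatter_sequences(bits: str, n: int, m: int) -> list[str]:
    """Split a bit string into n interleaved subsequences of m bits each.

    Sequence i consists of bits i, i + n, i + 2*n, ..., i + (m - 1) * n.
    """
    assert len(bits) >= n * m, "not enough input bits"
    return [bits[i::n][:m] for i in range(n)]


# E.g. the [32, 100000] parameter set above touches only 3.2 million input bits,
# which keeps the quadratic-cost linear complexity computation affordable.
seqs = scatter_sequences("01" * 1_600_000, 32, 100_000)
print(len(seqs), len(seqs[0]))  # 32 100000
```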


- [ ] ENH: paranoid_crypto: add a `__main__` so that `python -m paranoid_crypto.randomness_tests` calls, e.g.:
  - [x] https://github.com/google/paranoid_crypto/blob/main/examples/randomness.py
  - [ ] REF: examples/randomness.py -> lib/randomness_tests/main.py
  - [ ] ENH: paranoid_crypto.randomness_tests: add a `__main__` so that `python -m paranoid_crypto.randomness_tests -h` works (see the sketch below)
  - [ ] ENH: setup.py: console_scripts entrypoint for examples/randomness_tests/main.py
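
As a starting point for the `__main__` items above, a hypothetical `paranoid_crypto/lib/randomness_tests/__main__.py` could look like the sketch below. The `main` module it imports and its `main()` callable are assumptions that would only exist after the REF item (moving examples/randomness.py into the library) is done; they are not part of the repo today.

```python
# paranoid_crypto/lib/randomness_tests/__main__.py (hypothetical)
import sys

# Assumed to exist once examples/randomness.py has moved to lib/randomness_tests/main.py.
from paranoid_crypto.lib.randomness_tests import main

if __name__ == "__main__":
    sys.exit(main.main())
```

The matching setup.py entry would then be something like `entry_points={"console_scripts": ["randomness-tests=paranoid_crypto.lib.randomness_tests.main:main"]}`, where the script name is a placeholder.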

- [ ] DOC: link to google/paranoid_crypto from
https://en.wikipedia.org/wiki/Randomness_test#Notable_software_implementations



On Tue, Nov 15, 2022 at 7:25 AM Chris Angelico <ros...@gmail.com> wrote:

> On Tue, 15 Nov 2022 at 22:41, Chris Angelico <ros...@gmail.com> wrote:
> >
> > (I'm assuming that you sent this personally by mistake, and am
> > redirecting back to the list. My apologies if you specifically didn't
> > want this to be public.)
> >
> > On Tue, 15 Nov 2022 at 22:34, James Johnson <jj126...@gmail.com> wrote:
> > >
> > > It’s been a couple of years ago, but as I recall the duplicates seemed
> > > to be of two or three responses, not randomly distributed.
> > >
> > > I looked at my code, and I DID salt the hash at every update.
> > >
> > > At this point, my curiosity is engaged to know if this s/w solution is
> > > as good as others. I don’t have the training to test how often 9 follows 5,
> > > for example, but I am competitive enough to ask how it holds up against
> > > MTprng. I think it’s possibly very good, for s/w, and I’m emboldened to ask
> > > you to modify the code (it requires you data enter the numbers back to the
> > > machine, allowing time to pass;) to accumulate the results for 2 or 3
> > > million, and see how it holds up. I don’t think the numbers track the bell
> > > curve on distribution. I speculate it’s more square. I suppose this is
> > > desirable in a PRNG?
> > >
> >
> > I'll get you to do the first step of the modification. Turn your code
> > into a module that has a randbelow() function which will return a
> > random integer from 0 up to the provided argument. (This is equivalent
> > to the standard library's random.randrange() function when given just
> > one argument.) If you like, provide several of them, as randbelow1,
> > randbelow2, etc.
> >
> > Post that code, and then I'll post a test harness that can do some
> > analysis for you.
>
> Here's a simple test harness. There are other tests you could use, but
> this one is pretty straight-forward.
>
> https://github.com/Rosuav/shed/blob/master/howrandom.py
>
> For each test, it counts up how many times each possible sequence
> shows up, then displays the most and least common, rating them
> according to how close they came to a theoretical perfect
> distribution. A good random number generator should produce results
> that are close to 100% for all these tests, but the definition of
> "close" depends on the pool size used (larger means closer, but also
> means more CPU time) and the level of analysis done. In my testing,
> all of the coin-flip data showed values +/- 1%, and the others never
> got beyond 10%.
>
> This is the same kind of analysis that was used in the page that I
> linked to earlier, so you can play with it interactively there if
> you're curious. It also has better explanations than I would give.
>
> Again, there are plenty of other types of tests you could use, but
> this is a pretty easy one. True randomness should show no biases in
> any of these results, though there will always be some variance.
>
> ChrisA
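
For anyone following along, here is a minimal, non-authoritative sketch of the interface Chris asks for above (randbelow(n): a random integer from 0 up to, but not including, n) together with the kind of frequency count his harness performs. It is not howrandom.py, just an illustration; a home-grown generator would replace the secrets-based body of randbelow().

```python
import secrets
from collections import Counter


def randbelow(n: int) -> int:
    """Return a random int in [0, n); swap in the PRNG under test here."""
    return secrets.randbelow(n)


def frequency_check(n: int, trials: int = 100_000) -> None:
    """Count each outcome and report it as a percentage of the ideal uniform count."""
    counts = Counter(randbelow(n) for _ in range(trials))
    expected = trials / n
    for value, count in counts.most_common():
        print(f"{value}: {count} ({100 * count / expected:.1f}% of expected)")


frequency_check(6)  # e.g. die rolls; every face should land close to 100%
```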