Randomness testing Was: On the "randomness" of DNS

2008-08-03 Thread Alexander Klimov
On Thu, 31 Jul 2008, Pierre-Evariste Dagand wrote:
> Just by curiosity, I ran the Diehard tests[...]
>
> Sum-up for /dev/random:
> "Abnormally" high value: 0.993189 [1]
> "Abnormally" low value: 0.010507 [1]
> Total: 2
>
> Sum up for Sha1(n):
> "Abnormally" high values: 0.938376, 0.927501 [2]
> "Abnormally" low values: 0.087107, 0.091750, 0.060212, 0.050921 [4]
> Total: 6
>
> So, I would say that Sha1(n) does not pass DieHard (while
> /dev/random does). But this would require further examination, in
> particular to understand why some tests failed. And, in fact, I have
> no clue why they failed...

See .

Since the p-value is supposed to be uniformly distributed on the
interval [0,1] for a truly random source, it is no wonder that with
so many p-values some of them are close to 0 and some are close to 1.

If your p-value is smaller than the significance level (say, 1%),
you should repeat the test with different data and see whether the
test fails persistently or it was just a fluke.
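
A minimal sketch of that advice in Python (the monobit frequency test
is used here only as a stand-in for whatever test produced the
suspicious p-value; the generator, sample size, and repetition count
are all illustrative assumptions):

```python
import math
import random

def monobit_pvalue(bits):
    """Frequency (monobit) test: for truly random bits the returned
    p-value is approximately Uniform(0,1)."""
    s = sum(1 if b else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(2 * len(bits)))

rng = random.Random(1)          # stand-in source under test
alpha = 0.01                    # 1% significance level

# Repeat the test on fresh data each time: a single small p-value
# may be a fluke; persistent failure points to a real defect.
pvalues = [monobit_pvalue([rng.getrandbits(1) for _ in range(10_000)])
           for _ in range(20)]
failures = sum(p < alpha for p in pvalues)
print(failures)  # typically 0 of 20 runs below the 1% level
```

With a sound source you expect roughly alpha * 20 failures; a source
that fails run after run is the one worth worrying about.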

-- 
Regards,
ASK

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Randomness testing Was: On the "randomness" of DNS

2008-08-04 Thread Stephan Neuhaus


On Aug 3, 2008, at 13:54, Alexander Klimov wrote:


> If your p-value is smaller than the significance level (say, 1%)
> you should repeat the test with different data and see if the
> test persistently fails or it was just a fluke.


Or better still, make many tests and see if your p-values are
uniformly distributed in (0,1). [Hint: decide on a p-value for that
last equidistribution test *before* you compute that p-value.]
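
One way to run that equidistribution check, sketched with only the
standard library (the p-value of the one-sample Kolmogorov-Smirnov
statistic is approximated with the usual asymptotic series; the
sample of 200 p-values is a synthetic stand-in):

```python
import math
import random

def ks_uniform_pvalue(pvalues):
    """One-sample Kolmogorov-Smirnov test of pvalues against
    Uniform(0,1), using the asymptotic Kolmogorov distribution."""
    xs = sorted(pvalues)
    n = len(xs)
    # KS statistic: largest gap between empirical and uniform CDF.
    d = max(max((i + 1) / n - x, x - i / n) for i, x in enumerate(xs))
    lam = (math.sqrt(n) + 0.12 + 0.11 / math.sqrt(n)) * d
    p = 2 * sum((-1) ** (j - 1) * math.exp(-2 * j * j * lam * lam)
                for j in range(1, 101))
    return min(1.0, max(0.0, p))

rng = random.Random(0)
pvals = [rng.random() for _ in range(200)]  # stand-in per-test p-values
print(f"KS p-value for uniform sample: {ks_uniform_pvalue(pvals):.3f}")
```

A skewed sample (e.g. the same values raised to the fourth power,
which piles them up near 0) drives the KS p-value toward zero, which
is exactly the kind of persistent non-uniformity the hint is about.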


Best,

Stephan



Re: Randomness testing Was: On the "randomness" of DNS

2008-08-04 Thread Alexander Klimov
On Mon, 4 Aug 2008, Stephan Neuhaus wrote:
> Or better still, make many tests and see if your p-values are
> uniformly distributed in (0,1). [Hint: decide on a p-value for that
> last equidistribution test *before* you compute that p-value.]

Of course, there are many tests for goodness of fit (Kolmogorov-
Smirnov, chi-square, etc.), and for a given number of tests you can
also calculate how many of them should have a p-value below the
significance level. But after making a hundred tests, ask yourself
what you are going to do when your test shows "good uniformity" (say,
a p-value of 0.23) while the proportion of passing tests is 0.95,
below the minimum pass rate of 0.96 for the 1% level.
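
That 0.96 figure can be reproduced with the usual three-sigma bound
on the proportion of passing tests (the convention used in NIST SP
800-22; the parameters below assume 100 tests at the 1% level):

```python
import math

def min_pass_rate(alpha, n_tests):
    """Three-sigma lower bound on the expected proportion of tests
    that pass at significance level alpha, out of n_tests runs."""
    p = 1 - alpha  # expected pass probability per test
    return p - 3 * math.sqrt(p * alpha / n_tests)

print(round(min_pass_rate(0.01, 100), 4))  # -> 0.9602
```

So with 100 tests at the 1% level, a pass proportion of 0.95 falls
below the bound even though the uniformity p-value looks fine.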

Only a bad statistician cannot justify any predefined answer :-)

-- 
Regards,
ASK
