Hmmm...  that makes it sound rather subjective.
If we don't have objective measures, 
then who is to say that one's randomness is better or worse than another?

I understand that user data input may or may not have good entropy.
But I would hope that would be the only weak source of output entropy,
given good encryption algorithms and random keys.

I was thinking in terms of how an app with access to alternate random
sources, some from the OS and some from other software, might choose one
over another.

Michael Hammer
Principal Engineer
[email protected]
Mobile: +1 408-202-9291
500 Yosemite Drive Suite 120
Milpitas, CA 95035 USA


-----Original Message-----
From: Krisztián Pintér [mailto:[email protected]] 
Sent: Thursday, January 23, 2014 2:38 PM
To: Michael Hammer
Cc: [email protected]; [email protected]
Subject: Re: [dsfjdssdfsd] Any plans for drafts or discussions on here?


Michael Hammer (at Thursday, January 23, 2014, 9:49:32 PM):
> This may get off-topic, but are there good software tools for testing 
> entropy, that could help applications determine if the underlying 
> system is giving them good input?

disclaimer: i'm no expert; this is just what i have gathered. (i'm quite
interested in randomness.)

short answer: no

long answer: in some situations, yes. if you are handed a bunch of data, all
you can do is try different techniques to put an upper limit on the entropy.
for example, you can calculate the shannon entropy assuming independent
bits. then you can hypothesize some interdependence, and see if you can
compress the data. you can apply different lossless compression methods. the
best compression you find puts an upper limit on the entropy, but never a
lower limit.
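a quick sketch of that upper-bound idea in python (the function names are my
own, and any lossless compressor would do in place of zlib):

```python
import math
import zlib
from collections import Counter

def iid_shannon_entropy_bits(data: bytes) -> float:
    """Total Shannon entropy in bits, assuming independent bytes."""
    n = len(data)
    counts = Counter(data)
    per_byte = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return per_byte * n

def compressed_upper_bound_bits(data: bytes) -> int:
    """Compressed size in bits; any lossless compressor yields an upper bound."""
    return len(zlib.compress(data, 9)) * 8

# highly structured data: every byte value in order, repeated 64 times
data = bytes(range(256)) * 64
iid_estimate = iid_shannon_entropy_bits(data)    # says 8 bits per byte
upper_bound = compressed_upper_bound_bits(data)  # exposes the repetition
```

here the independence assumption reports the maximum 8 bits/byte, while
compression finds the structure and gives a far smaller upper bound. note
that neither number is a lower bound; fully predictable data can still look
incompressible.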

you can only do better if you have an idea about the process that created
the data. for example you might assume that it is mostly thermal noise. you
can assume that thermal noise has some frequency distribution, or energy or
whatever, etc. within this assumption, you can determine the entropy content
by measurements. but at this point, you are prone to two errors: (1) your
assumption might simply be wrong, and (2) your physical model might
overestimate the unpredictability of the given system. example for the
former: the signal might be largely controllable by external EM
interference, and then you measure not noise, but attacker-controlled data.
example for the latter: a smartass scientist might come up with a better
physical model for thermal noise.
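a toy sketch of the model-based approach, assuming the noise really is
gaussian (the function name and the quantization-step parameter are
illustrative, not from any real driver):

```python
import math

def gaussian_entropy_bits_per_sample(sigma: float, step: float) -> float:
    """Entropy of a finely quantized Gaussian source (a common thermal-noise model).

    The differential entropy of N(0, sigma^2) is 0.5 * log2(2*pi*e*sigma^2);
    quantizing with step size `step` adds approximately -log2(step)
    (the fine-quantization approximation).
    """
    differential = 0.5 * math.log2(2 * math.pi * math.e * sigma ** 2)
    return differential - math.log2(step)

# measure sigma from samples, then the model converts it into an entropy claim
noisy = gaussian_entropy_bits_per_sample(sigma=2.0, step=1.0)
quiet = gaussian_entropy_bits_per_sample(sigma=1.0, step=1.0)
```

the point is that the entropy figure comes entirely from the model plus the
measured sigma; if the gaussian assumption is wrong (say, the signal is
attacker-injected), the formula happily reports entropy that isn't there.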

it is also important to note that entropy is observer dependent. we actually
talk about the entropy as seen by the attacker. but it is not
straightforward to assess what is actually visible to an attacker and what
is not. observation methods improve with time.


_______________________________________________
dsfjdssdfsd mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/dsfjdssdfsd
