Hi,

Zenaan Harkness wrote:
> I should have wrote "/dev/random should be treated as though it is
> the input feed to /dev/urandom" (sorry about that).

But it isn't. The myth model says it would be. But other quite credible
information says that its output stems from the pseudo-random number
generator, which is a ChaCha20 cipher with a changing key, if I got it
right.

As a naive user with no special knowledge about ChaCha20, I would prefer
to get raw randomness, not a strongly obfuscated but still diluted result.
But it seems that this kind of randomness is not available from /dev/random.

I understand Theodore Ts'o's statements in
  https://lkml.org/lkml/2017/7/20/993
to say that the key changes even when entropy input is slow, and thus
potentially more bits come out than secret bits go in. (Otherwise he could
not have rejected the proposal to collect more entropy on x86.)


Originally Curt wrote:
> > > https://www.2uo.de/myths-about-urandom

Zenaan Harkness wrote:
> Really great myth-debunking article

Up to now I have found no credible expert opinion that clearly
contradicts it. Its language and attitude seem inappropriate for the
serious topic it discusses, though.

All in all, the effort should go into the man page, even if that means a
two-page discussion of what the crypto jargon shall mean to the normal user.
A mere imperative "use urandom unconditionally!" would be clear, too.
But obviously the man page authors do not dare that or cannot agree on it.

One should also read the first article mentioned by Curt:
  https://arstechnica.com/information-technology/2013/05/how-crackers-make-minced-meat-out-of-your-passwords/
which does not discuss the properties of /dev/*random but rather illustrates
the problems of human-memorizable passwords combined with stolen password
hashes, a stolen hash algorithm, and stolen salt.
Unlike in the /dev/*random situation, the service cannot destroy these
secrets, because they need to be applied each time the password gets
validated.
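
To illustrate what such a service has to keep: a minimal Python sketch
of salted password validation. Names and iteration count are my own
assumptions, not taken from the article:

  import hashlib, hmac, os

  def make_record(password):
      # The service stores salt and hash. These are the secrets
      # which get stolen in the scenario of the article.
      salt = os.urandom(16)
      pwhash = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100000)
      return salt, pwhash

  def validate(password, salt, pwhash):
      # Salt and hash cannot be destroyed, because each login
      # attempt has to repeat this computation.
      candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100000)
      return hmac.compare_digest(candidate, pwhash)

Whoever steals salt, hash, and algorithm can run validate() offline
against guessed passwords. That is the attack the article describes.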


> Well that debunking article actually backs up the man page

From the viewpoint of social engineering there is a big difference.
Thomas Huehn, whose competence I have no reason to doubt, is a single
person delivering a mix of rant and information.

The man page is reviewed in the kernel community and committed by people
who have the power to do harm at other places in the kernel anyway.
So if I trust their code, I can also trust their man page.


> We have awesome API/ functionality abundance in the libre software

A single one which I can convince myself to trust would be enough.
At least for the case of a 128-bit secret which I would generate only
a few times per year.
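
For that rare case, reading directly from the device seems simple enough.
A minimal Python sketch (my own, not from this thread):

  # Read 16 bytes = 128 bits from /dev/random and print them as hex.
  with open("/dev/random", "rb") as f:
      secret = b""
      while len(secret) < 16:
          # May block until the kernel deems enough entropy available.
          secret += f.read(16 - len(secret))
  print(secret.hex())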

For now I tend to stay with /dev/random, because in any halfway plausible
model (wrong or right) it is not worse than /dev/urandom.


> > Urm. Your argumentation up to this point was that they differ
> > significantly.

> They do.
> And so in general, if you want cryptographically safe random numbers
> for user-space programs, use /dev/urandom - what's not to grok here?

If they differ significantly with respect to being guessed, then I'd like
to have the one with the better reputation.


> but the
> man pages are actually our responsibility to update - perhaps you're
> assuming it was the responsibility of the kernel devs?

In this special case: yes.

(I would protest if half-educated users made success-critical
 changes in my own man pages. They can talk to me and I will duly consider
 their arguments.)


> If you're not a software developer, don't worry about it.

First: I am a software developer, albeit not directly in cryptography.
Second: Why would a mere user be less vulnerable to brute-force attacks?


> If you are, use a library that is widely used and known to have had a
> few (now fixed) bugs in the past.

The original topic was about creating a single password or a small number
of passwords. Choosing, installing, and understanding a library seems
somewhat overdone for that.
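
To show what I mean: the Python standard library would already do,
without any extra library. Alphabet and length here are arbitrary
choices of mine:

  import secrets, string

  # secrets draws from the kernel's CSPRNG (via os.urandom).
  alphabet = string.ascii_letters + string.digits
  password = "".join(secrets.choice(alphabet) for _ in range(20))
  print(password)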


> /dev/urandom was designed to hand out cryptographically secure
> numbers whilst not blocking.

Why then the mumbling in the man page?
I do not doubt the intention, but obviously some author(s) had doubts
about the result of the implementation.


> > /dev/urandom was originally designed to hand out lower quality random
> > if /dev/random would block, but is now said not to do this any more.

> /dev/random CAN block, which is the whole point of providing
> /dev/urandom.

This statement was about /dev/urandom not blocking while possibly providing
reduced quality, not about /dev/random blocking or not blocking.

But a main argument of Theodore Ts'o and others is that there is always
enough entropy in the pool. So /dev/random won't block, will it?
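
One can watch the kernel's own estimate while reading, e.g. by this
small sketch. The proc file is Linux-specific and documented in
random(4):

  # Show the kernel's entropy estimate before and after reading
  # 16 bytes from /dev/random.
  def entropy_avail():
      with open("/proc/sys/kernel/random/entropy_avail") as f:
          return int(f.read())

  print("before:", entropy_avail())
  with open("/dev/random", "rb") as f:
      f.read(16)
  print("after :", entropy_avail())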


> I have not read the code, and do not understand the change that's
> happened properly though.

From my experience with kernel code, in areas where I understand the
topic, I would not expect to get decisive clarity unless I studied this
topic first. That might take some years and still not exclude the risk of
shooting myself in the foot, given the special circumstances of
cryptography.


Have a nice day :)

Thomas
