Re: [RFC][PATCH 0/6] /dev/random - a new approach

2016-05-03 Thread Nikos Mavrogiannopoulos
On Tue, May 3, 2016 at 4:48 PM,   wrote:
> On Tue, May 03, 2016 at 03:57:15PM +0200, Nikos Mavrogiannopoulos wrote:
>> I believe their main concern is that they want to protect applications
>> which do not check error codes of system calls, when running on a
>> kernel which does not provide getrandom().  That way, they have an
>> almost impossible task to simulate getrandom() on kernel which do not
>> support it.
>
> The whole *point* of creating the getrandom(2) system call is that it
> can't be simulated/emulated in userspace.  If it can be, then there's
> no reason why the system call should exist.  This is one of the
> reasons why we haven't implemented MySQL or TLS inside the kernel.  :-)
> So if their standard is "we need to simulate getrandom(2) on a kernel
> which does not have it", we'll **never** see glibc support for it.  By
> definition, this is *impossible*.

I know, and I share this opinion. In their defense, they will have to
provide a call which doesn't make applications fail in the following
scenario:
1. crypto/ssl libraries are compiled to use getrandom() because it is
available both in libc and in the kernel
2. everything works fine
3. the administrator downgrades the kernel to a version without
getrandom() because his network card works better with that version
4. Mayhem ensues as applications fail

However, I don't see a way to avoid issues - though limited to corner
cases - with any imperfect emulation. It would be much cleaner for glibc
to just require a kernel with getrandom().

regards,
Nikos


Re: [RFC][PATCH 0/6] /dev/random - a new approach

2016-05-03 Thread Austin S. Hemmelgarn

On 2016-05-03 09:57, Nikos Mavrogiannopoulos wrote:

On Tue, Apr 26, 2016 at 3:11 AM, Theodore Ts'o  wrote:

On Mon, Apr 25, 2016 at 10:23:51AM +0200, Nikos Mavrogiannopoulos wrote:

That's far from a solution and I wouldn't recommend to anyone doing
that. We cannot expect each and every program to do glibc's job. The
purpose of a system call like getrandom is to simplify the complex use
of /dev/urandom and eliminate it, not to make code handling randomness
in applications even worse.

Yes, but if glibc is falling down on the job and refusing to export
the system call (I think for political reasons; it's a Linux-only
interface, so Hurd wouldn't have it),


I believe their main concern is that they want to protect applications
which do not check error codes of system calls, when running on a
kernel which does not provide getrandom().  That way, they have an
almost impossible task to simulate getrandom() on kernel which do not
support it.
If glibc had an existing wrapper for this call, then I could reasonably
understand the concern. They have no existing wrapper for it, so this
really is just BS.  If an application isn't checking error codes, then either:

1. They are intentionally ignoring error codes.
2. It's a bug in the application that needs to be fixed.

The fact that they feel they need to support poorly coded applications 
in _new_ system call wrappers is itself somewhat disturbing.  There are 
no existing applications using this function in glibc because it doesn't 
exist in glibc, which means they can't claim legacy support and 
therefore they want to actively enable people to write stupid programs.


Re: [RFC][PATCH 0/6] /dev/random - a new approach

2016-05-03 Thread tytso
On Tue, May 03, 2016 at 03:57:15PM +0200, Nikos Mavrogiannopoulos wrote:
> 
> I believe their main concern is that they want to protect applications
> which do not check error codes of system calls, when running on a
> kernel which does not provide getrandom().  That way, they have an
> almost impossible task to simulate getrandom() on kernel which do not
> support it.

The whole *point* of creating the getrandom(2) system call is that it
can't be simulated/emulated in userspace.  If it can be, then there's
no reason why the system call should exist.  This is one of the
reasons why we haven't implemented MySQL or TLS inside the kernel.  :-)

So if their standard is "we need to simulate getrandom(2) on a kernel
which does not have it", we'll **never** see glibc support for it.  By
definition, this is *impossible*.

What they can do is something which is as good as you can get for
someone who is open-coding /dev/urandom support in userspace.  That
means that (a) you won't be able to tell whether the urandom pool has
been adequately initialized right after boot, (b) you will need to
somehow deal with the case where the file descriptors have been
exhausted, and (c) you will need to handle running in a chroot where
the system administrator didn't bother to include /dev/urandom.  About
the best you can do is call abort(), or if you want, you can let the
application author specify some kind of "I want to run in insecure
mode", via some magic glibc setting.  You could probably default this
to "true" without a huge net reduction of security, because most
application authors weren't getting this right anyway.  And then the
ones who do care can set some kind of flag saying, "I promise to check
the error return from getrandom(2) as implemented by glibc".

 - Ted


Re: [RFC][PATCH 0/6] /dev/random - a new approach

2016-05-03 Thread Nikos Mavrogiannopoulos
On Tue, Apr 26, 2016 at 3:11 AM, Theodore Ts'o  wrote:
> On Mon, Apr 25, 2016 at 10:23:51AM +0200, Nikos Mavrogiannopoulos wrote:
>> That's far from a solution and I wouldn't recommend to anyone doing
>> that. We cannot expect each and every program to do glibc's job. The
>> purpose of a system call like getrandom is to simplify the complex use
>> of /dev/urandom and eliminate it, not to make code handling randomness
>> in applications even worse.
> Yes, but if glibc is falling down on the job and refusing to export
> the system call (I think for political reasons; it's a Linux-only
> interface, so Hurd wouldn't have it),

I believe their main concern is that they want to protect applications
which do not check error codes of system calls, when running on a
kernel which does not provide getrandom().  That leaves them with an
almost impossible task: simulating getrandom() on kernels which do not
support it.

One may agree with their concerns, but the end result is that the
system call is still not available at all, several years after its
introduction.

Anyway it seems that there is some activity now, so hopefully we may
have it sometime soon:
https://sourceware.org/ml/libc-help/2016-04/msg8.html

regards,
Nikos


Re: [RFC][PATCH 0/6] /dev/random - a new approach

2016-04-25 Thread Theodore Ts'o
On Mon, Apr 25, 2016 at 10:23:51AM +0200, Nikos Mavrogiannopoulos wrote:
> That's far from a solution and I wouldn't recommend to anyone doing
> that. We cannot expect each and every program to do glibc's job. The
> purpose of a system call like getrandom is to simplify the complex use
> of /dev/urandom and eliminate it, not to make code handling randomness
> in applications even worse.

Yes, but if glibc is falling down on the job and refusing to export
the system call (I think for political reasons; it's a Linux-only
interface, so Hurd wouldn't have it), then the only solution is to
either use syscall directly (it's not hard for getrandom, since we're
not using 64-bit arguments which gets tricky for some architectures),
or, as H. Peter Anvin has suggested, maybe kernel developers will have to
start releasing a liblinux library, and then teaching application
authors to add -llinux to their linker lines.

- Ted


Re: [RFC][PATCH 0/6] /dev/random - a new approach

2016-04-25 Thread Nikos Mavrogiannopoulos
On Mon, Apr 25, 2016 at 10:02 AM, Stephan Mueller  wrote:
>> > One more item to consider: If you do not want to change to use
>> > getrandom(2), the LRNG provides you with another means.
>> The main problem is not about willing to switch to getrandom() or not,
>> but finding any system where getrandom() exists. Today due to libc not
>> having the call, we can only use /dev/urandom and applications would
>> most likely continue to do so long time after getrandom() is
>> introduced to libc.
> Implement the syscall yourself with syscall(). If you get ENOSYS back, revert
> to your old logic of seeding from /dev/urandom.

That's far from a solution and I wouldn't recommend to anyone doing
that. We cannot expect each and every program to do glibc's job. The
purpose of a system call like getrandom is to simplify the complex use
of /dev/urandom and eliminate it, not to make code handling randomness
in applications even worse.

regards,
Nikos


Re: [RFC][PATCH 0/6] /dev/random - a new approach

2016-04-25 Thread Stephan Mueller
Am Montag, 25. April 2016, 09:55:14 schrieb Nikos Mavrogiannopoulos:

Hi Nikos,

> On Thu, Apr 21, 2016 at 5:16 PM, Stephan Mueller  
wrote:
> >> > ... DRBG is “minimally” seeded with 112 bits of entropy.
> >> > This is commonly achieved even before user space is initiated.
> >> 
> >> Unfortunately one of the issues of the /dev/urandom interface is the
> >> fact that it may start providing random numbers even before the
> >> seeding is complete. From the above quote, I understand that this
> >> issue is not addressed by the new interface. That's a serious
> >> limitation (of the current and inherited by the new implementation),
> >> since most/all newly deployed systems from "cloud" images generate
> >> keys using /dev/urandom (for sshd for example) on boot, and it is
> >> unknown to these applications whether they operate with uninitialized
> >> seed.
> > 
> > One more item to consider: If you do not want to change to use
> > getrandom(2), the LRNG provides you with another means.
> 
> The main problem is not about willing to switch to getrandom() or not,
> but finding any system where getrandom() exists. Today due to libc not
> having the call, we can only use /dev/urandom and applications would
> most likely continue to do so long time after getrandom() is
> introduced to libc.

Implement the syscall yourself with syscall(). If you get ENOSYS back, revert 
to your old logic of seeding from /dev/urandom.

If you know you are on kernels >= 3.14, you could use the following steps in 
your library:

- poll /proc/sys/kernel/random/entropy_avail at intervals of, say, one second
and block your seeding process until that value becomes non-zero

- once you unblock, seed from /dev/urandom and you have the guarantee that
/dev/urandom was seeded with 128 bits.
> 
> regards,
> Nikos


Ciao
Stephan


Re: [RFC][PATCH 0/6] /dev/random - a new approach

2016-04-25 Thread Nikos Mavrogiannopoulos
On Thu, Apr 21, 2016 at 5:16 PM, Stephan Mueller  wrote:
>> > ... DRBG is “minimally” seeded with 112 bits of entropy.
>> > This is commonly achieved even before user space is initiated.
>>
>> Unfortunately one of the issues of the /dev/urandom interface is the
>> fact that it may start providing random numbers even before the
>> seeding is complete. From the above quote, I understand that this
>> issue is not addressed by the new interface. That's a serious
>> limitation (of the current and inherited by the new implementation),
>> since most/all newly deployed systems from "cloud" images generate
>> keys using /dev/urandom (for sshd for example) on boot, and it is
>> unknown to these applications whether they operate with uninitialized
>> seed.
> One more item to consider: If you do not want to change to use getrandom(2),
> the LRNG provides you with another means.

The main problem is not about willingness to switch to getrandom() or not,
but finding any system where getrandom() exists. Today, because libc does
not provide the call, we can only use /dev/urandom, and applications will
most likely continue to do so for a long time after getrandom() is
introduced to libc.

regards,
Nikos


Re: [RFC][PATCH 0/6] /dev/random - a new approach

2016-04-24 Thread Stephan Mueller
Am Sonntag, 24. April 2016, 23:25:00 schrieb Pavel Machek:

Hi Pavel,

> > /* This RNG does not work if no high-resolution timer is available */
> > BUG_ON(!random_get_entropy() && !random_get_entropy());
> 
> Heh, does this cause BUG() with 2^-64 probability? :-).

No, but for the listed arches, get_cycles would return 0. And I only call the
function twice to avoid being tripped by a potential wraparound at the time
of calling.
> 
> > If there is no high-resolution timer, the LRNG will not produce good
> > entropic random numbers. The current kernel code implements
> > high-resolution timers for all but the following architectures where
> > neither random_get_entropy nor
> > get_cycles are implemented:
> Ok, what about stuff like Intel 486 (no RDTSC)?
> 
> > Thus, for all large-scale architectures, the LRNG would be applicable.
> > 
> > Please note that also the legacy /dev/random will have hard time to obtain
> > entropy for these environments. The majority of the entropy comes
> > from high-
> 
> Understood.
> 
> > Though, the patch I offer leaves the legacy /dev/random in peace for those
> > architectures to not touch the status quo.
> 
> Well -- that's the major problem -- right? Makes it tricky to tell
> what changed, and we had two RNGs to maintain.

I would rather think that even the legacy /dev/random should not return any 
values in those environments. The random numbers that are returned on these 
systems are bogus, considering that the only noise source that could deliver 
some entropy excluding timestamps (if you trust the user) are the HID event 
values. And for those listed systems, I doubt very much that they are used in 
a desktop environment where you have a console.

If everybody agrees, I can surely add some logic to make the LRNG work on
those systems. But those additions cannot be subjected to a thorough entropy
analysis, and I feel that this would be wrong.

My goal with the LRNG is to provide a new design using proven techniques that 
is forward looking. I am aware that the design does not work in circumstances 
where the high-res timer is not present. But do we have to settle on the least 
common denominator knowing that this one will not really work to begin with?

Ciao
Stephan


Re: [RFC][PATCH 0/6] /dev/random - a new approach

2016-04-24 Thread Pavel Machek
Hi!

> > So you are relying on high-resolution timestamps. Ok. then you do kind
> > of the check on the timestamps... ok, why not. But then you mix in the
> > data regardless, saying that "they are not dependent" and thus can't
> > hurt.
> > 
> > But you already know they _are_ dependent, that's what your stuck test
> > told you:
> 
> The stuck test says that there is a pattern, but not that the pattern shows a 
> dependency.
...
> > Now. I could imagine cases where interrupts are correlated... like
> > some hardware may generate two interrupts for each event or something
> > like that...
> 
> But I see what you are referring to and I think you have a valid point in a 
> worst case assessment.
> 
> Thus, any stuck value should not be mixed into the pool.

Thanks.

> /* This RNG does not work if no high-resolution timer is available */
> BUG_ON(!random_get_entropy() && !random_get_entropy());

Heh, does this cause BUG() with 2^-64 probability? :-).

> If there is no high-resolution timer, the LRNG will not produce good entropic 
> random numbers. The current kernel code implements high-resolution timers for 
> all but the following architectures where neither random_get_entropy nor 
> get_cycles are implemented:

Ok, what about stuff like Intel 486 (no RDTSC)?

> Thus, for all large-scale architectures, the LRNG would be applicable.
> 
> Please note that also the legacy /dev/random will have hard time to obtain 
> entropy for these environments. The majority of the entropy comes
> from high-

Understood.

> Though, the patch I offer leaves the legacy /dev/random in peace for those 
> architectures to not touch the status quo.

Well -- that's the major problem -- right? It makes it tricky to tell
what changed, and we'd have two RNGs to maintain.

Best regards,
Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) 
http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html


Re: [RFC][PATCH 0/6] /dev/random - a new approach

2016-04-24 Thread Stephan Mueller
Am Sonntag, 24. April 2016, 17:21:09 schrieb Pavel Machek:

Hi Pavel,

> Hi!
> 
> > Please find in [1] the full design discussion covering qualitative
> > assessments of the entropy collection and entropy flow. Furthermore, a
> > full
> > testing of the
> 
> I don't get it.
> 
> # The
> # idea is that only after obtaining LRNG_POOL_SIZE_BITS healthy bits,
> # the
> #entropy pool is completely changed with new bits. Yet, the stuck bit
> # is not
> # discarded as it may still contain some entropy. Hence, it is simply
> # XORed
> # with the previous bit as the XOR operation maintains the entropy since
> # the previous time stamp and the current time stamp are not dependent
> # on each other.
> 
> So you are relying on high-resolution timestamps. Ok. then you do kind
> of the check on the timestamps... ok, why not. But then you mix in the
> data regardless, saying that "they are not dependent" and thus can't
> hurt.
> 
> But you already know they _are_ dependent, that's what your stuck test
> told you:

The stuck test says that there is a pattern, but not that the pattern shows a 
dependency.
> 
> # Thus, the stuck test
> # ensures that:
> # (a) variations exist in the time deltas,
> # (b) variations of time deltas do not have a simple repeating pattern,
> # and
> # (c) variations do not have a linearly changing patterns (e.g. 1 - 2 -
> # 4 - 7
> # - 11 - 16).
> 
> 
> Now. I could imagine cases where interrupts are correlated... like
> some hardware may generate two interrupts for each event or something
> like that...

But I see what you are referring to and I think you have a valid point in a 
worst case assessment.

Thus, any stuck value should not be mixed into the pool.

I have changed the code accordingly.
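[Editor's note: the three criteria (a)-(c) quoted above amount to checking the first, second, and third discrete derivatives of the timestamp sequence. A simplified Python illustration of such a stuck test follows -- a sketch of the idea, not the LRNG code itself:]

```python
# Simplified sketch of a derivative-based stuck test on timestamps.
# A sample is "stuck" when the first, second, or third discrete
# derivative is zero: no variation (a), a simple repeating pattern (b),
# or a linearly changing pattern (c).

def stuck_test(timestamps):
    """Return a list of booleans: True where a sample is flagged stuck."""
    results = []
    prev_t = prev_d1 = prev_d2 = 0
    for t in timestamps:
        d1 = t - prev_t       # first derivative: the time delta itself
        d2 = d1 - prev_d1     # second derivative: delta of deltas
        d3 = d2 - prev_d2     # third derivative
        results.append(d1 == 0 or d2 == 0 or d3 == 0)
        prev_t, prev_d1, prev_d2 = t, d1, d2
    return results
```

With the linearly changing pattern from the mail (1, 2, 4, 7, 11, 16), the third derivative settles to zero, so the samples 7, 11, and 16 are all flagged as stuck.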

> 
> What goes on if high resolution timer is not available?

See lrng_init:

/* This RNG does not work if no high-resolution timer is available */
BUG_ON(!random_get_entropy() && !random_get_entropy());
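[Editor's note: the double call in that check is deliberate. A single read of a running cycle counter can legitimately happen to be zero; two consecutive zero reads from a running counter are (almost) impossible, so they indicate an absent timer. A sketch of the logic -- illustrative, not the kernel code:]

```python
# Sketch of the double-read check: treat the high-resolution timer as
# absent only if two consecutive reads are both zero. A running counter
# sampled exactly at zero once still passes the check.

def have_hires_timer(read_cycles):
    """Mirror BUG_ON(!read() && !read()): absent only if both reads are 0."""
    return bool(read_cycles() or read_cycles())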

If there is no high-resolution timer, the LRNG will not produce good entropic 
random numbers. The current kernel code implements high-resolution timers for 
all but the following architectures, where neither random_get_entropy nor 
get_cycles is implemented:

- AVR32

- CRIS

- FR-V

- H8300

- Hexagon

- M32R

- METAG

- Microblaze

- SPARC 32

- Score

- SH

- UM

- Unicore32

- Xtensa

Thus, for all large-scale architectures, the LRNG would be applicable.

Please note that the legacy /dev/random will also have a hard time obtaining 
entropy in these environments. The majority of the entropy comes from high-
resolution time stamps. If you do not have them and you rely on jiffies, an 
attacker has the ability to predict the events mixed into the pools with 
high accuracy. Please remember the outcry when MIPS was identified to have no 
get_cycles about two or three years back.

Still, the patch I offer leaves the legacy /dev/random untouched for those 
architectures, preserving the status quo.

Ciao
Stephan


Re: [RFC][PATCH 0/6] /dev/random - a new approach

2016-04-24 Thread Pavel Machek
Hi!

> Please find in [1] the full design discussion covering qualitative assessments
> of the entropy collection and entropy flow. Furthermore, a full
> testing of the

I don't get it.

# The
# idea is that only after obtaining LRNG_POOL_SIZE_BITS healthy bits,
# the
#entropy pool is completely changed with new bits. Yet, the stuck bit
# is not
# discarded as it may still contain some entropy. Hence, it is simply
# XORed
# with the previous bit as the XOR operation maintains the entropy since
# the previous time stamp and the current time stamp are not dependent
# on each other.

So you are relying on high-resolution timestamps. Ok. then you do kind
of the check on the timestamps... ok, why not. But then you mix in the
data regardless, saying that "they are not dependent" and thus can't
hurt.

But you already know they _are_ dependent, that's what your stuck test
told you:

# Thus, the stuck test
# ensures that:
# (a) variations exist in the time deltas,
# (b) variations of time deltas do not have a simple repeating pattern,
# and
# (c) variations do not have a linearly changing patterns (e.g. 1 - 2 -
# 4 - 7
# - 11 - 16).


Now. I could imagine cases where interrupts are correlated... like
some hardware may generate two interrupts for each event or something
like that...

What goes on if high resolution timer is not available?

Best regards,
Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) 
http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html


Re: [RFC][PATCH 0/6] /dev/random - a new approach

2016-04-22 Thread Sandy Harris
On Thu, Apr 21, 2016 at 10:51 PM, Theodore Ts'o  wrote:

> I still have a massive problem with the claims that the "Jitter" RNG
> provides any amount of entropy.  Just because you and I might not be
> able to analyze it doesn't mean that somebody else couldn't.  After
> all, DUAL-EC DRNG was very complicated and hard to analyze.  So would
> be something like
>
>AES(NSA_KEY, COUNTER++)
>
> Very hard to analyze indeed.  Shall we run statistical tests?  They'll
> pass with flying colors.
>
> Secure?  Not so much.
>
> - Ted

Jitter, havege and my maxwell(8) all claim to get entropy from
variations in timing of simple calculations, and the docs for
all three give arguments that there really is some entropy
there.

Some of those arguments are quite strong. Mine are in
the PDF at:
https://github.com/sandy-harris/maxwell

I find any of those plausible as an external RNG feeding
random(4), though a hardware RNG or Turbid is preferable.
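[Editor's note: the underlying claim -- that repeated runs of the same small computation show measurable timing variation -- is easy to observe from user space. A rough illustration follows; it is no substitute for the entropy analyses in the documents referenced above:]

```python
import time

def timing_deltas(rounds=1000):
    """Time a fixed, deterministic computation repeatedly; return durations."""
    samples = []
    for _ in range(rounds):
        start = time.perf_counter_ns()
        x = 0
        for i in range(100):      # the same work every round
            x += i * i
        samples.append(time.perf_counter_ns() - start)
    return samples

deltas = timing_deltas()
# On ordinary hardware the measured durations are typically not all
# identical; that variation is what jitter-based RNGs harvest.
distinct = len(set(deltas))
```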


Re: [RFC][PATCH 0/6] /dev/random - a new approach

2016-04-21 Thread Stephan Mueller
Am Donnerstag, 21. April 2016, 22:51:55 schrieb Theodore Ts'o:

Hi Theodore,

> I still have a massive problem with the claims that the "Jitter" RNG
> provides any amount of entropy.  Just because you and I might not be
> able to analyze it doesn't mean that somebody else couldn't.  After
> all, DUAL-EC DRNG was very complicated and hard to analyze.  So would
> be something like
> 
>AES(NSA_KEY, COUNTER++)
> 
> Very hard to analyze indeed.  Shall we run statistical tests?  They'll
> pass with flying colors.
> 
> Secure?  Not so much.

If you are concerned about that RNG, we can easily drop it from the LRNG. The 
testing documented in the write-up disables the Jitter RNG to ensure that only 
the LRNG IRQ collection is tested.

The conclusions regarding the timeliness of the seeding and the prevention of 
draining the entropy pool were drawn without the Jitter RNG, which implies 
that the Jitter RNG can be dropped without harm.

Ciao
Stephan


Re: [RFC][PATCH 0/6] /dev/random - a new approach

2016-04-21 Thread Theodore Ts'o
I still have a massive problem with the claims that the "Jitter" RNG
provides any amount of entropy.  Just because you and I might not be
able to analyze it doesn't mean that somebody else couldn't.  After
all, DUAL-EC DRNG was very complicated and hard to analyze.  So would
be something like

   AES(NSA_KEY, COUNTER++)

Very hard to analyze indeed.  Shall we run statistical tests?  They'll
pass with flying colors.

Secure?  Not so much.
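[Editor's note: the point can be made concrete. A deterministic counter-mode keystream under a fixed, attacker-known key passes a simple monobit frequency test while remaining fully predictable. The sketch below uses SHA-256 in counter mode as a stand-in for AES, since the argument does not depend on the cipher:]

```python
import hashlib

SECRET_KEY = b"known-to-the-attacker"   # hypothetical backdoor key

def keystream(key, nbytes):
    """Counter-mode keystream: concatenated hash(key || counter) blocks."""
    out = b""
    counter = 0
    while len(out) < nbytes:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:nbytes]

data = keystream(SECRET_KEY, 1 << 16)   # 64 KiB of "random-looking" bytes

# Monobit test: the fraction of one-bits should be close to 1/2.
ones = sum(bin(b).count("1") for b in data)
bias = abs(ones / (len(data) * 8) - 0.5)

# The stream looks statistically fine (bias is tiny)...
# ...yet it is fully reproducible by anyone who knows SECRET_KEY.
assert keystream(SECRET_KEY, 16) == data[:16]
```

Statistical quality and unpredictability are independent properties; only the latter makes an RNG secure.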

- Ted


Re: [RFC][PATCH 0/6] /dev/random - a new approach

2016-04-21 Thread Stephan Mueller
Am Donnerstag, 21. April 2016, 15:03:37 schrieb Nikos Mavrogiannopoulos:

Hi Nikos,
> 
> [quote from pdf]
> 
> > ... DRBG is “minimally” seeded with 112 bits of entropy.
> > This is commonly achieved even before user space is initiated.
> 
> Unfortunately one of the issues of the /dev/urandom interface is the
> fact that it may start providing random numbers even before the
> seeding is complete. From the above quote, I understand that this
> issue is not addressed by the new interface. That's a serious
> limitation (of the current and inherited by the new implementation),
> since most/all newly deployed systems from "cloud" images generate
> keys using /dev/urandom (for sshd for example) on boot, and it is
> unknown to these applications whether they operate with uninitialized
> seed.

One more item to consider: If you do not want to change to use getrandom(2), 
the LRNG provides you with another means. You may use the 
/proc/sys/kernel/random/drbg_minimally_seeded or drbg_fully_seeded booleans. 
If you poll on those, you will obtain an indication of whether the secondary 
DRBG feeding /dev/random is seeded with 112 bits (drbg_minimally_seeded) or 256 
bits (drbg_fully_seeded).

Those two booleans are exported for exactly that purpose: allow user space to 
know about initial seeding status of the LRNG.
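[Editor's note: a user-space consumer polling those flags could look like the following sketch. These /proc paths belong to the proposed LRNG patch and do not exist on mainline kernels, so the flag parsing is shown separately from the file access:]

```python
# Sketch of user-space polling of the proposed LRNG seed-status flags.
# NOTE: these /proc paths exist only with the LRNG patch applied; they
# are not part of mainline kernels.

def parse_seed_flag(raw):
    """The proc file holds a boolean: "0" (not yet seeded) or "1" (seeded)."""
    return raw.strip() == "1"

def drbg_seeded(path="/proc/sys/kernel/random/drbg_fully_seeded"):
    """Return the seed status, or None on a kernel without the LRNG."""
    try:
        with open(path) as f:
            return parse_seed_flag(f.read())
    except FileNotFoundError:
        return None
```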

Ciao
Stephan


Re: [RFC][PATCH 0/6] /dev/random - a new approach

2016-04-21 Thread Stephan Mueller
Am Donnerstag, 21. April 2016, 15:03:37 schrieb Nikos Mavrogiannopoulos:

Hi Nikos,

> On Thu, Apr 21, 2016 at 11:11 AM, Stephan Mueller  
wrote:
> > Hi Herbert, Ted,
> > 
> > The venerable Linux /dev/random served users of cryptographic mechanisms
> > well for a long time. Its behavior is well understood to deliver entropic
> > data. In the last years, however, the Linux /dev/random showed signs of
> > age where it has challenges to cope with modern computing environments
> > ranging from tiny embedded systems, over new hardware resources such as
> > SSDs, up to massive parallel systems as well as virtualized environments.
> > 
> > With the experience gained during numerous studies of /dev/random, entropy
> > assessments of different noise source designs and assessing entropy
> > behavior in virtual machines and other special environments, I felt to do
> > something about it.
> > I developed a different approach, which I call Linux Random Number
> > Generator (LRNG) to collect entropy within the Linux kernel. The main
> > improvements compared to the legacy /dev/random is to provide sufficient
> > entropy during boot time as well as in virtual environments and when
> > using SSDs. A secondary design goal is to limit the impact of the entropy
> > collection on massive parallel systems and also allow the use accelerated
> > cryptographic primitives. Also, all steps of the entropic data processing
> > are testable. Finally massive performance improvements are visible at
> > /dev/urandom / get_random_bytes.
> 
> [quote from pdf]
> 
> > ... DRBG is “minimally” seeded with 112 bits of entropy.
> > This is commonly achieved even before user space is initiated.
> 
> Unfortunately one of the issues of the /dev/urandom interface is the
> fact that it may start providing random numbers even before the
> seeding is complete. From the above quote, I understand that this
> issue is not addressed by the new interface. That's a serious
> limitation (of the current and inherited by the new implementation),
> since most/all newly deployed systems from "cloud" images generate
> keys using /dev/urandom (for sshd for example) on boot, and it is
> unknown to these applications whether they operate with uninitialized
> seed.

That limitation is addressed by the getrandom system call. This call will 
block until the initial seeding is provided. After the initial seeding, 
getrandom behaves like /dev/urandom. This behavior is already implemented in 
the legacy /dev/random and is preserved with the LRNG.
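[Editor's note: these blocking-until-seeded semantics are visible from user space through the getrandom(2) wrapper that Python exposes as os.getrandom on Linux 3.17+. A minimal sketch:]

```python
import os

# getrandom(2) by default blocks only until the entropy pool is initially
# seeded; afterwards it behaves like /dev/urandom and never blocks again.
key = os.getrandom(32)                  # e.g. 256 bits of key material

# GRND_NONBLOCK requests an immediate error instead of blocking when the
# pool is not yet seeded -- useful for probing early during boot.
try:
    early = os.getrandom(32, os.GRND_NONBLOCK)
except BlockingIOError:
    early = None                        # pool not yet initially seeded
```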
> 
> While one could argue for using /dev/random, the unpredictability of
> the delay it incurs is prohibitive for any practical use. Thus I'd
> expect any new interface to provide a better /dev/urandom, by ensuring
> that the kernel seed buffer is fully seeded prior to switching to
> userspace.
> 
> About the rest of the design, I think it is quite clean. I think the
> DRBG choice is quite natural given the NIST recommendations, but have
> you considered using a stream cipher instead like chacha20 which in
> most of cases it would outperform the DRBG based on AES?

This can easily be covered by changing the DRBG implementation -- the current 
DRBG implementation in the kernel crypto API operates like a 
"block chaining mode" on top of the raw cipher. Thus, such a change can be 
easily rolled in.

Ciao
Stephan


Re: [RFC][PATCH 0/6] /dev/random - a new approach

2016-04-21 Thread Nikos Mavrogiannopoulos
On Thu, Apr 21, 2016 at 11:11 AM, Stephan Mueller  wrote:
> Hi Herbert, Ted,
>
> The venerable Linux /dev/random served users of cryptographic mechanisms well
> for a long time. Its behavior is well understood to deliver entropic data. In
> the last years, however, the Linux /dev/random showed signs of age where it 
> has
> challenges to cope with modern computing environments ranging from tiny 
> embedded
> systems, over new hardware resources such as SSDs, up to massive parallel
> systems as well as virtualized environments.
>
> With the experience gained during numerous studies of /dev/random, entropy
> assessments of different noise source designs and assessing entropy behavior 
> in
> virtual machines and other special environments, I felt to do something about
> it.
> I developed a different approach, which I call Linux Random Number Generator
> (LRNG) to collect entropy within the Linux kernel. The main improvements
> compared to the legacy /dev/random is to provide sufficient entropy during 
> boot
> time as well as in virtual environments and when using SSDs. A secondary 
> design
> goal is to limit the impact of the entropy collection on massive parallel
> systems and also allow the use accelerated cryptographic primitives. Also, all
> steps of the entropic data processing are testable. Finally massive 
> performance
> improvements are visible at /dev/urandom / get_random_bytes.

[quote from pdf]
> ... DRBG is “minimally” seeded with 112 bits of entropy.
> This is commonly achieved even before user space is initiated.

Unfortunately one of the issues of the /dev/urandom interface is the
fact that it may start providing random numbers even before the
seeding is complete. From the above quote, I understand that this
issue is not addressed by the new interface. That's a serious
limitation (present in the current implementation and inherited by the new one),
since most/all newly deployed systems from "cloud" images generate
keys using /dev/urandom (for sshd for example) on boot, and it is
unknown to these applications whether they operate with uninitialized
seed.

While one could argue for using /dev/random, the unpredictability of
the delay it incurs is prohibitive for any practical use. Thus I'd
expect any new interface to provide a better /dev/urandom, by ensuring
that the kernel seed buffer is fully seeded prior to switching to
userspace.

About the rest of the design, I think it is quite clean. I think the
DRBG choice is quite natural given the NIST recommendations, but have
you considered using a stream cipher instead like chacha20 which in
most of cases it would outperform the DRBG based on AES?

regards,
Nikos


[RFC][PATCH 0/6] /dev/random - a new approach

2016-04-21 Thread Stephan Mueller
Hi Herbert, Ted,

The venerable Linux /dev/random has served users of cryptographic mechanisms well
for a long time. Its behavior is well understood to deliver entropic data. In
recent years, however, the Linux /dev/random has shown signs of age: it struggles
to cope with modern computing environments ranging from tiny embedded systems,
over new hardware resources such as SSDs, up to massively parallel systems as
well as virtualized environments.

With the experience gained during numerous studies of /dev/random, entropy
assessments of different noise source designs, and assessments of entropy behavior
in virtual machines and other special environments, I felt compelled to do
something about it.

I developed a different approach, which I call the Linux Random Number Generator
(LRNG), to collect entropy within the Linux kernel. The main improvements
compared to the legacy /dev/random are to provide sufficient entropy during boot
time as well as in virtual environments and when using SSDs. A secondary design
goal is to limit the impact of the entropy collection on massively parallel
systems and also to allow the use of accelerated cryptographic primitives. Also,
all steps of the entropic data processing are testable. Finally, massive
performance improvements are visible at /dev/urandom / get_random_bytes.

The design and implementation is driven by a set of goals described in [1]
that the LRNG completely implements. Furthermore, [1] includes a
comparison with RNG design suggestions such as SP800-90B, SP800-90C, and
AIS20/31.

Please find in [1] the full design discussion covering qualitative assessments
of the entropy collection and entropy flow. Furthermore, a full testing of the
data collection and data processing is performed. The testing focuses on the
calculation of different types of minimum entropy values of raw noise data.
All used test code and supportive tools are provided with [2]. The testing
is concluded with a comparison to the legacy /dev/random implementation
regarding performance and delivery time of entropic random data.

To support a proper review of the code without interfering with the current
functionality, the attached patches add the LRNG to the cryptodev-2.6 tree as
an option. The patches do not replace or even alter the legacy /dev/random
implementation but allow the user to enable the LRNG at compile time. If it is
enabled, the legacy /dev/random implementation is not compiled. On the other
hand, if LRNG support is disabled, the legacy /dev/random code is compiled
unchanged. This approach shows that the LRNG is API- and ABI-compatible with
the legacy implementation.
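The userspace contract that both implementations must honor identically can be
spot-checked from a shell. The following is a minimal sketch of such a check,
not part of the patch set:

```shell
#!/bin/sh
# Spot-check the unchanged userspace interface: both /dev/random and
# /dev/urandom must exist as character devices and return data on read.
for dev in /dev/random /dev/urandom; do
    [ -c "$dev" ] || { echo "missing character device $dev"; exit 1; }
done
# Read 16 bytes and render them as 32 hex digits.
SAMPLE=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')
echo "16-byte urandom sample: $SAMPLE"
```

Any compile-time choice between the two implementations must be invisible to
such a consumer.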

Stability tests were executed on 64- and 32-bit systems: a test KVM guest with
4 vCPUs on 4 hyperthreads compiled the Linux kernel with make -j4 over and
over for half a day. In addition, parallel instances of
cat /dev/urandom > /dev/null were exercised for a couple of hours. Also,
stability tests generating 500 million interrupts were performed.
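The parallel-read part of the stress test can be sketched as follows. The
sketch is bounded so that it terminates; the original instances ran unbounded
for hours.

```shell
#!/bin/sh
# Bounded sketch of the parallel `cat /dev/urandom > /dev/null` stress:
# several concurrent readers drain the character device at full speed.
NREADERS=4
PER_READER_MB=16
for i in $(seq 1 "$NREADERS"); do
    dd if=/dev/urandom of=/dev/null bs=1M count="$PER_READER_MB" 2>/dev/null &
done
wait                                    # all readers must finish cleanly
echo "$NREADERS readers each drained $PER_READER_MB MiB"
```

Running several readers concurrently exercises the locking on the output side
of the RNG, which is exactly where a massively parallel workload would hurt.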

[1] http://www.chronox.de/lrng/doc/lrng.pdf

[2] http://www.chronox.de/lrng.html

Stephan Mueller (6):
  crypto: DRBG - externalize DRBG functions for LRNG
  random: conditionally compile code depending on LRNG
  crypto: Linux Random Number Generator
  crypto: LRNG - enable compile
  crypto: LRNG - hook LRNG into interrupt handler
  hyperv IRQ handler: trigger LRNG

 crypto/Kconfig         |   10 +
 crypto/Makefile        |    1 +
 crypto/drbg.c          |   11 +-
 crypto/lrng.c          | 1803 ++++++++++++++++++++++++++++++++++++++++++++
 drivers/char/random.c  |    8 +
 drivers/hv/vmbus_drv.c |    3 +
 include/crypto/drbg.h  |    7 +
 include/linux/genhd.h  |    5 +
 include/linux/random.h |    8 +
 kernel/irq/handle.c    |    1 +
 10 files changed, 1851 insertions(+), 6 deletions(-)
 create mode 100644 crypto/lrng.c

-- 
2.5.5


Ciao
Stephan

