On 7/12/2016 5:53 PM, Rich Felker wrote:
On Tue, Jul 12, 2016 at 01:09:42PM -0400, Michael Conrad wrote:
On 7/7/2016 11:49 AM, Rob Landley wrote:
On 07/06/2016 11:41 AM, Etienne Champetier wrote:
Now you really hate the fact that getrandom() is a syscall.
I do not hate the fact that getrandom() is a syscall. I'm asking what the
point is of a new applet to call this syscall. You have suggested it could
block until /dev/urandom has been properly seeded and is producing a higher
grade of randomness. That is, as far as I can tell, the only reason for your
new applet to exist rather than upgrading $RANDOM in ash/hush.

Actually, in my opinion the syscall is inferior to a new character
device, because blocking syscalls interfere with event-driven
programming.

Suppose you want to write a single-threaded, event-driven web server
that initializes its SSL library with this randomness source (i.e.
won't allow SSL until enough entropy has been collected for a good
initialization), but you still want it to be able to accept non-SSL
connections.  In order to use the syscall you need a thread or a
child process.  (haha, such as a pipe to a "getrandom" applet...)
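
For illustration, a minimal untested sketch of that child-process
workaround (assuming Linux 3.17+ and a libc that exposes getrandom()
in <sys/random.h>; the function name, buffer size, and error handling
are made up for the example):

#include <sys/random.h>   /* getrandom(); glibc 2.25+ / Linux 3.17+ assumed */
#include <unistd.h>

/* Returns the read end of a pipe that becomes readable (with 32 bytes of
   entropy) once the kernel pool is initialized, so an ordinary poll() loop
   can wait on it like any other fd.  Error handling is simplified. */
static int entropy_pipe(void)
{
    int fds[2];
    if (pipe(fds) < 0)
        return -1;
    if (fork() == 0) {
        unsigned char buf[32];
        close(fds[0]);
        if (getrandom(buf, sizeof buf, 0) == (ssize_t)sizeof buf) /* blocks until seeded */
            write(fds[1], buf, sizeof buf);
        _exit(0);
    }
    close(fds[1]);
    return fds[0];
}
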
Thanks for contributing your ideas about what color the bikeshed
should be, but getrandom() has already got you covered. Until your
CSPRNG is initialized, just call getrandom() with the GRND_NONBLOCK
flag once each time you get any event (i.e. each time poll() returns).
There's no need to busy-wait or to check periodically with timeouts,
since you don't need the results until there's an event to act on anyway.
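
Roughly like this, as a hedged sketch (again assuming <sys/random.h>
is available; the seeded flag and the comment about SSL setup are
illustrative, not any real API):

#include <errno.h>
#include <sys/random.h>   /* glibc 2.25+ assumed; otherwise syscall(SYS_getrandom, ...) */
#include <sys/types.h>

static int seeded;

/* Call once per poll() wakeup, before handling the event. */
static void maybe_seed(void)
{
    unsigned char seed[32];
    ssize_t n;

    if (seeded)
        return;
    n = getrandom(seed, sizeof seed, GRND_NONBLOCK);
    if (n == (ssize_t)sizeof seed) {
        seeded = 1;
        /* pool is initialized: safe to set up the SSL library here */
    } else if (n < 0 && errno != EAGAIN) {
        /* real error (e.g. ENOSYS on an old kernel), not just "not ready yet" */
    }
}
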

Not unless they add a feature to deliver a signal when it becomes unblocked.

And they don't have me covered unless the event library I'm using adds explicit support for it. If I'm using libevent or glib and was asked to write such a program today, I'd probably have to settle for polling with a timer, or a thread.

There are very good reasons it's a syscall rather than a device: many
use cases require a never-fails entropy source, and with the device
node approach they're vulnerable to fd-exhaustion attacks. Most
existing bad code, when faced with such a situation, falls back to
some completely insecure seed like time(). The only reliable way to
prevent such idiocy was to provide an interface that can't fail.
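
As a sketch of what "can't fail" buys you (assuming a kernel that
actually has getrandom(); error handling deliberately simplified):

#include <sys/random.h>
#include <sys/types.h>

/* Fill buf with len bytes of entropy without ever touching an fd, so fd
   exhaustion can't force a fallback to time() or similar. */
static void must_getrandom(unsigned char *buf, size_t len)
{
    while (len) {
        ssize_t n = getrandom(buf, len, 0);
        if (n < 0)
            continue;          /* e.g. EINTR on a large request: just retry */
        buf += n;
        len -= n;
    }
}
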

It's trivial to open /dev/urandom at program start and abort if you can't. Once the FD is open, reads from it will never fail.

The "unix way" is to make things waitable via file handles, or via signals. I have a personal grudge against blocking system calls.

_______________________________________________
busybox mailing list
busybox@busybox.net
http://lists.busybox.net/mailman/listinfo/busybox
