Commit 4af712e8df ("random32: add prandom_reseed_late() and call when nonblocking pool becomes initialized") has added a late reseed stage that happens as soon as the nonblocking pool is marked as initialized.
This fails in the case that the nonblocking pool gets initialized during
__prandom_reseed()'s call to get_random_bytes(). In that case we'd double
back into __prandom_reseed() in an attempt to do a late reseed -
deadlocking on 'lock' early on in the boot process. Instead, just avoid
even waiting to do a reseed if a reseed is already occurring.

Signed-off-by: Sasha Levin <sasha.le...@oracle.com>
---
 lib/random32.c | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/lib/random32.c b/lib/random32.c
index 1e5b2df..b59da12 100644
--- a/lib/random32.c
+++ b/lib/random32.c
@@ -241,14 +241,27 @@ static void __prandom_reseed(bool late)
 {
 	int i;
 	unsigned long flags;
-	static bool latch = false;
+	static bool latch = false, reseeding = false;
 	static DEFINE_SPINLOCK(lock);
 
+	/*
+	 * Asking for random bytes might result in bytes getting
+	 * moved into the nonblocking pool and thus marking it
+	 * as initialized. In this case we would double back into
+	 * this function and attempt to do a late reseed.
+	 * Ignore the pointless attempt to reseed again if we're
+	 * already waiting for bytes when the nonblocking pool
+	 * got initialized
+	 */
+	if (reseeding)
+		return;
+
 	/* only allow initial seeding (late == false) once */
 	spin_lock_irqsave(&lock, flags);
 	if (latch && !late)
 		goto out;
 	latch = true;
+	reseeding = true;
 
 	for_each_possible_cpu(i) {
 		struct rnd_state *state = &per_cpu(net_rand_state,i);
@@ -263,6 +276,7 @@ static void __prandom_reseed(bool late)
 		prandom_warmup(state);
 	}
 out:
+	reseeding = false;
 	spin_unlock_irqrestore(&lock, flags);
 }
--
1.8.3.2