On Thu 2007-12-20 15:36:01, Theodore Tso wrote:
> On Wed, Dec 19, 2007 at 11:18:54PM -0500, Andrew Lutomirski wrote:
> > I understand that there's no way that /dev/random can provide good
> > output if there's insufficient entropy. But it still shouldn't leak
> > arbitrary bits of user data that
Andrew Lutomirski wrote:
No, it's there, and if there's little enough entropy around it can be
recovered by brute force.
A little entropy is enough to prevent a brute force attack. You would
have to have ZERO entropy after a cold boot so the attacker would know
exactly the contents of the
On Fri, Dec 21, 2007 at 11:10:36AM -0500, Andrew Lutomirski wrote:
> > > Step 1: Boot a system without a usable entropy source.
> > > Step 2: add some (predictable) "entropy" from userspace which isn't a
> > > multiple of 4, so up to three extra bytes get added.
> > > Step 3: Read a few bytes of
On Dec 20, 2007 3:17 PM, Phillip Susi <[EMAIL PROTECTED]> wrote:
> Andrew Lutomirski wrote:
> > I understand that there's no way that /dev/random can provide good
> > output if there's insufficient entropy. But it still shouldn't leak
> > arbitrary bits of user data that were never meant to be
On Wed, Dec 19, 2007 at 11:18:54PM -0500, Andrew Lutomirski wrote:
> I understand that there's no way that /dev/random can provide good
> output if there's insufficient entropy. But it still shouldn't leak
> arbitrary bits of user data that were never meant to be put into the
> pool at all.
Andrew Lutomirski wrote:
I understand that there's no way that /dev/random can provide good
output if there's insufficient entropy. But it still shouldn't leak
arbitrary bits of user data that were never meant to be put into the
pool at all.
It doesn't leak it though, it consumes it, and it
On Dec 17, 2007 10:46 PM, Theodore Tso <[EMAIL PROTECTED]> wrote:
> If you have a system with insufficient entropy inputs, you're in
> trouble, of course. There are "catastrophic reseeds" that attempt to
> mitigate some of the worst attacks, but at the end of the day,
> /dev/random isn't magic.
>
I
Theodore Tso wrote:
On Tue, Dec 18, 2007 at 02:39:00PM +1030, David Newall wrote:
Thus, the entropy saved at shutdown can be known at boot-time. (You can
examine the saved entropy on disk.)
If you can examine the saved entropy on disk, you can also introduce a
trojan horse kernel that logs
David Newall wrote:
Theodore Tso wrote:
On Tue, Dec 18, 2007 at 01:43:28PM +1030, David Newall wrote:
On a server, keyboard and mouse are rarely used. As you've described
it, that leaves only the disk, and during the boot process, disk
accesses and timing are somewhat predictable. Whether
> Has anyone *proven* that using uninitialized data this way is safe?
You can probably find dozens of things in the Linux kernel that have not
been proven to be safe. That means nothing.
> As
> a *user* of this stuff, I'm *very* hesitant to trust Linux's RNG when I
> hear things like this.
On Tue, Dec 18, 2007 at 02:39:00PM +1030, David Newall wrote:
>
> Thus, the entropy saved at shutdown can be known at boot-time. (You can
> examine the saved entropy on disk.)
>
If you can examine the saved entropy on disk, you can also introduce a
trojan horse kernel that logs all keystrokes
On Mon, Dec 17, 2007 at 07:52:53PM -0500, Andy Lutomirski wrote:
> It runs on a freshly booted machine (no
> DSA involved, so we're not automatically hosed), so an attacker knows the
> initial pool state.
Not just a freshly booted system. The system has to be a freshly
booted, AND freshly
> The bottom line: At a cost of at most three unpredictable branches
> (whether to clear the bytes in the last word with indices congruent
> to 1, 2, or 3 modulo 4), then the code can reduce the risk from something
> small but positive, to zero. This is very inexpensive insurance.
> John
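The fix John Reiser sketches above can be written down concretely. This is a hypothetical user-space helper, not the kernel's actual code: it zeroes the unwritten bytes of a partially filled last 32-bit word before the buffer would be mixed into the pool, so no stale stack data reaches add_entropy_words().

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical helper sketching Reiser's proposed insurance: clear the
 * bytes in the last word that were never overwritten with fresh data,
 * so stale stack contents cannot be mixed into the pool. */
static void clear_tail_bytes(uint32_t *tmp, size_t bytes)
{
    size_t rem = bytes & 3;          /* bytes actually used in the last word */
    if (rem)                         /* at most one unpredictable branch     */
        memset((uint8_t *)tmp + bytes, 0, 4 - rem);
}
```

With `bytes = 5` on a two-word buffer, bytes 5..7 are zeroed and bytes 0..4 are left untouched.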
On Mon, Dec 17, 2007 at 10:28:38AM -0800, Linus Torvalds wrote:
> [ So Al, when you said that
>
> (a-b)
>
> is equivalent to
>
> ((char *)a-(char *)b)/4
>
> for a "int *" a and b, you're right in the sense that the *result* is
> the same, but the code generation likely
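The equivalence Linus describes can be checked directly. A minimal sketch (hypothetical function names): both forms compute the element distance between two `int` pointers and always agree on the result; the point of the thread is that the `(char *)` form spells out an explicit signed division by 4, so the generated code may differ.

```c
#include <stddef.h>

/* Two ways to compute the element distance between int pointers.
 * The results are identical; only the code generation may differ. */
ptrdiff_t dist_typed(int *a, int *b) { return a - b; }
ptrdiff_t dist_char(int *a, int *b)  { return ((char *)a - (char *)b) / 4; }
```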
On Mon, 17 Dec 2007, Eric Dumazet wrote:
>
> while
>
> long *mid(long *a, long *b)
> {
> return ((a - b) / 2u + a);
> }
This is exactly what I'm talking about. That "2u" is TOTALLY POINTLESS.
It's an "unsigned int", but since (a-b) will be of type ptrdiff_t, and is
*wider* on a
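A sketch of Linus's point, with the argument order adjusted so `a` is the lower address (an assumption, since the snippet is truncated): on a 64-bit target `(b - a)` is already a 64-bit `ptrdiff_t`, wider than `unsigned int`, so writing `2u` instead of `2` changes nothing — the constant is converted to the wider signed type anyway.

```c
/* Midpoint of two long pointers, plain signed divide; the "2u" suffix
 * in the quoted version is pointless because ptrdiff_t is wider than
 * unsigned int on 64-bit targets. (Assumes a is the lower address.) */
long *mid(long *a, long *b)
{
    return (b - a) / 2 + a;
}
```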
Theodore Tso wrote:
On Fri, Dec 14, 2007 at 04:30:08PM -0800, John Reiser wrote:
There is a path that goes from user data into the pool.
Note particularly that the path includes data from other users.
Under the current implementation, anyone who accesses /dev/urandom
is subject to having some
On Mon, Dec 17, 2007 at 08:30:05AM -0800, John Reiser wrote:
[You have yet to show that...]
There is a path that goes from user data into the pool.
Note particularly that the path includes data from other users.
Under the current implementation, anyone who accesses /dev/urandom
is subject
>> There is a path that goes from user data into the pool. This path
>> is subject to manipulation by an attacker, for both reading and
>> writing. Are you going to guarantee that in five years nobody
>> will discover a way to take advantage of it? Five years ago
>> there were no public attacks
On Sat, Dec 15, 2007 at 03:13:19PM +0800, Herbert Xu wrote:
> John Reiser <[EMAIL PROTECTED]> wrote:
> >
> > If speed matters that much, then please recoup 33 cycles on x86
> > by using shifts instead of three divides, such as (gcc 4.1.2):
> >
> > add_entropy_words(r, tmp, (bytes +
On Fri, 14 Dec 2007 23:20:30 PST, Matti Linnanvuori said:
> From: Matti Linnanvuori <[EMAIL PROTECTED]>
>
> /dev/urandom use no uninit bytes, leak no user data
>
> Signed-off-by: Matti Linnanvuori <[EMAIL PROTECTED]>
>
> ---
>
> --- a/drivers/char/random.c 2007-12-15 09:09:37.895414000 +0200
John Reiser <[EMAIL PROTECTED]> wrote:
>
> If speed matters that much, then please recoup 33 cycles on x86
> by using shifts instead of three divides, such as (gcc 4.1.2):
>
> add_entropy_words(r, tmp, (bytes + 3) / 4);
>
> 0x8140689 <xfer_secondary_pool+206>: lea 0x3(%esi),%eax
> 0x814068c: mov
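On the shifts-versus-divides exchange above, a small check is worth making: for an unsigned operand, `/4` and `>>2` produce the same value, and compilers already emit a shift for the unsigned divide. The cycle cost Reiser measured presumably comes from signed operands, which need extra sign-fixup instructions around the shift; this is an inference from the snippet, not something the thread states outright.

```c
/* Hypothetical helpers: both round a byte count up to 32-bit words.
 * For unsigned operands they are identical in value, and the divide
 * compiles to a shift anyway. */
unsigned words_div(unsigned bytes)   { return (bytes + 3) / 4; }
unsigned words_shift(unsigned bytes) { return (bytes + 3) >> 2; }
```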
On Fri, Dec 14, 2007 at 04:30:08PM -0800, John Reiser wrote:
> There is a path that goes from user data into the pool. This path
> is subject to manipulation by an attacker, for both reading and
> writing. Are you going to guarantee that in five years nobody
> will discover a way to take
Theodore Tso wrote:
> On Fri, Dec 14, 2007 at 12:45:23PM -0800, John Reiser wrote:
>
>>>It's getting folded into the random number pool, where it will be
>>>impossible to recover it unless you already know what was in the
>>>pool. And if you know what's in the pool, you've already broken into
On Fri, Dec 14, 2007 at 12:45:23PM -0800, John Reiser wrote:
> > It's getting folded into the random number pool, where it will be
> > impossible to recover it unless you already know what was in the
> > pool. And if you know what's in the pool, you've already broken into
> > the kernel.
>
> The
Matt Mackall wrote:
> Yes, we use uninitialized data. But it's not a leak in any useful
> sense. To the extent the previous data is secret, this actually
> improves our entropy.
>
> It's getting folded into the random number pool, where it will be
> impossible to recover it unless you already
On Fri, Dec 14, 2007 at 11:34:09AM -0800, John Reiser wrote:
> xfer_secondary_pool() in drivers/char/random.c tells add_entropy_words()
> to use uninitialized tmp[] whenever bytes is not a multiple of 4.
> Besides being unfriendly to automated dynamic checkers, this is a
> potential leak of user
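The report quoted above can be illustrated in user space. This is a hypothetical demonstration, not the kernel code: rounding the byte count up to whole 32-bit words, as in `add_entropy_words(r, tmp, (bytes + 3) / 4)`, makes the last word carry up to three bytes that were never overwritten with fresh data.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical demo of the reported behavior: returns the first byte
 * past the freshly written data, which still lies inside the
 * (bytes + 3) / 4 words that would be mixed in when bytes % 4 != 0. */
uint8_t first_stale_byte(size_t bytes)
{
    uint32_t tmp[4];
    memset(tmp, 0xAA, sizeof tmp);   /* stand-in for leftover stack data */

    uint8_t fresh[16] = {0};
    memcpy(tmp, fresh, bytes);       /* only `bytes` bytes are fresh */

    return ((uint8_t *)tmp)[bytes];
}
```

For any `bytes` that is not a multiple of 4, the returned byte is the stale `0xAA`, i.e. data that was never meant to be contributed.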