Re: Re: Two ideas for random number generation

2002-04-29 Thread Jim Choate


On Wed, 24 Apr 2002 [EMAIL PROTECTED] wrote:

> That is, to get the infinite cycle, you'd have to have some method of
> generating a uniform random integer 0 to infinity for the
> initial state, and you'd need an infinite amount of memory
> to store  the current internal state.  Neither of which
> is achievable in practice.

Actually, neither of these is a requirement for RNGs. Your first
comment about 'some method' tends to shoot your argument in the foot
right off.

As to the second, no, you don't need infinite RAM to create an RNG.
Please demonstrate the proof. What you need is -continuity-, not
-infinity-. They're not the same thing. If all it took was infinite
memory then a standard Turing machine could, by your thesis, generate
an RNG, and we know from basic principles that isn't possible. Why?
Their tape is of infinite length and zero access time.

> Conversely, a PRNG whose cycle is "only" 2^256 bits long
> will never repeat itself during the lifetime of the device, or
> the lifetime of the universe for that matter.

Depends on the key; in general the modulus of the PRNG is key
dependent. This is another weakness of PRNGs.

In addition, the vast majority of PRNGs have a modulus considerably
below 2^256. Most of the ones I've worked with have had modulus
lengths of less than a couple of million.
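The point that cycle length depends on the generator's parameters can be illustrated with a toy linear congruential generator (an illustration only; the specific PRNGs the poster worked with aren't named in the thread):

```python
# Toy LCG: x' = (a*x + c) mod m.  The cycle length depends on the
# parameter choice, not just on the size of the modulus.
def lcg_period(a: int, c: int, m: int, seed: int = 1) -> int:
    seen, x, n = {}, seed, 0
    while x not in seen:
        seen[x] = n
        x = (a * x + c) % m
        n += 1
    return n - seen[x]  # length of the cycle eventually entered

print(lcg_period(5, 3, 16))  # 16: full period (Hull-Dobell conditions hold)
print(lcg_period(4, 3, 16))  # 1: collapses to the fixed point 15
```

Same modulus, very different periods; a key-dependent parameter can silently pick the bad case.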


 --

 The law is applied philosophy and a philosophical system is
 only as valid as its first principles.

 James Patrick Kelly - "Wildlife"

 [EMAIL PROTECTED] www.ssz.com
 [EMAIL PROTECTED]  www.open-forge.org





Re: RE: Re: disk encryption modes (Re: RE: Two ideas for random number generation)

2002-04-27 Thread Joseph Ashwood
- Original Message -
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Saturday, April 27, 2002 12:11 PM
Subject: CDR: RE: Re: disk encryption modes (Re: RE: Two ideas for random number generation)

> Instead of adding 16 bytes to the size of each sector for sector IVs,
> how about having a separate file (which could be stored on a compact
> flash card, CDRW or other portable media) that contains the IVs for
> each disk sector?

Not a very good solution.

> You could effectively wipe the encrypted disk merely by wiping the IV
> file, which would be much faster than securely erasing the entire
> disk.

Actually that wouldn't work, at least not in CBC mode (which is
certainly my preference, and seems to be generally favored for disk
encryption). In CBC mode, not having the IV (setting the IV to 0) only
destroys the first block; after that everything decrypts normally, so
the only wiped portion of the sector is the first block.
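Ashwood's point is easy to verify with a toy CBC chain, using XOR with a constant as a stand-in block cipher (a sketch only, not real encryption): decrypting with the wrong IV garbles just the first block.

```python
K = 0xA5  # toy "block cipher": E(x) = x ^ K (stand-in for AES)

def cbc_encrypt(blocks, iv):
    out, prev = [], iv
    for p in blocks:
        c = (p ^ prev) ^ K          # C[i] = E(P[i] ^ C[i-1])
        out.append(c)
        prev = c
    return out

def cbc_decrypt(blocks, iv):
    out, prev = [], iv
    for c in blocks:
        out.append((c ^ K) ^ prev)  # P[i] = D(C[i]) ^ C[i-1]
        prev = c
    return out

pt = [1, 2, 3, 4]
ct = cbc_encrypt(pt, iv=0x3C)
# Decrypting with IV = 0 (the "wiped IV file" case) loses only block 0:
garbled = cbc_decrypt(ct, iv=0)
assert garbled[0] != pt[0] and garbled[1:] == pt[1:]
```

Every block after the first chains off the previous *ciphertext* block, which the attacker still has, so wiping the IVs erases almost nothing.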

   
> If the IV file was not available, decryption would be impossible even
> if the main encryption key was rubberhosed or otherwise leaked. This
> could be a very desirable feature for the tinfoil-hat-LINUX crowd--as
> long as you have possession of the compact flash card with the IV
> file, an attacker with your laptop isn't going to get far cracking
> your encryption, especially if you have the driver constructed to use
> a dummy IV file on the laptop somewhere after X number of failed
> passphrase entries to provide plausible deniability for the existence
> of the compact flash card.

And then the attacker would just get all of your file except the first
block (assuming the decryption key is found).

> To keep the IV file size reasonable, you might want to encrypt
> logical blocks (1K-8K, depending on disk size, OS, and file system
> used, vs 512 bytes) instead of individual sectors, especially if the
> file system thinks in terms of blocks instead of sectors. I don't see
> the value of encrypting below the granularity of what the OS is ever
> going to write to disk.

That is a possibility, and actually I'm sure it's occurred to the hard
drive manufacturers that the next time they do a full overhaul of the
wire protocol they should enable larger blocks (if they haven't
already; like I said before, I'm not a hard drive person). This would
serve them very well, as they would have to store less information,
increasing the disk size producible per cost (even if not by much,
every penny counts when you sell a billion devices). Regardless, this
could be useful for disk encryption, but assuming the worst case won't
lose us anything in the long run, and should enable the best case to
be done more easily, so for the sake of simplicity, and satisfying the
worst case, I'll keep on calling them sectors until there's a reason
not to.

Joe


RE: Re: disk encryption modes (Re: RE: Two ideas for random number generation)

2002-04-27 Thread JonathanW
Instead of adding 16 bytes to the size of each sector for sector IVs,
how about having a separate file (which could be stored on a compact
flash card, CDRW or other portable media) that contains the IVs for
each disk sector? You could effectively wipe the encrypted disk merely
by wiping the IV file, which would be much faster than securely
erasing the entire disk. If the IV file was not available, decryption
would be impossible even if the main encryption key was rubberhosed or
otherwise leaked. This could be a very desirable feature for the
tinfoil-hat-LINUX crowd--as long as you have possession of the compact
flash card with the IV file, an attacker with your laptop isn't going
to get far cracking your encryption, especially if you have the driver
constructed to use a dummy IV file on the laptop somewhere after X
number of failed passphrase entries to provide plausible deniability
for the existence of the compact flash card.

To keep the IV file size reasonable, you might want to encrypt logical
blocks (1K-8K, depending on disk size, OS, and file system used, vs
512 bytes) instead of individual sectors, especially if the file
system thinks in terms of blocks instead of sectors. I don't see the
value of encrypting below the granularity of what the OS is ever going
to write to disk.
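A back-of-envelope check of the IV-file size supports this, here for a 10 GB disk (the size Adam mentions elsewhere in the thread; figures are illustrative):

```python
# 16-byte IV per encrypted unit; larger logical blocks shrink the
# IV file proportionally.
disk_bytes = 10 * 2**30
for unit in (512, 1024, 4096, 8192):
    iv_file = disk_bytes // unit * 16
    print(unit, iv_file // 2**20, "MiB")
# 512-byte sectors need a 320 MiB IV file; 8K blocks need only 20 MiB.
```

320 MiB won't fit on the era's compact flash cards, while 20-40 MiB will, which is exactly the motivation for encrypting at logical-block granularity.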




Re: Re: disk encryption modes (Re: RE: Two ideas for random number generation)

2002-04-27 Thread Joseph Ashwood

- Original Message -
From: "Adam Back" <[EMAIL PROTECTED]>

> Joseph Ashwood wrote:
> > Actually I was referring to changing the data portion of the block
> > from {data} to {IV, data}
>
> Yes I gathered, but this is what I was referring to when I said not
> possible.  The OSes have 512 bytes ingrained into them.  I think you'd
> have a hard time changing it.  If you _could_ change that magic
> number, that'd be a big win and make the security easy: just pick a
> new CPRNG-generated IV every time you encrypt a block.  (CPRNG based
> on SHA1 or RC4 is pretty fast, or less cryptographic could be
> sufficient depending on threat model.)

From what I've seen of a few OSs there really isn't that much binding
to 512 bytes in the OS per se, but the file system depends on it
completely. Regardless, the logical place, IMO, to change this is at
the disk level, if the drive manufacturers can be convinced to produce
drives that offer 512+16-byte sectors. Once that initial break
happens, all the OSs will play catchup to support the drive; that will
break the hardwiring and give us our extra space. Of course,
convincing the hardware vendors to do this without a substantial
hardware reason will be extremely difficult. On our side, though, is
that I know that hard disks store more than just the data: they also
store a checksum and some sector reassignment information (SCSI drives
are especially good at this; IDE does it under the hood if at all),
and I'm sure there's other information. If this could be expanded by
16 bytes, that'd supply the necessary room. Again, convincing the
vendors to supply this would be a difficult task, and would require
the addition of functionality to the hard drive to either decrypt on
the fly or hand the key over to the driver.

> > Yeah the defragmentation would have to be smart, it can't simply
> > copy the di[s]k block (with the disk block based IV) to a new
> > location.
>
> Well with the sector level encryption, the encryption is below the
> defragmentation so file chunks get decrypted and re-encrypted as
> they're defragmented.
>
> With the file system level stuff the offset is likely logical (file
> offset etc) rather than absolute so you don't mind if the physical
> address changes.  (eg. loopback in a file, or file system APIs on
> windows).

That's true. I was thinking more of something that will for now run in
software and in the future gets pushed down to the hardware, where we
can use a smartcard/USB key/whatever comes out next to feed it the
key. A meta-filesystem would be useful as a short-term measure, but it
still keeps all the keys in system memory where programs can access
them; if we can maintain the option of moving it to hardware later on,
I think that would be a better solution (although also a harder one).

I feel like I'm missing something that'll be obvious once I've found
it. Hmm, maybe there is a halfway decent solution (although not at all
along the same lines). For some reason I was just remembering SANs;
it's a fairly well-known problem to design and build secure file
system protocols (although they don't get used much). So it might
actually be a simpler concept to build a storage area network using
whatever extra-hardened OSs we need, with only the BIOS being
available without a smartcard. Put the smartcard in, the smartcard
itself decrypts/encrypts sector keys (or maybe some larger grouping),
and the SAN host decrypts the rest. Pull out the smartcard, the host
can detect that, flush all caches, and shut itself off. This has some
of the same problems, but at least we're not going to have to design a
hard drive, and since it's a remote file system I believe most OSs
assume very little about sector sizes. Of course, as far as I'm
concerned this should still be just a stopgap measure until we can
move that entire SAN host inside the client computer.

Now for the biggest question, how do we get Joe Public to actually use this
correctly (take the smart card with them, or even not choose weak
passwords)?
Joe




Re: disk encryption modes (Re: RE: Two ideas for random number generation)

2002-04-27 Thread Adam Back

Joseph Ashwood wrote:
> Adam Back Wrote:
> > > This becomes completely redoable (or if you're willing to sacrifice
> > > a small portion of each block you can even explicitly store the IV).
> >
> > That's typically not practical, not possible, or anyway very
> > undesirable for performance (two disk hits instead of one),
> > reliability (write one without the other and you lose data).
> 
> Actually I was referring to changing the data portion of the block
> from {data} to {IV, data}

Yes I gathered, but this is what I was referring to when I said not
possible.  The OSes have 512 bytes ingrained into them.  I think you'd
have a hard time changing it.  If you _could_ change that magic
number, that'd be a big win and make the security easy: just pick a
new CPRNG-generated IV every time you encrypt a block.  (CPRNG based
on SHA1 or RC4 is pretty fast, or less cryptographic could be
sufficient depending on threat model.)

> placing all the IVs at the head of every read. This of course will
> sacrifice k bits of the data space for little reason.

Security / space trade off with no performance hit (other than needing
to write 7% or 14% more data depending on size of IV) is probably more
desirable than having to doubly encrypt the block and take a 2x cpu
overhead hit.  However as I mentioned I don't think it's practical /
possible due to OS design.

> > Note in the file system level scenario an additional problem is file
> > system journaling, and on-the-fly disk defragmentation -- this can
> > result in the file system intentionally leaving copies of previous or
> > the same plaintexts encrypted with the same key and logical position
> > within a file.
> 
> Yeah the defragmentation would have to be smart, it can't simply copy the
> disk block (with the disk block based IV) to a new location. 

Well with the sector level encryption, the encryption is below the
defragmentation so file chunks get decrypted and re-encrypted as
they're defragmented.

> With the file system level stuff the offset is likely logical (file
offset etc) rather than absolute so you don't mind if the physical
address changes.  (eg. loopback in a file, or file system APIs on
windows).

> > Another approach was Paul Crowley's Mercy cipher which has a 4Kbit
> > block size (= 512 bytes = sector sized).  But it's a new cipher and I
> > think already had some problems, though performance is much better
> > than eg AES with double CBC, and it means you can use ECB mode per
> > block and key derived with a key-derivation function salted by the
> > block-number (the cipher includes such a concept directly in its
> > key-schedule), or CBC mode with an IV derived from the block number
> > and only one block, so you don't get the low-tide mark of edits you
> > get with CBC.
> 
> It's worse than that, there's actually an attack on the cipher. Paul details
> this fairly well on his page about Mercy.

Yes, that's what I was referring to by "already had some problems".

Adam
--
http://www.cypherspace.org/adam/



Re: disk encryption modes (Re: RE: Two ideas for random number generation)

2002-04-27 Thread Joseph Ashwood

- Original Message -
From: "Adam Back" <[EMAIL PROTECTED]>

> On Fri, Apr 26, 2002 at 11:48:11AM -0700, Joseph Ashwood wrote:
> > From: "Bill Stewart" <[EMAIL PROTECTED]>
> > > I've been thinking about a somewhat different but related problem
> > > lately, which is encrypted disk drives.  You could encrypt each
> > > block of the disk with a block cypher using the same key
> > > (presumably in CBC or some similar mode), but that just feels weak.
> >
> > Why does it feel weak? CBC is provably as secure as the block cipher
> > (when used properly), and a disk drive is really no different from
> > many others. Of course you have to perform various gyrations to
> > synchronise everything correctly, but it's doable.
>
> The weakness is not catastrophic, but depending on your threat model
> the attacker may see the ciphertexts from multiple versions of the
> plaintext in the edit, save cycle.

That could be a problem; you pointed out more information in your
other message, but obviously this would have to be dealt with somehow.
I was going to suggest that maybe it would be better to encrypt at the
file level, but this can very often leak more information, and
depending on how you do it, will leak directory structure. There has
to be a better solution.

> > Well it's not all that complicated. Take that same key, and encrypt
> > the disk block number, or address, or anything else.
>
> Performance is often at a premium in disk driver software --
> everything moving to-and-from the disk goes through these drivers.
>
> Encrypt could be slow, encrypt for IV is probably overkill.  IV
> doesn't have to be unique, just different, or relatively random
> depending on the mode.
>
> The performance hit for computing IV depends on the driver type.
>
> Where the driver is encrypting a disk block at a time, then say 512
> bytes (standard smallest disk block size) divided into AES block
> sized chunks of 16 bytes each is 32 encrypts per IV generation.  So
> if IV generation is done with a block encrypt itself that'll slow
> the system down by 3.125% right there.
>
> If the driver is higher level using file-system APIs etc it may have
> to encrypt 1 cipher block size at a time each with a different IV, use
> encrypt to derive IVs in this scenario, and it'll be a 100% slowdown
> (encryption will take twice as long).

That is a good point, of course we could just use the old standby solution,
throw hardware at it. The hardware encrypts at disk (or even disk cache)
speed on the drive, eliminating all issues of this type. Not a particularly
cost-effective solution in many cases, but a reasonable option for others.

> > This becomes completely redoable (or if you're willing to sacrifice
> > a small portion of each block you can even explicitly store the IV).
>
> That's typically not practical, not possible, or anyway very
> undesirable for performance (two disk hits instead of one),
> reliability (write one without the other and you lose data).

Actually I was referring to changing the data portion of the block from
{data}
to
{IV, data}

placing all the IVs at the head of every read. This of course will sacrifice
k bits of the data space for little reason.

> > > I've been thinking that Counter Mode AES sounds good, since it's
> > > easy to find the key for a specific block.  Would it be good
> > > enough just to use
> > >  Hash(Hash(Key, block#))
> > > or some similar function instead of a more conventional crypto
> > > function?
> >
> > Not really, you'd have to change the key every time you write to
> > disk, not exactly a good idea, it makes key distribution a
> > nightmare; stick with CBC for disk encryption.
>
> CBC isn't ideal as described above.  Output feedback modes like OFB
> and CTR are even worse as you can't reuse the IV or the attacker who
> is able to see previous disk image gets XOR of two plaintext versions.
>
> You could encrypt twice (CBC in each direction or something), but that
> will again slow you down by a factor of 2.
>
> Note in the file system level scenario an additional problem is file
> system journaling, and on-the-fly disk defragmentation -- this can
> result in the file system intentionally leaving copies of previous or
> the same plaintexts encrypted with the same key and logical position
> within a file.

Yeah the defragmentation would have to be smart, it can't simply copy
the disk block (with the disk block based IV) to a new location. This
problem disappears in the {IV, data} block type, but that has other
problems that are at least as substantial.

> So it's "easy" if performance is not an issue.

Or if you decide to throw hardware at it.

> Another approach was Paul Crowley's Mercy cipher which has a 4Kbit
> block size (= 512 bytes = sector sized).  But it's a new cipher and I
> think already had some problems, though performance is much better
> than eg AES with double CBC, and it means you can use ECB mode per
> block and key derived with a key-derivation function salted by the
> block-number (the ciph

Re: disk encryption modes (Re: RE: Two ideas for random number generation)

2002-04-26 Thread Adam Back

Right, it sounds like the same approach I alluded to, except I didn't
use a salt -- I just used a fast pseudo-random number generator to
make the IV less structured than using the block number directly.

I did some experiments with a used disk and found that if you use the
block number directly for the IV, with CBC mode the block number and
plaintext difference cancel to result in the same input text to the
block cipher, resulting in the same ciphertext in a fair proportion of
cases (don't have the figures handy, but clearly this
not-insignificant number of collisions represents a leakage about the
plaintext on the disk).
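The cancellation Adam describes is easy to see: CBC feeds IV XOR first-plaintext-block into the cipher, so with IV = block number, any pair of sectors whose first blocks differ by exactly the sector-number difference collide. A minimal sketch (single-integer "blocks" for clarity):

```python
# CBC's first cipher input is IV ^ P[0]; with IV = sector number,
# structured plaintext differences can cancel the IV difference.
def first_cipher_input(sector: int, first_block: int) -> int:
    return sector ^ first_block

# e.g. a counter stored in the first block of consecutive sectors:
assert first_cipher_input(2, 0x10) == first_cipher_input(3, 0x11)
# identical cipher input => identical first ciphertext block, a leak
```

Structured data (counters, sequential headers, zero padding) makes such cancellations common, which matches the collision rate observed on the real disk image.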

With the aforementioned fast pseudo-random number generator I got no
collisions on the disk sample (a 10 Gig disk almost full of Windows
application software and data).

I figure that's good empirical evidence of the soundness of the
approach; however, another glitch may be if you consider that the
attacker can work partly from the inside -- e.g. influencing the
plaintext choice, as well as having read-only access to the ciphertext
-- in this case he could perhaps build up a partial codebook for the
cipher with the disk key, by influencing plaintext choices to create
values equal to the suspected differences between plaintext and
predictable IVs.

How do you salt the random number generator?  Is it resistant to the
above type of attack do you think?

Adam

On Sat, Apr 27, 2002 at 11:19:04AM +1000, Julian Assange wrote:
> > You could encrypt twice (CBC in each direction or something), but that
> > will again slow you down by a factor of 2.
> 
> You can't easily get away with storing the IV, as multiple parts of the IO
> pipe like to see blocks in 2^n form. 
> 
> The approach I take in Rubberhose is to calculate the IV from a
> very fast checksum routine and salt.



Re: disk encryption modes (Re: RE: Two ideas for random number generation)

2002-04-26 Thread Julian Assange

> You could encrypt twice (CBC in each direction or something), but that
> will again slow you down by a factor of 2.

You can't easily get away with storing the IV, as multiple parts of the IO
pipe like to see blocks in 2^n form. 

The approach I take in Rubberhose is to calculate the IV from a
very fast checksum routine and salt.
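A sketch of that construction (crc32 as the "very fast checksum" is my assumption; Rubberhose's actual routine isn't specified here):

```python
import zlib

def sector_iv(salt: bytes, sector: int) -> int:
    # IV = fast checksum over (secret salt || sector number); the
    # salt keeps the IV sequence unpredictable to an outside observer
    return zlib.crc32(salt + sector.to_bytes(8, "big"))

# different sectors, and different salts, give different IVs
assert sector_iv(b"salt", 7) != sector_iv(b"salt", 8)
assert sector_iv(b"s1", 7) != sector_iv(b"s2", 7)
```

The checksum is far cheaper than a block-cipher call per sector, which answers the performance concern, though whether a linear checksum resists the chosen-plaintext codebook attack Adam raises is exactly the open question.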

--
 Julian Assange|If you want to build a ship, don't drum up people
   |together to collect wood or assign them tasks and
 [EMAIL PROTECTED]  |work, but rather teach them to long for the endless
 [EMAIL PROTECTED]  |immensity of the sea. -- Antoine de Saint Exupery



disk encryption modes (Re: RE: Two ideas for random number generation)

2002-04-26 Thread Adam Back

On Fri, Apr 26, 2002 at 11:48:11AM -0700, Joseph Ashwood wrote:
> From: "Bill Stewart" <[EMAIL PROTECTED]>
> > I've been thinking about a somewhat different but related problem lately,
> > which is encrypted disk drives.  You could encrypt each block of the disk
> > with a block cypher using the same key (presumably in CBC or some similar
> > mode), but that just feels weak.
> 
> Why does it feel weak? CBC is provably as secure as the block cipher (when
> used properly), and a disk drive is really no different from many others. Of
> course you have to perform various gyrations to synchronise everything
> correctly, but it's doable.

The weakness is not catastrophic, but depending on your threat model
the attacker may see the ciphertexts from multiple versions of the
plaintext in the edit, save cycle.

This could happen, for example, if the attacker has read access to
your disk, or if the attacker gained temporary physical access to the
machine but didn't have enough resources to install a software trojan,
or, if good disk checksumming is in place, didn't have enough
resources to install a hardware trojan.

> > So you need some kind of generator of pretty-random-looking keys so
> > that each block of the disk gets a different key, or at the very
> > least a different IV for each block of the disk, so in some sense
> > that's a PRNG.  (You definitely need a different key for each block
> > if you're using RC4, but that's only usable for Write-Once media,
> > i.e. boring.)  Obviously you need repeatability, so you can't use a
> > real random number generator.
> 
> > Well it's not all that complicated. Take that same key, and encrypt
> > the disk block number, or address, or anything else. 

Performance is often at a premium in disk driver software --
everything moving to-and-from the disk goes through these drivers.

Encrypt could be slow, encrypt for IV is probably overkill.  IV
doesn't have to be unique, just different, or relatively random
depending on the mode.

The performance hit for computing IV depends on the driver type.

Where the driver is encrypting a disk block at a time, then say 512
bytes (standard smallest disk block size) divided into AES block sized
chunks of 16 bytes each is 32 encrypts per IV generation.  So if IV
generation is done with a block encrypt itself that'll slow the system
down by 3.125% right there.
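The arithmetic behind the 3.125% figure, plus a sketch of "IV generation by block encrypt" in the style later known as ESSIV (SHA-256 stands in for the block cipher here; the names are illustrative):

```python
import hashlib

def sector_iv(iv_key: bytes, sector: int) -> bytes:
    # one extra "block encrypt" worth of work per sector, modeled
    # as a keyed hash of the sector number, truncated to AES size
    return hashlib.sha256(iv_key + sector.to_bytes(8, "big")).digest()[:16]

assert sector_iv(b"k", 0) != sector_iv(b"k", 1)

blocks_per_sector = 512 // 16   # 32 AES blocks per 512-byte sector
print(1 / blocks_per_sector)    # 0.03125 -> the 3.125% overhead
```

One extra cipher-equivalent operation amortized over 32 data blocks is the whole cost, which is why per-sector drivers get off lightly compared to per-cipher-block drivers.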

If the driver is higher level using file-system APIs etc it may have
to encrypt 1 cipher block size at a time each with a different IV, use
encrypt to derive IVs in this scenario, and it'll be a 100% slowdown
(encryption will take twice as long).

> This becomes completely redoable (or if you're willing to sacrifice
> a small portion of each block you can even explicitly store the IV).

That's typically not practical, not possible, or anyway very
undesirable for performance (two disk hits instead of one),
reliability (write one without the other and you lose data).

> > I've been thinking that Counter Mode AES sounds good, since it's
> > easy to find the key for a specific block.  Would it be good enough
> > just to use
> >  Hash(Hash(Key, block#))
> > or some similar function instead of a more conventional crypto function?
> 
> Not really, you'd have to change the key every time you write to
> disk, not exactly a good idea, it makes key distribution a
> nightmare; stick with CBC for disk encryption.

CBC isn't ideal as described above.  Output feedback modes like OFB
and CTR are even worse, as you can't reuse the IV, or an attacker who
is able to see a previous disk image gets the XOR of two plaintext
versions.
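The OFB/CTR hazard is direct to demonstrate: two plaintexts encrypted under the same keystream (i.e., a reused IV) have ciphertexts whose XOR equals the XOR of the plaintexts. A hash-based toy keystream stands in for the cipher here (an illustration, not a real mode):

```python
import hashlib

def keystream(key: bytes, iv: bytes, n: int) -> bytes:
    # toy CTR-style keystream: hash(key || iv || counter)
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + iv + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"old sector data!"   # sector contents before an edit
p2 = b"new sector data!"   # sector contents after the edit
ks = keystream(b"key", b"reused-iv", len(p1))
c1, c2 = xor(p1, ks), xor(p2, ks)
# an attacker holding both disk images learns p1 XOR p2:
assert xor(c1, c2) == xor(p1, p2)
```

With redundant plaintext (text, headers, padding) the XOR of two versions typically reveals both, which is why a fixed per-sector counter IV is unusable for rewritable media in these modes.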

You could encrypt twice (CBC in each direction or something), but that
will again slow you down by a factor of 2.

Note in the file system level scenario an additional problem is file
system journaling, and on-the-fly disk defragmentation -- this can
result in the file system intentionally leaving copies of previous or
the same plaintexts encrypted with the same key and logical position
within a file.

So it's "easy" if performance is not an issue.

Another approach was Paul Crowley's Mercy cipher which has a 4Kbit
block size (= 512 bytes = sector sized).  But it's a new cipher and I
think it already had some problems, though performance is much better
than eg AES with double CBC, and it means you can use ECB mode per
block and a key derived with a key-derivation function salted by the
block-number (the cipher includes such a concept directly in its
key-schedule), or CBC mode with an IV derived from the block number
and only one block, so you don't get the low-tide mark of edits you
get with CBC.

But Mercy as a set of design criteria is very interesting for this
application.
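The per-block key derivation salted by block number that Adam describes could be sketched with HMAC as the KDF (my stand-in; Mercy builds the equivalent into its key schedule):

```python
import hashlib
import hmac

def block_key(master: bytes, block_no: int) -> bytes:
    # derive an independent per-block key, salted by the block number
    return hmac.new(master, block_no.to_bytes(8, "big"),
                    hashlib.sha256).digest()

# neighboring blocks get unrelated keys, so identical sectors no
# longer produce identical ciphertext even in ECB-per-block use
assert block_key(b"master", 0) != block_key(b"master", 1)
```

With a sector-sized block and a per-sector key, a one-bit plaintext change rerandomizes the whole sector's ciphertext, avoiding CBC's low-tide mark entirely.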

Adam
--
http://www.cypherspace.org/adam/




Re: RE: Two ideas for random number generation

2002-04-26 Thread Joseph Ashwood

- Original Message -
From: "Bill Stewart" <[EMAIL PROTECTED]>

> I've been thinking about a somewhat different but related problem
> lately, which is encrypted disk drives.  You could encrypt each block
> of the disk with a block cypher using the same key (presumably in CBC
> or some similar mode), but that just feels weak.

Why does it feel weak? CBC is provably as secure as the block cipher (when
used properly), and a disk drive is really no different from many others. Of
course you have to perform various gyrations to synchronise everything
correctly, but it's doable.

> So you need some kind of generator of pretty-random-looking keys so
> that each block of the disk gets a different key, or at the very
> least a different IV for each block of the disk, so in some sense
> that's a PRNG.  (You definitely need a different key for each block
> if you're using RC4, but that's only usable for Write-Once media,
> i.e. boring.)  Obviously you need repeatability, so you can't use a
> real random number generator.

Well it's not all that complicated. Take that same key, and encrypt
the disk block number, or address, or anything else. This becomes
completely redoable (or if you're willing to sacrifice a small portion
of each block you can even explicitly store the IV).

> I've been thinking that Counter Mode AES sounds good, since it's easy
> to find the key for a specific block.  Would it be good enough just
> to use
>  Hash(Hash(Key, block#))
> or some similar function instead of a more conventional crypto function?

Not really, you'd have to change the key every time you write to disk,
not exactly a good idea, it makes key distribution a nightmare; stick
with CBC for disk encryption.
Joe




Re: Re: Re: Two ideas for random number generation

2002-04-23 Thread jamesd

--
Joseph Ashwood
> > > Because with a pRNG we can sometimes prove very important 
> > > things, while with a RNG we can prove very little (we can't 
> > > even prove that entropy actually exists, let alone that we 
> > > can collect it).

James A. Donald:
> > Don't be silly.  Of course we know that entropy exists, and we 
> > can collect it.
> >
> > If a RNG runs off Johnson noise, then the ability to predict 
> > its output would imply the ability to violate the second law 
> > of thermodynamics.  If it runs off shot noise, then the 
> > ability to predict its output would disprove quantum 
> > mechanics.

Joseph Ashwood
> Actually there are models that fit the universe that are 
> entirely deterministic.

These models are entirely incoherent, and I would summarize them 
as "God knows".   And if these models allowed us to predict the 
outcome of a true RNG, they would not fit the universe.

James A. Donald:
> > > > And if one is implementing a PRNG in software, it is 
> > > > trivial to have lots of internal state (asymptotically 
> > > > approaching one-time pad properties).

Joseph Ashwood
> > > The problem is not having that much internal state, but what 
> > > do you do with it? Currently the best options on that front 
> > > involve using block ciphers in various modes, but this has a 
> > > rather small state,
> >
> > RC4 has 1684 bits of state, which should prove sufficient to 
> > defeat guessing.
>
> And RC4 is far from a good RNG of any type, it's distinguishable 
> from random fairly easily, and unless it's used very carefully 
> it's weak. If one were to try to guess all 1684 bits it would be 
> exceedingly difficult, but to start with, it's only a 
> permutation so the space is much smaller, in addition the state 
> itself has more attacks available

Wrong.  1684 bits of entropy.  Count them.

The state is a permutation of 256 elements, which requires 2048 bits
(256 * 8) to describe, but contains only log2(256!), about 1684 bits,
of entropy -- a 2048-bit description, but 1684 bits of actual entropy,
not 2048.
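The 1684-bit figure is just log2(256!), which is quick to check:

```python
import math

# entropy of a uniformly random permutation of 256 elements
bits = sum(math.log2(k) for k in range(1, 257))  # log2(256!)
print(round(bits))  # 1684
```

(RC4's state also includes the two byte-sized indices i and j, adding 16 bits, but the permutation dominates.)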

--digsig
 James A. Donald
 6YeGpsZR+nOTh/cGwvITnSR3TdzclVpR0+pr3YYQdkG
 MjdAfFTXXtA7qo/FzKsFLPFEYgVQ8bY2lfseYhYX
 4P9O7sqp2z5todA8tcLMmb8wQiZ9lLBz/la5zhU+f




Re: Re: Two ideas for random number generation: Q for Eugene

2002-04-22 Thread gfgs pedo

hi,

I get the point. Thanx for all the replies.

regards, Data.

--- Joseph Ashwood <[EMAIL PROTECTED]> wrote:
> 
> - Original Message -
> From: "gfgs pedo" <[EMAIL PROTECTED]>
> 
> > > > Oh surely you can do better than that - making
> it
> > > hard to guess the seed
> > > > is also clearly a desirable property (and one
> that
> > > the square root "rng"
> > > > does not have).
> > U can choose any arbitrary seed(greater than 100
> bits
> > as he (i forgot who) mentioned earlier.Then
> subject it
> > to the Rabin-Miller test.
> > Since the seed value is a very large number,it
> would
> > be impossible to determine the actual value.The
> > chances the intruder  find the correct seed or the
> > prime number hence generated is practically verly
> low.
> 
> You act like the only possible way to figure it out
> is to guess the initial
> seed. The truth is that the number used leaves a
> substantial amount of
> residue in it's square root, and there are various
> rules that can be applied
> to square roots as well. Since with high likelihood
> you will have a lot of
> small factors but few large ones, it's a reasonable
> beginning to simply
> store the roots of the first many primes, this gives
> you a strong network to
> work from when looking for those leftover
> signatures. With decent likelihood
> the first 2^32 primes would be sufficient for this
> when you choose 100 bit
> numbers, and this attack will be much faster than
> brute force. So while you
> have defeated brute force (no surprise there, brute
> force is easy to defeat)
> you haven't developed a strong enough generation
> sequence to really get much
> of anywhere.
> 
> > > Of course, finding the square root of a 100
> digit
> > > number to a
> > > precision of hundreds of decimal places is a lot
> of
> > > computational
> > > effort for no good reason.
> > Yes the effort is going to be large but why no
> good
> > reason?
> 
> Because it's a broken pRNG that is extremely
> expensive to run. If you want
> a fast pRNG, you look to ciphers in CTR mode or
> stream ciphers; if you want
> one that's provably good, you go to BBS (which is
> probably faster than your
> algorithm anyway). So there's no good reason to
> implement such an algorithm.
> 
> > > BTW, the original poster seemed to be under the
> > > delusion that
> > > a number had to be prime in order for its square
> to
> > > be irrational,
> > > but every integer that is not a perfect square
> has
> > > an irrational
> > > square root (if A and B are mutually prime,
> A^2/B^2
> > > can't be
> > > simplified).
> >
> > Nope ,I'm under no such delusion :)
> 
> Just the delusion that your algorithm was good.
> Joe
> 


__
Do You Yahoo!?
Yahoo! Games - play chess, backgammon, pool and more
http://games.yahoo.com/




Re: Re: Two ideas for random number generation: Q for Eugene

2002-04-22 Thread Joseph Ashwood


- Original Message -
From: "gfgs pedo" <[EMAIL PROTECTED]>

> > > Oh surely you can do better than that - making it
> > hard to guess the seed
> > > is also clearly a desirable property (and one that
> > the square root "rng"
> > > does not have).
> U can choose any arbitrary seed (greater than 100 bits,
> as he (i forgot who) mentioned earlier). Then subject it
> to the Rabin-Miller test.
> Since the seed value is a very large number, it would
> be impossible to determine the actual value. The
> chances that the intruder finds the correct seed, or the
> prime number hence generated, are practically very low.

You act like the only possible way to figure it out is to guess the initial
seed. The truth is that the number used leaves a substantial amount of
residue in its square root, and there are various rules that can be applied
to square roots as well. Since with high likelihood you will have a lot of
small factors but few large ones, a reasonable beginning is simply to store
the roots of the first many primes; this gives you a strong network to work
from when looking for those leftover signatures. With decent likelihood the
first 2^32 primes would be sufficient for this when you choose 100-bit
numbers, and this attack will be much faster than brute force. So while you
have defeated brute force (no surprise there, brute force is easy to defeat),
you haven't developed a strong enough generation sequence to really get much
of anywhere.
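[Editor's sketch: the posts never give code for the square-root "pRNG" being
criticized, so the following is a hypothetical reconstruction of the scheme
as described -- emit the fractional digits of sqrt(seed) as output. The
weakness discussed above is that sqrt(a*b) = sqrt(a)*sqrt(b), so small prime
factors of the seed leave recognizable structure in those digits.]

```python
# Hypothetical reconstruction of the square-root "pRNG" under discussion
# (not from the original posts). Output is fully determined by the seed,
# and the seed's small prime factors leak structure into the digits.
from decimal import Decimal, getcontext

def sqrt_prng_digits(seed: int, ndigits: int = 50) -> str:
    """Return `ndigits` fractional digits of sqrt(seed) as the "random" output."""
    # Enough working precision for the integer part plus guard digits.
    getcontext().prec = ndigits + len(str(seed)) + 10
    root = Decimal(seed).sqrt()
    frac = str(root).split(".")[1]  # fractional digits only
    return frac[:ndigits]

digits = sqrt_prng_digits(2**100 + 277)  # arbitrary 101-bit non-square seed
print(digits)
```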

> > Of course, finding the square root of a 100 digit
> > number to a
> > precision of hundreds of decimal places is a lot of
> > computational
> > effort for no good reason.
> Yes the effort is going to be large but why no good
> reason?

Because it's a broken pRNG that is extremely expensive to run. If you want
a fast pRNG, you look to ciphers in CTR mode or stream ciphers; if you want
one that's provably good, you go to BBS (which is probably faster than your
algorithm anyway). So there's no good reason to implement such an algorithm.
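[Editor's sketch: the BBS generator mentioned above is simple enough to show
in a few lines. The primes here are deliberately tiny demo values; a real
Blum Blum Shub modulus uses two large secret primes p, q, each congruent to
3 mod 4.]

```python
# Minimal Blum Blum Shub sketch: x -> x^2 mod n, emit the low bit.
# p = 499 and q = 547 are toy Blum primes (both are 3 mod 4); a real
# deployment would use large secret primes.
def bbs_bits(seed: int, nbits: int, p: int = 499, q: int = 547) -> list:
    n = p * q                # Blum integer
    x = (seed * seed) % n    # square the seed to land on a quadratic residue
    out = []
    for _ in range(nbits):
        x = (x * x) % n      # one squaring per output bit
        out.append(x & 1)    # emit the least significant bit
    return out

print(bbs_bits(123456, 16))
```

The per-bit modular squaring is why BBS is slow in practice, but its security
reduces to the hardness of factoring n, which is the "provably good" property
referred to above.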

> > BTW, the original poster seemed to be under the
> > delusion that
> > a number had to be prime in order for its square to
> > be irrational,
> > but every integer that is not a perfect square has
> > an irrational
> > square root (if A and B are mutually prime, A^2/B^2
> > can't be
> > simplified).
>
> Nope ,I'm under no such delusion :)

Just the delusion that your algorithm was good.
Joe




Re: Re: Two ideas for random number generation

2002-04-22 Thread Joseph Ashwood

- Original Message -
From: "Eugen Leitl" <[EMAIL PROTECTED]>

> On Mon, 22 Apr 2002, Tim May wrote:
>
> > What real-life examples can you name where Gbit rates of random digits
> > are actually needed?
>
> Multimedia streams, routers. If I want to secure a near-future 10 GBit
> Ethernet stream with a symmetric cypher for the duration of a few years
> (periodic rekeying from a RNG might help?) I need both lots of internal
> state (the PRNG can't help leaking information about its state in the
> cypher stream, though the rate of leakage is the function of smarts of the
> attacker) and a high data rate.

Actually that's not necessarily the case. Let's use your example of a
multimedia stream server that is filling a 10 Gbit/s connection. Right now
the current minimum seems to be 56 kbit/s, so the link carries roughly
178,000 simultaneous streams. That means that even if every available
connection is taken in the same second, the server would only need a rate
of about 23 million bits/sec from its RNG to build a 128-bit key for each.
A good design for this has the client doing most of the random number
choosing, where the only purpose of the server's random number is to prevent
the client from biasing the result, so 128 bits is more than sufficient. So
roughly 23 Mbit/sec seems to be the peak for that. Finding situations where
a decent design will yield a need for an RNG to run at about 1 Gbit/sec is
extremely difficult. With poor designs it's actually rather easy: take an
RNG that is poor enough (or a situation where that is a basic assumption)
that it has to be distilled to one billionth of its size; supporting that
multimedia stream server would then require roughly 23 million Gbit/sec.
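[Editor's note: a quick back-of-the-envelope check of the rates above. The
56 kbit/s minimum stream and the 128-bit server contribution are the post's
assumptions, not measured figures.]

```python
# Rate check for the 10 Gbit/s multimedia-server example.
link_rate = 10e9      # 10 Gbit/s server link
stream_rate = 56e3    # 56 kbit/s minimum per client stream
key_bits = 128        # server-side random contribution per connection

connections = link_rate / stream_rate   # simultaneous streams
rng_rate = connections * key_bits       # bits/s if every stream keys up in one second
print(f"{connections:,.0f} connections -> {rng_rate / 1e6:.1f} Mbit/s of RNG output")

# If a poor raw source must be distilled to one billionth of its size:
raw_rate_bits = rng_rate * 1e9
print(f"raw source needed: {raw_rate_bits / 1e9:,.0f} Gbit/s")
```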

> > In any case, if someone wants Gbits per second of random numbers,
> > it'll cost 'em, as it should. Not something I think we need to worry
> > much about.
>
> Maybe, but it's neat trying to see how the constraints of 2d and 3d layout
> of cells, signal TOF and fanout issues influence PRNG design if lots of
> state bits and a high data rate are involved. It is not very useful right
> now, agreed.

I think it would be a good process to go through to develop a design for
one, or at least a basic outline for how it could be done. The basic idea
that comes to mind looks a lot like /dev/random, but run in parallel,
collecting from several sources, including a custom hardware pool similar to
the Intel RNG.
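[Editor's sketch: a toy version of the /dev/random-style design described
above -- several collector sources stirred into one pool, read out through a
hash so the raw pool state is never exposed. This is illustrative only, not
the Linux implementation; the class and method names are made up.]

```python
# Toy entropy pool: stir in samples from several sources, read output
# through SHA-256, and ratchet the pool forward after every read.
import hashlib
import os
import time

class EntropyPool:
    def __init__(self):
        self._pool = b"\x00" * 32

    def stir(self, sample: bytes) -> None:
        # Fold a new sample into the pool state.
        self._pool = hashlib.sha256(self._pool + sample).digest()

    def read(self, nbytes: int) -> bytes:
        out = b""
        counter = 0
        while len(out) < nbytes:
            out += hashlib.sha256(self._pool + counter.to_bytes(8, "big")).digest()
            counter += 1
        self.stir(out)  # ratchet so past output can't be reproduced
        return out[:nbytes]

pool = EntropyPool()
# Parallel collectors would each call stir(); two stand-in sources here:
for sample in (os.urandom(16), time.time_ns().to_bytes(8, "big")):
    pool.stir(sample)
print(pool.read(32).hex())
```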
Joe




Re: Re: Two ideas for random number generation

2002-04-21 Thread Joseph Ashwood


- Original Message -
From: <[EMAIL PROTECTED]>
To: "Tim May" <[EMAIL PROTECTED]>; "Eugen Leitl" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Sunday, April 21, 2002 1:33 PM
Subject: CDR: Re: Two ideas for random number generation


> Why would one want to implement a PRNG in silicon, when one can
> easily implement a real RNG in silicon?

Because with a pRNG we can sometimes prove very important things, while with
an RNG we can prove very little (we can't even prove that entropy actually
exists, let alone that we can collect it).

> And if one is implementing a PRNG in software, it is trivial to
> have lots of internal state (asymptotically approaching one-time
> pad properties).

The problem is not having that much internal state, but what you do with
it. Currently the best options on that front involve using block ciphers in
various modes; this gives a rather small state, but again we can quite often
prove things about the construct.
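[Editor's sketch: the counter-mode construction referred to above, with
SHA-256 standing in for a block cipher such as AES so the example needs only
the standard library. Note how small the internal state is -- just a key and
a counter -- which is the trade-off the paragraph describes.]

```python
# Counter-mode generator sketch: output block i is prf(key, i).
# A real design would use a block cipher for the PRF; SHA-256 is an
# illustrative stand-in here.
import hashlib

def ctr_prng(key: bytes, nblocks: int):
    """Yield 32-byte output blocks derived from (key, counter)."""
    for counter in range(nblocks):
        yield hashlib.sha256(key + counter.to_bytes(16, "big")).digest()

stream = b"".join(ctr_prng(b"\x01" * 32, 4))
print(len(stream), stream[:8].hex())
```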
Joe