>> protecting data on stolen laptops is a short term problem

Laptop drives have an expected lifetime of about 5 years. On the other hand, disk drives could also be used for data archival, although that is (currently) a much smaller business. In that case we do need AES-256, but speculations about its security over 20 years might be completely off.
>> AES may not resist related key attacks.

We have two HW engines in an encrypting disk drive: AES and the Galois multiplier. I have a bad feeling about using a cipher to randomize its own keys (any slight non-randomness could be amplified). This is why I proposed using the very different multiplier for whitening key sequences; but if we can agree on using a disk ID, we won't need any key randomizer: the tweak values would already be different.

>> An adversary who somehow managed to get ahold of one of the derived keys
>> should not be able to compute any other of the keys

Good point. Accordingly, we should not use derived (sequentially encrypted) keys at all. Using the (public) disk ID to differentiate between drives does not help, either. In the standard, or in the accompanying white paper (or best-practices document), we should state that, even in an array, each disk drive must be assigned at least one of its keys independently at random.

>> EME* essentially does two passes of ECB mode AES, plus three extra AES calls

This means that a parallel implementation can perform all of the ECB-mode AES calls in the top and bottom rows (a latency of 2), plus two of the extra AES operations in the leftmost column (Figure 2 of the EME* paper). We have a total delay of 4 AES operations, plus change. In the case of XCB the question boils down to how parallelizable the GHASH operation is. Because it is a hash function, it is basically sequential, isn't it? This would give the advantage to EME*, unless there is a parallel version of the function h in XCB. For a sequential implementation the speed relation depends on the speed of AES vs. a GHASH block operation. A GHASH step can be implemented faster in HW, can't it? That would make the sequential XCB faster.
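For concreteness, here is a rough Python sketch of the GHASH recurrence, with the reduction constant and bit ordering taken from the GCM specification (an illustration only, not an optimized or hardware-accurate implementation). The serial form shows the chained dependency I mean; the second function computes the same value via the expanded sum of X_i * H^(n-i+1), which is how a wide or pipelined implementation could regain parallelism once the powers of H are precomputed.

```python
# Rough sketch of GHASH over GF(2^128); R and the bit ordering follow the
# GCM specification, with blocks given as 128-bit integers (big-endian).
R = 0xE1000000000000000000000000000000

def gf128_mul(x, y):
    """Multiply two field elements, bit by bit (slow, for illustration only)."""
    z, v = 0, y
    for i in range(128):
        if (x >> (127 - i)) & 1:
            z ^= v
        v = (v >> 1) ^ R if v & 1 else v >> 1
    return z

def ghash_serial(h, blocks):
    """Y_i = (Y_{i-1} xor X_i) * H: every step needs the previous one."""
    y = 0
    for x in blocks:
        y = gf128_mul(y ^ x, h)
    return y

def ghash_parallel(h, blocks):
    """Same result via Y_n = xor of X_i * H^(n-i+1): once the powers of H
    are precomputed, the per-block multiplications are independent."""
    n = len(blocks)
    hpow = [None, h]                      # hpow[k] = H^k
    for _ in range(n - 1):
        hpow.append(gf128_mul(hpow[-1], h))
    y = 0
    for i, x in enumerate(blocks):
        y ^= gf128_mul(x, hpow[n - i])
    return y

if __name__ == "__main__":
    import os
    h = int.from_bytes(os.urandom(16), "big")
    xs = [int.from_bytes(os.urandom(16), "big") for _ in range(8)]
    assert ghash_serial(h, xs) == ghash_parallel(h, xs)
```

Whether a parallel GHASH of this kind is affordable in drive HW is, I think, the real question.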
How about the royalties?

Laszlo

> -------- Original Message --------
> Subject: Re: p1619 (disk): ciphertext-stealing, tweak-mapping, other
> From: David McGrew <[EMAIL PROTECTED]>
> Date: Wed, December 21, 2005 9:19 am
> To: [EMAIL PROTECTED]
> Cc: SISWG <[EMAIL PROTECTED]>
>
> Hi Laszlo,
>
> a few comments inline:
>
> On Dec 20, 2005, at 3:26 PM, [EMAIL PROTECTED] wrote:
>
> >>> Didn't we mandate 256-bit AES?
> > We should not. The most important application (in hundreds of million
> > disk drives a year, a $10 billion business) is protection of data on
> > laptop drives. These drives must consume little power, must be cheap,
> > so the difference in HW complexity precludes anything but AES-128.
>
> that's an interesting point: protecting data on stolen laptops is a
> short term problem (and is probably a good business motivation).
> This is in contrast to the "protect it for 50 years" requirement that
> one usually has in data storage encryption.
>
> >>> if you need several keys, make them all independent (or derive
> >>> them using a strong pseudo-random generator).
> > You did not comment on my real proposal of using AES keys K1, K1+K2,
> > K1+K2^2, ..., if you want to differentiate between disk drives. K2^k
> > looks strong enough pseudorandom. For each drive, we need to compute
> > this key only at boot-up, so any iterated hash function looks OK, does
> > not it (also Hask(k))? In any case, a unique drive ID as part of I
> > would be preferable.
>
> When deriving one key from another, we need to use a pseudorandom
> function (either AES or HMAC, basically). This is the only accepted
> crypto practice.
>
> Other proposals that don't use pseudorandom key derivation, like
> using K, K+1, K+2, ... or K, K^2, ..., all have potential
> vulnerabilities:
>
> 1. AES may not resist related key attacks. This is not just a
> theoretical point. The AES key schedule itself is not strongly
> pseudorandom, that is, the round keys are derived from the block
> cipher key using a method that is not really pseudorandom.
>
> 2. An adversary who somehow managed to get ahold of one of the
> derived keys should not be able to compute any other of the keys in
> the system. With a weak key derivation function, an adversary could
> compute all of those keys.
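For reference, a minimal sketch of the kind of PRF-based derivation David means here: HMAC keyed by a secret master key, applied to the public drive ID. The label string, the choice of SHA-256, and the output length are arbitrary choices for this illustration, not anything the draft specifies.

```python
# Illustrative only (not from the draft): derive a per-drive key from a
# secret master key with a PRF, here HMAC-SHA-256 in a simple counter mode.
import hashlib
import hmac

def derive_drive_key(master_key: bytes, drive_id: bytes, key_len: int = 32) -> bytes:
    """Return key_len pseudorandom bytes bound to this (public) drive ID.

    The label "p1619-drive-key" and the output length are arbitrary
    choices made only for this sketch.
    """
    out = b""
    counter = 1
    while len(out) < key_len:
        msg = b"p1619-drive-key|" + counter.to_bytes(4, "big") + b"|" + drive_id
        out += hmac.new(master_key, msg, hashlib.sha256).digest()
        counter += 1
    return out[:key_len]

if __name__ == "__main__":
    master = bytes(32)                     # stand-in; use a random secret key
    k1 = derive_drive_key(master, b"drive-serial-0001")
    k2 = derive_drive_key(master, b"drive-serial-0002")
    assert k1 != k2 and len(k1) == 32
```

With a PRF in the middle, learning one derived key reveals nothing about the master key or about any other drive's key, which is the property point 2 asks for.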
> >>> an extension to EME that works on arbitrary-size strings. I
> >>> called it EME*.
> > Could you compare it to XCB or ABL? It looks like a parallel
> > implementation of EME* has 4 cipher operations in the critical path
> > (plus two XORs), a serial implementation takes 4(n/128)+3 AES
> > operations.
>
> IIRC, EME* essentially does two passes of ECB mode AES, plus three
> extra AES calls, and a bunch of cheaper operations. About half of
> those operations are on the critical path, since all of the data has
> to be processed before any of the output can be computed. XCB
> essentially does a pass of AES counter mode and two passes of GHASH
> (the GCM hash function), plus two extra block cipher calls. One pass
> of CTR and one of GHASH are on the critical path. ABL does two
> passes of AES CTR and two of GHASH.
>
> Shai, please correct me if need be.
>
> David
>
> > Also, I might have missed the licensing status. Could you
> > update us?
> >
> >>> more or less random access to compartments, but sequential access
> >>> to blocks within a compartment
> > The drive index is a constant ID, handling it separately would allow
> > optimizations. LBA is random access, the cipher block index is
> > sequential, which again allows optimizations. These are three different
> > types of data, so making three compartments makes sense. The standard
> > does not have to specify how the 64-bit unique drive index is created:
> > it can be the disk ID; assigned by the controller; the SCSI ID; a
> > random number; or a sequential number. Even uniqueness is not an
> > absolute requirement. Hard decisions are only needed about where to
> > partition the I bits.
> >
> >>> if you have two of them, you can do different processing to the
> >>> sequential index and the random-access index
> > You can do different processing for each part of the I field, as well.
> > For notebook drives, an extra 128-bit key and an extra Galois
> > multiplication for each LBA access could be prohibitive.
> >
> > Laszlo
> >
> >> -------- Original Message --------
> >> Subject: Re: p1619 (disk): ciphertext-stealing, tweak-mapping, other
> >> From: Shai Halevi <[EMAIL PROTECTED]>
> >> Date: Tue, December 20, 2005 2:48 pm
> >> To: SISWG <[EMAIL PROTECTED]>
> >>
> >>>>> You should not be using K+1, K+2, etc, lest you run into
> >>>>> related-key attacks.
> >>>
> >>> I am not aware of any weakness of LRW in this regard. I was
> >>> proposing to use related K1 (AES) keys. [...]
> >>
> >> Using the keys in this manner "voids the warranty" of AES. No
> >> attacks on the full AES that use this particular form are known to
> >> date, but modifying the keys in this fashion opens very powerful
> >> new avenues for cryptanalysts. We don't want to do this.
> >>
> >> The general rule-of-thumb for crypto thingies: if you need several
> >> keys, make them all independent (or derive them using a strong
> >> pseudo-random generator).
> >>
> >>> [...] I have not seen a
> >>> modification of EME, which supports 520-bit LBA's.
> >>
> >> I have a paper from about a year ago that describes an extension to
> >> EME that works on arbitrary-size strings. I called it EME*. See
> >> http://eprint.iacr.org/2004/125/
> >>
> >>> Is not it {K1,K2} = 32-byte? Only with AES-256 it would go up to
> >>> 48-byte.
> >>
> >> Didn't we mandate 256-bit AES? I forget..
> >>
> >>>>> If they use several different keys, then all the 48 bytes have
> >>>>> to be chosen "at random and independently" for each of these
> >>>>> different keys.
> >>>
> >>> Do you know a specific threat, or does only the security proof
> >>> require this?
> >>
> >> The proof requires it, as does "general crypto hygiene".
> >>
> >>>>> introduce the notion of "compartments" within the key-scope,
> >>>>> and use exactly two indexes, namely the compartment index and
> >>>>> the index of the block within the compartment
> >>>
> >>> Why not use three compartments: Drive_Index, LBA,
> >>> Cipherblock_within_LBA?
> >>
> >> The reason for two (rather than one) is that they are used
> >> differently: you expect more or less random access to compartments,
> >> but sequential access to blocks within a compartment. In this regard
> >> there is no difference between the drive and the LBA index: in both
> >> cases you do not expect sequential access, so you might as well pack
> >> them both in one "compartment index".
> >>
> >> Although this standard is specifically for disk, I still think that
> >> there is a wide variety of disks and virtualized disks out there,
> >> and I prefer not to tie the standard too much to one of them. So I
> >> prefer that the part of the standard that specifies the transform
> >> would be stated using "technology neutral" terms (such as
> >> "compartments").
> >>
> >> We can have a different part of the standard specifying how to map
> >> the information that is available to the encrypting device in
> >> various settings into these two indexes. (Presumably you then need
> >> to talk separately about SCSI, ATA, RAID-controller, and many many
> >> other different technologies.)
> >>
> >>>>> A different proposal is to use two 128-bit values for K2
> >>>
> >>> We seem to have enough bits in I for partitioning: 64-bit drive ID,
> >>> 48-bit LBA (up to 140,737 TB disks with 512-byte LBAs), 16-bit
> >>> cipher-block index, so one K2 key looks sufficient.
> >>
> >> The point was that if you have two of them, you can do different
> >> processing to the sequential index and the random-access index (i.e.,
> >> use K2a*compartment-index and K2b*2^{block-index}). This makes it
> >> easier to optimize in some environments.
> >>
> >> -- Shai
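To spell out the optimization in Shai's last paragraph: if the per-block tweak is T_j = (K2a * compartment-index) xor (K2b * 2^j), then the random-access product costs one full GF(2^128) multiplication per compartment, while the sequential part advances by a single doubling per block. Combining the two products by xor, and the bit ordering used below, are assumptions of this sketch, not something the draft fixes.

```python
# Sketch only: LRW-style tweaks with the two-key split described above.
# GF(2^128) is taken modulo x^128 + x^7 + x^2 + x + 1 with the least
# significant bit as the coefficient of x^0; the draft's exact bit and
# byte conventions may differ.
MASK = (1 << 128) - 1

def gf_mul(a, b):
    """Full 128 x 128 bit multiply with reduction (one per compartment)."""
    p = 0
    for _ in range(128):
        if b & 1:
            p ^= a
        b >>= 1
        carry = a >> 127
        a = (a << 1) & MASK
        if carry:
            a ^= 0x87            # x^128 = x^7 + x^2 + x + 1
    return p

def double(t):
    """Multiply by x ("2"): one shift and a conditional xor (one per block)."""
    carry = t >> 127
    t = (t << 1) & MASK
    return t ^ 0x87 if carry else t

def tweaks(k2a, k2b, compartment_index, n_blocks):
    """T_j = (K2a * compartment_index) xor (K2b * 2^j), j = 0..n_blocks-1."""
    base = gf_mul(k2a, compartment_index)   # random-access part: full multiply
    t = k2b                                 # sequential part: K2b * 2^0
    out = []
    for _ in range(n_blocks):
        out.append(base ^ t)
        t = double(t)                       # advance to K2b * 2^(j+1)
    return out

if __name__ == "__main__":
    import os
    k2a = int.from_bytes(os.urandom(16), "big")
    k2b = int.from_bytes(os.urandom(16), "big")
    ts = tweaks(k2a, k2b, compartment_index=0x1234, n_blocks=4)
    for j, t in enumerate(ts):
        assert t == (gf_mul(k2a, 0x1234) ^ gf_mul(k2b, 1 << j))
```

For a notebook drive this would mean one multiplier operation per LBA access plus one shift-and-conditional-xor per cipher block, which is the kind of trade-off discussed above.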