> (b) The "LBA to I mapping" is not quite the right question to discuss,
>   it should be "tweak to T mapping". (Recall that T is what you get when
>   multiplying I by Key_2.)
> A different proposal is to use two 128-bit values for K2...

Gents,

I have to say we need a reality check at this point. There is a real danger that the standard is going to get unnecessarily complicated.

I personally am quite happy with the T = K2.I computation; there is absolutely nothing to be gained in security or implementation by splitting this into multiple keys K2a, K2b - that just makes things harder, since implementations with ONLY hardware multipliers have to switch keys, which can be time-consuming.
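For reference, the T = K2.I computation is just a carry-less multiply reduced modulo the GF(2^128) polynomial. A minimal software sketch (purely illustrative - a real implementation would use tables or a hardware multiplier, and the bit/byte ordering of I is exactly the open question in point (a) below):

```python
# Illustrative sketch only: schoolbook GF(2^128) multiply, T = K2 . I,
# reduced modulo x^128 + x^7 + x^2 + x + 1.
POLY_LOW = 0x87  # x^7 + x^2 + x + 1, the terms below x^128

def gf128_mul(k2: int, i: int) -> int:
    # carry-less ("XOR") multiply of two 128-bit operands
    t = 0
    for bit in range(128):
        if (i >> bit) & 1:
            t ^= k2 << bit
    # fold the up-to-255-bit product back into 128 bits
    for bit in range(254, 127, -1):
        if (t >> bit) & 1:
            t ^= (1 << bit) | (POLY_LOW << (bit - 128))
    return t
```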

There is also nothing wrong with the current use of incrementing I values for sequential cipher blocks within a sector (and preferably between contiguous power-of-two-sized sectors too). Changing to K2.(1<<j) is just silly; an optimised implementation using tables would have to store 32 K2 partial products rather than the 5 currently needed (1, 3, 7, 15, 31).
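To make the point concrete: because the GF multiplication distributes over XOR, stepping from I to I+1 only costs one XOR with a precomputed partial product, since I ^ (I+1) is always of the form 2^k - 1, i.e. one of 1, 3, 7, 15, 31 within a 32-block (512-byte) sector. A hedged sketch (function and variable names are mine, not the draft's):

```python
# Sketch of the incremental tweak update the current scheme allows.
# Within a 512-byte sector (32 cipher blocks), I ^ (I+1) only takes
# the values 1, 3, 7, 15, 31, so five K2 partial products suffice.

def gf128_mul(a: int, b: int) -> int:
    # minimal GF(2^128) multiply, polynomial x^128 + x^7 + x^2 + x + 1
    t = 0
    for bit in range(128):
        if (b >> bit) & 1:
            t ^= a << bit
    for bit in range(254, 127, -1):
        if (t >> bit) & 1:
            t ^= (1 << bit) | (0x87 << (bit - 128))
    return t

def make_partials(k2: int) -> list:
    # partials[k] = K2 . (2^(k+1) - 1), i.e. K2 times 1, 3, 7, 15, 31
    return [gf128_mul(k2, (1 << (k + 1)) - 1) for k in range(5)]

def next_tweak(t: int, i: int, partials: list) -> int:
    # T(I+1) = T(I) xor K2.(I ^ (I+1)); the difference is 2^k - 1
    diff = i ^ (i + 1)
    return t ^ partials[diff.bit_length() - 1]
```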

I am still dubious about allowing I=0 (T = K2.0 = 0, so that block gets no whitening at all); I'll go with the flow if most people agree this is a non-issue. It is trivial to avoid, but not having to avoid it makes implementations slightly simpler.

*** All I was originally looking for was a clarification on the way I is formed: ***

(a) The endianness is unclear with the current wording of §5.2. We now agree that I is defined back to front and that the LS bit of I is on the right, to match the GF polynomial. I suggest we use "i =" where i is the integer value which gets encoded bitwise backwards into I. This can be fixed with a simple rewrite and a couple of nice pictures.
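To illustrate the ambiguity (not to prescribe the fix): one possible reading of "bitwise backwards" is a plain 128-bit bit reversal of the integer i into the block I. This is my reading only - the whole point of item (a) is that the current wording does not pin this down:

```python
# One possible "bitwise backwards" encoding of the integer i into the
# 16-byte block I: a straight 128-bit bit reversal. Illustrative only;
# section 5.2 does not currently specify this.
def encode_I(i: int) -> bytes:
    rev = int(format(i, '0128b')[::-1], 2)
    return rev.to_bytes(16, 'big')
```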

(b) How are the I values assigned within a key scope, across the three scenarios of multiple key scopes and/or drives? This could be considered implementation dependent (since I is not a drive-user-visible value) and does not even need to go in the standard, but I feel some guidance here is necessary.

By careful choice, nasty binary boundaries during contiguous block processing can be avoided - we have already worked out how to do this with minimal effort. I propose that "i" is formed by this recommended (but not mandatory) method:

        i = drive_index<<96 + LBA<<n + (0,1,2...j-1),

        where n = ceil(log2(j)) and j = ceil(sector_size/16) are constants
        computed at format time, and drive_index is in the range 0..65535
        (or starts at 1 to avoid I=0: vendor decision).
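As a sketch, that recommended formation might look like this (names, the j=1 special case, and the overflow check are mine):

```python
import math

def block_index(drive_index: int, lba: int, block: int,
                sector_size: int = 512) -> int:
    # j = cipher blocks per sector, n = bits reserved for the block index
    j = math.ceil(sector_size / 16)               # 32 for 512-byte sectors
    n = math.ceil(math.log2(j)) if j > 1 else 0   # 5 for 512-byte sectors
    assert 0 <= block < j
    assert lba < (1 << (96 - n))   # LBA must not spill into drive_index
    return (drive_index << 96) | (lba << n) | block
```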

For a single drive with one scope, the index defaults to 0 and the LBA can be completely unmodified (apart from the shift offset), and the multiplication is as easy as it will ever get.

For multiple key scopes on a single drive (partitions?), the LBA used in this calculation could be normalised to 0 within the partition/scope, or it could be completely unmodified (easier) - we just need to choose one rule and document it.

For a key scope spanning multiple drives, the index _must_ be varied accordingly. The controller can program each connected drive uniquely at power up to ensure the same I value is never reused within a key scope (the security proof relies on this assumption). The most likely scenario where this is needed is a RAID application, I think.

> We seem to have enough bits in I for partitioning: 64-bit drive ID,
> 48-bit LBA (up to 140,737 TB disks with 512-byte LBAs), 16-bit
> cipher-block index, so one K2 key looks sufficient.

This is just a trivial variant of the proposal above. It's not as friendly because it doesn't allow easy contiguous sector processing (unless you have 64K*16-byte sectors). Maybe the cipher-block-index and LBA alignment I proposed should be mandatory, and the drive_index/drive_ID part recommended. Using a 64-bit drive ID rather than a simple index is slightly more work for no security benefit and should be the vendor's choice.

> In this regard there is no difference between the drive and the LBA index: in 
> both
> cases you do not expect sequential access, so you might as well pack them 
> both in
> one "compartment index".

I disagree that random access within the LBA is the norm. An application may perform lots of contiguous sector accesses, and we don't want to make that inefficient by mandating a bad composition of I.

> Although this standard is specifically for disk, I still think that there
> is a wide variety of disks and virtualized disks out there, and I prefer
> not to tie the standard too much to one of them. So I prefer that the part
> of the standard that specifies the transform would be stated using
> "technology neutral" terms (such as "compartments").

The current standard defines just that - it's called a key scope; we don't need a new term for it. What needs to be made clear is how the key scopes are utilised in some typical scenarios, using examples with familiar terms such as LBA and drive_index to form the block index within the key scope, "i".


Other things:

> that the standard is intended to protect data at rest, and *not* in transit

Agreed. If you want to implement LRW externally to the drive, you must ensure the controller-drive connection is appropriately secure, either physically in a strong box or using an appropriate encapsulating security protocol with non-repeating nonces and frequent re-keying. IPsec or MACsec are appropriate here. Perhaps this can be made clear in the introduction?

>> You did not comment on my real proposal of using AES keys K1, K1+K2,
>> K1+K2^2,..., if you want to differentiate between disk drives.

There are many techniques for generating multiple smaller keys from larger master keying material using well-understood PRFs with nonces (e.g. the drive index). Why not use one?
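For example (illustrative only - HMAC-SHA256 standing in for whatever PRF the group prefers; the label and truncation are my own choices, not from any draft):

```python
import hashlib
import hmac

def derive_k2(master: bytes, drive_index: int) -> bytes:
    # One of many reasonable PRF-based derivations: PRF(master, nonce),
    # with the drive index as the nonce, truncated to a 128-bit K2.
    nonce = b"K2" + drive_index.to_bytes(8, 'big')
    return hmac.new(master, nonce, hashlib.sha256).digest()[:16]
```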

>>Didn't we mandate 256-bit AES?
> We should not. The most important application (in hundreds of million
> disk drives a year, a $10 billion business) is protection of data on
> laptop drives. These drives must consume little power, must be cheap,
> so the difference in HW complexity precludes anything but AES-128.

I totally disagree. AES-256 is only slightly larger and slower than AES-128 in hardware; it's even closer in area provided you don't change the key too frequently (i.e. random access across multiple scopes).

Colin.
