RE: attack on rfc3211 mode (Re: disk encryption modes)

2002-05-27 Thread Lucky Green

Peter wrote:
> Yup.  Actually the no-stored-IV encryption was never designed 
> to be a non- malleable cipher mode, the design goal was to 
> allow encryption-with-IV without having to explicitly store 
> an IV.  For PWRI it has the additional nice feature of 
> avoiding collisions when you use a 64-bit block cipher, which 
> is probably going to be the case for some time to come even 
> with AES around.  It was only after all that that I noticed 
> that the first pass was effectively a CBC-MAC, but it didn't 
> seem important enough to mention it in the RFC since it 
> wasn't an essential property (good thing I didn't :-).
> 
> >With a disk mode, unlike with RFC3211 password based encryption for CMS
> >there is no place to store the structure inside the plaintext which may
> >to some extent defend against this attack.

Here is a partial list of requirements that I believe apply to drive
encryption cryptographic systems. I am sure the list is incomplete and
may contain errors. The following is what springs to mind:

1) I do not believe that there is a fundamental need to limit the size
of the ciphertext to the size of the plain text. Adding a 1% or even
more space overhead for encryption is acceptable under any day-to-day
scenario that I can think of.

2) The algorithm must be able to decrypt individual sectors without
having to decrypt the entire contents of the drive. Nor must the
algorithm leak any plaintext, even if the attacker were to have
knowledge of all but one byte of the plaintext stored on the drive.

3) The encrypted partition should leak no information about the number,
nature, and size of any files stored on the drive. Unless one has access
to the key, the entire partition should appear to the observer as a
homogeneous block of opaque encrypted data.

4) It would be nice, but is not in the least required, to be able to
convert an existing unencrypted partition to an encrypted partition and
back.

5) It must be possible to pass the encryption key as a parameter to
mount, presumably in the form of a config file containing the key, to
prevent the key from showing up in ps.

6) It should be possible to specify either a raw AES key or an AES key
derived from a passphrase via a SHA-2 hash.

7) Since the key will need to be stored in RAM for extended periods of
time, the key should be protected from forensic recovery by never being
swapped to disk as well as by periodic bit-flipping.

8) Ideally, and this is definitely a feature for v2.0, each user would
be able to specify a capability that will permit the listing and access
of any files under that user's permissions on the encrypted file system.

9) While steganographic file systems offering multiple levels of
credible distress keys are nice, I don't consider this a feature that
should be included in v1.0.
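Requirement 6's passphrase path might be sketched as follows. This is only an illustration of the idea, not anything from the thread: the choice of PBKDF2 with HMAC-SHA-256, the iteration count, and the salt handling are all my assumptions (the post only says "passphrase-derived SHA2 generated AES key").

```python
import hashlib

def derive_aes_key(passphrase: str, salt: bytes, iterations: int = 100_000) -> bytes:
    """Derive a 256-bit AES key from a passphrase (PBKDF2-HMAC-SHA256 sketch)."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode("utf-8"),
                               salt, iterations, dklen=32)

# The mount tool could accept either this derived key or a raw 256-bit key.
key = derive_aes_key("correct horse battery staple", b"per-volume-salt")
assert len(key) == 32
```

The salt and iteration count would live alongside the volume so the same key can be re-derived at every mount.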




Re: attack on rfc3211 mode (Re: disk encryption modes)

2002-05-10 Thread Adam Shostack

On Sat, May 11, 2002 at 04:01:11AM +1200, Peter Gutmann wrote:
| General rant: It's amazing that there doesn't seem to be any published research
|   on such a fundamental crypto mechanism, with the result that everyone has to
|   invent their own way of doing it, usually badly.  We don't even have a decent
|   threat model for this, my attempt at one for password-based key wrap may or
|   may not be appropriate (well, I hope it's more or less right), but it's going
|   to be rather different than for a situation where you have an ephemeral
|   symmetric key rather than a fixed, high-value key wrapping another key.  The
|   same problem exists for things like PRFs, we now have PKCS #5v2, but before
|   that everyone had to invent their own PRF for lack of anything useful, with
|   the result that every single protocol which needs a PRF has its own,
|   incompatible, often little-analysed one.
| 
| More specific rant: Looking at the security standards and protocols deployed in
|   the last decade or so, you'd be forgiven for thinking that the only crypto
|   research done in the last 10 years (beyond basic crypto algorithms) was
|   STS/SPEKE and HMAC.  There seems to be this vast gulf between what crypto
|   researchers are working on and what practitioners actually need, so while
|   conferences are full of papers on group key management and anonymous voting
|   schemes and whatnot, people working on real-world implementations have to
|   home-brew their own mechanisms because there's nothing else available.  The
|   RFC 3211 wrap is actually parameterised so you can slip in something better
|   when it becomes available, but I can't see that ever happening because
|   researchers are too busy cranking out yet another secure multiparty
|   distributed computation paper that nobody except other researchers will ever
|   read.
| 
| (Did I miss offending anyone? :-).

The voting folks? ;)

Adam

-- 
"It is seldom that liberty of any kind is lost all at once."
   -Hume




Re: attack on rfc3211 mode (Re: disk encryption modes)

2002-05-10 Thread Peter Gutmann

Adam Back <[EMAIL PROTECTED]> writes:

>I can see that, but the security of CBC MAC relies on the secrecy of the
>ciphertexts leading up to the last block.  In the case of the mode you
>describe in RFC3211, the ciphertexts are not revealed directly but they are
>protected under a mode which has the same splicing attack. The splicing attack
>on "CBC MAC with leading ciphertext" works through CBC encryption, here's how
>that works:

Right.  One minor point, the IV is never zero, for disk encryption it's a
cryptographic transform of the sector number, for PWRI it's supplied by the CMS
algorithm parameter (this doesn't affect the attack).

>I would have thought this would be considered a 'break' of a non-malleable
>cipher mode as discussed for disk encryption where each bit of plaintext
>depends on each bit of ciphertext as would be the case with a secure cipher
>matching Mercy's design goals (a block cipher used in ECB mode with a
>different key per block).

Yup.  Actually the no-stored-IV encryption was never designed to be a non-
malleable cipher mode, the design goal was to allow encryption-with-IV without
having to explicitly store an IV.  For PWRI it has the additional nice feature
of avoiding collisions when you use a 64-bit block cipher, which is probably
going to be the case for some time to come even with AES around.  It was only
after all that that I noticed that the first pass was effectively a CBC-MAC,
but it didn't seem important enough to mention it in the RFC since it wasn't an
essential property (good thing I didn't :-).

>With a disk mode, unlike with RFC3211 password based encryption for CMS there
>is no place to store the structure inside the plaintext which may to some
>extent defend against this attack.

Even with PWRI you get at most ~32 bits of protection, and can bypass even that
if you encrypt 5 or more 64-bit blocks and mess up the second block, since the
garble will propagate to at most 4 of the 5 blocks.  The 32-bit limit was
deliberate, I was worried about dictionary attacks above all else (in fact the
first version of the wrap was a bit too paranoid in that it had no redundancy
at all, which unfortunately meant that it wasn't possible to catch incorrect
passwords. User complaints led to the addition of the 24-bit check value
which is enough to catch virtually all mistyped passwords but not enough to
provide more than a small reduction in the number of guesses necessary for an
attacker.  The length byte is there so an attacker can't perform an iterative
attack where they change the algorithm ID for the wrapped key to 40-bit RC4 and
brute-force it, then 56-bit DES and brute-force the 16-bit difference, then 80-
bit Skipjack, 112-bit two-key 3DES, and finally 128-bit AES.  There was another
key wrap design where this was possible).

Digressing from the original disk-sector-encryption, I'd be interested in some
debate on the requirements for password-based key wrapping.  The design goals
I used were:

  1. Resistance to dictionary/password-guessing attacks above all else.

  2. No need to use additional algorithms like hash algorithms (see the RFC for
 the rationale, to save me typing it all in again).

The reason for 1. is that provided you use a secure cipher the best approach
for an attacker is going to be a dictionary attack or similar attack on the
password used to wrap the key.  Since the wrapping key is going to be used to
protect things like long-term private keys, this is an extremely high-value
item.

Note that the immunity to key-guessing requirement is mutually exclusive with
modification-detection, since (for example) storing a SHA-1 hash would allow
you to immediately verify whether you'd found the password/key.  This is the
exact problem which PKCS #12 has, you don't need to attack the key wrap since
the MAC on the wrapped data is a much easier target.  There's another key wrap
which stores a full SHA-1 hash alongside the wrapped key (to protect against
some problems present in an earlier version of the same key wrap mechanism,
which fell to fairly trivial attacks).  This is great for integrity protection,
but terrible for security, since it allows you to verify with pretty much 100%
accuracy whether you've guessed the password.  I also looked at some sort of
OAEP-like wrapping (or, more generally, a Feistel-like construct of the kind
used in OAEP), but it seemed a bit ad-hoc when used with symmetric key wrap
rather than RSA.

Does anyone have any thoughts on symmetric key wrap, and specifically the
differences between high-value and low-value (ephemeral) key wrap requirements?

General rant: It's amazing that there doesn't seem to be any published research
  on such a fundamental crypto mechanism, with the result that everyone has to
  invent their own way of doing it, usually badly.  We don't even have a decent
  threat model for this, my attempt at one for password-based key wrap may or
  may not be appropriate (well, I hope it's more or less right), but it's going
  to be rather different than for a situation where you have an ephemeral
  symmetric key rather than a fixed, high-value key wrapping another key.

Re: Re: disk encryption modes

2002-05-01 Thread Joseph Ashwood

- Original Message -
From: "Morlock Elloi" <[EMAIL PROTECTED]>

> Collision means same plaintext to the same ciphertext.

Actually all it means in this case is the same ciphertext; since the key is
the same it of course carries back to the plaintext, but that is irrelevant
at this point. The critical fact is that the ciphertexts are the same.

> The collision happens on
> the cypher block basis, not on disk block basis.

The only one that matters is the beginning of the disk block, since that is
what was being detected.

> This has nothing to do with practical security.

It has everything to do with practical security. This collision of headers
leaks information, that leak is what I highlighted.

> You imply more than *hundred thousand* of identical-header word *docs* on
> the same disk and then that identifying several of these as potential word
> docs is a serious leak.

What I said was that given a significant number of documents with identical
headers (I selected Word documents because business men generally have a lot
of them), it will be possible to detect a reasonable percentage of them
fairly easily. I never implied, much less stated that there would be 100,000
of these, I stated that there is somewhere on the order of 100,000
possibilities for collision (80,000 is close enough, even 50,000 can
sometimes be considered to be on the same order).

The ability to identify that document X and document Y are word documents
may in fact be a serious leak under some circumstances, including where the
data path has been tracked. To steal an example from the current news, if HP
and Compaq had trusted the cryptography, and their messages (but not the
contents) had been traced, and linked, there would have been a substantial
prior knowledge of something big happening, which would have meant an
opportunity for someone to perform insider trading without any evidence of
it. This encryption mode poses a significant, real security threat in
realistic situations.
Joe




RE: disk encryption modes

2002-04-29 Thread JonathanW

With a 4096 byte cluster size, 1 GB of drive space would require 4 MB temporary key file storage. At this ratio, a 128 MB compact flash card could hold a key file for 32 GB of hard drive space. The key file could be stored on the same physical drive if you wanted to do so, but putting it on separate, and easily microwaveable media gives you the "wipe all the data without touching the actual hard drive" capability. If you trust the reliability of the storage hardware, you could send the main drive the encrypted data and the temporary keyfile drive the temp key data concurrently and let the drive buffering do its magic without a major performance hit. Reliability would be a significant issue, since losing keyfile data would mean the loss of a proportionally larger amount of data on the main storage device. If operational reliability is really super-important, having 2 copies of the key file on separate CDRW's would up the warm-and-fuzzy factor, but require the destruction of both CD's or CF cards or whatever to securely destroy the data.
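The sizing arithmetic above checks out (using binary units, which is what makes the 4 MB-per-GB figure exact):

```python
CLUSTER = 4096            # bytes per cluster, as in the proposal
KEY_HALF = 16             # 128-bit temporary key half stored per cluster

clusters_per_gib = 2**30 // CLUSTER           # 262,144 clusters in 1 GB
keyfile_per_gib = clusters_per_gib * KEY_HALF
assert keyfile_per_gib == 4 * 2**20           # 4 MB of key data per GB

card = 128 * 2**20                            # a 128 MB compact flash card
assert card // keyfile_per_gib == 32          # covers 32 GB of hard drive

overhead = KEY_HALF / CLUSTER                 # keyfile vs. stored data
assert round(overhead * 100, 1) == 0.4        # the ~0.4% figure quoted below
```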

The main feature I was going for was the ability to give a semi-trusted third party out of the reach of your local men-with-guns the ability to irrevocably destroy your data in an emergency, without giving the third party any of your actual data. If the "I need you to destroy the keyfile NOW" signal was automatically sent to the third party after N failed login attempts by the encryption driver (by writing a pre-arranged random value to a pre-arranged random section of the key file) you wouldn't even have to be conscious. And your (semi) trusted third party could have a similar arrangement with you, to covertly warn you if he was compromised. This design is intended primarily for applications where complete loss of the data is less dire than disclosure of the data to the wrong parties. For these applications, security considerations would probably be more important than absolute cutting-edge performance. but since the keyfile data would be about 0.4% of the actual stored data, I think it could be done reasonably reliably without a noticeable performance hit.

One real-world application that comes to mind for this idea is encryption for a corporate laptop computer. The laptop has an encrypted partition containing the sensitive corporate data, and the keyfile for that partition is stored at corporate HQ. In order for the encrypted partition to be accessed, the laptop has to have a live connection to corporate HQ. Even if this connection was a 33.6 kilobit dialup, you could still encrypt and decrypt at over 800 kilobytes per second, which is fast enough to open up most files in a reasonable amount of time. (The laptop/HQ connection would need to be end-to-end encrypted and authenticated to prevent an attacker from gradually acquiring the keyfile.) If the laptop is stolen, the thief gets none of the encrypted data, and runs the risk of having the computer tattle on his location via caller ID, GPS, or other means when it phones home. You could also use this concept for pay-per-view digital content, but of course it doesn't address the unsolvable issue of once the consumer has decrypted the content, how to make them play nice with it and not redistribute it.

-Original Message-
From: Bill Stewart [mailto:[EMAIL PROTECTED]]
Sent: Monday, April 29, 2002 2:16 AM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: Re: disk encryption modes




Re: disk encryption modes

2002-04-29 Thread Bill Stewart

At 01:13 AM 04/29/2002 -0700, [EMAIL PROTECTED] wrote:
>  [each cluster has 128 bits permanent half-key, 128 bits nonce half-key...]
>  are for the second cluster, and so on. Each time a disk cluster is 
> written to, a new temporary half-key is pulled from the (P)RNG and used 
> to encrypt the cluster data, and then is stored in the temporary key 
> file. When a cluster is read, the appropriate temporary key half is read 
> from the temporary key file, combined with the permanent key half, and 
> then the data is decrypted.

At least it's big enough to prevent searches through the space.
But it not only requires managing the extra key-file (which could be pretty
large, and needs to be kept somewhere, apparently not in the same file
system), it potentially requires two disk reads per block instead of just
one, which is a major performance hit unless you're good at predictive
caching, and more seriously it requires two writes that both succeed.
If you write the key first and don't write out the block, you can't decrypt
the old block that was there, while if you write the block first and don't
succeed in writing the key, you can't decrypt the new block.
This makes depending on caching writes much more difficult - it's already
one of the things that helps make systems fast and either reliable or
unreliable, and you've made it tougher as well as requiring two disk spins.
You can get some relief using non-volatile memory (the way the Legato
Prestoserve did for NFS acceleration - first cache the write in
battery-backed RAM, send your ACK, and then write the block out to disk),
but that's hardware.

It's cute, though...





Re: Re: disk encryption modes

2002-04-29 Thread JonathanW

Here is a technique for encrypting a hard disk that should provide reasonable performance, good security, and be easy to render the entire disk unreadable in an emergency.

1. Start with a good (P)RNG. Seed it constantly with radioactive decay noise, digitized samples of monkeys farting into your sound card, keystroke data, mouse squeaks, your favorite hardware RNG's, etc. Hash and whiten to your heart's content, just make sure it can output a few hundred KB/second of data cryptographically indistinguishable from "random" (an attacker having access to the entirety of the output of this device since it started has no more than a .5 probability of determining any future bit of the output).

2. Each disk cluster is encrypted individually. (On my 100 GB NTFS drive the cluster size is 4096 bytes. Different drive sizes under different file systems may have different cluster sizes. For clarity's sake, I will stick with the 4K cluster size.) Encryption can be done with any cipher that can accept a 256 bit key; you can use a block cipher (in a suitable feedback mode) or a stream cipher. The first 128 bits of each block key are the master disk encryption key (a hash of a passphrase or some such, hereafter called the "permanent" key half), and the other 128 bits are the randomest bits you can obtain from the aforementioned (P)RNG whenever a cluster is written to (hereafter referred to as the "temporary" half). The temporary bits of the key are stored in a separate file which can be on a CDRW disc, compact flash card, etc. The format of this file is simple: the first 16 bytes of the file are the temporary 128 bits of the key for the first cluster of the disk, the next 16 bytes are for the second cluster, and so on. Each time a disk cluster is written to, a new temporary half-key is pulled from the (P)RNG and used to encrypt the cluster data, and then is stored in the temporary key file. When a cluster is read, the appropriate temporary key half is read from the temporary key file, combined with the permanent key half, and then the data is decrypted.
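A minimal sketch of that read/write path, with the keyfile layout described above. The "cipher" here is a toy SHA-256 counter-mode keystream standing in for whatever real 256-bit cipher you'd pick (the scheme allows a block or stream cipher); file handles and names are illustrative only.

```python
import hashlib
import os

CLUSTER = 4096   # bytes per cluster
HALF = 16        # 128-bit key halves

def keystream(key: bytes, n: int) -> bytes:
    # Toy stream cipher: SHA-256 in counter mode.  A stand-in only.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def write_cluster(disk, keyfile, idx, permanent, data, rng=os.urandom):
    temporary = rng(HALF)                          # fresh 128-bit half per write
    keyfile.seek(idx * HALF); keyfile.write(temporary)
    key = permanent + temporary                    # combined 256-bit cluster key
    ct = bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))
    disk.seek(idx * CLUSTER); disk.write(ct)

def read_cluster(disk, keyfile, idx, permanent, n=CLUSTER):
    keyfile.seek(idx * HALF); temporary = keyfile.read(HALF)
    disk.seek(idx * CLUSTER); ct = disk.read(n)
    key = permanent + temporary
    return bytes(a ^ b for a, b in zip(ct, keystream(key, len(ct))))
```

Rewriting the same plaintext to the same cluster produces completely different ciphertext, which is exactly advantage 1 below.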

Here are the advantages I see with this technique:


1. If you edit a sensitive file and save several versions of it, no 2 versions of the file, or even any 2 4K sections of the file, will be encrypted with the same key, so an attacker will not have many instances of similar ciphertexts as obvious targets for attack.

2. If you need to destroy the encrypted data quickly, and have the temporary key file on separate media, (like a CDRW) the temporary key file can be destroyed quickly (microwave the CDRW until extra crispy) thereby rendering the encrypted data unrecoverable even if the main passphrase is rubberhosed out of someone. Imaginative encryption driver design could have several temporary key files; a real one and several dummies, so that an attacker could be confused as to which file was real until the real one had already been destroyed. The temporary key file could also be located in a remote location (preferably somewhere with no extradition treaty with your jurisdiction) if you can find a party there who would be trusted to cut off access to, and securely destroy the real temporary key file (They could continue to provide access to a bogus one) if a certain signal was received. "If I ever tell you to write value X to block Y of the key file, assume I have been arrested and burn the CD the real key file lives on..." If you wanted to get really fancy you could use secret splitting or RAID techniques where the temporary key file is split into X pieces, and Y number of pieces are needed to reconstitute the entire file. You can use whatever values of X and Y you need to satisfy operational reliability requirements and your paranoia level.
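The secret-splitting variant mentioned above is, in its simplest form, plain XOR splitting. Note this sketch is only the degenerate n-of-n case (all shares required); a true Y-of-X threshold as described would need something like Shamir's scheme instead.

```python
import os

def xor_split(secret: bytes, n: int):
    # n-of-n XOR split: every share is needed; any n-1 shares reveal nothing.
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    last = secret
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def xor_join(shares):
    out = bytes(len(shares[0]))
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out
```

Destroying any one share (microwave one CDRW) destroys the keyfile as surely as destroying them all.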

Comments, nits to pick? 





Re: disk encryption modes

2002-04-28 Thread Peter Gutmann

Adam Back <[EMAIL PROTECTED]> writes:

>So if you mean the approach in RFC 3211 you referenced below:
>
>|Key wrapping:
>| 
>|   1. Encrypt the padded key using the KEK.
>| 
>|   2. Without resetting the IV (that is, using the last ciphertext
>|  block as the IV), encrypt the encrypted padded key a second
>|  time.
>| 
>|The resulting double-encrypted data is the EncryptedKey.
>| 
>| 2.3.2 Key Unwrap
>| 
>|Key unwrapping:
>| 
>|   1. Using the n-1'th ciphertext block as the IV, decrypt the n'th
>|  ciphertext block.
>| 
>|   2. Using the decrypted n'th ciphertext block as the IV, decrypt
>|  the 1st ... n-1'th ciphertext blocks.  This strips the outer
>|  layer of encryption.
>| 
>|   3. Decrypt the inner layer of encryption using the KEK.
>
>are you sure it's not vulnerable to splicing attacks (swapping ciphertext 
>blocks around to get a partial plaintext change which recovers after a block or 
>two)?  CBC mode has this property, and this mode seems more like CBC in CBC 
>than a CBC-MACed CBC-encrypted message -- there can't be a MAC property as such 
>because there is nowhere to store one, so the best you could hope for is each
>byte of plaintext depends on each byte of ciphertext, and this is the property 
>I'm questioning based on the usual CBC splicing attacks.

It is a CBC MAC.  A CBC MAC encrypts n blocks and then takes the final output as 
the MAC.  Now look at where the IV for the second pass comes from.  It's a nice 
trick, because it works without any data expansion.
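The two-pass wrap quoted above can be sketched end to end. A toy 4-round Feistel network stands in for the real block cipher here (it is not secure, just invertible), and RFC 3211's padding, check value, and length byte are omitted; only the double-CBC structure is shown, assuming at least two blocks.

```python
import hashlib

BS = 16  # toy cipher block size in bytes

def _F(key, half, rnd):
    return hashlib.sha256(key + bytes([rnd]) + half).digest()[:8]

def enc_block(key, b):
    # Toy 4-round Feistel, a stand-in for a real block cipher.
    L, R = b[:8], b[8:]
    for i in range(4):
        L, R = R, bytes(x ^ y for x, y in zip(L, _F(key, R, i)))
    return L + R

def dec_block(key, b):
    L, R = b[:8], b[8:]
    for i in reversed(range(4)):
        L, R = bytes(x ^ y for x, y in zip(R, _F(key, L, i))), L
    return L + R

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_enc(key, iv, data):
    out, prev = [], iv
    for i in range(0, len(data), BS):
        prev = enc_block(key, xor(data[i:i+BS], prev))
        out.append(prev)
    return b"".join(out)

def cbc_dec(key, iv, data):
    out, prev = [], iv
    for i in range(0, len(data), BS):
        blk = data[i:i+BS]
        out.append(xor(dec_block(key, blk), prev))
        prev = blk
    return b"".join(out)

def wrap(kek, iv, padded_key):
    inner = cbc_enc(kek, iv, padded_key)     # 1. encrypt the padded key
    return cbc_enc(kek, inner[-BS:], inner)  # 2. re-encrypt, IV = last ct block

def unwrap(kek, iv, wrapped):
    # 1. decrypt the n'th block using the n-1'th ciphertext block as the IV
    inner_last = xor(dec_block(kek, wrapped[-BS:]), wrapped[-2*BS:-BS])
    # 2. decrypt blocks 1..n-1 using the decrypted n'th block as the IV
    inner = cbc_dec(kek, inner_last, wrapped[:-BS]) + inner_last
    # 3. strip the inner layer with the KEK and the original IV
    return cbc_dec(kek, iv, inner)
```

The first pass ends in exactly the value a CBC-MAC would produce, and feeding it back as the second pass's IV makes every output byte depend on every input byte with no data expansion.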

>What is Colin's design and where is it described?

Originally in a sci.crypt post of about 10 years ago, more recently (which isn't 
saying much, it was years ago) in the docs for SFS, 
http://www.cs.auckland.ac.nz/~pgut001/sfs/index.html.  Note that this is the 
speed-customised version, the version which uses two passes of encryption is in 
RFC 3211.

Peter.




Re: disk encryption modes

2002-04-28 Thread Morlock Elloi

> > > considering that people notice a matter of < 10%, people are going to
> >
> > Where can we observe those, pray tell ?
> 
> A great many place, but probably the place it's most easily noticed is on
> the freeway (where it's also a matter of speed). When was the last time you
> saw a number of people (there will of course be 1 occassionally) that
> voluntarily does 10% under the speed limit? Very often (at least everywhere

This is unrelated to computer storage use. 14% slower and smaller disk is a
non-issue. It's like having 26 Gb disk instead of 30 Gb or 3 MB/s throughput
instead of 3.4. Who cares.

> > Can you provide example in real file system terms where 1 in 65536 leaks
> > anything ?
> 
> Sure, current disks are 80 GB or larger for a decent sized setup. Since
> we've got 512 KB blocks that leaves 152587 blocks on the disk. If each block
> has a 1 in 65536 chance of collision, there will be somewhere in the
> neighborhood of 10 collisions on the disk. Since on the average
> businessman's computer there will be an enormous quantity of Word DOC files,
> all of which have the same header (standard Word header followed by standard

Collision means same plaintext to the same ciphertext. The collision happens on
the cypher block basis, not on disk block basis. 80 Gb drive has around 5
billion 128-bit blocks, and if all were the same plaintext there would be about
80 thousand same ones of each of possible 65536 128-bit values.

This has nothing to do with practical security.

You imply more than *hundred thousand* of identical-header word *docs* on the
same disk and then that identifying several of these as potential word docs is
a serious leak.







Re: disk encryption modes

2002-04-28 Thread Morlock Elloi

> considering that people notice a matter of < 10%, people are going to

Where can we observe those, pray tell ?


>> Probability of the same plaintext encrypting to the same cyphertext is 1 in
>> 65536.
> 
> Which is no where near useful. 1 in 65536 is trivial in cryptographic terms,

Can you provide example in real file system terms where 1 in 65536 leaks
anything ?








Re: disk encryption modes

2002-04-27 Thread Peter Gutmann

Nomen Nescio <[EMAIL PROTECTED]> writes:

>Peter Gutmann and Colin Plumb invented a simple trick which provides this
>property in conjunction with CBC or CFB modes.  We're going to encrypt/decrypt
>a disk block, which is divided into "packets" which are the cipher block size
>(64 or 128 bits).  Let P[j] mean the jth packet of plaintext in the block;
>assume there are n packets in the block. C[j] is the jth packet of ciphertext.
>
>Encryption
>
>P[n] ^= Hash(P[1..n-1], blocknum, diskID)
>Encrypt block using P[n] as IV
>
>Decryption
>
>Decrypt C[2..n] using C[1] as IV
>Decrypt C[1] using P[n] as IV
>P[n] ^= Hash(P[1..n-1], blocknum, diskID)
>
>The idea is to use P[n] as IV, first xoring in some function (doesn't have to
>be a crypto hash) of the first n-1 packets, the block number, and some disk-
>specific value.  Doing this ensures that the IV depends on all the bytes in
>the block, so a change to any byte will change the entire block.  The
>decryption doesn't have to be split into n-1 and 1 block parts as shown here,
>any division will do.

Actually it's more general than that, the reason the original implementation
used such a simple first pass was because it had to run on things like 386/25s
which weren't too fast doing two passes of encryption.  For anything more
recent, you'd do two passes of encryption, the first to build the IV and the
second to encrypt with it.  This in effect means that the first pass is a MAC
of the data (alongside encrypting it) and the second pass is pure encryption.
This gives you some nice provable security properties which Phil Rogaway and
Mihir Bellare have done some work with (see
http://www.cs.ucdavis.edu/~rogaway/papers/index.html and
http://www-cse.ucsd.edu/users/mihir/papers.html for some publications).  This
is the approach used in RFC 3211, which I'd regard as the proper (unconstrained
by having to run on a slow CPU) way to use the technique.

(I'm kinda surprised that this issue keeps coming up again and again, Colin's
 design is a general-purpose solution which works with any block cipher.  It's
 a solved problem, and has been so for about a decade).

While I'm on the topic, I'd also like to question the implicit assumption that
speed is the principal design target.  Having a lot of experience with disk
crypto on (by current standards) very slow machines, I must say that speed has
never been a problem for me.  Right now I'm doing software-based 3DES and I
can't even notice that the drive is encrypted.  Now if you're encrypting swap
and your system is thrashing I can see that you'd notice, but for the average
user who barely accesses the disk and who's running a modern OS which does
sensible buffering/caching, it really isn't an issue.  Far better to
concentrate on flexibility and security than to drop everything so you can
chase after this single red herring.

Peter.




Re: Re: disk encryption modes

2002-04-27 Thread Joseph Ashwood

- Original Message -
From: "Morlock Elloi" <[EMAIL PROTECTED]>
> > There's no need to go to great lengths to find a place to store the IV.
>
> Wouldn't it be much simpler (having in mind the low cost of storage), to
> simply append several random bits to the plaintext before ECB encryption
> and discard them upon decryption ?
>
> For, say, 128-bit block cipher and 16-bit padding (112-bit plaintext and
> 16-bit random fill) the storage requirement is increased 14% but each
> block is completely independent, no IV is used at all, and as far as I can
> see all pitfalls of ECB are done away with.

The bigger problem is that you're cutting drive performance by 14%,
considering that people notice a matter of < 10%, people are going to
complain, and economically this will be a flop. A drive setup like this
would be worse than useless, it would give the impression that encryption
must come at the cost of speed. Designing this into a current system would
set the goal of encryption everywhere back.

> Probability of the same plaintext encrypting to the same cyphertext is 1
in
> 65536.

Which is nowhere near useful. 1 in 65536 is trivial in cryptographic terms,
especially when compared to 1 in approximately 2^128 (roughly 3.4 x 10^38).
Additionally you'll be sacrificing _more_ of the sector to what amounts to
an IV, and in exchange you'll be decreasing security. If instead in that
512KB block you take up 128 bits, you'll only lose about 0.02% performance
and we were already trying to avoid that (although for other reasons).
Joe




Re: RE: Re: disk encryption modes (Re: RE: Two ideas for random number generation)

2002-04-27 Thread Joseph Ashwood
- Original Message -
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Saturday, April 27, 2002 12:11 PM
Subject: CDR: RE: Re: disk encryption modes (Re: RE: Two ideas for random
number generation)

> Instead of adding 16 bytes to the size of each sector for sector IV's how
> about having a separate file (which could be stored on a compact flash
> card, CDRW or other portable media) that contains the IV's for each disk
> sector?

Not a very good solution.

> You could effectively wipe the encrypted disk merely by wiping the IV
> file, which would be much faster than securely erasing the entire disk.

Actually that wouldn't work, at least not in CBC mode (which is certainly my
preference, and seems to be generally favored for disk encryption). In CBC
mode, not having the IV (setting the IV to 0) only destroys the first block,
after that everything decrypts normally, so the only wiped portion of the
sector is the first block.

   
  If the IV file was not available, decryption would be 
  impossible even if the main encryption key was rubberhosed it otherwise 
  leaked. This could be a very desirable feature for the tinfoil-hat-LINUX 
  crowd--as long as you have posession if the compact flash card with the IV 
  file, an attacker with your laptop isn't going to get far cracking your 
  encryption, especially if you have the driver constructed to use a dummy IV 
  file on the laptop somewhere after X number of failed passphrase entries to 
  provide plausible deniability for the existence of the compact flash 
  card.
   
And then the attacker would just get all of your file 
except the first block (assuming the decryption key is found).

   
  To keep the IV file size reasonable, you might want to encrypt 
  logical blocks (1K-8K, depending on disk size, OS, and file system used, vs 
  512 bytes) instead of individual sectors, especially if the file system thinks 
  in terms of blocks instead of sectors. I don't see the value of encrypting 
  below the granularity of what the OS is ever going to write to 
disk.
 
That is a possibility, and actually I'm sure it's 
occurred to the hard drive manufacturers that the next time they do a full 
overhaul of the wire protocol they should enable larger blocks (if they haven't 
already, like I said before, I'm not a hard drive person). This would serve them 
very well as they would have to store less information increasing the disk size 
producible per cost (even if not by much every penny counts when you sell a 
billion devices). Regardless this could be useful for the disk encryption, but 
assuming worst case won't lose us anything in the long run, and should enable 
the best case to be done more easily, so for the sake of simplicity, and 
satisfying the worst case, I'll keep on calling them sectors until there's a 
reason not to.
        
                
        Joe


Re: disk encryption modes

2002-04-27 Thread Morlock Elloi

> There's no need to go to great lengths to find a place to store the IV.

Wouldn't it be much simpler (having in mind the low cost of storage) to simply
append several random bits to the plaintext before ECB encryption and discard
them upon decryption?

For, say, a 128-bit block cipher and 16-bit padding (112-bit plaintext and 16-bit
random fill) the storage requirement is increased 14%, but each block is
completely independent, no IV is used at all, and as far as I can see all
pitfalls of ECB are done away with.

Probability of the same plaintext encrypting to the same ciphertext is 1 in
65536. For typical unix disk use this *could* provide some hint that a large
space is being zeroed, for instance, but when plaintext with realistic entropy
is being written the danger is negligible.







Re: disk encryption modes

2002-04-27 Thread Nomen Nescio

There's no need to go to great lengths to find a place to store the IV.
An encryption mode that bases the IV on block number and propagates
changes throughout the disk block provides effectively just as much
security.  The only theoretical weakness of the latter approach is that
if a block's contents are unchanged, the ciphertext will be unchanged.
But even with a unique IV per block, in practice this same effect will
occur, as if you are not changing a portion of the disk, you're probably
not rewriting it, so the old IV will still be in use.  Hence an attacker
who compares the current disk state against an earlier snapshot will be
able to tell which blocks have changed and which have not, in either mode.

Peter Gutmann and Colin Plumb invented a simple trick which provides
this property in conjunction with CBC or CFB modes.  We're going to
encrypt/decrypt a disk block, which is divided into "packets" which are
the cipher block size (64 or 128 bits).  Let P[j] mean the jth packet
of plaintext in the block; assume there are n packets in the block.
C[j] is the jth packet of ciphertext.

Encryption

P[n] ^= Hash(P[1..n-1], blocknum, diskID)
Encrypt block using P[n] as IV

Decryption

Decrypt C[2..n] using C[1] as IV
Decrypt C[1] using P[n] as IV
P[n] ^= Hash(P[1..n-1], blocknum, diskID)

The idea is to use P[n] as IV, first xoring in some function (doesn't
have to be a crypto hash) of the first n-1 packets, the block number,
and some disk-specific value.  Doing this ensures that the IV depends
on all the bytes in the block, so a change to any byte will change the
entire block.  The decryption doesn't have to be split into n-1 and 1
block parts as shown here, any division will do.
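The scheme above can be sketched end to end. This is a hedged illustration, not Gutmann or Plumb's actual code: the 16-byte "block cipher" is a toy 4-round Feistel built from SHA-256 (a stand-in for AES, not secure), and the whitening hash is plain SHA-256 over the other packets, the block number, and the disk ID.

```python
import hashlib

BLK = 16  # "packet" = cipher block size in bytes


def _xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))


def _f(key, half, rnd):
    # Feistel round function from SHA-256 (toy cipher, stand-in only)
    return hashlib.sha256(key + bytes([rnd]) + half).digest()[:BLK // 2]


def _enc(key, b):
    l, r = b[:BLK // 2], b[BLK // 2:]
    for rnd in range(4):
        l, r = r, _xor(l, _f(key, r, rnd))
    return l + r


def _dec(key, b):
    l, r = b[:BLK // 2], b[BLK // 2:]
    for rnd in reversed(range(4)):
        l, r = _xor(r, _f(key, l, rnd)), l
    return l + r


def _whitener(packets, blocknum, disk_id):
    # Hash(P[1..n-1], blocknum, diskID) -- need not be a crypto hash
    h = hashlib.sha256(disk_id + blocknum.to_bytes(8, "big"))
    for p in packets:
        h.update(p)
    return h.digest()[:BLK]


def encrypt_sector(key, sector, blocknum, disk_id):
    p = [sector[i:i + BLK] for i in range(0, len(sector), BLK)]
    # P[n] ^= Hash(P[1..n-1], blocknum, diskID), then CBC with P[n] as IV
    p[-1] = _xor(p[-1], _whitener(p[:-1], blocknum, disk_id))
    prev, out = p[-1], []
    for blk in p:
        prev = _enc(key, _xor(blk, prev))
        out.append(prev)
    return b"".join(out)


def decrypt_sector(key, ct, blocknum, disk_id):
    c = [ct[i:i + BLK] for i in range(0, len(ct), BLK)]
    p = [b""] * len(c)
    for j in range(1, len(c)):           # decrypt C[2..n] using C[1..n-1]
        p[j] = _xor(_dec(key, c[j]), c[j - 1])
    p[0] = _xor(_dec(key, c[0]), p[-1])  # decrypt C[1] using P[n] as IV
    p[-1] = _xor(p[-1], _whitener(p[:-1], blocknum, disk_id))
    return b"".join(p)
```

A one-byte change anywhere in the sector changes the whitened last packet, hence the IV, hence (through the CBC chain) the whole ciphertext, with no stored IV and no size expansion.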




RE: Re: disk encryption modes (Re: RE: Two ideas for random number generation)

2002-04-27 Thread JonathanW





Instead of adding 16 bytes to the size of each sector for sector IVs, how
about having a separate file (which could be stored on a compact flash card,
CDRW or other portable media) that contains the IVs for each disk sector? You
could effectively wipe the encrypted disk merely by wiping the IV file, which
would be much faster than securely erasing the entire disk. If the IV file
was not available, decryption would be impossible even if the main encryption
key was rubberhosed or otherwise leaked. This could be a very desirable
feature for the tinfoil-hat-LINUX crowd--as long as you have possession of
the compact flash card with the IV file, an attacker with your laptop isn't
going to get far cracking your encryption, especially if you have the driver
constructed to use a dummy IV file on the laptop somewhere after X number of
failed passphrase entries to provide plausible deniability for the existence
of the compact flash card.

To keep the IV file size reasonable, you might want to encrypt logical blocks (1K-8K, depending on disk size, OS, and file system used, vs 512 bytes) instead of individual sectors, especially if the file system thinks in terms of blocks instead of sectors. I don't see the value of encrypting below the granularity of what the OS is ever going to write to disk.




Re: Re: disk encryption modes (Re: RE: Two ideas for random number generation)

2002-04-27 Thread Joseph Ashwood

- Original Message -
From: "Adam Back" <[EMAIL PROTECTED]>

> Joseph Ashwood wrote:
> > Actually I was referring to changing the data portion of the block
> > from {data} to {IV, data}
>
> Yes I gathered, but this is what I was referring to when I said not
> possible.  The OSes have 512 bytes ingrained into them.  I think you'd
> have a hard time changing it.  If you _could_ change that magic
> number, that'd be a big win and make the security easy: just pick a
> new CPRNG-generated IV every time you encrypt a block.  (CPRNG based on
> SHA1 or RC4 is pretty fast, or less cryptographic could be
> sufficient depending on threat model).

From what I've seen of a few OSes there really isn't that much binding to 512
bytes in the OS per se, but the file system depends on it completely.
Regardless, the logical place IMO to change this is at the disk level, if the
drive manufacturers can be convinced to produce drives that offer 512+16-byte
sectors. Once that initial break happens, all the OSes will play catchup to
support the drive; that will break the hardwiring and give us our extra
space. Of course convincing the hardware vendors to do this without a
substantial hardware reason will be extremely difficult. On our side though
is that I know that hard disks store more than just the data: they also store
a checksum, and some sector reassignment information (SCSI drives are
especially good at this; IDE does it under the hood if at all). I'm sure
there's other information, and if this could be expanded by 16 bytes, that'd
supply the necessary room. Again, convincing the vendors to supply this would
be a difficult task, and would require the addition of functionality to the
hard drive to either decrypt on the fly, or hand the key over to the driver.

> > Yeah the defragmentation would have to be smart, it can't simply copy
> > the di[s]k block (with the disk block based IV) to a new location.
>
> Well with the sector level encryption, the encryption is below the
> defragmentation so file chunks get decrypted and re-encrypted as
> they're defragmented.
>
> With the file system level stuff the offset is likely logical (file
> offset etc) rather than absolute so you don't mind if the physical
> address changes.  (eg. loopback in a file, or file system APIs on
> windows).

That's true. I was thinking more of something that will for now run in
software and in the future get pushed down to the hardware, where we can use
a smartcard/USB key/whatever comes out next to feed it the key. A
meta-filesystem would be useful as a short-term measure, but it still keeps
all the keys in system memory where programs can access them; if we can
maintain the option of moving it to hardware later on, I think that would be
a better solution (although also a harder one).

I feel like I'm missing something that'll be obvious once I've found it.
Hmm, maybe there is a halfway decent solution (although not at all along the
same lines). For some reason I was just remembering SANs; it's a fairly
well-known problem to design and build secure file system protocols
(although they don't get used much). So it might actually be a simpler
concept to build a storage area network using whatever extra-hardened OSes
we need, with only the BIOS being available without a smartcard. Put the
smartcard in, the smartcard itself decrypts/encrypts sector keys (or maybe
some larger grouping), and the SAN host decrypts the rest. Pull out the
smartcard, the host can detect that, flush all caches and shut itself off.
This has some of the same problems, but at least we're not going to have to
design a hard drive, and since it's a remote file system I believe most OSes
assume very little about sector sizes. Of course as far as I'm concerned
this should still be just a stopgap measure until we can move that entire
SAN host inside the client computer.

Now for the biggest question, how do we get Joe Public to actually use this
correctly (take the smart card with them, or even not choose weak
passwords)?
Joe




Re: disk encryption modes (Re: RE: Two ideas for random number generation)

2002-04-27 Thread Adam Back

Joseph Ashwood wrote:
> Adam Back Wrote:
> > > This becomes completely redoable (or if you're willing to sacrifice
> > > a small portion of each block you can even explicitly store the IV).
> >
> > That's typically not practical, not possible, or anyway very
> > undesirable for performance (two disk hits instead of one),
> > reliability (write one without the other and you lose data).
> 
> Actually I was referring to changing the data portion of the block
> from {data} to {IV, data}

Yes I gathered, but this is what I was referring to when I said not
possible.  The OSes have 512 bytes ingrained into them.  I think you'd
have a hard time changing it.  If you _could_ change that magic
number, that'd be a big win and make the security easy: just pick a
new CPRNG-generated IV every time you encrypt a block.  (CPRNG based on
SHA1 or RC4 is pretty fast, or less cryptographic could be
sufficient depending on threat model).

> placing all the IVs at the head of every read. This of course will
> sacrifice k bits of the data space for little reason.

Security / space trade off with no performance hit (other than needing
to write 7% or 14% more data depending on size of IV) is probably more
desirable than having to doubly encrypt the block and take a 2x cpu
overhead hit.  However as I mentioned I don't think it's practical /
possible due to OS design.

> > Note in the file system level scenario an additional problem is file
> > system journaling, and on-the-fly disk defragmentation -- this can
> > result in the file system intentionally leaving copies of previous or
> > the same plaintexts encrypted with the same key and logical position
> > within a file.
> 
> Yeah the defragmentation would have to be smart, it can't simply copy the
> disk block (with the disk block based IV) to a new location. 

Well with the sector level encryption, the encryption is below the
defragmentation so file chunks get decrypted and re-encrypted as
they're defragmented.

With the file system level stuff the offset is likely logical (file
offset etc) rather than absolute so you don't mind if the physical
address changes.  (eg. loopback in a file, or file system APIs on
windows).

> > Another approach was Paul Crowley's Mercy cipher which has a 4Kbit
> > block size (= 512 bytes = sector sized).  But it's a new cipher and I
> > think already had some problems, though performance is much better
> > than eg AES with double CBC, and it means you can use ECB mode per
> > block and key derived with a key-derivation function salted by the
> > block-number (the cipher includes such a concept directly in its
> > key-schedule), or CBC mode with an IV derived from the block number
> > and only one block, so you don't get the low-tide mark of edits you
> > get with CBC.
> 
> It's worse than that, there's actually an attack on the cipher. Paul details
> this fairly well on his page about Mercy.

Yes, that's what I was referring to by "already had some problems".

Adam
--
http://www.cypherspace.org/adam/



Re: disk encryption modes (Re: RE: Two ideas for random number generation)

2002-04-27 Thread Joseph Ashwood

- Original Message -
From: "Adam Back" <[EMAIL PROTECTED]>

> On Fri, Apr 26, 2002 at 11:48:11AM -0700, Joseph Ashwood wrote:
> > From: "Bill Stewart" <[EMAIL PROTECTED]>
> > > I've been thinking about a somewhat different but related problem
> > > lately, which is encrypted disk drives.  You could encrypt each block
> > > of the disk with a block cypher using the same key (presumably in CBC
> > > or some similar mode), but that just feels weak.
> >
> > Why does it feel weak? CBC is provably as secure as the block cipher
> > (when used properly), and a disk drive is really no different from many
> > others. Of course you have to perform various gyrations to synchronise
> > everything correctly, but it's doable.
>
> The weakness is not catastrophic, but depending on your threat model
> the attacker may see the ciphertexts from multiple versions of the
> plaintext in the edit, save cycle.

That could be a problem; you pointed out more information in your other
message, but obviously this would have to be dealt with somehow. I was going
to suggest that maybe it would be better to encrypt at the file level, but
this can very often leak more information, and depending on how you do it,
will leak directory structure. There has to be a better solution.

> > Well it's not all that complicated. Use that same key, and encrypt the
> > disk block number, or address, or anything else.
>
> Performance is often at a premium in disk driver software --
> everything moving to-and-from the disk goes through these drivers.
>
> Encrypt could be slow, encrypt for IV is probably overkill.  IV
> doesn't have to be unique, just different, or relatively random
> depending on the mode.
>
> The performance hit for computing IV depends on the driver type.
>
> Where the driver is encrypting a disk block at a time, then say 512 bytes
> (standard smallest disk block size) divided into AES-block-sized chunks of
> 16 bytes each is 32 encrypts per IV generation.  So if IV generation is
> done with a block encrypt itself that'll slow the system down by 3.125%
> right there.
>
> If the driver is higher level using file-system APIs etc it may have
> to encrypt 1 cipher block size at a time each with a different IV, use
> encrypt to derive IVs in this scenario, and it'll be a 100% slowdown
> (encryption will take twice as long).

That is a good point, of course we could just use the old standby solution,
throw hardware at it. The hardware encrypts at disk (or even disk cache)
speed on the drive, eliminating all issues of this type. Not a particularly
cost-effective solution in many cases, but a reasonable option for others.

> > This becomes completely redoable (or if you're willing to sacrifice
> > a small portion of each block you can even explicitly store the IV).
>
> That's typically not practical, not possible, or anyway very
> undesirable for performance (two disk hits instead of one),
> reliability (write one without the other and you lose data).

Actually I was referring to changing the data portion of the block from
{data}
to
{IV, data}

placing all the IVs at the head of every read. This of course will sacrifice
k bits of the data space for little reason.

> > > I've been thinking that Counter Mode AES sounds good, since it's easy
> > > to find the key for a specific block.  Would it be good enough just to
> > > use Hash(Hash(Key, block#)) or some similar function instead of a more
> > > conventional crypto function?
> >
> > Not really, you'd have to change the key every time you write to
> > disk, not exactly a good idea; it makes key distribution a
> > nightmare. Stick with CBC for disk encryption.
>
> CBC isn't ideal as described above.  Output feedback modes like OFB
> and CTR are even worse as you can't reuse the IV or the attacker who
> is able to see previous disk image gets XOR of two plaintext versions.
>
> You could encrypt twice (CBC in each direction or something), but that
> will again slow you down by a factor of 2.
>
> Note in the file system level scenario an additional problem is file
> system journaling, and on-the-fly disk defragmentation -- this can
> result in the file system intentionally leaving copies of previous or
> the same plaintexts encrypted with the same key and logical position
> within a file.

Yeah, the defragmentation would have to be smart; it can't simply copy the
disk block (with the disk-block-based IV) to a new location. This problem
disappears in the {IV, data} block type, but that has other problems that
are at least as substantial.

> So it's "easy" if performance is not an issue.

Or if you decide to throw hardware at it.

> Another approach was Paul Crowley's Mercy cipher which has a 4Kbit
> block size (= 512 bytes = sector sized).  But it's a new cipher and I
> think already had some problems, though performance is much better
> than eg AES with double CBC, and it means you can use ECB mode per
> block and key derived with a key-derivation function salted by the
> block-number (the cipher includes such a concept directly in its
> key-schedule), or CBC mode with an IV derived from the block number
> and only one block, so you don't get the low-tide mark of edits you
> get with CBC.

Re: disk encryption modes

2002-04-27 Thread Nomen Nescio

The problem with a random IV in disk encryption is that you may not have
anywhere to store it, since you're already using all of your disk space.
Using hash of block number as IV works except that in most encryption
modes, if the first part of the plaintext is unchanged, that part of
the ciphertext will also be unchanged.

Better to use an encryption mode where a change anywhere in the plaintext
will affect the whole plaintext.  Then you can use hash of block number as
IV.  This still leaks info about whether a block is changed or unchanged,
but that is hard to avoid unless you are going to re-encrypt the entire
disk any time you change a bit anywhere.  And this way, when a disk
block is changed at all, the entire block ciphertext changes.




Re: disk encryption modes (Re: RE: Two ideas for random number generation)

2002-04-26 Thread Adam Back

Right, it sounds like the same approach I alluded to, except I didn't
use a salt -- I just used a fast pseudo-random number generator to
make the IV less structured than using the block number directly.

I did some experiments with a used disk and found that if you use the
block number directly for the IV, with CBC mode the block number and
plaintext difference cancel to result in the same input text to the
block cipher, resulting in the same ciphertext in a fair proportion of
cases (don't have the figures handy, but clearly this
not-insignificant number of collisions represents a leakage about the
plaintext on the disk).

With the aforementioned fast pseudo-random number generator I got no
collisions on the same disk sample (a 10Gig disk almost full of windows
application software and data).

I figure that's good empirical evidence of the soundness of the
approach; however, another glitch may be if you consider that the
attacker can work partly from the inside -- eg influencing the
plaintext choice, as well as having read-only access to the ciphertext
-- in this case he could perhaps build up a partial codebook for the
cipher with the disk key, by influencing plaintext choices to create
values equal to the suspected differences between plaintexts and
predictable IVs.

How do you salt the random number generator?  Is it resistant to the
above type of attack do you think?

Adam

On Sat, Apr 27, 2002 at 11:19:04AM +1000, Julian Assange wrote:
> > You could encrypt twice (CBC in each direction or something), but that
> > will again slow you down by a factor of 2.
> 
> You can't easily get away with storing the IV separately, as multiple
> parts of the IO pipe like to see blocks in 2^n form. 
> 
> The approach I take in Rubberhose is to calculate the IV from a
> very fast checksum routine and salt.



Re: disk encryption modes (Re: RE: Two ideas for random number generation)

2002-04-26 Thread Julian Assange

> You could encrypt twice (CBC in each direction or something), but that
> will again slow you down by a factor of 2.

You can't easily get away with storing the IV separately, as multiple parts
of the IO pipe like to see blocks in 2^n form. 

The approach I take in Rubberhose is to calculate the IV from a
very fast checksum routine and salt.

--
 Julian Assange|If you want to build a ship, don't drum up people
   |together to collect wood or assign them tasks and
 [EMAIL PROTECTED]  |work, but rather teach them to long for the endless
 [EMAIL PROTECTED]  |immensity of the sea. -- Antoine de Saint Exupery