Re: A mighty fortress is our PKI, Part II

2010-08-04 Thread David-Sarah Hopwood
Anne  Lynn Wheeler wrote:
 Kaspersky: Sham Certificates Pose Big Problem for Windows Security
 http://www.ecommercetimes.com/story/70553.html
 
 from above ..
 
 Windows fails to clearly indicate when digital security certificates
 have been tampered with, according to Kaspersky Lab's Roel Schouwenberg,
 and that opens a door for malware makers.

Huh? I don't understand the argument being made here.

Obviously Windows can't distinguish an unsigned executable from one whose
signature has been stripped. How could it possibly do that?

Signatures are largely a distraction from the real problem: that software
is (unnecessarily) run with the full privileges of the invoking user.
By all means authenticate software, but that's not going to prevent malware.

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com



signature.asc
Description: OpenPGP digital signature


Re: A mighty fortress is our PKI

2010-07-23 Thread David-Sarah Hopwood
Peter Gutmann wrote:
 Readers are cordially invited to go to https://edgecastcdn.net and have a look
 at the subjectAltName extension in the certificate that it presents.  An 
 extract is shown at the end of this message, this is just one example of many 
 like it.  I'm not picking on Edgecast specifically, I just used this one 
 because it's the most Sybilly certificate I've ever seen.  You'll find that 
 this one Sybil certificate, among its hundred-and-seven hostnames, includes 
 everything from Mozilla, Experian, the French postal service, TRUSTe, and the 
 Information Systems Audit and Control Association (ISACA), through to 
 Chainlove, Bonktown, and Dickies Girl (which aren't nearly as titillating as 
 they sound, and quite SFW).  Still, who needs to compromise a CA when you have 
 these things floating around on multihomed hosts and CDNs.
[...]
 What a mess!  A single XSS/XSRF/XS* attack, or just a plain config problem,
 and the whole house of cards comes down.

Please don't mistake the following comment for a defence of any aspect of
current PKI practice, but:

I'm not seeing how an XSS or XSRF attack on one of the domains named in this
certificate would enable attacks on the other domains.

IIUC, if you resolve one of the domains that is a client of Edgecast, say
www.mozilla.com, then you may get an Edgecast proxy server that will serve
content over TLS on behalf of that domain.

Clearly if you compromise such a proxy, then you get the ability to spoof
any of the domains named in the certificate. But if you do some origin-based
web attack on a particular domain, then you can only spoof that domain.
And even if you have a full compromise of a server for one of the domains,
that doesn't get you the private key for the certificate, which is held only
by the proxies. Or am I missing something?

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Merkle-Winternitz-HORS signature scheme for Tahoe-LAFS

2010-07-09 Thread David-Sarah Hopwood
[cc:d to cryptography list from the tahoe-dev list.
See http://allmydata.org/pipermail/tahoe-dev/2010-June/004439.html,
http://allmydata.org/pipermail/tahoe-dev/2010-June/004502.html and
http://allmydata.org/pipermail/tahoe-dev/2010-June/004496.html for context.]

Brian Warner wrote:
 On 6/23/10 7:18 AM, David-Sarah Hopwood wrote:

 Ah, but it will work for a multi-layer Merkle tree scheme, such as
 GMSS: if keys are generated deterministically from a seed, then the
 signatures certifying keys at upper layers are also deterministic, so
 there's no key-reuse problem for those.

 Right, exactly. The basic scheme would look something like this:

  * set K=1024
  * build a K-ary tree with 2^M=2^256 leaves. Each leaf corresponds to a
possible message.
  * define a deterministic keyed function from leaf number to keypair
  * define a deterministic keyed function from node number to keypair
  * publish the root pubkey as the readcap. Retain the secret key (used
as an input to the above deterministic functions) as the writecap.
  * to sign a given message:
    * identify the DEPTH = log_K(2^256) ≈ 26 tree nodes along the
  path from root to leaf
* generate the one privkey for the message leaf, use it to sign the
  message itself
* for all nodes on the path, from bottom to top:
[
 * generate all K pubkeys for the node's children, concatenate them
   into a string, treat that as a message
 * sign the message with the parent node's privkey
]

 As zooko pointed out, the leaf signature can be optimized away: since
 each message gets a unique pubkey, it suffices to reveal the
 preimage/privkey for that pubkey. This reduces the size of the leaf
 signature down to a single hash.

 Assuming a 256-bit Merkle-signature scheme (with a "0" and a "1"
 preimage for each bit), it takes 512 hashes to build all the
 privkey/preimages, and then 512 hashes to build the pubkeys.
 Each signature requires computing and publishing (revealing) 256 preimage
 hashes.

Note that this is a Lamport-Diffie signature, not a Merkle one-time
signature. The Merkle one-time signature scheme (described in the second
paragraph under "Signing a several bit message" in [Merkle1987]) publishes
only preimage hashes corresponding to 1 bits, and uses a checksum.
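To make the distinction concrete, here is a minimal Lamport-Diffie one-time signature sketch in Python (my own illustration, not Tahoe-LAFS code): two preimages per message bit, with exactly 256 preimages revealed per signature, matching the count quoted above. The keypair is expanded deterministically from a seed, as the scheme above requires. (The Merkle variant would instead reveal preimages only for the 1 bits and append a checksum.)

```python
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def keygen(seed: bytes, n: int = 256):
    # Lamport-Diffie: two secret preimages per message bit (one for 0, one
    # for 1), derived deterministically from the seed.
    # 2*256 = 512 preimages, plus 512 pubkey hashes.
    priv = [[H(seed + bytes([b]) + i.to_bytes(2, "big")) for b in (0, 1)]
            for i in range(n)]
    pub = [[H(x) for x in pair] for pair in priv]
    return priv, pub

def bits(digest: bytes):
    # Big-endian bit expansion of a hash digest.
    return [(digest[i >> 3] >> (7 - (i & 7))) & 1 for i in range(len(digest) * 8)]

def sign(priv, msg: bytes):
    # Reveal one preimage per bit of H(msg): always exactly 256 preimages.
    return [priv[i][b] for i, b in enumerate(bits(H(msg)))]

def verify(pub, msg: bytes, sig) -> bool:
    return all(H(s) == pub[i][b]
               for (i, b), s in zip(enumerate(bits(H(msg))), sig))

priv, pub = keygen(b"per-node seed")
sig = sign(priv, b"concatenated child pubkeys")
assert verify(pub, b"concatenated child pubkeys", sig)
assert not verify(pub, b"forged message", sig)
```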

 Generating the readcap is pretty easy: you build the K pubkeys for the
 top node (4*256 small hashes each), hash each one down (K large hashes
 of 512*32 bytes each), then hash those lists into a single value (one
 large hash of K*32 bytes). So 1M small hashes, 1024 hashes of 16KiB
 each, and one hash of 32 KiB.

 For K=1024 and M=256, the message signature consists of the leaf
 signature (one hash), and 26 nodes. Each node contains one signature
 (256 preimages), one pubkey (512 hashes), and 1023 hashes-of-pubkeys.

 So the overall message signature requires you publish about 46567 hashes,
 or 1.5MB.
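The quoted totals are easy to check (assuming 32 bytes per hash):

```python
# Per-node contribution quoted above: one signature (256 preimages),
# one pubkey (512 hashes), and K-1 = 1023 sibling hashes-of-pubkeys.
per_node = 256 + 512 + 1023          # 1791 hashes per node
total_hashes = 1 + 26 * per_node     # leaf preimage + 26 path nodes
assert total_hashes == 46567
print(total_hashes * 32)             # 1490144 bytes, i.e. ~1.5 MB
```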

The scheme that I'm currently considering has the following five
improvements over the one above:


1. For the signatures on non-leaf public keys, use the Winternitz one-time
   signature scheme. This was first publicly described in [Merkle1987],
   but a clearer description is given in [BDS2008].

   The Winternitz scheme (unlike the Lamport-Diffie or Merkle schemes) has
   the property that the full public key can be derived from a signature.
   Therefore it's not necessary to explicitly include the pubkey that is
   being signed at each node, since it can be derived from the signature
   on the next-lower node. More precisely, the lower signature gives a
   claimed public key, which can be authenticated using the upper signature.

   Winternitz signatures also allow a trade-off between signature size and
   the number of hash computations needed to sign, depending on the base.

   (Typically the scheme is described with a base that is a power of 2,
   i.e. the message representative and checksum are expressed as base
   2^w numbers. Actually it works for an arbitrary base ≥ 2, and using
   a base that is not a power of two can sometimes save an extra few
   percent in signature cost for a given signing size.)

   The signing cost is linear in the base B, while the size of the
   signature is only divided by lg(B). Nevertheless, choices of B from
   4 up to about 32 are useful.

   In the example above, the 256 + 512 = 768 hashes for the signature and
   pubkey, are reduced to 133 hashes for base 4; 90 hashes for base 8; and
   67 hashes for base 16.
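A toy base-16 Winternitz sketch (my own illustration under assumed parameter names, not the Tahoe-LAFS design) shows where the base-16 count of 67 comes from: 64 message digits plus 3 checksum digits, one hash chain per digit. It also shows the key property used in improvement 1: completing each chain recovers the public key, so the pubkey need not accompany the signature.

```python
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def chain(x: bytes, steps: int) -> bytes:
    for _ in range(steps):
        x = H(x)
    return x

W = 4                            # bits per digit -> base B = 16
B = 1 << W
MSG_DIGITS = 256 // W            # 64 digits of the message digest
# The checksum must represent up to MSG_DIGITS*(B-1) = 960: 16^3 >= 961.
CK_DIGITS = 3
CHAINS = MSG_DIGITS + CK_DIGITS  # 67 -- the base-16 count quoted above

def digits_of(msg: bytes):
    m = int.from_bytes(H(msg), "big")
    d = [(m >> (W * i)) & (B - 1) for i in range(MSG_DIGITS)]
    # Checksum: advancing any message chain (the only forgery an attacker
    # can do) would force some checksum digit to decrease, which requires
    # inverting a hash.
    c = sum(B - 1 - di for di in d)
    return d + [(c >> (W * i)) & (B - 1) for i in range(CK_DIGITS)]

def keygen(seed: bytes):
    priv = [H(seed + i.to_bytes(2, "big")) for i in range(CHAINS)]
    pub = [chain(x, B - 1) for x in priv]
    return priv, pub

def sign(priv, msg: bytes):
    return [chain(x, d) for x, d in zip(priv, digits_of(msg))]

def verify(pub, msg: bytes, sig) -> bool:
    # H^(B-1-d) applied to H^d(x) yields H^(B-1)(x), the public key.
    return all(chain(s, B - 1 - d) == y
               for s, d, y in zip(sig, digits_of(msg), pub))

priv, pub = keygen(b"node seed")
sig = sign(priv, b"child pubkeys")
assert verify(pub, b"child pubkeys", sig)
```

The same arithmetic reproduces the other quoted counts: base 4 gives 128 message digits plus 5 checksum digits (133), and base 8 gives 86 plus 4 (90).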

   Note that the optimal choices of K are typically smaller than 1024, so
   the one-time signature/pubkey makes up a greater proportion of the
   published size for each layer than in the example above. For instance,
   if K = 16, B = 16, and M = 256, then the number of hashes published per
   layer drops to 67 + K-1 = 82. Without any of the other improvements
   below, at least 64 layers would be needed, so that would be 5248 hashes,
   or 164 KiB. (If Zooko's optimization is used for the leaf

Re: Merkle-Winternitz-HORS signature scheme for Tahoe-LAFS [correction]

2010-07-09 Thread David-Sarah Hopwood
David-Sarah Hopwood wrote:
[snip]
 There could also be a concern that point 4 above is similar to
 on-line/off-line signatures as patented by Even, Goldreich and Micali
 (U.S. patent 5016274, filed in 1988; expires on 14 May 2011).

Ah, I calculated the expiration date incorrectly. It was filed before the
rules changed in June 1995, so it's the later of 20 years after filing
(8 November 2008) or 17 years after issue (14 May 2008). So it has already
expired.
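The corrected term arithmetic is easy to check (filing and issue dates inferred from the figures given above):

```python
from datetime import date

filed = date(1988, 11, 8)    # implied by "20 years after filing (8 November 2008)"
issued = date(1991, 5, 14)   # implied by "17 years after issue (14 May 2008)"

# For US applications filed before June 1995, the term is the later of
# 20 years from filing or 17 years from issue.
expiry = max(filed.replace(year=filed.year + 20),
             issued.replace(year=issued.year + 17))
print(expiry)                # 2008-11-08 -> already expired
```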

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Re: Truncating SHA2 hashes vs shortening a MAC for ZFS Crypto

2009-11-08 Thread David-Sarah Hopwood
Nicolas Williams wrote:
 On Tue, Nov 03, 2009 at 07:28:15PM +, Darren J Moffat wrote:
 Nicolas Williams wrote:
 Interesting.  If ZFS could make sure no blocks exist in a pool from more
 than 2^64-1 transactions ago[*], then the txg + a 32-bit per-transaction
 block write counter would suffice.  That way Darren would have to store
 just 32 bits of the IV.  That way he'd have 352 bits to work with, and
 then it'd be possible to have a 128-bit authentication tag and a 224-bit
 hash.

 The logical txg (post dedup integration we have physical and logical 
 transaction ids) + a 32 bit counter is interesting.   It was actually my 
 very first design for IV's several years ago!
[...]
 I suspect that sometime in the next 584,542 years the block pointer size 
 for ZFS will increase and I'll have more space to store a bigger MAC, 
 hash and IV.  In fact I guess that will happen even in the next 50 years.
 
 Heh.  txg + 32-bit counter == 96-bit IVs sounds like the way to go.

I'm confused. How does this allow you to do block-level deduplication,
given that the IV (and hence the ciphertext) will be different for every
block even when the plaintext is the same?
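The tension can be seen in a toy model (a SHA-256-derived keystream standing in for the real block cipher mode; all names are mine, nothing here is ZFS code):

```python
import hashlib

def keystream_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    # Toy CTR-style cipher: keystream blocks from SHA-256(key || iv || counter).
    out = bytearray()
    for i in range(0, len(plaintext), 32):
        ks = hashlib.sha256(key + iv + (i // 32).to_bytes(8, "big")).digest()
        out += bytes(p ^ k for p, k in zip(plaintext[i:i + 32], ks))
    return bytes(out)

key = b"k" * 32
block = b"identical plaintext block" * 10

# Unique IVs (e.g. txg + per-transaction counter) defeat ciphertext dedup:
ct1 = keystream_encrypt(key, b"txg=7,ctr=1", block)
ct2 = keystream_encrypt(key, b"txg=7,ctr=2", block)
assert ct1 != ct2   # same plaintext, different ciphertext -> no dedupe match

# Deriving the IV from the plaintext (convergent-encryption style) restores
# the match, at the cost of leaking plaintext equality:
iv = hashlib.sha256(block).digest()[:12]
assert keystream_encrypt(key, iv, block) == keystream_encrypt(key, iv, block)
```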

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com





Re: Truncating SHA2 hashes vs shortening a MAC for ZFS Crypto

2009-11-03 Thread David-Sarah Hopwood
[If hashes of the plaintext are used] instead of MACs, then the integrity of the
system does not depend on keeping secrets. It only depends on preventing
the attacker from modifying the root of the Merkle tree. One consequence
of this is that if there are side-channel attacks against the
implementations of crypto algorithms, there is no information that they
can leak to an attacker that would allow compromising integrity.

(Of course, the integrity of the OS also needs to be protected. One way
of doing that would be to have a TPM, or the same hardware that is used
for crypto, store the root hash of the Merkle tree and also the hash
of a boot loader that supports ZFS. Then the boot loader would load an
OS from the ZFS filesystem, and only that OS would be permitted to update
the ZFS root hash.)

 One of the founding ideas of the whole design of ZFS was end-to-end
 integrity checking.  It does that successfully now, for the case of
 accidents, using large checksums.  If the checksum is secure then it
 also does it for the case of malice.  In contrast a MAC doesn't do
 end-to-end integrity checking.

A cryptographic checksum on the ciphertext alone doesn't do end-to-end
integrity checking either. Even if everything is implemented correctly
and there are no hardware errors, it doesn't verify the integrity of the
decryption key.
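A toy demonstration of that gap (a hypothetical XOR stream cipher, purely for illustration):

```python
import hashlib

def xor_encrypt(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher (encryption and decryption are the same XOR).
    ks = hashlib.sha256(key).digest() * (len(data) // 32 + 1)
    return bytes(d ^ k for d, k in zip(data, ks))

pt = b"important file block"
right_key, wrong_key = b"A" * 32, b"B" * 32
ct = xor_encrypt(right_key, pt)

ct_checksum = hashlib.sha256(ct).digest()   # checksum over the ciphertext
pt_verifier = hashlib.sha256(pt).digest()   # verifier over the plaintext

# With the wrong key, the ciphertext checksum still verifies (the ciphertext
# itself is untouched), so it cannot catch the key error...
assert hashlib.sha256(ct).digest() == ct_checksum
garbage = xor_encrypt(wrong_key, ct)
# ...but the plaintext verifier does:
assert hashlib.sha256(garbage).digest() != pt_verifier
assert hashlib.sha256(xor_encrypt(right_key, ct)).digest() == pt_verifier
```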

 For example, if you've previously
 allowed someone to read a filesystem (i.e., you've given them access to
 the key), but you never gave them permission to write to it, but they
 are able to exploit the issues that you mention at the beginning of [1]
 such as "Untrusted path to SAN", then the MAC can't stop them from
 altering the file, nor can the non-secure checksum, but a secure hash
 can (provided that they can't overwrite all the way up the Merkle Tree
 of the whole pool and any copies of the Merkle Tree root hash).

The scheme I suggested above also has that advantage: if you have a
plaintext verifier, then you can check the integrity of the plaintext
even if an attacker knows the decryption key (and no separate MAC key is
needed).

 Likewise, a secure hash can be relied on as a dedupe tag *even* if
 someone with malicious intent may have slipped data into the pool.  An
 insecure hash or a MAC tag can't -- a malicious actor could submit data
 which would cause a collision in an insecure hash or a MAC tag, causing
 tag-based dedupe to mistakenly unify two different blocks.

I agree. I don't think that Darren Moffat was suggesting to use the MAC tag
for dedupe. I also agree that a hash used for dedupe needs to be quite long
(256 bits would be nice, but 192 is probably OK).

 [1]
 http://hub.opensolaris.org/bin/download/Project+zfs%2Dcrypto/files/zfs%2Dcrypto%2Ddesign.pdf

--
David-Sarah Hopwood http://davidsarah.livejournal.com

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Question about Shamir secret sharing scheme

2009-10-04 Thread David-Sarah Hopwood
Kevin W. Wall wrote:
 Hi list...I have a question about Shamir's secret sharing.
 
 According to the _Handbook of Applied Cryptography_
 Shamir’s secret sharing (t,n) threshold scheme works as follows:
 
 SUMMARY: a trusted party distributes shares of a secret S to n users.
 RESULT: any group of t users which pool their shares can recover S.
 
 The trusted party T begins with a secret integer S ≥ 0 it wishes
 to distribute among n users.
 (a) T chooses a prime p > max(S, n), and defines a0 = S.
 (b) T selects t−1 random, independent coefficients a_1, ..., a_{t−1},
 0 ≤ a_j ≤ p−1, defining the random polynomial over Zp,
 f(x) = sum_{j=0}^{t−1} a_j x^j.
 (c) T computes Si = f(i) mod p, 1 ≤ i ≤ n (or for any n distinct
 points i, 1 ≤ i ≤ p − 1), and securely transfers the share Si
 to user Pi , along with public index i.
 
 The secret S can then be computed by finding f(0) more or less by
 using Lagrangian interpolation on the t shares, the points (i, Si).
 
 The question that a colleague and I have is there any cryptographic
 purpose of computing the independent coefficients over the finite
 field, Zp ?

Yes, the information-theoretic security of the scheme depends on
performing the arithmetic in a finite field, and on the coefficients
being chosen randomly and independently in that field. In Shamir's
original paper:

http://www.caip.rutgers.edu/~virajb/readinglist/shamirturing.pdf

the statement that "By construction, these p possible polynomials
are equally likely" depends on these conditions. I believe any finite
field will work, but Zp is the simplest option.

[Incidentally, if you're implementing this from Handbook of Applied
Cryptography, there's an erratum for that section (12.71):
"of degree at most t" in the paragraph after the Mechanism should be
"of degree less than t".]
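To make the role of Zp concrete, here's a minimal sketch of the (t, n) scheme (illustrative only; a real implementation would use a CSPRNG rather than `random`, and a prime sized to the secret):

```python
import random

P = 2**127 - 1   # a Mersenne prime; any prime p > max(S, n) works

def make_shares(secret: int, t: int, n: int):
    # f has degree less than t: a0 = S plus t-1 random, independent
    # coefficients chosen uniformly in Z_p.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    f = lambda x: sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0, with all arithmetic in Z_p;
    # pow(den, P-2, P) is the modular inverse (P is prime).
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(12345, t=3, n=5)
assert recover(shares[:3]) == 12345    # any t shares suffice
assert recover(shares[2:5]) == 12345
```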

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com



Re: how to encrypt and integrity-check with only one key

2009-09-15 Thread David-Sarah Hopwood
Zooko Wilcox-O'Hearn wrote:
 following-up to my own post:
 
 On Monday, 2009-09-14, at 10:22, Zooko Wilcox-O'Hearn wrote:
 
 David-Sarah Hopwood suggested the improvement that the integrity-check
 value V could be computed as an integrity check (i.e. a secure hash)
 on the K1_enc in addition to the file contents.
 
 Oops, that's impossible.  What David-Sarah Hopwood actually said was
 that this would be nice if it were possible, but since it isn't then
 people should pass around the tuple of (v, K1_enc) whenever they want to
 verify the integrity of the ciphertext.
 
 http://allmydata.org/pipermail/tahoe-dev/2009-September/002798.html

Zooko is referring to the argument after the first '-' in that post.
Note that the argument after the second '-' was wrong; see the correction in
http://allmydata.org/pipermail/tahoe-dev/2009-September/002801.html.

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com


