On Mon, Mar 17, 2008 at 12:14 AM, Michael Sierchio <[EMAIL PROTECTED]> wrote:
> Kyle Hamilton wrote:
>
>  > A key's lifetime is, cryptographically speaking, the amount of time
>  > for which it can be expected to provide a sane level of security in
>  > relation to the value of the data which it protects.
>
>  Right, which is a matter of consensus best practice, we hope...

You are arguing with yourself; I'll explain why I say this later on.

>  > Of course, cryptography is just a means of applying a policy to a
>  > piece of data.  If you share a means of decryption, you also share a
>  > piece of trust with whomever you share that means with that they won't
>  > violate that policy, even though the policy is advisory (i.e.,
>  > non-enforceable) once the data is decrypted.
>
>  I think you need to be more careful about what is trusted.

I'm starting from first principles in my approach.  Your argument is
wrapped up in terminology -- in concepts that have obscured the actual
'meaning' behind their creation and consensual agreement.

>  In the case of a signed message, our trust depends on a
>  number of things -- that the message was signed during the
>  validity period of the signer's cert, that the cert declares
>  the key to be valid for that use, and perhaps trust in the CAs
>  policy enforcement and revocation methods, CRL publication, etc.
>  We trust that, absent a key revocation for any reason, including
>  expiry, that a private key remains under the exclusive control of
>  the signer.  Signatures might require third party digest timestamps
>  for non-repudiation of the validity of the signature wrt time of
>  signing prior to a trusted date.

What 'trusts'?  All of the things that you mention -- the validity
period of the certificate, the CA's policy enforcement (and thus our
ability to form a policy that can rely on that policy enforcement),
the CA's revocation methods, CRL publication, etc. -- are means of
deciding, via policies, whether to trust or distrust the assertion
made.  They are a means of associating policy with data, not ends in
themselves.

A statement of identity binding -- a certificate -- must be signed, by
policy and practice.  Policy may not allow you to accept that
statement, however, unless it is signed by a specific key that has
been bound to the identity of an organization which has policies that
you choose to trust.  This signing is a use of cryptography to
associate data (the identity in the certificate) with policy (that the
organization's rules -- its policies -- have been followed, such as 'I
have verified that the identity named in this certificate is in fact
the entity that presented me with this key to certify').
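The binding idea can be sketched in a few lines.  This is a toy illustration, not real X.509: HMAC-SHA256 stands in for the CA's RSA signature, and every name here (ca_secret, certify, verify) is invented for the sketch.

```python
# Toy sketch of identity binding: a "CA" signs an (identity, key)
# statement, and a relying party's policy is to accept the binding only
# if that signature checks out.  HMAC is a stand-in for a real
# certificate signature; all names are invented for this illustration.
import hashlib
import hmac
import json

ca_secret = b"ca-signing-key"          # stand-in for the CA's private key

def certify(identity: str, pubkey: bytes) -> dict:
    """Bind an identity to a key by signing the combined statement."""
    statement = json.dumps({"identity": identity,
                            "pubkey": pubkey.hex()}).encode()
    sig = hmac.new(ca_secret, statement, hashlib.sha256).hexdigest()
    return {"statement": statement, "sig": sig}

def verify(cert: dict) -> bool:
    """Relying party's policy check: accept only intact, CA-signed bindings."""
    expected = hmac.new(ca_secret, cert["statement"],
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cert["sig"])

cert = certify("alice@example.org", b"\x01\x02\x03")
assert verify(cert)                    # untampered binding is accepted

tampered = dict(cert,
                statement=cert["statement"].replace(b"alice", b"mallet"))
assert not verify(tampered)            # altered identity fails the check
```

The point is only that the signature associates the data with the signer's policy; whether you accept it is still your own policy decision.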

Following that, the use of that certificate to encrypt a message to
the identity it states is, in effect, the communication of the message
with the following policy: "My policy is that only the identity bound
in the certificate I'm encrypting to is allowed to see the contents of
this message in a perceivable form."  You attempt to enforce this
policy by using the cryptographic key bound to the identity -- but
once that person receives the message, that person can do whatever
they want with it.  You cannot prevent that person from disclosing the
contents of the message to anyone else; thus your use of that key to
communicate your policy is purely advisory.

(This is the trap that has allowed DRM, for example, to be applied and
then broken -- DRM relies on the ability to enforce a policy that is
communicated.  It is also the trap that Windows system policies have
fallen into -- they rely on the ability of the System security
identifier to make arbitrary changes to the environment in the
registry or in the filesystem, without recognizing that the System SID
can be blocked from making those changes by anyone who has absolute
authority (i.e., local Administrator) access to the machine.)

>  Anyway, in the case of RSA keypairs we don't manufacture them, we
>  discover them.  They're already there, we just search for our p's and q's
>  in the appropriate range and rely on chance starting conditions to find
>  some not in use.  I suggested, but not entirely in jest, giving them all
>  a timestamp of 0.  Creation date is a useless concept.  Not valid before
>  and Not valid after attributes make enormous sense, and are where they
>  ought to be.

They exist, certainly -- but nobody has mapped them all.  By your
logic, the generation of a 128-bit or 256-bit symmetric key (since the
keyspace is finite) also has no 'creation date', thus requiring that
they all be timestamped to 0 as well.  However, each and every context
in which a key is used is different, and within each context, anyone
trying to discover the secret key must start over from square one,
from zero knowledge.
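Some back-of-the-envelope arithmetic makes the "finite keyspace" point concrete.  The attacker speed below is an assumed, deliberately optimistic figure, purely for illustration:

```python
# The 128-bit keyspace is finite, but an attacker starting from zero
# knowledge in a fresh context still has to search it.  The guess rate
# here is an assumed, generous figure for illustration only.
keyspace = 2 ** 128
guesses_per_second = 10 ** 12          # assumed: a trillion guesses/sec
expected_guesses = keyspace // 2       # on average, half the space
seconds = expected_guesses // guesses_per_second
years = seconds // (365 * 24 * 3600)
assert years > 10 ** 18                # billions of billions of years
```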

The useful life of a key starts on the date when it was first either
'generated'/'discovered' or 'used in context'.  (Since 'context' is a
tricky thing to quantify, in order to make it most likely that the
statistical probabilities that lead to the mathematically near-certain
guarantees are upheld, we simply say 'used' -- and generate/discover a
new keypair whenever we want to change the context we're working in.)
That is the date that the key should be timestamped -- not '0' --
because that is the first time that the context has been used and
applied.  In other words, by 'generating' a keypair, what we're really
doing is creating a new context in which to apply the numbers.
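In code terms, that amounts to stamping the key at the moment its context begins.  A minimal sketch, with an invented wrapper format:

```python
# Sketch: treat key generation as the start of a new context, and record
# that moment alongside the key itself.  The wrapper format is invented
# for this illustration; a symmetric key stands in for a keypair.
import json
import secrets
import time

def generate_key(nbytes: int = 32) -> dict:
    """Generate a fresh key and stamp the start of its useful life."""
    return {
        "key": secrets.token_bytes(nbytes).hex(),
        "created": int(time.time()),   # not 0: the context starts now
    }

record = generate_key()
assert record["created"] > 0
wrapped = json.dumps(record)           # timestamp travels with the key
assert "created" in wrapped
```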

This useful life cannot be appropriately measured by any observer,
including the CA that performs the identity binding.  This useful life
can ONLY be appropriately identified, measured, and counted by the
person who actually uses that context with the private portion of the
number set.

All the CA can do is state "I assert that this public key [and by
extension its private key] can be effectively viewed as identical to
the identity to which I am binding it during this timeframe, assuming
that all of my own policies for key management are properly followed."

This is why I state that you're arguing with yourself: on the one
hand, you state that it's a "best practice" to limit the lifetime of a
key, but on the other hand you state that every key's life started at
time 0.  Assuming both statements are equally and infallibly true, the
absurd outcome is that the lifetime of every possible key -- whether
it has been used or not -- has already expired.  This is not, and
cannot be, the case.

It used to be that the mathematical calculations involved in the
certification of a public key (and even the generation of a keypair)
were horribly expensive, and it made sense to minimize the number of
times that those calculations were performed.  Now, though, we have
laptops that are faster than the supercomputers that X.500 was
designed to run on.  It now takes me less than one tenth of a
CPU-second to sign a 2048-bit key/identity certification using another
2048-bit key.  Processing power is orders of magnitude cheaper than it
used to be, and there should not be any reason to keep any old key
around.  There should not be any reason to reuse a previously-used
key, to 'fake' a new context.
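That cost claim is easy to check.  The core of an RSA signature is one big modular exponentiation, and even in pure Python, with random stand-in numbers rather than a real key, a full 2048-bit exponentiation finishes in a small fraction of a second:

```python
# Rough timing of the core RSA operation: one 2048-bit modular
# exponentiation, the heart of signing/verifying.  The operands are
# random stand-ins, not a real key -- this only illustrates cost.
import secrets
import time

bits = 2048
modulus = secrets.randbits(bits) | (1 << (bits - 1)) | 1   # odd, full-size
base = secrets.randbits(bits) % modulus
exponent = secrets.randbits(bits)

start = time.perf_counter()
pow(base, exponent, modulus)           # one private-key-sized exponentiation
elapsed = time.perf_counter() - start
assert elapsed < 1.0                   # far from "horribly expensive" now
```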

(This is another fallacy in your viewpoint, incidentally: the owner of
the keypair is the only one who can know for certain that the key has
never been used elsewhere.  The CA cannot.  This means that the
'context' cannot be viewed to map 1 to 1 to the validity period
certified by the CA.  If nothing else, the keypair /must/ exist before
the CA binds it to an identity, since the public key has to be
included in the information that the CA signs.)

>  The trust conferred on a signature derives from signature validation,
>  which requires certificate validation.  One part of the validation is
>  that the message in question was signed during the validity period
>  as defined by certificate policy.

You're going back to the concept of 'time' as an important factor
which must be authenticated and otherwise proven.  This is, to reuse
your own term, the application of a policy.  It is
not anything inherent to the use of cryptographic functionality.

>  You may argue, and get me to agree, that cert reissue/resigning with
>  the same SubjectPubkeyData is a bad idea.  Make 'em generate keypairs.
>  Keep a list forever of pubkeys seen in certs and reject any that appear
>  in CSRs.  Your storage requirements won't rival that of Youporn, or
>  Wikipedia.

You could just as easily keep hashes of the keys seen, and your
storage requirements would shrink that much more.  Hashing does admit
collisions, which means you may reject more keys than strictly "I know
I have seen this exact key before" -- but that extra caution can be
valuable.  If a bank were to issue certificates to its customers for
the purpose of accessing account information, for example, the
knowledge that a key has been seen before matters a great deal: it
could be a currently-valid key for someone else's account, and anyone
who can obtain a certificate for it could access that account.  Seeing
a hash match means the key /might/ currently be valid for someone else
-- or it could just be a statistical anomaly -- and rejecting it is
the safe choice either way.
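The bookkeeping itself is tiny.  A sketch, with random bytes standing in for DER-encoded public keys:

```python
# Sketch of "keep hashes, not keys": store a SHA-256 digest of every
# public key ever seen in a cert, and reject any CSR whose key hashes
# to a digest already in the set.  Random bytes stand in for real keys.
import hashlib
import secrets

seen = set()

def note_key(pubkey: bytes) -> None:
    """Record a key observed in an issued certificate."""
    seen.add(hashlib.sha256(pubkey).digest())   # 32 bytes per key, fixed

def reject_csr_key(pubkey: bytes) -> bool:
    """True if this key (or a colliding one) has been seen before."""
    return hashlib.sha256(pubkey).digest() in seen

old_key = secrets.token_bytes(270)     # roughly the size of a DER RSA key
note_key(old_key)
assert reject_csr_key(old_key)                       # reuse is caught
assert not reject_csr_key(secrets.token_bytes(270))  # fresh key passes
```

At 32 bytes per digest, a billion keys fit in about 32 GB -- well short of the storage you mention.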

But... why shouldn't the person generating the keypair know when the
keypair was first generated?  Why shouldn't the person looking back on
the keypair be able to know that it was generated a week and a half
before he found that his entire computer was compromised by spyware?
Why shouldn't this be an attribute of the private key?

-Kyle H
______________________________________________________________________
OpenSSL Project                                 http://www.openssl.org
User Support Mailing List                    openssl-users@openssl.org
Automated List Manager                           [EMAIL PROTECTED]