> As primary keys go, I'm certain that whole-cert hashes would come in
> quite handy. 

It's a mindset thing.   A key should lead you to more information.
Give me your name, I'll tell you your phone number and home address.
Read off the tag on the back of the car and I'll tell you the make
and model of the car it *should* be attached to and the name and
address of the registered owner.

A whole-cert hash, in contrast, is an attribute.  It may be unique,
but it tells me nothing about the rest of the data.  It's like indexing 
books by the number of times they've been checked out.

> As for lookups, I don't know in
> how many cases, apart from OCSPv2 apparently, a lookup on
> whole-cert-hash would actually be called for. 

Once again, the issue is *not* lookups.  EVERY plug-in will be required
to support EVERY lookup method defined at the time it's written.  That's
trivial to do with a boilerplate routine and some macros.

The issue is the choice of the key to be used by storage mechanisms
that *require* a unique primary key: Berkeley DB, GDBM, hashed tables,
etc.

To date we've identified two candidates: issuer||serial (a legitimate
search key) and the whole-cert hash (an attribute).  I'm about to
propose a third.
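To make the issuer||serial idea concrete, here's a minimal sketch (Python, with made-up field values; a dict stands in for a Berkeley DB/GDBM handle) of building that composite value as the unique primary key:

```python
# Sketch: issuer||serial as the unique primary key for a hash-style
# backend.  The dict stands in for Berkeley DB / GDBM; the DN and
# serial are invented for illustration.

def primary_key(issuer_dn: str, serial: int) -> bytes:
    # Length-prefix the issuer so "CN=A" + serial 11 can't produce
    # the same bytes as "CN=A1" + serial 1.
    issuer = issuer_dn.encode()
    return len(issuer).to_bytes(2, "big") + issuer + serial.to_bytes(8, "big")

store = {}  # stand-in for a dbm-style database handle

store[primary_key("CN=Example CA,O=Example", 4821)] = b"<DER cert blob>"

# Exact-match retrieval by the same key works; that's all these
# backends give you -- no range scans, no secondary attributes.
blob = store[primary_key("CN=Example CA,O=Example", 4821)]
```

The length prefix is the important part: naive concatenation of two variable-length fields can collide even when the (issuer, serial) pairs differ.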

> I guess these could then be
> used in supporting tables in order to look up foreign keys leading into
> the 'master table'.

Secondary indexes, not foreign keys.  A foreign key is one (or more)
attributes of a table that form the primary key of another table.
E.g., if I have a table containing home addresses, the "postal code"
attribute may be a foreign key into a table "pcodes" that contains the
correct city and state, the timezone, etc.
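The postal-code example above, sketched with sqlite3 (table and column names are my own invention):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")

# "pcodes" is the referenced table; its primary key...
con.execute("""CREATE TABLE pcodes (
                   postal_code TEXT PRIMARY KEY,
                   city TEXT, state TEXT, timezone TEXT)""")

# ...is a foreign key in the "addresses" table.
con.execute("""CREATE TABLE addresses (
                   name TEXT PRIMARY KEY,
                   street TEXT,
                   postal_code TEXT REFERENCES pcodes(postal_code))""")

con.execute("INSERT INTO pcodes VALUES ('12345', 'Schenectady', 'NY', 'EST')")
con.execute("INSERT INTO addresses VALUES ('Pat', '1 Main St', '12345')")

# Follow the foreign key to recover the city for Pat's address.
city, = con.execute("""SELECT p.city
                       FROM addresses a
                       JOIN pcodes p ON a.postal_code = p.postal_code
                       WHERE a.name = 'Pat'""").fetchone()
```

The point is that the foreign key lives *in* the addresses row; it leads you to more information, which is exactly the "key" mindset described earlier.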

Secondary indexes (or indices) are a neat idea but they're a real pain
in the butt to get right, especially once multithreading or concurrent 
access from multiple processes enters the picture.  If somebody has enough 
data that they really need a secondary index, they should probably make
the jump to a RDBMS backend.
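To illustrate why hand-rolled secondary indexes are a pain: every write to the primary table must also update the index, and the two updates have to happen atomically or lookups go stale.  A toy sketch (plain dicts, invented names; a real backend would also need locking):

```python
# Primary table and one secondary index, maintained by hand.
primary = {}     # primary key -> (subject, cert blob)
by_subject = {}  # secondary index: subject DN -> set of primary keys

def insert(pk, subject, blob):
    primary[pk] = (subject, blob)
    by_subject.setdefault(subject, set()).add(pk)

def delete(pk):
    subject, _ = primary.pop(pk)
    # Forget this line (or crash between the two updates) and the
    # index silently points at a record that no longer exists.
    by_subject[subject].discard(pk)

insert(b"k1", "CN=Alice", b"cert1")
insert(b"k2", "CN=Alice", b"cert2")
delete(b"k1")
```

With two threads or two processes doing this concurrently, the window between the two updates is exactly where the corruption happens, which is why an RDBMS (which maintains its indexes transactionally) is the saner answer at scale.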

> >   in order to avoid operational problems (e.g., with revocation),
> >   either (issuer,serial) can be treated as a unique identifier,
> >   or (issuer,serial,X) will be sufficient to uniquely identify each
> >   cert.
> > 
> > The question is "what is X"?
> 
> The issuer's key identifier? Authority key identifiers are MUSTs in both
> certificates and CRLs in accordance with the PKIX profile.

Are we prepared to REQUIRE every cert comply with PKIX recommendations, 
without exception?  And if we are, will this be adequate?
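If X turned out to be the authority key identifier, the composite key might be built like this (a sketch only; field values are invented, and it's unique only if every cert really does carry an AKID as PKIX requires):

```python
def composite_key(issuer_dn: str, serial: int, akid: bytes) -> bytes:
    # (issuer, serial, authority-key-identifier) as one flat key.
    # Each part is length-prefixed so variable-length fields can't
    # run together and collide.
    parts = [issuer_dn.encode(), serial.to_bytes(8, "big"), akid]
    out = b""
    for p in parts:
        out += len(p).to_bytes(2, "big") + p
    return out

k1 = composite_key("CN=Example CA", 7, bytes.fromhex("deadbeef"))
k2 = composite_key("CN=Example CA", 7, bytes.fromhex("cafef00d"))
```

Note the failure mode: a cert *without* an AKID either can't be stored or needs a sentinel value, which is precisely the "are we prepared to REQUIRE PKIX compliance" question.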

> I think this is very much up to the individual implementation. If you
> are working in a banking application you might conceivably be required
> by law to keep every issued certificate and CRL available for 10-20
> years in order to be able to go back and prove/disprove the validity of
> old transactions.

I wasn't clear - I was saying that we could recommend (not require) 
that the backend refuse requests to delete a revoked cert prior to its 
expiration, not that it should delete them when they expire.  Even if
unrevoked certs can be deleted at any time.

The rationale is to avoid the race condition described earlier.

> And yes, while they're quite handy for protecting and storing an
> end-entity's keys and certificates, a smart card certainly wouldn't be
> the ideal candidate for a PKI repository. I definitely think an RDBMS
> would be the way to go.

But a plug-in that transparently updated a smart card would be extremely
handy. :-)  That's what makes the design so hard - it needs to be able
to handle everything from 8k smart cards holding a single private key and
cert to RDBMS databases with 50,000+ entries.
______________________________________________________________________
OpenSSL Project                                 http://www.openssl.org
Development Mailing List                       [EMAIL PROTECTED]
Automated List Manager                           [EMAIL PROTECTED]
