On 19/05/11 16:09, Erwann ABALEA wrote:
Hello Tim,


Hi Erwann,

I presume there is a slight possibility of a serial number clash
with that? Not that it's a problem, but it would be wise to check
index.txt to see if the number has been used before?

Really, no. A counter is encrypted, and the result of the encryption
is the serial number.

If you imagine a collision in this serial number generation scheme, it
would mean that a single ciphertext (serial number) decrypts to two
different plaintexts (counter values). That's impossible if the secret
key is constant.

The problem with this scheme is that it doesn't deal well with
parallel certificate signatures: you have one piece of shared state
that must be incremented atomically. But for a "Junk CA" (that's what
I call the set of scripts I use), that's not a problem.

OK - I see now - thanks.
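
To check my understanding, here is a rough sketch of that scheme in Python (purely my own illustration, not your scripts: the key, the counter file, the locking and the function names are all stand-ins, and it leans on the "cryptography" package for AES):

  # Sketch only: serial number = AES(fixed key, counter).
  # KEY, COUNTER_FILE and the function names are my own inventions.
  import fcntl
  from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

  KEY = bytes(16)                  # stand-in for the constant secret key
  COUNTER_FILE = "serial.counter"  # the one piece of shared state

  def next_counter(path=COUNTER_FILE):
      """Bump the shared counter under an exclusive lock (parallel-safe)."""
      with open(path, "a+") as f:
          fcntl.flock(f, fcntl.LOCK_EX)
          f.seek(0)
          value = int(f.read().strip() or "0") + 1
          f.seek(0)
          f.truncate()
          f.write(str(value))
      return value

  def serial_from_counter(counter):
      """Encrypt the counter with the fixed key. AES under one key is a
      permutation of 16-byte blocks, so two distinct counters can never
      yield the same serial number."""
      block = counter.to_bytes(16, "big")
      enc = Cipher(algorithms.AES(KEY), modes.ECB()).encryptor()
      return int.from_bytes(enc.update(block) + enc.finalize(), "big")

  print(hex(serial_from_counter(next_counter())))

As you say, a collision would require one ciphertext to decrypt to two different counters, which can't happen with a fixed key; the 16-byte result also stays within the 20-octet limit RFC 5280 places on serial numbers.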

You could have used another scheme:
  - generate 16 random bytes
  - place them in the serialNumber
With such a scheme, yes, a collision could occur, and you should check
the index.txt file first.

OK - that has straightened out my thinking :)
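
For completeness, the random-serial variant would be something like this (again just a sketch of mine; it assumes the usual tab-separated index.txt layout, where the serial is the fourth field):

  # Sketch only: 16 random bytes as the serial, with a collision check
  # against the serials the "ca" tool has already recorded in index.txt.
  import secrets

  def existing_serials(index_path="index.txt"):
      """Collect the serial numbers already present in the ca index file."""
      serials = set()
      with open(index_path) as f:
          for line in f:
              fields = line.rstrip("\n").split("\t")
              if len(fields) >= 4:
                  serials.add(fields[3].upper())
      return serials

  def random_serial(used):
      """Draw 16 random bytes until the hex value is not in use yet."""
      while True:
          candidate = secrets.token_bytes(16).hex().upper()
          if candidate not in used:
              return candidate

  print(random_serial(existing_serials()))

The resulting hex string can then be handed to something like "openssl x509 -set_serial 0x<hex>", with the check above doing the duplicate detection that index.txt would otherwise give you.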

Beware of the "ca" command line tool if you have a large population. I
wanted to use it for massive certificate generation (in order to fill
an LDAP directory), and the time taken to generate a certificate grows
with the number of certificates already issued (it must load the index
file into memory, check that the given name either doesn't exist or
exists only with revoked or expired certificates, extend the list, and
finally write it back to disk).

Interesting...

The infrastructure I envisage is one where a database holds a record
of all SSL-enabled services and cert/key file locations plus CN
information - which makes it practical to run and distribute by
itself, and which in turn makes Adam's idea viable.

Then this deployment server will be your SPOF :)


Only in a limited sense :) I have worked with such a system before (in fact I am merely reimplementing one - except for the SSL bit, which would be new).

What actually happens is: kerberos is slaved to a couple of other machines in the usual way, and the master config datastore is backed up (for a high-volume environment it is actually hosted on another server anyway).

The ability to do root ssh is lost if the master gold server dies, but that doesn't stop anything from working (for a time anyway, until updates are needed on the other hosts), and the config system can be restored (or is still working on another host) and used to ship out a fresh .ssh/authorized_keys file to all the hosts.

Only the SSL datastore is vulnerable, as you probably don't want that backed up in the conventional manner (unless you separate out the master passphrase cache and secure all the probably passphrase-less keys). But given that you need two or more kerberos (or possibly LDAP) auth servers, all of which *must* be secure [1], you could have one replicate (back up) its SSL store to the other.

[1] I take the view that if you root a kerberos server, you have the ability to alter any principal, so you can own the rest of the systems - the same is true of a config master (whether it be puppet or a home-grown system like mine). So giving it the overt ability to do other things is not an additional risk.

Cheers,

Tim
