On Nov 8, 2010, at 5:23 PM, Paul Wouters wrote:

> On Mon, 8 Nov 2010, Rickard Bellgrim wrote:
> 
>> 3.1 Operational Motivations (5th paragraph)
>> Is it true that you get “operational flexibility and higher computational 
>> performance” by using a file based ZSK and smartcard KSK (offline in a safe) 
>> than a single key stored on an HSM? The HSM can give you the option to have 
>> the keys online, thus no need to take out the smartcard from the safe. And 
>> the HSM can be designed to give you speed. I know that my HSM is faster than 
>> OpenSSL on my machine.
> 
> Which HSM and which openssl version on what type of computer? A decent 
> quadcore outperforms
> an HSM in my experience.

Hi Paul,

I think, in general, it's difficult to compare specs for single-purpose 
dedicated hardware against general-purpose hardware performing a single task. 
You'd need to be able to weigh all the variables. Apples and oranges, really. 

An HSM can be designed to give you speed: http://bit.ly/9cPEDS

It might not be faster than a cluster of general-purpose hardware, of course.

>> 3.4.2 Key Sizes (1st paragraph)
>> “can safely use 1024-bit keys for at least the next ten years”. NIST SP 
>> 800-131 says that 1024 – 2048 bit is acceptable to use in 2010. It is 
>> deprecated between 2011 and 2013. From 2011 you should use keys larger than 
>> 2048 bits.
>> http://csrc.nist.gov/groups/ST/toolkit/documents/draftSP800-131_June_11_2010.pdf
> 
> key size depends on key usage. The NIST SP (if I remember correctly) lists 1 
> validity time for both volatile signatures
> and long term encryption.

It doesn't.

> In fact, talking to cryptanalysts, the 10 years could even be extended, but 
> it was used as a
> very conservative number.

What I understand from that NIST recommendation: 1024 bits is safe until 2010. 
RSA 1024-bit signature generation is deprecated from 2011 through 2013. 
Deprecated means that the use of the algorithm and key length is allowed, but 
the user must accept some risk. RSA 1024-bit signature validation is 'legacy 
use' after 2010. Legacy use means that the algorithm or key length may be used 
to process already protected information. I have no idea if this is liberal or 
conservative. I found it hard to get anyone to underwrite its conservativeness; 
most folks say 'yep, sounds about right to me'. Better safe than sorry.
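To make my reading of that table concrete, here's a tiny sketch (the year 
boundaries and labels are my paraphrase of the draft, not normative text):

```python
# Status of RSA-1024 per my reading of draft NIST SP 800-131.
# Year boundaries and labels are paraphrased, not normative.

def rsa1024_generation_status(year: int) -> str:
    """Status of generating new RSA-1024 signatures in a given year."""
    if year <= 2010:
        return "acceptable"
    if year <= 2013:
        return "deprecated"  # allowed, but the user must accept some risk
    return "disallowed"

def rsa1024_validation_status(year: int) -> str:
    """Status of validating existing RSA-1024 signatures."""
    if year <= 2010:
        return "acceptable"
    return "legacy use"  # may process already-protected information
```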

>> 4.2.1 KSK Compromise (2nd paragraph)
>> A compromised KSK used by an attacker can also sign data in the zone other 
>> than the key set. An attacker does not need to follow the definitions of KSK 
>> vs ZSK.
> 
> I wonder how different implementations handle this case......

I have tested chains of trust (with BIND9 and Unbound) in the past and noticed 
that their validity did not depend on the SEP bit being clear or set. 
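For what it's worth, that matches how DS-to-DNSKEY matching is defined: the 
digest covers the owner name and the full DNSKEY RDATA, and nothing in the 
comparison tests the SEP bit as such. A minimal sketch of the RFC 4034/4509 
digest (SHA-256 only; the key bytes in the usage below are dummies):

```python
import hashlib
import struct

def wire_name(name: str) -> bytes:
    # Canonical (lowercase) wire format of a domain name.
    out = b""
    for label in name.rstrip(".").lower().split("."):
        out += bytes([len(label)]) + label.encode()
    return out + b"\x00"

def ds_digest(owner: str, flags: int, algorithm: int, pubkey: bytes) -> bytes:
    # RFC 4509: digest = SHA-256(owner name | DNSKEY RDATA);
    # DNSKEY RDATA = flags (2 octets) | protocol (3) | algorithm | public key.
    rdata = struct.pack("!HBB", flags, 3, algorithm) + pubkey
    return hashlib.sha256(wire_name(owner) + rdata).digest()

def matches(ds: bytes, owner: str, flags: int, algorithm: int,
            pubkey: bytes) -> bool:
    # The SEP bit (flags & 0x0001) is hashed as ordinary data; nothing
    # here tests whether it is set. A DS covering a key with SEP clear
    # validates just as well as one covering a key with SEP set.
    return ds == ds_digest(owner, flags, algorithm, pubkey)
```

A DS made over a key with flags 256 (SEP clear) matches that key exactly as 
well as a DS made over the same key material with flags 257.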

>> 4.3.1 Initial Key Exchange
>> I also think that it should be possible to send in a DS RR for which there 
>> is no DNSKEY in the child zone. I know that there are registries that 
>> disallow this and others allow this. The reason is to not limit any (future) 
>> rollover mechanism. What we could say is that there should be at least one 
>> of the DS RRs pointing to a DNSKEY.
> 
>> 4.3.3 Security Lameness (2nd paragraph)
>> No, the parent should allow DS RR pointing to non-published DNSKEY. We 
>> should not limit any (future) rollover mechanisms. There should of course be 
>> at least one DS pointing to one DNSKEY.
> 
> If you already know the DNSKEY (because you have the DS record), what would 
> be the use of not already
> publishing that DNSKEY?

DNSKEY RRSet size might be an issue. 

>> 5.3.3 NSEC3 Salt (3rd paragraph)
>> You recommend doing re-salting at the same time as ZSK rollover. But it is 
>> not required to drop all signatures at once during a ZSK rollover. This can 
>> be a smooth transition determined by the refresh period.
> 
> I thought you would always have the new ZSK sign all the zone data, and just 
> keep the old ZSK for cached
> sigs? So in that case, yes you always drop all signatures during a ZSK 
> rollover (on the signer, TTL's and
> SigMax will cause a spread out expiry of the old sigs on caching validating 
> resolvers)

When I transition from the old key to the new key, it would be nice to spread 
the load (generating new signatures over the entire zone with the new key) 
while keeping one signature per RRset (for response size). That means that for 
some serial numbers, some NSEC3 records have signatures made with the old key 
while others have signatures made with the new key. This is especially handy 
for very large zones.
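A toy sketch of what I mean, spreading the re-signing over several serial 
bumps (all names and the data layout here are made up for illustration):

```python
# Toy sketch: spread a ZSK rollover's re-signing over several serial bumps,
# keeping exactly one signature per RRset throughout. Names are made up.

def resign_increment(rrsets, old_key, new_key, fraction):
    """Re-sign the soonest-expiring `fraction` of RRsets still on old_key.

    Returns the number of RRsets still carrying the old key's signature.
    """
    pending = [r for r in rrsets if r["signed_by"] == old_key]
    batch = max(1, int(len(rrsets) * fraction))
    for r in sorted(pending, key=lambda r: r["expiry"])[:batch]:
        r["signed_by"] = new_key  # still one signature per RRset, just re-keyed
    return sum(r["signed_by"] == old_key for r in rrsets)
```

Each call is one serial bump; after enough bumps everything carries the new 
key, and in between the zone legitimately mixes old- and new-key signatures.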

I think it's best to see re-salting as an individual event, and not tie it to 
some other event. 

>> Finally: You are missing text about standby keys.
>> Used to speed up the rollover in case of an emergency. But also as part of 
>> your disaster recovery, when you have lost access to your keys and the 
>> backups cannot be restored. This is how you do it:
>> 1. Generate standby KSK and ZSK. Store them safely.
>> 2. Prepublish standby ZSK in zone.
>> 3. Prepublish DS of standby KSK in parent zone.
>> 4. You have a disaster.
>> 5. Create a new datacenter and import the standby keys.
>> 6. Postpublish old ZSK in zone (fetch it from a secondary somewhere).
>> 7. Sign and publish zone.
>> 8. After "some" TTL remove old ZSK and old DS
>> 9. Continue with normal operation.
> 
> I find this very odd. You publish "bad data" just because you might have a 
> bad backup? It seems like
> outsourcing your responsibilities, at the cost of everyone's resolvers 
> needing to handle a bogus DS
> record.

Validation doesn't really work that way, right? You have a chain of DSs and 
DNSKEYs you want to validate, and a bogus DS is not in that chain. Maybe 
validators validate differently these days. 

Also, 'outsourcing your responsibilities' is a bit harsh. This is just another 
way to limit damage; in that sense, they are taking responsibility.

Note that I'm not a big fan of standby keys. I think a little bit of redundant 
setup goes a long way, and has the same effect.

If you keep the DNSKEY TTL down a bit, the time this key needs to be 
pre-published goes down with it. You can then consider not pre-publishing the 
standby key at all, and we're back to the bogus DS argument.
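Back-of-the-envelope, with purely illustrative numbers:

```python
# Illustrative only: how long an emergency key swap must wait on caches,
# with and without a pre-published standby key.

def swap_delay(dnskey_ttl: int, prepublished: bool) -> int:
    # A pre-published key is already in caches, so no extra wait;
    # otherwise caches may hold the old DNSKEY RRset for up to its TTL.
    return 0 if prepublished else dnskey_ttl
```

With a one-hour DNSKEY TTL, the whole gain from pre-publishing shrinks to an 
hour, which is why I'd consider skipping it.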

> And why would the disaster not destroy the standby keys?

Because you keep them 'elsewhere'.

Roy
_______________________________________________
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop
