On Tue, May 12, 2015 at 8:28 PM, Scott Kitterman <ietf-d...@kitterman.com> wrote:
> > Is it appropriate to change the protocol document for this? Isn't it
> > really more of a BCP?
>
> I think when key size got put in the protocol, then it's a protocol update
> to change it.

Is it part of the protocol, or is it part of the prose around the protocol? The DKIM algorithms don't break if you use weak keys any more than they break if you put false information in a header field.

More generally, I don't believe DKIM itself cares about the size of the key. All of our advice absolutely does, and rightly so. Do we have any other crypto-related protocols where the protocol itself legislates the appropriate key parameters for encoding, decoding, signing, or verifying? Section 6 of RFC 3207 (STARTTLS), for example, explicitly says it's a local matter and out of scope. I scanned RFC 4033-4035 (DNSSEC) and didn't see any restrictions or advice about key size selection at all. I always go cross-eyed when I try to read the TLS RFCs, so I'll stop there for now. ;-)

> > The size of the key doesn't affect interoperability, but rather the
> > receiver's choice to accept the signature as valid when it's based on a
> > weak key.
>
> To me that's equivalent to saying choice of crypto algorithm doesn't affect
> interoperability since it only affects the receiver's choice to accept the
> signature as valid.

There's also nothing wrong with a receiver deciding it doesn't like signatures that use relaxed canonicalization, use SHA-1, or decline to sign the Subject header field. The algorithm itself worked fine -- that's interoperability -- but the receiver doesn't like one of the parameters the signer used and thereby makes a local policy decision.

> You have to set a floor below which it's not reasonable to accept
> signatures as valid. Since receivers have no way to vet senders' key
> rotation policies, taking an out like RFC 6376 and its predecessors do and
> saying that keys smaller than 1024 bits are OK for keys that aren't long
> lived is not tenable. That, and since DKIM was first deployed, at least for
> 512-bit keys, the "not long lived" period required to meet even the modest
> security goals of DKIM is substantially shorter than the amount of time
> typically needed to ensure that mail deliveries are completed (some
> fraction of a day at longest).
>
> Key lengths less than 1024 need to be killed dead.

I don't argue with any of that, except to say again that I'm not convinced DKIM, the protocol, has to suddenly break for small keys. I absolutely agree with a BCP statement of some kind, and I also agree in retrospect that the not-long-lived key advice in RFC 6376 is probably not helpful. (You could in theory observe the timestamp of when you first saw a key and then watch how long it gets used, but that puts an unreasonable burden on receivers.)

Do we also want to issue a BCP more generally that tries to compel all implementations of TLS, or anything doing signatures, to flatly decline to operate if someone tries to use a sub-1024-bit key size?

> BTW (for reference), I'm prompted to do the work to make this change by a
> recent change in opendkim [1] that removed the ability to mark messages
> with small keys as DKIM fail.

The change I think you're talking about (you didn't include a reference URL; I think it's https://sourceforge.net/p/opendkim/bugs/221/) appears to agree with what I'm saying above. When talking about unacceptably small keys, the "unacceptable" decision is made not by the protocol but by the receiver. DKIM didn't fail, so it shouldn't be treated as a DKIM failure. Accordingly, OpenDKIM now reports those as failures for policy reasons rather than failures for protocol reasons.

-MSK
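If it helps to make the distinction concrete, here's a toy sketch of what I mean by separating protocol failure from local policy rejection. The function, the three result labels, and the 1024-bit default floor are my own illustration of the idea, not OpenDKIM's actual interface or anything mandated by RFC 6376:

```python
def classify_result(signature_valid: bool, key_bits: int,
                    min_key_bits: int = 1024) -> str:
    """Classify a DKIM verification outcome.

    'fail'   -- the signature did not verify: a protocol-level failure.
    'policy' -- the signature verified cryptographically, but the key is
                below the receiver's local floor: a policy decision, not
                a DKIM failure.
    'pass'   -- the signature verified and the key meets the local floor.
    """
    if not signature_valid:
        return "fail"
    if key_bits < min_key_bits:
        return "policy"
    return "pass"


# A 512-bit key whose signature verifies is a policy rejection, not a
# protocol failure:
print(classify_result(True, 512))    # policy
# A 2048-bit key whose signature verifies passes outright:
print(classify_result(True, 2048))   # pass
```

The point is just that the floor lives in the receiver's configuration (`min_key_bits` here), not in the verification algorithm itself.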
_______________________________________________
NOTE WELL: This list operates according to http://mipassoc.org/dkim/ietf-list-rules.html