On 05/13/2015 06:35 AM, Murray S. Kucherawy wrote:

On Tue, May 12, 2015 at 8:31 AM, Martijn Grooten <martijn.groo...@virusbtn.com> wrote:


    You are right to point out that the RFC says that "[t]he security
    goals of this specification are modest", which indeed they are,
    but I think 100 USD is well within the means of the kind of
    adversary DKIM is trying to protect against. The story of Google's
    512-bit key that Scott already pointed to[2] gives at least some
    anecdotal evidence about why this matters in practice.


Is it appropriate to change the protocol document for this? Isn't it really more of a BCP?

I suspect that there are two parallel concerns here, which may warrant different approaches at different levels of organisation.

The first is a concern for the freedom of action of individual practitioners. This includes the need for protocol specifications to cover a range of parameter values, allowing different cost/benefit trade-offs for differently situated practitioners, different trade-offs for the same practitioners at different times (particularly as increasing computational power invalidates shorter keys), and interoperability with older systems. This generally argues for specifications to cover widely used existing elements (signature algorithms, key lengths, ...) - weaker values very much included - with a range wide enough to cater to both the most exposed practitioners at present and the majority in the foreseeable future. In this environment, removing mandatory algorithm and key-length support at the low end makes sense only as opportunistic elimination of no-longer-in-use (not merely no-longer-advisable) options when a major revision is taking place; more frequent BCP updates, however, seem well suited to tracking this.

The second is an ecosystem concern: the minimum stance in documented standards tends to become the default position for a majority of implementations and therefore establishes what criminals have to work with. It would seem extraordinarily unwise for a single protocol working group to make its own choices on this front (think OSGP <http://en.wikipedia.org/wiki/Open_smart_grid_protocol#Problems>). Rather, it makes sense to perform a broader periodic review - including thorough review by expert cryptographers - of cryptographic algorithm and parameter choices for use across all protocols and applications and, when the output of such a review changes, to treat it as a minimum standard for the cryptographic elements of all subsequent specification updates. So far as I can tell the IETF does not do this, but various other standards bodies do, notably NIST. If this group thinks it appropriate to eliminate weaker options for the ecosystem reason, then it would seem appropriate to hew to NIST's recommendations (or those of a comparable body elsewhere), specifically NIST SP 800-57 <http://csrc.nist.gov/publications/nistpubs/800-57/sp800-57_part1_rev3_general.pdf>:

4.2.4.2 RSA
The RSA algorithm, as specified in [ANSX9.31] and [PKCS#1] (version 1.5 and higher) is adopted for the computation of digital signatures in [FIPS186]. [FIPS186] specifies methods for generating RSA key pairs for several key sizes for [ANSX9.31] and [PKCS#1] implementations. Older systems (legacy systems) used smaller key sizes. While it may be appropriate to continue to verify and honor signatures created using these smaller key sizes, new signatures shall not be created using these key sizes.

The methods specified in FIPS186 are for 1024-, 2048- and 3072-bit keys only. Note that a method of key generation is ordinarily considered out of scope for IETF communication protocol specifications, which leaves us with something like the current situation: signers should not use <1024-bit keys, but (a) verifier implementations need to remain capable of verifying them, and (b) the operators of verifiers need to be able to decide whether or not to ignore 512-bit keys. This all feels like BCP territory too.
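
To make that split concrete, here is a minimal sketch of the verifier side of that position (Python; it assumes the "cryptography" package and that the p= tag carries a base64 DER SubjectPublicKeyInfo, which is the common case for k=rsa; the names and thresholds are mine, purely illustrative):

    # Minimal sketch, not normative: extract the modulus size from a DKIM
    # key record's p= tag and keep the signer floor and the verifier knob
    # separate. Assumes Python's "cryptography" package and that p= holds
    # a base64 DER SubjectPublicKeyInfo (the common case for k=rsa).
    import base64
    from cryptography.hazmat.primitives.serialization import load_der_public_key

    MIN_SIGNING_BITS = 1024  # floor for new signatures
    WEAK_KEY_BITS = 1024     # verifiers may flag, not reject, below this

    def dkim_key_bits(p_value: str) -> int:
        """Return the RSA modulus size, in bits, of a p= tag's key."""
        key = load_der_public_key(base64.b64decode(p_value))
        return key.key_size

    def signer_key_ok(bits: int) -> bool:
        """Signer-side check: never sign with a key below the floor."""
        return bits >= MIN_SIGNING_BITS

    def verifier_disposition(p_value: str, ignore_weak: bool) -> str:
        """Operator policy knob: verify, or treat a weak key as no key."""
        if dkim_key_bits(p_value) < WEAK_KEY_BITS and ignore_weak:
            return "ignore"  # operator has opted to skip weak keys
        return "verify"      # remain capable of verifying, per the spec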

8.1.4 Special Considerations for Key Sizes
...
It is recommended that DNS administrators maintain 1024-bit RSA/SHA-1 and/or RSA/SHA-256 ZSK’s until October 1, 2015, or until it is proven that the majority of routers, caches and other network middle boxes can handle packet sizes over 1500 bytes (if before 2015).

This rationale would appear to apply here too (the specific numbers are about DNSSEC ZSKs only; the relevant part is the idea that a transition is predicated on empirical data about widespread adoption). Any proposal to remove the requirement for verifiers to be capable of verifying 512-bit keys would appear to carry an obligation either to show that the number of signers still using 512-bit keys is negligible, or to make the case that the abuse risk is serious enough that changing the spec without meeting that threshold would still yield enough improvement to be justified. (I recall comparable discussions around some element of DMARC, but I can't remember how low a usage threshold had to be reached before a feature could be removed; perhaps others here do.) Someone with access to relevant data would need to undertake and publish the research; discussion about what matters would be required (e.g. perhaps measurement discovers that 512-bit keys are only used by low-risk domains: does this warrant killing the feature for the good of those who are being targeted, or retaining it because it's still in use, with a clear understanding that highly targeted organisations aren't going to use 512-bit keys to begin with?); and it may well be necessary for the measurement and publication to be repeated periodically.
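
For anyone positioned to do that measurement, the mechanics might look roughly like this (Python with dnspython; the list of (selector, domain) pairs would have to come from observed signed mail, dkim_key_bits is from the earlier sketch, and everything here is illustrative rather than a published methodology):

    # Hedged measurement sketch: given (selector, domain) pairs harvested
    # from real signed mail, tally DKIM RSA key sizes to estimate how much
    # 512-bit signing still occurs. Uses dnspython; all names are mine.
    from collections import Counter
    import dns.resolver

    def fetch_p_tag(selector, domain):
        """Fetch the key record and return its p= value, or None."""
        name = f"{selector}._domainkey.{domain}"
        try:
            answer = dns.resolver.resolve(name, "TXT")
        except Exception:
            return None
        record = "".join(s.decode() for rr in answer for s in rr.strings)
        for tag in record.split(";"):
            k, _, v = tag.strip().partition("=")
            if k == "p":
                return v.replace(" ", "")
        return None

    def survey(pairs):
        """Count observed key sizes across the sampled (selector, domain)s."""
        sizes = Counter()
        for selector, domain in pairs:
            p = fetch_p_tag(selector, domain)
            if not p:
                continue
            try:
                sizes[dkim_key_bits(p)] += 1  # from the earlier sketch
            except Exception:
                sizes["unparseable"] += 1
        return sizes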

I'd offer three conclusions:

 * That shifting signers to MUST use >=1024-bit keys is already
   reasonable, whether immediately or at the next major update.
 * That shifting verifiers to MUST NOT verify <1024-bit keys is _*not*_
   reasonable without empirical data about 512-bit key use and
   consensus on thresholds.
 * That issuing/updating a BCP to indicate that verifiers should treat
   512-bit keys as a weak signal (e.g. ignoring them for
   high-abuse-risk domains, providing the means to selectively ignore
   them, etc.; sketched below) is reasonable.
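
On the third point, to illustrate the kind of knob I have in mind (every name and threshold here is hypothetical, not from any spec or BCP):

    # Purely illustrative: one shape an operator-facing "weak signal"
    # policy could take. Default stays spec-compliant; operators of
    # high-abuse-risk domains can opt to treat weak keys as no signature.
    WEAK_KEY_POLICY = {
        "default": "verify",          # still verify 512-bit keys
        "high_abuse_risk": "ignore",  # treat weak-key signatures as absent
    }

    def disposition(key_bits, domain_risk):
        if key_bits >= 1024:
            return "verify"
        return WEAK_KEY_POLICY.get(domain_risk, WEAK_KEY_POLICY["default"])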


- Roland
