On Mon, 10 Jul 2017, Shumon Huque wrote:

We've posted a new draft on algorithm negotiation which we're hoping
to discuss at IETF99 (and on list of course). I've discussed this
topic with several folks at DNS-OARC recently.

    https://tools.ietf.org/html/draft-huque-dnssec-alg-nego-00

I'm not a fan :)

        This means that DNS servers have to send responses with signatures of
        all algorithms that the requested data are signed with, which can
        result in significantly large responses.

This seems to be a pretty small use case, and it is the only one
mentioned in the introduction. It only affects zones that are in the
middle of an algorithm rollover, unless you want to run zones that are
permanently signed with multiple algorithms, but that use case isn't
mentioned.

For any caching nameserver with multiple clients, there will be no gain
for the next 10 years, as it only takes one old client not using this
option to cause the caching nameserver to refetch (adding more bandwidth
usage).
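
For concreteness, here is a rough sketch of what a client supporting
this option might put on the wire, assuming (as I read the draft) the
option body is just a list of one-octet algorithm numbers in preference
order; the option code used here is a placeholder from the
local/experimental range, since no code point is assigned:

    import struct

    # Placeholder option code from the local/experimental range
    # (65001-65534); the draft has no code point assigned yet.
    ALG_NEGO_OPTION_CODE = 65001

    # DNSSEC algorithm numbers from the IANA registry.
    ECDSAP256SHA256 = 13
    RSASHA256 = 8

    def make_alg_nego_option(algorithms):
        # OPTION-CODE (2 octets) | OPTION-LENGTH (2 octets) | OPTION-DATA,
        # with the data assumed to be one octet per algorithm.
        body = bytes(algorithms)
        return struct.pack("!HH", ALG_NEGO_OPTION_CODE, len(body)) + body

    print(make_alg_nego_option([ECDSAP256SHA256, RSASHA256]).hex())
    # -> 'fde900020d08'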

You explain that all of this is vulnerable to a downgrade attack, and
that clients SHOULD (not MUST?) check the algorithms used in the DNSKEY
RRset. If the client can do that, why bother with a new EDNS0 option
to begin with? (I think the answer is "because an exiting DNSKEY might
still be in the DNSKEY RRset but no longer in use", but that would then
also affect the downgrade-attack check.)
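
To make that check concrete: a validating resolver could compare the
algorithms in the (already validated) DNSKEY RRset against the
algorithms actually covering the RRSIGs in the response, and treat a
missing "strongest" algorithm as suspicious. A minimal sketch, with a
made-up strength ranking, since the IANA algorithm numbers are not
ordered by strength:

    # Hypothetical policy table; RFC 4034 algorithm numbers carry no
    # inherent strength ordering, so this ranking is illustrative only.
    STRENGTH = {5: 1, 8: 2, 10: 3, 13: 4, 14: 5, 15: 6}

    def possible_downgrade(dnskey_algs, rrsig_algs):
        # Suspicious if the strongest algorithm published in the DNSKEY
        # RRset does not cover the answer with any signature.
        strongest = max(dnskey_algs, key=lambda a: STRENGTH.get(a, 0))
        return strongest not in rrsig_algs

    print(possible_downgrade({8, 13}, {8}))      # True  (alg 13 stripped?)
    print(possible_downgrade({8, 13}, {8, 13}))  # False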

Since you need to survive the DNSKEY RRset size anyway, most of the
introduction explaining how this helps with packet size hardly matters;
you still need to survive it.

I don't think response size during a KSK algorithm roll is an issue that
needs fixing.

Can there be 'pre-published' DNSKEYs that are not used for signing yet,
so would not be available for response signatures?

Very good question. Yes, there certainly can be. If the pre-published
key's algorithm is higher strength than the others, then it could cause
the resolver to mistakenly deduce that an algorithm downgrade attack
might be in progress. I think this argues that we really do need the
new zone apex (active) algorithms list record - which we already were
thinking of proposing - in the last paragraph of Section 7.
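
A sketch of that false positive, extending the earlier downgrade check
with a hypothetical apex list of actively-signing algorithms that would
let the resolver scope the comparison correctly:

    STRENGTH = {5: 1, 8: 2, 10: 3, 13: 4, 14: 5, 15: 6}  # as above

    def possible_downgrade(dnskey_algs, rrsig_algs, active_algs=None):
        # Only consider algorithms that the (hypothetical) zone-apex
        # record says are actively signing, when that record is present.
        candidates = dnskey_algs if active_algs is None else dnskey_algs & active_algs
        strongest = max(candidates, key=lambda a: STRENGTH.get(a, 0))
        return strongest not in rrsig_algs

    # Ed25519 (15) key pre-published but not signing; only alg 8 signs.
    print(possible_downgrade({8, 15}, {8}))                   # True: false alarm
    print(possible_downgrade({8, 15}, {8}, active_algs={8}))  # False: fixed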

One would hope zones are migrated from "strong" to "even stronger"
algorithms, and not from "weak" to "strong enough", so I don't think
algorithm downgrade is ever an issue.

This draft gives me a feeling it is really about something else: keeping
long-term dual-signed zones out there. I don't think that we should
aim for that to become commonplace.

Paul
