On Mon, Jul 10, 2017 at 5:00 PM, Paul Wouters <p...@nohats.ca> wrote:

> On Mon, 10 Jul 2017, Shumon Huque wrote:
>
>> We've posted a new draft on algorithm negotiation which we're hoping
>> to discuss at IETF99 (and on list of course). I've discussed this
>> topic with several folks at DNS-OARC recently.
>>
>>     https://tools.ietf.org/html/draft-huque-dnssec-alg-nego-00
>>
>
> I'm not a fan :)
>
>         This means that DNS servers have to send responses with
>         signatures of all algorithms that the requested data are signed
>         with, which can result in significantly large responses.
>
> This seems to be a pretty small use case. And it is the only one
> mentioned in the introduction. It only affects zones that are in
> algorithm rollover, unless you want to run zones that permanently sign
> using multiple algorithms, but that's not a use case mentioned.
>

Perhaps we didn't explain it clearly enough, so let me give you a concrete
example:

My zone is currently signed with 2048-bit RSASHA256. I want to offer
signatures with Ed448 (or some other new algorithm) also, so that newer
validators can take advantage of it. However, I want to be able to
continue supporting the current population of validators that don't
support Ed448 until enough time has passed that they have all been
upgraded - this could be some number of years.
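
As an aside, it's easy to see what algorithms a zone advertises today by
inspecting its apex DNSKEY RRset. A quick sketch with a recent dnspython
("example.com" is just a placeholder zone name):

    import dns.resolver
    import dns.dnssec

    # Fetch the zone's apex DNSKEY RRset and print each key's algorithm.
    answer = dns.resolver.resolve("example.com", "DNSKEY")
    for key in answer:
        # Algorithm numbers: 8 = RSASHA256, 15 = ED25519, 16 = ED448.
        print(key.flags, dns.dnssec.algorithm_to_text(key.algorithm))

A validator doing algorithm negotiation needs exactly this view of the
key set in order to decide which signature algorithms to expect.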

I also don't want to double sign the zone and return multiple signatures in
the responses, because they might be fragmented and cause timeouts and
retransmissions at the client (validator) end. I could truncate those
responses and prompt them to re-query over TCP, but then again I have
caused an unnecessary failed roundtrip and have incurred additional
processing costs associated with TCP, and maybe I haven't scaled up my
authoritative infrastructure sufficiently to deal with that.
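
To put rough numbers on that: an RSA signature is as large as the
modulus, so a 2048-bit RSASHA256 RRSIG carries a 256-byte signature
field, versus 114 bytes for Ed448 and 64 for Ed25519 (each RRSIG also
adds 18 bytes of fixed rdata plus the signer's name). A back-of-envelope
sketch:

    # Raw signature field sizes in bytes.
    SIG_BYTES = {"RSASHA256/2048": 256, "ED25519": 64, "ED448": 114}

    single = SIG_BYTES["RSASHA256/2048"]
    dual = single + SIG_BYTES["ED448"]
    print("RSA only:    %3d bytes of signature per RRset" % single)
    print("RSA + Ed448: %3d bytes of signature per RRset" % dual)

Multiply that by the several signed RRsets a typical response carries
(answer, NSEC/NSEC3, and so on) and double-signing pushes responses
toward fragmentation territory quickly.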

I also don't want to deploy only Ed448 and cause my zone to be instantly
treated as unsigned by the vast majority of resolvers. That's bad most
obviously because it nullifies the security benefit of DNSSEC, but also
because I have application security protocols, like DANE, that
critically depend on DNSSEC authentication and for which this would pose
a grave security risk.

So the goal is not to have them "permanently" signed with multiple
algorithms, but for a defined transition period, which may not be very
short. At that point, the older algorithm would be withdrawn -- so
algorithm rollover, but over an extended period.

> For any caching nameserver with multiple clients, there will be no gain
> for the next 10 years, as it only takes 1 old client not using this
> option to cause the caching nameserver to refetch (adding more bandwidth
> usage).
>
> You explain all this is vulnerable to a downgrade attack. And the
> clients SHOULD (not MUST?) check the algorithms used in the DNSKEY
> RRset.


Yes, that should have been a MUST (I made a note of that already for the
next revision).
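
To make the check concrete, here's a sketch of my reading of it (not the
draft's normative text): after validating the DNSKEY RRset, the resolver
finds the best algorithm it offered that the zone has keys for, and
treats a response lacking a signature of that algorithm as a suspected
downgrade.

    # Algorithm numbers are DNSSEC identifiers: 8 = RSASHA256, 16 = ED448.
    def downgrade_suspected(offered_prefs, dnskey_algs, rrsig_algs):
        """offered_prefs: validator's algorithms, most preferred first.
        dnskey_algs: algorithms in the validated DNSKEY RRset.
        rrsig_algs: algorithms of the RRSIGs in the response."""
        for alg in offered_prefs:
            if alg in dnskey_algs:
                # Best mutually supported algorithm: the server should
                # have signed the answer with it.
                return alg not in rrsig_algs
        return False  # no overlap at all; a different failure mode

    # Validator prefers Ed448 over RSASHA256; the zone has keys for both,
    # but the answer carried only an RSASHA256 signature:
    print(downgrade_suspected([16, 8], {8, 16}, {8}))  # True -> suspicious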


> Since you need to survive the DNSKEY RRset size anyway, most of the
> introduction explaining how this helps survive packet size limits
> hardly matters at all: you still need to survive it.
>

Of course there are initial costs. The goal is longer term: the benefits
will increase as adoption grows, and the initial prevalence of large
responses will decrease over time.


> I don't think response size during a KSK algorithm roll is an issue that
> needs fixing.
>
>>> Can there be 'pre-published' DNSKEYs that are not used for signing
>>> yet, so would not be available for response signatures?
>>>
>>
>> Very good question! Yes, there certainly can be. If the pre-published
>> key's algorithm is higher strength than the others, then it could
>> cause the resolver to mistakenly deduce that an algorithm downgrade
>> attack might be in progress. I think this argues that we really do
>> need the new zone apex (active) algorithms list record - which we were
>> already thinking of proposing - in the last paragraph of Section 7.
>>
>
> One would hope zones are migrated from "strong" to "even stronger"
> algorithms, and not from "weak" to "strong enough", so I don't think
> algorithm downgrade is ever an issue.
>

Really? RSA1024 is still widely deployed, and is frequently why DNSSEC is
the butt of jokes in the larger security community.
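
Note also that the pre-published key problem discussed above falls
straight out of this: with only the DNSKEY RRset to go on, a
pre-published stronger key is indistinguishable from a downgrade attack.
Continuing the earlier sketch (the apex record here is hypothetical -
it's the one we allude to in the last paragraph of Section 7):

    # A pre-published Ed448 key appears in the DNSKEY RRset but signs
    # nothing yet, so no Ed448 RRSIG arrives: a false alarm.
    print(downgrade_suspected([16, 8], {8, 16}, {8}))   # True, wrongly

    # An apex "active algorithms" record would let the validator compare
    # against the algorithms actually signing the zone instead:
    active_algs = {8}  # only RSASHA256 signs during pre-publication
    print(downgrade_suspected([16, 8], active_algs, {8}))  # False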


>
> This draft gives me the feeling it is really about something else:
> keeping long-term dual-signed zones out there. I don't think that we
> should aim for that to become commonplace.


See the first example I gave. We're not trying to surreptitiously sneak in
a use case.

-- 
Shumon Huque
_______________________________________________
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop
