On 29 Feb 2024, at 09:35, Ralf Weber <d...@fl1ger.de> wrote:

> BIND and other servers (e.g. Akamai CacheServe) already stop/fail when 
> having more than two keys with a colliding key tag today. I think what at 
> least I want is that every key a validator has to consider has a unique key 
> tag, so that we don’t need to do any additional cryptographic work for 
> figuring out if the signature is correct.


If I may attempt to summarise:

It is in a signer's interest to make validation easy, since if it is difficult, 
validation might fail. It is therefore worth taking care not to publish 
duplicate key tags at the same owner name: duplicates cause validators extra 
work, and if you're going to publish something in the DNS, your ultimate goal 
is presumably not SERVFAIL. However, accidents will still happen, because that 
is what "accident" means, and some signer configurations are complex. 
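
For anyone following along without RFC 4034 to hand: the key tag is not a hash 
but a 16-bit checksum over the DNSKEY RDATA, which is why distinct keys can 
collide both by accident and on purpose. A rough Python rendering of the 
Appendix B computation, purely for illustration:

    def key_tag(dnskey_rdata: bytes) -> int:
        """Key tag per RFC 4034, Appendix B (algorithms other than 1).

        Alternate octets of the DNSKEY RDATA are summed into the high
        and low byte positions, the carry is folded back in, and the
        low 16 bits are the tag. No cryptographic mixing is involved.
        """
        acc = 0
        for i, octet in enumerate(dnskey_rdata):
            acc += (octet << 8) if i % 2 == 0 else octet
        acc += (acc >> 16) & 0xFFFF
        return acc & 0xFFFF

Sixteen bits with no mixing means collisions are cheap for an attacker to 
manufacture deliberately and entirely possible by accident.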

It is in the interests of a validator not to endure unpredictable spikes of 
work when validating. This is generally true and has many consequences for the 
DNS and for DNSSEC; in this case it means that dealing exhaustively with 
duplicate key tags at the same owner name is a bad idea. 

Mark is saying that validators should feel empowered to down tools and walk 
away the second a duplicate is detected. Zero tolerance. For that to happen he 
feels that a protocol change is needed.

(I have lost track of the nested +1s and I am sure Mark is not alone, but I 
mention him here as a data point. Hi Mark!)

Others are saying that validators might try a little harder than that, since 
there are cases where duplicate key tags can arise for non-malicious reasons, 
and a small amount of work to accommodate them is arguably reasonable. This 
doesn't seem to need a protocol change, because resolvers already give up for 
reasons of local policy when faced with unreasonable workloads, and this would 
be just one more example of that.
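
To make "a small amount of work" concrete, here is a sketch of the kind of 
bounded candidate loop a validator might use. Everything in it is hypothetical 
illustration (the rrsig/dnskeys objects, crypto_verify, and the cap of two are 
mine, not any particular implementation's), though the cap echoes the 
behaviour Ralf describes above:

    MAX_KEYS_PER_TAG = 2  # hypothetical local-policy cap, echoing Ralf's description

    class ValidationError(Exception):
        """Local policy says this workload is unreasonable."""

    def find_signing_key(rrsig, dnskeys, crypto_verify):
        """Return the DNSKEY that verifies rrsig, or None.

        The RRSIG key tag only narrows the candidate set; duplicate
        tags mean more than one candidate, and each candidate costs a
        signature verification. Local policy caps that cost instead of
        matching duplicates exhaustively.
        """
        candidates = [k for k in dnskeys if key_tag(k.rdata) == rrsig.key_tag]
        if len(candidates) > MAX_KEYS_PER_TAG:
            raise ValidationError("too many keys share one tag")  # -> SERVFAIL
        for key in candidates:
            if crypto_verify(rrsig, key):  # the expensive step
                return key
        return None

The zero-tolerance position amounts to setting that cap to one; the more 
liberal position keeps it small but greater than one.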

As I understand it, implementers have already followed the more liberal path to 
mitigating the attack risk of non-accidental key tag duplication. They have not 
adopted a zero-tolerance policy. 

If we assume that avoiding duplicate key tags is good advice for signers, then 
the point of disagreement seems to lie in the degree of tolerance that is 
warranted. Compared to a more liberal posture, zero tolerance:

(a) would not change the prevalence of key tag duplication in the wild (there 
are no protocol police, accidents will still happen, and attackers ignore rules 
and advice)

(b) would make some validators fail earlier, but not all of them, given the 
installed base of today's validators, inertia, and long deployment tails

(c) would not make any difference to the attack risk, since in both cases the 
attack risk is mitigated

(d) would not make existing validator implementations less complex, since 
existing implementations have already adopted more tolerant defences

(e) might make future validator implementations less complex, if we ignore the 
market forces that would discourage returning SERVFAIL when the competition is 
returning answers to the same question.

Perhaps there are points of clear agreement above and it would be helpful to 
just focus on statements that are contentious or are not in that list.

A document that gives advice to implementers of both signers and validators 
seems like a reasonable idea, regardless of the consensus on the degree of 
tolerance. I would be happy to volunteer as an editor of such a document.


Joe