On 3/1/24, 13:45, Philip Homburg <pch-dnso...@u-1.phicoh.com> wrote:

>    If we have a protocol where validators are allowed to discard RR sets with
>    duplicate key tags but we place no restriction on signers, then we have a 
>    protocol with a high chance of failure even if all parties follow the 
>    protocol.

From what I gather, from what I've measured, and from what I've heard from 
others, key tag collisions generally don't happen often in natural operations. 
(They may begin to appear in malicious operations.)

If a validator chooses to discard all signatures for which there are multiple 
DNSKEY resource records matching the key tag in the RRSIG resource record, 
there'll be SERVFAILs across the population that cares about the data involved. 
From past observations, when there's a widespread "I can't get to that," it 
bubbles up to the service provider, which then takes steps to fix it.
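
To make that policy concrete, here's a minimal sketch in Python of the strict 
behavior being discussed (the names and the DNSKEY representation are mine, 
not from any implementation):

    # Sketch of a strict "discard on duplicate key tags" policy.
    # dnskeys is assumed to be a list of (key_tag, dnskey) pairs.
    def candidate_keys(dnskeys, rrsig_key_tag, strict=False):
        matches = [key for tag, key in dnskeys if tag == rrsig_key_tag]
        if strict and len(matches) > 1:
            # The strict validator refuses to sort out the collision,
            # which surfaces to clients as SERVFAIL.
            raise LookupError("duplicate key tags; refusing to validate")
        return matches

The strict branch is what turns a signer-side collision into the user-visible 
breakage described above.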

This kind of feedback loop seems to be the state of the art in the Internet 
today.  I'm not sure we need to take on what would be a large effort to do 
better, at least given the anecdotal evidence to date.

>At the end of the day, following the protocol is voluntary. But if we want
>to be able to reason about the protocol, then we have to assume that all
>interested parties try to follow the protocol.

To use an anecdote - "When crossing a street with a cross-walk signal, you 
should still look for vehicles.  While no vehicle ought to be entering the 
cross-walk against the signal, don't bet your life on it."  Something like that 
was on a police safety poster.

In designing a protocol, you can't assume that the remote end will do anything 
sensible.  You need to focus on what you can control locally.

>Indeed. But the question is, if a validator finds both RRSIGs associated with an
>RR set and we have guarantees about the uniqueness of key tags for public keys,
>can the validator then discard those signatures?

What if both signatures were generated by the same key (the private half of the 
pair) but the data changed between the inception time of one signature and the 
inception time of the other?  One signature may be over a stale copy of the 
data, not from a different key.
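
Which is to say, a tag match narrows the search but identifies neither a 
unique key nor a valid signature; the validator still has to attempt 
verification against every candidate.  A sketch, where verify() stands in for 
the actual cryptographic check (not shown):

    # Sketch only: verify() is a placeholder for the real signature check,
    # and dnskeys is (key_tag, dnskey) pairs as in the earlier sketch.
    def validate_rrset(rrset, rrsigs, dnskeys):
        for rrsig in rrsigs:
            matches = [key for tag, key in dnskeys if tag == rrsig.key_tag]
            for key in matches:
                if verify(rrset, rrsig, key):
                    return True   # one good signature is enough
        return False  # all candidates failed: bad key, stale data, or both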

>The first step to conclude is that for the core DNSSEC protocol, requiring
>unique key tags is doable, even without a lot of effort (other than the usual
>effort of coordinating changes to the protocol).

Back in the day (prefacing because it may no longer be true), BIND would 
generate keys and place them in a default directory.  Each key would be in a 
file whose name included the owner name, the DNSSEC security algorithm number, 
and the key tag.  A key tag collision would be detected if the file name about 
to be used was already present in the directory.  This strategy only worked, 
though, if the user of BIND did not move the keys elsewhere, which is something 
the strategy couldn't control.
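
For what it's worth, the key tag is cheap to compute (RFC 4034, Appendix B), 
so the filename trick is easy to reproduce outside of BIND.  A sketch, 
assuming BIND-style file names of the form K<owner>.+<alg>+<tag>.key:

    import os

    def key_tag(rdata: bytes) -> int:
        # RFC 4034, Appendix B: sum the DNSKEY RDATA as big-endian
        # 16-bit words, fold the carry back in, keep the low 16 bits.
        acc = 0
        for i, b in enumerate(rdata):
            acc += (b << 8) if i % 2 == 0 else b
        acc += (acc >> 16) & 0xFFFF
        return acc & 0xFFFF

    def collides(keydir: str, owner: str, alg: int, rdata: bytes) -> bool:
        # A collision shows up as an already-existing file name.
        name = "K%s.+%03d+%05d.key" % (owner, alg, key_tag(rdata))
        return os.path.exists(os.path.join(keydir, name))

And, as noted, nothing stops the keys from being moved out of that directory 
afterward.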

I'm not sure it's doable, even for "simple" DNSSEC, if you have to account for 
the myriad ways signer processes are implemented.  Perhaps I'm being obstinate 
about the ease with which collisions can be detected because I still maintain 
it just doesn't matter.  Validators still need to protect themselves, and when 
something that matters breaks, it'll light up the social media sphere. 
 (As it has in the past....)

>But the protocol also has to take reasonable measures to limit the amount
>of time a validator has to spend on normal (including random exceptional)
>cases.
>
>For example, without key tags, validators would have to try all keys in
>a typical DNSKEY RR set or face high random failures.

For the most part, zones don't have many keys: usually only one ZSK and one 
KSK unless a roll is happening.  There are some zones with lots of keys, 
but that doesn't seem to be the norm.  I don't know of a study that finds 
the average number of keys in zones weighted by the use of the data in the 
zone.  (Meaning, TLDs would be weighted more highly than an ill-managed hobby 
zone.)

>So the question is, does requiring unique key tags significantly reduce the
>attack surface for a validator?
>
>Are there other benefits (for example in diagnostics tools) for unique key
>tags that outweigh the downside of making multi-signer protocols more
>complex?

Key tag collisions are not desirable, we know that.

My diagnostic tool has crashed the two times it came across them.  In one case 
I could differentiate by assuming the role (KSK vs. ZSK); in the other I shot 
off a (possibly futile) message to the operator, the collision cleared 
quickly, and I was able to smudge my code a bit.  Sooner or later, I know I 
won't be able to distinguish a collision unless I grab more data.

The question isn't about the goodness of collisions.  It's about the best way 
to address the resource consumption problem that they can exacerbate.  Ruling 
them out of bounds doesn't mean they can't come back onto the field and cause 
problems.  Treat the problem itself, resource consumption; that can be done.
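
One way to treat it directly, sketched here with a budget value I picked for 
illustration (it's not from any specification): cap the number of signature 
verifications a validator will spend on any one RRset, no matter how many 
collisions inflate the candidate list.

    # Sketch: bound the work instead of outlawing collisions.
    # The budget is illustrative; verify() is the same placeholder
    # as in the earlier sketch.
    MAX_VERIFY_ATTEMPTS = 4

    def validate_with_budget(rrset, rrsigs, dnskeys,
                             budget=MAX_VERIFY_ATTEMPTS):
        for rrsig in rrsigs:
            for key in (k for tag, k in dnskeys if tag == rrsig.key_tag):
                if budget == 0:
                    return False  # out of budget: fail closed, cheaply
                budget -= 1
                if verify(rrset, rrsig, key):
                    return True
        return False

A collision-heavy zone then costs the validator a bounded amount of work, 
whether the duplicate tags are accidental or malicious.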

And don't design a new protocol with a key tag thing.  Again.
