Hi Daniel,

On 5/18/23 02:26, Daniel Migault wrote:
    On 5/17/23 22:01, Daniel Migault wrote:
     > I agree but as far as I can see, capping the TTL with revalidation
     > will only resync the resolver and the zone more often than would
     > otherwise be expected, but it does not result in the cached RRsets
     > differing from those provided by the zone.

    These two things are not the responsibility of the resolver. It is
    precisely the task of the TTL to set the expectation for expiry and,
    when changes are made, for the time it takes to converge to eventual
    consistency.

I agree, but if a DRO can implement some mechanism that prevents the cache
from remaining incoherent,

No, it is almost a defining feature of the DNS that it only gives eventually 
consistent responses. The DNS is not a real-time database.

I think that is good - especially in a situation where many different zones are
involved. Typically, the DRO remains the one serving responses, and if someone
says "the cache does not reflect what I see on the authoritative side"...

Some may find that unfortunate, but capping TTLs at the lowest value in the 
trust chain harms scalability.

In addition, when resolvers apply various ways of "fixing" the auth's TTLs,
behavior will end up differing across the Internet. That harms interoperability
and makes debugging harder.

this is seen by many MNOs as an issue.

How can it be an issue if the zone operator declares that this is the TTL they 
want?

It's completely legit for an auth to set a TTL of (say) 1 day, at the cost of
slow updates, but with the benefit of more resiliency against short periods of
auth unreachability. Why take that flexibility away from them?

Then, do we have an easy way to implement Viktor's revalidation? A TTL cap is
at least a (costly) way to implement it.


    That said, I still don't understand what it's needed for: it seems that
    it's just not necessary to do revalidation / refetches (= early expiry)
    after the minimum of TTL_RRset and all of the TTL_DNSKEY, TTL_DS in the
    chain; you can achieve the same output reliability by doing it when a
    change in the trust chain is detected and validation would otherwise fail.
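
(For reference, the "early expiry" policy in the quoted paragraph amounts to
the following; a minimal Python sketch with made-up names, not any resolver's
actual code:)

def capped_ttl(rrset_ttl, chain_ttls):
    # Effective TTL = min(TTL_RRset, all TTL_DNSKEY / TTL_DS in the chain).
    return min([rrset_ttl] + list(chain_ttls))

# With the numbers from the example below (MX 24 hrs, DS/DNSKEY 1 hour):
capped_ttl(86400, [3600, 3600])  # -> 3600: the MX would expire after 1 hour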

Then I might be missing how this could be implemented. How do we check that the
validation will fail? Does the resolver check the key tags for every response?

There is no extra check. Imagine the following situation:

Day 1
- DS/DNSKEY TTL 1 hour, records fetched at 10:30am UTC
- MX TTL 24 hrs, record fetched at 10:30am UTC
- cache otherwise empty

Day 2
- DS/DNSKEY not currently in cache (expired 1 hour after the fetch on Day 1)
- MX record still valid! (until 10:30am)

Imagine that some time in the morning of Day 2, the AAAA record is queried,
let's say at 06:00am UTC. The DS/DNSKEY records have long expired from the
cache (on Day 1), so they will also be re-fetched in order to validate the AAAA
response.

If the DS and DNSKEY records turn out to be the same as before, no problem. But
let's say that the re-fetched keys have changed completely since yesterday,
for whatever reason. The resolver will then use the new key information to
validate the AAAA response.

And it is *now* that the resolver could say "oh, actually, I still have this MX 
record cached until 10:30am, but I know that the keys I used for its validation are no 
longer there". The resolver COULD then remove the MX record from the cache.

This is very different from applying the minimum of the various TTLs: when you
do that, the MX record would already have expired on Day 1 at 11:30am (one hour
after the 10:30am fetch, per the DS/DNSKEY TTL).
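
To make the difference concrete, here is a rough sketch of the revalidation
variant (Python; class and method names are hypothetical, not how any
particular resolver implements this):

import time

class Cache:
    def __init__(self):
        self.rrsets = {}          # (name, type) -> (rdata, expiry, keys_used)
        self.current_keys = None  # e.g. a frozenset of DNSKEY key tags

    def store(self, name_type, rdata, ttl):
        # RRsets keep their full TTL; nothing is capped at the chain minimum.
        self.rrsets[name_type] = (rdata, time.time() + ttl, self.current_keys)

    def on_dnskey_refetch(self, new_keys):
        # No per-response key tag check. Only when DS/DNSKEY get re-fetched
        # anyway (at 06:00am on Day 2 above) and turn out to have changed,
        # evict entries that were validated under the now-gone keys.
        if self.current_keys is not None and new_keys != self.current_keys:
            self.rrsets = {k: v for k, v in self.rrsets.items()
                           if v[2] == new_keys}
        self.current_keys = new_keys

If the keys are unchanged at 06:00am, the MX stays cached until 10:30am as the
zone operator intended; only an actual key change costs the cache entry.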

I'm fine with resolvers doing that, but I'm not in favor of a general 
recommendation. It's primarily the zone maintainer's task to get their zone 
content in order.

Best,
Peter

--
https://desec.io/

_______________________________________________
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop
