Re: [DNSOP] Working Group Last Call for draft-ietf-dnsop-dnssec-validator-requirements

2023-05-17 Thread Daniel Migault
Hi Peter,

Thanks for the response. I think I need to understand better how
revalidation is performed.

Yours,
Daniel

On Wed, May 17, 2023 at 4:26 PM Peter Thomassen  wrote:

> Hi Daniel,
>
> On 5/17/23 22:01, Daniel Migault wrote:
> > I agree but as far as I can see, the cap of the TTL with revalidation
> will only resync the resolver and the zone more often than could be
> expected otherwise, but does not result in the cached RRsets differing from
> those provided by the zone.
>
> These two things are not the responsibility of the resolver. It is
> precisely the task of the TTL to set the expectation for expiry and, when
> changes are made, for the convergence time to eventual consistency.
>
I agree, but if a DRO can implement some mechanism that prevents the cache
from remaining incoherent, I think that is good - especially in a situation
where many different zones are involved. Typically, the DRO remains the one
serving responses, and if one says "the cache does not reflect what I see on
the authoritative side", this is seen by many MNOs as an issue.

> That said, it provides the opportunity to the zone admin to eventually
> force that refresh in case of a mistakenly long TTL value.
>
> Triggering sudden refetches before TTL expiry is costly. I think that
> cases where the TTL is misconfigured are better addressed by requesting a
> cache flush manually from the resolver operator.
>
By forcing a refresh I meant respecting the TTL, not an emergency
roll-over. This at least bounds how long the RRset remains cached.

> > Now one reason I think we also came to the cap, was that though we
> know tweaking the TTL is possible, I had in mind that adding a field like
> in our case the 'revalidation TTL' was much harder. Can we assume such a
> mechanism can realistically be implemented?
>
> Such a new field would be a big change.
>
Then, do we have an easy way to implement Viktor's revalidation? The TTL cap
is at least a (costly) way to implement it.
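For concreteness, the cap discussed here could be sketched as follows. This is only an illustration of the Section 9 recommendation, not code from any real resolver; all names are made up:

```python
# Sketch of the Section 9 "TTL cap": a cached RRset's effective TTL is
# bounded by the TTLs of the DS and DNSKEY (KSK/ZSK) RRsets in its chain
# of trust, and by the remaining RRSIG validity.  Illustrative only.

def capped_ttl(rrset_ttl, chain_ttls, sig_validity_remaining):
    """Return the TTL to use for caching under the capping approach.

    rrset_ttl: TTL advertised for the RRset itself (seconds)
    chain_ttls: TTLs of the DS/DNSKEY RRsets in the validation chain
    sig_validity_remaining: seconds until the covering RRSIG expires
    """
    return min(rrset_ttl, *chain_ttls, sig_validity_remaining)

# A 1-day RRset TTL capped by a 1-hour DS TTL: the RRset is refetched
# (and revalidated) every hour, which is the extra cost discussed here.
print(capped_ttl(86400, [3600, 7200], 14 * 86400))  # -> 3600
```

The cost is visible in the example: a low DS TTL drags every dependent RRset down with it, which is the load concern raised against the cap.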

>
> That said, I still don't understand what it's needed for: it seems that
> it's just not necessary to do revalidation / refetches (= early expiry)
> after the minimum of TTL_RRset and all of the TTL_DNSKEY, TTL_DS in the
> chain; you can achieve the same output reliability by doing it when a
> change in the trust chain is detected and validation would otherwise fail.
>
Then I might be missing how this could be implemented. How do we check that
the validation will fail? Does the resolver check the key tags for every
response?
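For instance, the check Peter describes could in principle be sketched as follows. This is illustrative only (names are hypothetical); since key tags can collide, a real implementation would also compare algorithms and re-check the signatures:

```python
# Sketch of revalidation triggered by a detected trust-chain change,
# rather than by capped TTLs: on a DNSKEY refresh, cached data needs
# revalidation only if a key its RRSIGs depend on has disappeared.
# Illustrative only; key tags are not unique, so this is a first filter.

def needs_revalidation(cached_rrsig_key_tags, refreshed_dnskey_tags):
    """True if any cached RRSIG references a key tag that is no longer
    present in the freshly fetched DNSKEY RRset."""
    return not set(cached_rrsig_key_tags) <= set(refreshed_dnskey_tags)

# ZSK 12345 still published after the refresh: nothing to do.
print(needs_revalidation({12345}, {12345, 23456}))  # -> False
# ZSK 12345 abruptly removed: cached RRsets must be revalidated.
print(needs_revalidation({12345}, {23456}))         # -> True
```

The point is that the check runs only when the DNSKEY or DS RRset is refreshed, not on every response.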

>
> The extra field could also be misconfigured, such as having a
> mistakenly long value. What then?
>
Just to clarify, the field I was referring to was the DS/DNSKEY TTL of
the RRset. The field is not sent over the network; it is a field in the
resolver database.


> Thanks,
> Peter
>
> --
> https://desec.io/
>


-- 
Daniel Migault
Ericsson
___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Working Group Last Call for draft-ietf-dnsop-dnssec-validator-requirements

2023-05-17 Thread Daniel Migault
Hi Peter,

Thank you very much for these comments. I will look carefully at how to
incorporate them in our new version.

Yours,
Daniel

On Tue, May 16, 2023 at 1:08 PM Peter Thomassen  wrote:

>
>
> On 5/12/23 23:09, Viktor Dukhovni wrote:
> > Repost of my belated comments in the thread, apologies about not doing
> > it right the first time...
>
> Inspired by Viktor's comments, I spent some time to give the document a
> thorough review.
>
> I'd like to support Viktor's comments on the dependent RRset TTL cap
> described in Section 9.
>
> I feel that the recommendation there is potentially harmful while its
> benefit is unclear. As for the harm, it makes DS updates less flexible
> because it effectively pushes their TTL towards higher values (so that
> caches remain effective). While always-low DS TTLs are problematic, too, it
> doesn't seem like a sound concept for an auth's load to be essentially
> inversely proportional to the DS TTL when it is set to a low value
> temporarily.
>
> As for the benefit, the objective appears to be "preponing" the removal of
> cached RRsets from their scheduled expiry to "as soon as they potentially
> would no longer validate", as indicated by upstream TTLs related to the
> trust chain. However, there's no need to do this based on TTLs alone: if
> one wants to pursue this (optional) objective, it is sufficient to
> revalidate once an *actual* change in the DNSKEY or DS set is detected. But
> even in the face of a sudden change in the trust relationship, it's not
> clear whether ignoring a signed (!) long TTL is beneficial, as that might
> harm stability and resilience during time periods of configuration errors,
> which the cache would otherwise help survive.
>
I agree and will update the document (see my response to Viktor). Thanks
for raising this.

>
> Second, I'm confused about the normative language in this informational
> document. (There are about 20 occurrences of MUST and about 40 of
> SHOULD/RECOMMENDED.)
>
These are recommendations, so a MUST inside a recommendation is in many
cases to be read as "strongly recommended", while a SHOULD may be read as
"it is recommended". I do understand your concern and we will try to find
the best way to solve that.

>
> Third, The document contains several inaccurate or contradictory
> statements. One example is related to  Section 7.1.4, which says:
> *  DNS resolver MUST validate the TA before starting the DNSSEC
>resolver, and a failure of TA validity check MUST prevent the
>DNSSEC resolver to be started.  Validation of the TA includes
>coherence between out-of-band values, values stored in the DNS as
>well as corresponding DS RRsets.
>
> The recommendation says that a resolver may not be started if its trust
> anchors are incoherent with values obtained from the DNS.
>
> My understanding is that the purpose of a trust anchor is to pin a trusted
> key for a name, in a self-contained fashion, without relying on its
> confirmation through some other channel (e.g. corresponding DS records). If
> a trust anchor is required to be coherent with values stored in the DNS,
> then the trust anchor doesn't appear to be needed in the first place.
>

The primary intent of that recommendation is to prevent a resolver
configured with a deprecated TA, or with unsynchronised time, from starting
to resolve. The idea here is that the resolver starts by checking the
coherence of time and TA, and raises an error otherwise. There is more
control at the time the resolver is started, so the net admin is likely to
investigate what happened if the resolver does not start.
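As an illustration, one such coherence check - comparing a configured TA key against a DS-format digest published out of band (RFC 4509 / RFC 7958) - could be sketched as follows. All names are hypothetical, and a real startup check would of course cover more than the digest:

```python
import hashlib

# Sketch of a startup-time trust anchor coherence check: verify that the
# configured TA key matches the DS-format digest published out of band
# (e.g. in IANA's root-anchors.xml).  Per RFC 4509, the SHA-256 DS digest
# is SHA-256(owner name in wire format | DNSKEY RDATA).  Illustrative only.

def ds_sha256_digest(owner_wire: bytes, dnskey_rdata: bytes) -> str:
    return hashlib.sha256(owner_wire + dnskey_rdata).hexdigest().upper()

def ta_matches_published_ds(owner_wire: bytes, dnskey_rdata: bytes,
                            published_digest: str) -> bool:
    # A mismatch here should stop the resolver from starting,
    # per the Section 7.1.4 recommendation.
    return ds_sha256_digest(owner_wire, dnskey_rdata) == published_digest.upper()
```

For the root zone the owner name wire format is a single zero byte, and the published digest would come from root-anchors.xml rather than from the DNS itself.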


>
> It is also left open how the DRO should check "coherence between
> out-of-band values, values stored in the DNS as well as corresponding DS RRsets"
> for their root trust anchors. There are no DS records, so you can check ...
> DNSKEY?

Yes, essentially the DNSKEY RRset is what gets checked.

> Hm. Then, what exactly to check? Also, what about IANA's root-anchors.xml
> file (RFC 7958)? -- The problem here is that "values stored in the DNS" is
> underspecified, although one MUST comply with it.
>
It is underspecified because there is not a single procedure, and we did not
want to assume that only the root key is used. For the KSK of the root zone,
RFC 7958 is expected to be part of the validation procedure. We mention that
we expect other TAs to follow a similar mechanism.

> What's more, Section 7.1.2.1 has:
> Besides deployments in
> networks other than the global public Internet (hence a different
> root), operators may want to configure other trust points.
>
> Now, how would the above recommendation (enforce trust anchor coherence
> with DNS) be enforced in such a setting?
>
The idea here is that if a DRO is willing to set a specific Trust Anchor
(let's say other than the root), there is a need to implement a procedure
to validate that the TAs can be used. One of the intents here is also to
point out the implications of adding another TA than the root TA.


Re: [DNSOP] Working Group Last Call for draft-ietf-dnsop-dnssec-validator-requirements

2023-05-17 Thread Peter Thomassen

Hi Daniel,

On 5/17/23 22:01, Daniel Migault wrote:

I agree but as far as I can see, the cap of the TTL with revalidation will
only resync the resolver and the zone more often than could be expected
otherwise, but does not result in the cached RRsets differing from those
provided by the zone.


These two things are not the responsibility of the resolver. It is precisely 
the task of the TTL to set the expectation for expiry and, when changes are 
made, for the convergence time to eventual consistency.


That said, it provides the opportunity to the zone admin to eventually force 
that refresh in case of a mistakenly long TTL value.


Triggering sudden refetches before TTL expiry is costly. I think that cases 
where the TTL is misconfigured are better addressed by requesting a cache flush 
manually from the resolver operator.


Now one reason I think we also came to the cap, was that though we know 
tweaking the TTL is possible, I had in mind that adding a field like in our 
case the 'revalidation TTL' was much harder. Can we assume such a mechanism can 
realistically be implemented?


Such a new field would be a big change.

That said, I still don't understand what it's needed for: it seems that it's 
just not necessary to do revalidation / refetches (= early expiry) after the 
minimum of TTL_RRset and all of the TTL_DNSKEY, TTL_DS in the chain; you can 
achieve the same output reliability by doing it when a change in the trust 
chain is detected and validation would otherwise fail.

The extra field could also be misconfigured, such as having a mistakenly
long value. What then?

Thanks,
Peter

--
https://desec.io/



Re: [DNSOP] Working Group Last Call for draft-ietf-dnsop-dnssec-validator-requirements

2023-05-17 Thread Daniel Migault
Hi Viktor,

Thanks for the feedback. Please see my comments/responses below.

Yours,
Daniel



On Fri, May 12, 2023 at 5:10 PM Viktor Dukhovni 
wrote:

> On Wed, Oct 19, 2022 at 03:21:27PM -0400, Tim Wicinski wrote:
>
> > This starts a Working Group Last Call for
> > draft-ietf-dnsop-dnssec-validator-requirements
> >
> > Current versions of the draft is available here:
> >
> https://datatracker.ietf.org/doc/draft-ietf-dnsop-dnssec-validator-requirements/
> >
> > The Current Intended Status of this document is: Informational
> >
> > Please review the draft and offer relevant comments.
> > If this does not seem appropriate please speak out.
> > If someone feels the document is *not* ready for publication, please
> speak
> > out with your reasons.
> >
> > This starts a two week Working Group Last Call process, and ends on:  2
> > November 2022
>
> Repost of my belated comments in the thread, apologies about not doing
> it right the first time...
>
> --
> Viktor.
>
> >  Recommendations for DNSSEC Resolvers Operators
> >draft-ietf-dnsop-dnssec-validator-requirements-04
>
> Before I dive into some paragraph-by-paragraph details, and bury the
> lede, my main high-level issue is with sections 9, primarily on
> substance, but also for IMHO notably stilted and fuzzy language.
>
> The most significant issue is that the I-D recommends at the bottom of
> section 9 to cap the TTLs of all dependent RRsets by the TTLs of the DS,
> KSK and ZSK records.
>
>>  RUNTIME:
>>
>>   *  To limit the risks of incoherent data in the cache, it is
>>  RECOMMENDED DRO enforce TTL policies of RRsets based on the TTL
> of
>>  the DS, KSK and ZSK.  RRsets TTL SHOULD NOT exceed the DS, KSK or
>>  ZSK initial TTL value, that TTL SHOULD trigger delegation
>>  revalidation as described in [I-D.ietf-dnsop-ns-revalidation].
>>  TTL SHOULD NOT exceed the signature validity.
>
> This is not necessary for correctly operated authoritative zones, that
> retain "inactive" ZSKs and KSKs in the DNSKEY RRset for a few TTLs after
> the keys become inactive, and likewise drop hashes of inactive KSKs from
> the DS RRset only after new KSKs have been published for a sufficient
> time.
>
I agree but as far as I can see, the cap of the TTL with revalidation will
only resync the resolver and the zone more often than could be expected
otherwise, but does not result in the cached RRsets differing from those
provided by the zone.


> What we'd need instead of TTL capping is more akin to "revalidation"
> where under normal/expected conditions, cached RRsets continue to
> "enjoy" their natural TTL (as tweaked by any resolver limits).  That TTL
> can for many RRsets be higher than e.g. the TTL of the parent zone DS
> RRset.
>
> With "revalidation", if a DS record refreshed after TTL expiration is
> (as is typically the case) identical to its previous value, or in any
> case continues to establish a chain of trust to the cached DNSKEY RRset,
> nothing needs to happen to the caching of the descendent records.
>
> Similarly, DNSKEY RRset refreshes that still include the ZSK originally
> used to validate a cached RRset need not have any effect on the validity
> of the signed data.
>
> The above DS and ZSK continuity conditions are expected standard
> practice from the signer.  So long as these hold, validated records
> should be cached for their advertised TTLs as bounded by the signed
> origin TTLs and resolver cache time limits.
>
> Resolvers *MAY* take action to revalidate cached data should abrupt
> changes take place in the DS RRset (no longer building a trust path to
> the cached DNSKEYs) or abrupt changes in the DNSKEY RRset (no longer
> validating cached RRSIGs), but should not be expected to do so.
>
I see your point. Our goal was that an RRset could not sit in the cache for
ages because a TTL had been wrongly configured. Using the 'cap' method,
such a re-synchronisation is forced, which generates unnecessary additional
requests. With the 'revalidation' method, resynchronisation happens only
when the signature will not match, that is, when the key has been changed -
which is the only way the validation can fail. I do not see keys being
rolled over sufficiently often to address our initial concern on a regular
basis, that is, every TTL_(ZSK, DS). That said, it provides the opportunity
for the zone admin to eventually force that refresh in case of a mistakenly
long TTL value.

I do see the revalidation mechanism as providing assurance that the cached
data remains coherent, and this is probably what we should do - I agree with
you it seems a better compromise. Now, one reason I think we also came to
the cap was that, though we know tweaking the TTL is possible, I had in
mind that adding a field - in our case, the 'revalidation TTL' - was much
harder. Can we assume such a mechanism can realistically be implemented?


> As resolvers gain implementation experience with this revalidation
>