> On 10 Jul 2024, at 9:23 AM, Ben Schwartz <bemasc=40meta....@dmarc.ietf.org> 
> wrote:
> 
> I see several different directions this could go that might be useful.
> 
> 1. "DNS at the 99th percentile"
>  ...

> 2. "DNS Lower Limits"
> 
> ...

> 3. "DNS Intrinsic Limits"
> 
> ...

> 4. "DNS Proof of Work"
> ...



The 99th percentile begs the obvious question: 99% of what?

Some "resolvers" handle queries for tens of millions of users (or more), some 
handle queries for a single user. This kind of threshold measurement runs the 
risk of assuming that all resolvers are "equal" in some sense when in fact they 
are not. I can see what you are trying to get to here Ben, but there is a 
non-trivial set of unanswered measurement questions behind such a proposition.
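
To make that concrete, here is a minimal sketch of my own (the resolver 
capabilities and user counts are invented purely for illustration) showing that 
the limit you derive from "99%" depends entirely on whether you count resolvers 
or the users behind them:

```python
# Invented illustration: the answer to "99% of what?" changes the limit.

def limit_supported_by(capability, weights, fraction):
    """Largest capability value that at least `fraction` of the weighted
    population supports (i.e. has capability >= that value)."""
    total = sum(weights)
    cumulative = 0
    # Walk resolvers from most to least capable, accumulating weight.
    for cap, weight in sorted(zip(capability, weights), reverse=True):
        cumulative += weight
        if cumulative >= fraction * total:
            return cap
    return min(capability)

# Hypothetical maximum UDP response size (bytes) each resolver copes with,
# and the number of users each one serves.
capability = [512, 1232, 1400, 4096]
users      = [1, 20_000_000, 5_000_000, 10_000_000]

# "99% of resolvers": every resolver counts equally, so the single-user
# 512-byte resolver drags the limit down.
print(limit_supported_by(capability, [1] * len(capability), 0.99))  # 512

# "99% of users": dominated by the large resolvers, so the same data
# yields a very different limit.
print(limit_supported_by(capability, users, 0.99))                  # 1232
```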

We've seen in other scenarios (the IPv6 minimum unfragmented packet size, for 
example) that lower limits are more useful than upper bounds that have no 
underlying protocol constraint. Setting a minimum capability level for 
resolvers, and saying that if a particular configuration exceeds that lower 
bound of capability then not all resolvers may cope, seems (to me) a better way 
of defining such concepts.
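
As a rough sketch of what such a lower-limit check might look like (the floor 
values and the check itself are hypothetical, not drawn from any existing 
document; the real numbers would be the subject of exactly the measurement 
questions above):

```python
# Hypothetical minimum-capability floor that every resolver is expected to meet.
RESOLVER_FLOOR = {
    "max_udp_response_bytes": 1232,
    "max_cname_chain": 8,
}

def check_against_floor(zone_profile: dict) -> list[str]:
    """Return warnings for any aspect of a zone's responses that exceeds
    the published floor of resolver capability."""
    warnings = []
    for key, floor in RESOLVER_FLOOR.items():
        value = zone_profile.get(key)
        if value is not None and value > floor:
            warnings.append(
                f"{key}={value} exceeds the {floor} floor; "
                "not all resolvers may cope"
            )
    return warnings

# Example: a zone whose largest response and CNAME chain both exceed the floor.
print(check_against_floor(
    {"max_udp_response_bytes": 1800, "max_cname_chain": 12}
))
```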


Geoff

