On Feb 8, 2017, at 1:02 AM, Mark Andrews <ma...@isc.org> wrote:
> Which assumes aggressive negative caching.   I'm going to make a
> realistic assumption that it will take 10+ years for there to be
> meaningful (>50%) deployment of aggressive negative caching.

First of all, this probably isn't true, since most of the caches we care about 
are run by network operators, who tend to update more frequently for obvious 
reasons.   But the solution you propose, which causes a SERVFAIL, involves a 
specially-prepared resolver.   So if you get to _create_ the problem with a 
specially-prepared resolver, I get to solve it with a differently 
specially-prepared resolver.   If you want queries to .alt not to leak, you 
have to preload your cache with a proof of nonexistence for .alt, and you have 
to keep it up to date.   This is not terribly difficult to arrange.
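
As an illustration (my sketch, not something Ted proposed): in a resolver like Unbound, you can approximate this today with a local-zone declaration that synthesizes NXDOMAIN for .alt at the resolver, so the query never goes to the root.  Note this locally fabricates the negative answer rather than preloading a DNSSEC-validated NSEC proof, so it is an approximation of the idea, not the full mechanism:

```
# unbound.conf fragment: answer NXDOMAIN for anything under .alt
# locally, instead of forwarding the query toward the root.
server:
    local-zone: "alt." always_nxdomain
```

Keeping an actual signed proof of nonexistence fresh would need a bit more machinery, but the operational effect on leakage is the same.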

>> Leaking a query to .alt is harmless.
> 
> Is it?  What reports are we going to see over the next 20 years of
> DITL data on *.alt?

How is that relevant?   Suppose we don't create .alt, and instead people just 
pick random top-level domains for their experiments, as they do now.   Will the 
number of bogus queries to the root be greater, or fewer?   Furthermore, 
queries for .alt are legitimate, in just the same way that queries for .com 
are.   They just have a different answer.

_______________________________________________
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop
