Hi,

> AS112 absolutely proves that unowned anycast can work at scale; that's not
> my concern.  But if my neighbor announces a route to the AS112 addresses,
> and then misconfigures a server, fills it with lies, or logs all my
> queries, the practical effect on me is pretty small: the worst-case
> scenario I can think of offhand is that somebody gleans information about
> my internal network topology that probably wouldn't have been difficult to
> guess anyway.
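
Even granting that benign reading, it is worth knowing whose node you are 
actually hitting. RFC 7534 asks AS112 operators to publish an identifying TXT 
record at hostname.as112.net, so a quick probe shows which instance answers. 
A minimal sketch with dnspython (prisoner.iana.org, 192.175.48.1, is one of 
the well-known AS112 service addresses; the reply content is whatever the 
operator chose to configure):

    import dns.message
    import dns.query

    # Ask the AS112 node that serves this vantage point to identify
    # itself (RFC 7534 suggests a TXT record at hostname.as112.net).
    AS112_SERVER = "192.175.48.1"  # prisoner.iana.org
    query = dns.message.make_query("hostname.as112.net", "TXT")
    response = dns.query.udp(query, AS112_SERVER, timeout=3)
    for rrset in response.answer:
        for rdata in rrset:
            print(rdata)
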
One of my biggest concerns about the current proposal is that it seems to 
take for granted that AS112 works.

I would like to see some definition of “works” and how we arrive at that 
conclusion. In my experience there are AS112 nodes out there that are 
misconfigured in many ways (RIPE Atlas is your friend): returning SERVFAILs, 
serving wrong data, etc. While wrong data is guarded against by the use of 
DNSSEC in this proposal, malfunction is still likely to occur and can be just 
as bad. In the current system this problem is lessened by the many different 
operators.
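
To make “works” concrete, the probe I have in mind is roughly the following 
sketch (assuming dnspython; the service addresses are the well-known ones 
from RFC 7534). A healthy node must answer NXDOMAIN for any name under the 
empty zones; SERVFAILs, timeouts, and positive answers are all failure modes:

    import dns.exception
    import dns.message
    import dns.query
    import dns.rcode

    # Well-known AS112 service addresses (RFC 7534).
    AS112_SERVERS = ["192.175.48.1", "192.175.48.6", "192.175.48.42"]

    # Any name under the empty zones must yield NXDOMAIN.
    probe = dns.message.make_query("1.0.168.192.in-addr.arpa", "PTR")

    for server in AS112_SERVERS:
        try:
            response = dns.query.udp(probe, server, timeout=3)
        except dns.exception.Timeout:
            print(server, "TIMEOUT")
            continue
        rcode = dns.rcode.to_text(response.rcode())
        if rcode == "NXDOMAIN":
            print(server, "OK")
        elif response.answer:
            print(server, "WRONG DATA:", response.answer)
        else:
            print(server, "BROKEN:", rcode)
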
Within a given enterprise or ISP a broken node would have limited impact, and 
one could just point the finger at the operator and not care (although I don’t 
agree with that attitude either). However, route leaks are going to occur, as 
they have in the past (no-export stripping happens by accident), and will 
start to affect users outside of that administrative/routing domain. Assuming 
that local routes are always the routes chosen first is a flawed assumption. 
Routing is integral to this proposal and cannot be disregarded if you want to 
find a workable solution.
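
On that routing point: anyone can watch where the AS112 covering prefix 
(192.175.48.0/24, which RFC 7534 says should be originated by AS 112) is 
actually visible from. A rough sketch against RIPEstat's public data API; I 
am going from memory on the response field names, so treat those as 
assumptions to check against the RIPEstat documentation:

    import json
    import urllib.request

    # The AS112 covering prefix; per RFC 7534 it should be originated
    # by AS 112.
    PREFIX = "192.175.48.0/24"
    URL = ("https://stat.ripe.net/data/prefix-overview/data.json"
           "?resource=" + PREFIX)

    with urllib.request.urlopen(URL, timeout=10) as response:
        data = json.load(response)["data"]

    # NOTE: "announced" and "asns" are my recollection of the RIPEstat
    # response format; verify against the API documentation.
    print("announced:", data.get("announced"))
    for origin in data.get("asns", []):
        print("origin AS:", origin.get("asn"), origin.get("holder"))
    # Any origin other than 112 would point at a leak or hijack of the
    # covering prefix.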

From a TLD operator perspective I think it’s a huge step backwards that we 
would lose our update propagation assurance. Will I have to rely on RRSIG 
expiry as my worst-case bound for a zone update to be fully propagated? With 
the sort of requirements that are put on TLD operators and DNS operators these 
days, that sounds like a completely unacceptable path to me. It’s very 
different from AS112, where there are simple zones that are configured as 
master once and then remain that way.
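
For contrast, this is roughly how propagation can be measured today: fetch 
the SOA serial from every authoritative server and see who lags. A sketch 
with dnspython, shown here against the root server letters (the same check 
works against any zone's NS set):

    import dns.message
    import dns.query
    import dns.resolver

    # Compare the SOA serial across all root server letters to see how
    # far the latest zone update has propagated.
    serials = {}
    for letter in "abcdefghijklm":
        name = letter + ".root-servers.net"
        address = dns.resolver.resolve(name, "A")[0].address
        query = dns.message.make_query(".", "SOA")
        response = dns.query.udp(query, address, timeout=3)
        serials[name] = response.answer[0][0].serial

    for name, serial in sorted(serials.items()):
        print(name, serial)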

I support the expansion of root server deployments. In my opinion this can be 
fully achieved within the existing framework, and ICANN as the operator of 
L-root has shown what can be done in a very short period of time. The 
discussion should be about the standards of operation that each root server 
operator is held to these days. There should be no question that some of the 
current root server operators field a far more substantial deployment than 
others. If it’s politically too sensitive/hard to establish any level of 
quality with the existing root server operators, the addition of other root 
server operators within the current protocol limitations could be used to hold 
them to a certain standard. For the overall system to function well this would 
suffice. This is very similar to what was done as part of ICANN’s new gTLD 
program, where a whole set of requirements was added that didn’t exist before 
(IPv6, DNSSEC, etc.).

In closing, this draft proposes a solution to a problem that hasn’t been 
quantified and has no measure of success. Personally I think that’s bad 
practice.

Regards,
Wolfgang