Hi Steve,

On Jul 29, 2018, at 15:09, Steve Crocker <st...@shinkuro.com> wrote:

> As an individuals, you, I, or anyone else, can do whatever we like, of 
> course.  On the other hand, as system designers we presumably look at the 
> overall system and try to put in place an operational structure that 
> anticipates and meets the likely needs of the users.

Agreed!

> The present and long-standing system provides the recursive resolvers with 
> well-oiled and highly effective solutions to (a) finding the root servers and 
> (b) getting highly reliable and highly responsive answers from them.

This is true. However, there are some disadvantages of the current system that 
are worth thinking about when we consider alternatives, such as the privacy 
implications of having all resolvers call home to a set of well-known servers 
and the aggregate cost of engineering and operations that has gone into making 
the root system as resilient as it is.

I have spoken in the past in opposition to the idea that slaving the root zone 
on resolvers was a desirable end-state; I think it leaves operational gaps that 
we should want to fill. Being able to validate the contents of the zone and 
have software react appropriately (without human operator intervention) when 
zone data is found to be stale or inaccurate obviates many of my concerns.
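As an illustration of the kind of automatic reaction I have in mind, a resolver slaving a zone could compare its local copy's SOA serial against an upstream one using RFC 1982 serial-number arithmetic and fall back to conventional resolution when it finds itself behind. A minimal sketch (function names are mine, not from any particular implementation):

```python
# Sketch: decide whether a locally slaved copy of a zone is stale,
# using RFC 1982 serial-number arithmetic (serials wrap at 2^32).

SERIAL_MAX = 2 ** 32
SERIAL_HALF = 2 ** 31

def serial_lt(a: int, b: int) -> bool:
    """True if serial a precedes serial b under RFC 1982 comparison."""
    a %= SERIAL_MAX
    b %= SERIAL_MAX
    return (a < b and b - a < SERIAL_HALF) or (a > b and a - b > SERIAL_HALF)

def local_copy_is_stale(local_serial: int, upstream_serial: int) -> bool:
    """A resolver could stop answering from its local copy (and fall
    back to ordinary resolution) when the upstream serial is newer."""
    return serial_lt(local_serial, upstream_serial)

print(local_copy_is_stale(2018072900, 2018072901))  # True: upstream is newer
print(local_copy_is_stale(2018072901, 2018072901))  # False: in sync
print(local_copy_is_stale(0xFFFFFFFF, 0))           # True: serial wrapped
```

The same check works across serial wrap-around, which straight integer comparison would get wrong.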

Perhaps there is a future in which the root server system is preserved only to 
serve legacy clients, whilst more modern software has a diversity of options in 
addition to that fall-back.

I think the root zone is a convenient starting point for these kinds of 
discussions, but I think the scope could be wider. Maybe one day the DNS 
protocol (for all zones, not just the root zone) is only commonly used for 
communications between stub and resolver, where we have the deployed base with 
the longest tail, and where capacity planning and attack resistance are a 
different kind of problem because all your traffic generators are on-net. 
Perhaps the database problem of replicating authoritative data into resolver 
caches is solved in different ways.

It would be nice if end-users didn't have to rely upon authoritative servers 
being up in order to resolve names; it would be nice if there wasn't a small 
set of targets against which the next memcached-scale attack could be used to 
take us back to 21 October 2016; it would be nice if the integrity of the 
naming scheme didn't ultimately rely upon the deployment of BCP38.

If such a mechanism relied upon DNSSEC to ensure integrity of data in the 
absence of plausible channel authentication, availability of zones might be 
aligned with DNSSEC deployment, which would give the Alexa 500 a(nother) reason 
to sign their zones.

There are lots of things to think about here. I don't think clinging to the 
status quo in terms of infrastructure or institutions is necessarily a good 
idea, although I do agree with the idea of preserving legacy compatibility and 
incremental change.


Joe
_______________________________________________
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop