Joe,

As individuals, you, I, or anyone else can do whatever we like, of course. As system designers, on the other hand, we presumably look at the overall system and try to put in place an operational structure that anticipates and meets the likely needs of the users.
The present and long-standing system provides recursive resolvers with well-oiled and highly effective solutions to (a) finding the root servers and (b) getting highly reliable and highly responsive answers from them. It seems to me reasonable, and reasonably easy, to sustain these attributes as we evolve toward downloading the entire root zone instead of individual pieces of it. And by "evolution" we're necessarily talking about a lengthy period of hybrid operation: there will likely be a growing set of recursive resolvers downloading the full root zone, but there will certainly be a very large set of recursive resolvers that continue to operate in the current model. Even if there were an aggressive push toward hyper-local root service, the existing service would have to remain as is for a long time. And by "a long time" I'm guessing ten years is not enough, so I suspect it will be twenty years before one can imagine the current root service reaching twilight.

The heated concern of several years ago about the potential size of the root zone is behind us, I hope. The root zone is not going to grow exponentially; the whole zone will be in the single-megabyte range, I think. (Caution: I haven't looked at the actual size. Apologies if I am off a bit, but the overall point still stands. Another round or so of gTLDs might double or even triple the current size of the root zone, but it will not grow by even one order of magnitude, and certainly not by multiple orders of magnitude.)

Distribution of a megabyte, or even a few megabytes, to, say, a million recursive resolvers twice a day is a relatively modest endeavor on today's Internet. If there are going to be problems, I suspect they won't be related to ad hoc fetching of the root zone from random untrusted sources.
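The back-of-envelope arithmetic is easy to check. A rough sketch, assuming a ~2 MB root zone (an assumed figure, not a measured one) fetched twice a day by a million resolvers:

```python
# Back-of-envelope: aggregate cost of distributing the root zone to
# recursive resolvers. All figures below are illustrative assumptions.

ZONE_SIZE_BYTES = 2_000_000   # assume the root zone is roughly 2 MB
RESOLVERS = 1_000_000         # assume one million recursive resolvers
FETCHES_PER_DAY = 2           # twice-daily refresh
SECONDS_PER_DAY = 86_400

# Total bytes moved per day across all resolvers.
total_bytes_per_day = ZONE_SIZE_BYTES * RESOLVERS * FETCHES_PER_DAY

# Average sustained rate if the transfers were spread evenly over the day.
avg_bits_per_second = total_bytes_per_day * 8 / SECONDS_PER_DAY

print(f"Total transfer: {total_bytes_per_day / 1e12:.1f} TB/day")
print(f"Average rate:   {avg_bits_per_second / 1e6:.0f} Mbit/s")
```

That works out to about 4 TB/day, or a few hundred Mbit/s averaged over the day — and that is the aggregate across the whole distribution system, not a single server, so spread over the existing anycast infrastructure or mirrors it is indeed modest.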
Steve

On Sun, Jul 29, 2018 at 11:50 AM, Joe Abley <jab...@hopcount.ca> wrote:

> On Jul 29, 2018, at 12:19, Steve Crocker <st...@shinkuro.com> wrote:
>
>> It feels like this discussion is based on some peculiar and likely
>> incorrect assumptions about the evolution of root service. Progression
>> toward hyper local distribution of the root zone seems like a useful and
>> natural sequence. However, the source of the copies of the root zone will
>> almost certainly remain robust and trusted.
>
> I think you need to be more clear what you mean by "source".
>
> If you mean the original entity that constructs and first makes
> available the root zone (e.g. the root zone maintainer in the current
> system) then what you say seems uncontentious.
>
> If what you mean is "the place that any particular consumer of the
> root zone might have found it" then I think you need to show your
> working.
>
> Resolvers currently prime from a set of trusted servers (albeit over
> an insecure transport without authentication, so we could quibble
> about what "trusted" means even there) but it's not obvious to me that
> this is a necessary prerequisite for new approaches.
>
> If I have a server sitting next to me that has a current and accurate
> copy of the root zone and I am able to get it from there and assess
> the accuracy of what I receive autonomously, why wouldn't I?
>
>
> Joe
>
_______________________________________________
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop