I think that Poul-Henning Kamp has had a fantastic idea and invented a robust protocol that improves the current situation:
. Using DNS gets you caching and time-to-live for any stored data; this is true not only on the server side, but the necessary information is carried through to clients.

. Using DNS also means decentralisation without reducing the validity of the original data, because the DNS system as such is designed to honour the validity of data in time, causing automatic cache refreshes as necessary.

. Using DNS, the data would distribute to a practically infinite number of places, heavily reducing overall network traffic and "reducing traceroute hops" to the next available storage (a caching DNS instance) -- without losing the property given in the last item. The next caching DNS instance may even be a local DNS server or some system-wide caching facility, like mDNSResponder on OS X (I think Microsoft also has one), and of course Dnsmasq on *BSD and Linux systems. This means that getting a guaranteed up-to-date version of the data may require no network traffic at all.

. Accessing DNS is available at low programming levels via standardized interfaces. Practically all Unix systems ship with command line tools that can be used to access the necessary information from within scripting languages or shell scripts. (I think the output of those tools is even portable, if I recall a tool-switch thread from some FreeBSD list correctly.)

"Clive D.W. Feather" <cl...@davros.org> wrote:

 |Steffen Nurpmeso said:
 |>|1. No IP addresses are misused, in the sense that should anybody
 |>| have a valid claim to use these addresses, nothing I do will
 |>| impact their ability to do so. I'm merely transmitting numbers,
 |>| there will never on my behalf be an IP packet using these
 |>| numbers as source or destination addresses.
 |>
 |> Class E should be out-of-bounds, and if that isn't a valid "future
 |> use" then maybe IANA has too many lawyers, too.
 |
 |Addresses, even class E ones, are meant to be addresses, not random
 |bits of data.
 |This is the sort of perversion that was never meant to be done. It's
 |not a clever hack, it's just dumb. If you have to use DNS, use TXT
 |records.

Yeeeaaaah, but I do think that "dumb"ness is a pretty double-edged, culture- and zeitgeist-dependent term.

The problem with TXT records is that they cannot always be used. E.g., the gethostbyname(3) function cannot retrieve any such information, and the widely used Dnsmasq "network infrastructure", for example, doesn't support it either, as far as I know. Here "doesn't support" means that no local caching is performed for this rather unusual DNS RR (resource record), which would cut down the benefits of Poul-Henning Kamp's idea for no real reason. (However, having to reach out to the next real DNS server would still be better than downloading a complete text file from an IERS server.)

A lot of perversion has happened to the DNS system anyway. Just think of that terrible IDNA, a *really* intelligent way of encoding hostnames in fewer bytes than UTF-8. It would have been better to extend the DNS limits to allow plain UTF-8 strings. And when I say that, I really mean _already back then_: it would be good and well today, in that most likely not a single old-style DNS server would remain, in the entire world. So to say.

I'm guilty of perverting Poul-Henning's idea by proposing the misuse of DNS "A" records for this. Doing so would introduce the possibility of transporting this information to practically all devices, however old the local DNS infrastructure may be, at once and today. "A" records are also very space-efficient. Nothing would prevent anyone from going to IANA, writing a new RFC draft (now in XML, please) and requesting a new RR, say "UTCDRIFT". Let it use the same format that Poul-Henning Kamp worked out for "A"; then administrators can provide both, and as even more time goes by the "A" version can eventually be dropped entirely. The implementations that have chosen to go that route could stay almost the same. (The data computation, anyway.)
The problem, again, is that you have no option. Maybe there are DNS RRs that drive your coffee machine or program your carrier pigeon. Now there is a draft on a time zone data distribution service, and I mean -- why not? But it requires HTTP and iCalendar (in JSON [/ XML]). JSON parsers can be written very efficiently. (Btw., how about a binary distribution in CBOR format, as in iCalendar/CBOR? That plays nicely with JSON and is *very* easy to parse.) But of course: that is much more bloated and requires far more logic than a simple current-UTC-time gettimeofday(2) / clock_gettime(2) plus some gethostbyname(3) to gain the current leap offset. Maybe it no longer matters now that there are mobile phones with more resources than my laptop has, but I'd prefer to have the _possibility_ to get the necessary data at a level as low as possible.

--steffen

_______________________________________________
LEAPSECS mailing list
LEAPSECS@leapsecond.com
https://pairlist6.pair.net/mailman/listinfo/leapsecs