Guillaume Filion wrote:
> I made a prototype using Border Gateway Protocol (BGP) data last April
> -- with mixed results. With hindsight I think that the only factor that
> must be taken into consideration for NTP is latency. BGP had a tendency
> to optimize for bandwidth.
>
> I played a bit with Maxmind's GeoIP and Great Circle Distance
> calculations recently, but I'm not to the point of testing a prototype.
Well, there is a point in using geographical position as a weighting
factor, but it should not carry too much weight, because geographical
position does not tell much about latency either. Maybe in a 1000 km
grid, or so.
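As a sketch of that weighting idea: the great-circle distance between the
client's GeoIP coordinates and a server's, collapsed into a coarse 1000 km
grid. The function names and the grid size are illustrative, not from any
actual pool code:

```python
import math

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance in km (mean Earth radius ~6371 km)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def distance_bucket(km, grid_km=1000):
    """Coarse distance bucket -- geography only loosely predicts latency,
    so anything finer than ~1000 km is probably false precision."""
    return int(km // grid_km)
```

Amsterdam to Paris lands in bucket 0 (same neighborhood), Amsterdam to
Montreal in bucket 5, so the bucket could serve as a mild penalty term
when weighting candidate servers.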
You could do a traceroute back to the client to find out in what network
neighborhood they live :-) That would slow down the reply and increase
the load on the DNS server a lot, of course.
The prototype you demo'd did not work that badly for me. It returned
addresses within my ISP. But that is probably because there are lots of
active pool servers in my ISP's network (xs4all.nl).
>> This includes not only the database of pool members, but also the
>> up-to-date reachability information, the recent history of replies
>> sent to users, the source network of the query, etc.
> My earlier prototype (and the next ones) used a CDB file
> (http://cr.yp.to/cdb.html) for storing the information about the NTP
> servers. That CDB file can be regenerated by a server at pool.ntp.org
> every few minutes and rsync'd to the DNS servers without any disruption
> of DNS service.
>
> I'm not sure about "the recent history of replies sent to users" though.
>
> I wonder if there exists some DNS server that has separated network
> handling and storage backend, for which one can write plugins that
> implement the retrieval of information. If so, this server could be
> used, and the calls that return the server addresses could be made in
> real time.
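A minimal sketch of that regeneration step: serialize the server data into
cdbmake's input format, then (outside this snippet) pipe it to cdbmake(1)
and rsync the resulting .cdb file out. The keys and values below are made
up; a real pool database would store much more per server:

```python
def cdbmake_input(records):
    """Serialize (key, value) pairs into cdbmake's input format:
    one '+klen,dlen:key->data' line per record, terminated by a
    blank line. The generated text is piped to cdbmake, which writes
    the .cdb atomically, so rsyncing it to the DNS servers never
    disrupts lookups against the previous copy."""
    lines = ["+%d,%d:%s->%s" % (len(k), len(v), k, v) for k, v in records]
    return "\n".join(lines) + "\n\n"
```

For example, `cdbmake_input([("nl", "1.2.3.4 5.6.7.8")])` produces text
that could be fed to something like `cdbmake servers.cdb servers.tmp`
(file names hypothetical) on the generating host.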
What I mean by recent history is that we could return 1..3 addresses
from the list of valid replies for that query, and change the reply for
every query. So when 30 servers are available for some country, we would
not be returning 10 of those 30 for an hour, then another subset for
another hour; instead we would smoothly return the entire set (and could
even set the likelihood of every server depending on its quality and
capacity).
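That per-query rotation could look something like this: a weighted draw
without replacement, so each reply carries up to 3 distinct addresses and,
over many queries, servers are handed out in proportion to a
quality/capacity weight. Names and weights here are invented for
illustration:

```python
import random

def pick_reply(servers, weights, k=3):
    """Return up to k distinct server addresses, each chosen with
    probability proportional to its weight -- a fresh draw for every
    incoming query, so the whole pool gets covered smoothly."""
    pool = list(servers)
    w = list(weights)
    reply = []
    for _ in range(min(k, len(pool))):
        i = random.choices(range(len(pool)), weights=w)[0]
        reply.append(pool.pop(i))
        w.pop(i)
    return reply
```

A server given weight 0 (say, one that just failed a reachability check)
simply never appears, without having to be deleted from the table.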
> What you propose would imply a finite state machine in the DNS server,
> and I don't think that that would work well with DNS caching servers.
> As a rule of thumb, I try to keep my designs as stateless as possible.
True, it introduces state information, but with a small cache time and a
large number of DNS caching servers around the world it should not work
that badly. Large sites like Google or CNN use such algorithms to direct
traffic to local servers and to work around failed servers, and they
reply with cache times around 5 minutes. This means that the
distribution would be at least 12 times better than with reloading the
servers with a full table once an hour. And with so many different
caching servers (relative to the number of queries made by clients) I
think the caching will have little effect anyway.
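The factor of 12 is just the ratio of the cache times, i.e. how often a
caching resolver can pick up a freshly rotated reply:

```python
def refreshes_per_hour(ttl_seconds):
    """How many times per hour a caching DNS server can fetch a new
    (rotated) reply for the same name at a given TTL."""
    return 3600 // ttl_seconds

# 5-minute TTL vs. reloading a full table once an hour
improvement = refreshes_per_hour(300) // refreshes_per_hour(3600)
```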
> That would make it possible to distribute the load more evenly and to
> give out server addresses that are reasonably close to the requester,
> without requiring all those different names to be figured out by the
> clients.
Yes. I'm pretty sure that most users don't use the region/country zones
anyway.

Indeed, the best possible situation would be when there is only a single
name to be used by everyone (pool.ntp.org), maybe with the exception of
using another name in cases where a server is hardwired into some
appliance or OS distribution (so it can be used for statistical purposes
or to direct the clients to independent time server networks), but not
to bother the user with issues that require him to study the matter. We
cannot (apparently) even trust the implementers of time clients to study
the matter, let alone the users...
> I guess that it would also make the NTP servers that are close to ISPs'
> customers very popular, while the NTP servers in server rooms (far away
> from the masses) would be less popular. But that's pure speculation;
> we'll see when we test it.
I think there should also be a manually maintained table in the system
where ISP timeservers and the networks they cover (or want to service)
are stored. Some of the more clueful ISPs actually have a timeserver
that their clients can use, although it is not always well published
and/or of good quality. When a client of such an ISP requests a
timeserver, we might return that address instead of a server from the
voluntary pool. Of course only for situations where the ISP agrees.
This would reduce the load on other servers and give the user a server
that at least is quite local.

(For example, xs4all.nl operates the servers ntp.xs4all.nl and
ntp2.xs4all.nl.)
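Such a manually maintained table could be as simple as a
prefix-to-servers map consulted before falling back to the pool. Only the
two hostnames come from the example above; the network prefix is made up
for illustration:

```python
import ipaddress

# Hypothetical hand-maintained table: network prefix -> the ISP's own
# timeservers that its customers are welcome to use.
ISP_SERVERS = {
    "194.109.0.0/16": ["ntp.xs4all.nl", "ntp2.xs4all.nl"],  # made-up prefix
}

def servers_for_client(client_ip):
    """Prefer the client's ISP timeservers when the query's source
    address falls inside a registered prefix; otherwise return None
    and let the caller hand out servers from the voluntary pool."""
    addr = ipaddress.ip_address(client_ip)
    for prefix, servers in ISP_SERVERS.items():
        if addr in ipaddress.ip_network(prefix):
            return servers
    return None
```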
> It would be harder to keep accurate data about the number of users of
> the pool. Now we can take the average number of clients on a server and
> multiply that by the number of servers in the pool. With a new DNS
> system, the number of clients would be different for each NTP server.
It will always be hard to guess the number of users, but in this case we
could of course include the actual weights of the servers in the
calculation. The problem is that clients like ntpd only make one query
to find the address of a server and then keep using that address until a
restart (even when it becomes unreachable).
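With per-server weights, the old "average clients times server count"
estimate would generalize to extrapolating clients-per-unit-weight from a
few monitored servers. The numbers below are placeholders, not pool
measurements:

```python
def estimate_pool_clients(monitored, total_weight):
    """monitored: list of (observed_client_count, weight) samples from
    servers whose operators can count their clients. Extrapolate the
    clients-per-unit-weight ratio over the whole pool's summed weight."""
    clients = sum(c for c, _ in monitored)
    weight = sum(w for _, w in monitored)
    return clients / weight * total_weight
```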
Rob
_______________________________________________
timekeepers mailing list
[email protected]
https://fortytwo.ch/mailman/cgi-bin/listinfo/timekeepers