Hello, I have 3 servers operating at stratum 2, providing time to a small population of clients (mostly routers and stratum-3 servers that redistribute time to their LANs). The only sources of time sync are public NTP servers, and the configuration is based on this document: http://www.eecis.udel.edu/~mills/ntp/html/notes.html. Each server has 2 stratum-1 sources (6 distinct servers in total) and 4 stratum-2 peers (the other 2 internal servers plus 2 external ones). The reason for this is mostly the ability to simply disable one external server when it goes off sync for a while and still have 3 external sources.
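
For reference, here is a minimal sketch of what each server's ntp.conf looks like. The hostnames are placeholders, not our real sources, and the exact options may differ from the production files:

    # Two stratum-1 sources (each of the three servers uses a
    # different pair, so six distinct stratum-1 servers in total)
    server s1-a.example.net iburst
    server s1-b.example.net iburst

    # Four stratum-2 peers: the other two internal servers
    # plus two external ones
    peer ntp2.example.net
    peer ntp3.example.net
    peer s2-ext-a.example.net
    peer s2-ext-b.example.net

    # Statistics used for the peerstats graphs mentioned below
    statsdir /var/log/ntpstats/
    statistics peerstats loopstats
    filegen peerstats file peerstats type day enable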
Now, I know that performance is quite a subjective matter. All we need is to keep LAN servers and clients in sync, with most timestamps having 1-second resolution. Currently I see offsets of about +/- 10 ms on our main NTP servers, with occasional peaks, so let's say performance is "good enough" for us. But sometimes I wonder how good that is on an absolute scale, just out of curiosity, and of course whether it can somehow be improved.

I've plotted graphs from the peerstats of the three main servers (ntp1, ntp2, ntp3), and what really surprises me is that the three servers show quite different patterns (the hardware _is_ quite different, though). Could someone more experienced than me have a quick look at the graphs and provide a couple of comments on them? They are here: http://stats.esiway.net/NTP/

I'd like to know how they compare to, say, similar stratum-2 servers. I've found a few other graphs of running servers on the Internet, but I can't make real comparisons, since they are either stratum-1 servers or stratum-2 servers right next to (on the same LAN as) a stratum-1 server, and of course their accuracy is orders of magnitude better.

TIA,
.TM.
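
P.S. For anyone who wants to reproduce this kind of offset summary from their own logs, something like the sketch below should work (the log path is an example; adjust it for your statsdir). Each peerstats line carries MJD, seconds past midnight, peer address, status word, then offset, delay, dispersion and jitter in seconds:

    #!/usr/bin/env python
    # Summarize per-peer offsets from an ntpd peerstats file.
    import sys
    from collections import defaultdict

    path = sys.argv[1] if len(sys.argv) > 1 else "/var/log/ntpstats/peerstats"
    offsets = defaultdict(list)
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) < 5:
                continue  # skip malformed lines
            # fields[2] = peer address, fields[4] = offset in seconds
            offsets[fields[2]].append(float(fields[4]))

    for peer, vals in sorted(offsets.items()):
        mean = sum(vals) / len(vals)
        print("%-15s n=%5d mean=%+.6fs min=%+.6fs max=%+.6fs"
              % (peer, len(vals), mean, min(vals), max(vals)))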
