David Malone wrote:
Danny Mayer <ma...@ntp.org> writes:

For traditional TCP (single flow), you need bandwidth*latency as
sockbuf at both ends, plus the same again at the bottleneck router.
Some of the newer TCP congestion-control algorithms can make do with
less and still fill the link if they are the only flow.

Since NTP only uses UDP, the packet handling will be different. I'm not
sure why you are talking about TCP here.

Oh - I thought we'd drifted onto the topic of how much buffering was
sensible in a network. The bandwidth*latency rule of thumb, which
Terje mentioned, is basically derived from the amount of buffering
required for a TCP flow to fill a link. I agree this has nothing
to do with ntp, except that NTP packets will often share a buffer
with TCP packets.

This is the key here: As long as NTP has to share the same transmit queues as all the TCP packets, any (excessive) intermediate buffering will show up as increased latency for the NTP packets.
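To put rough numbers on that (the link speed and RTT below are purely illustrative): a bottleneck buffer sized at bandwidth*latency, once filled by TCP, delays each arriving NTP packet by about one extra RTT, and an over-buffered router delays it proportionally more. A quick sketch:

    # Sketch: extra delay an NTP packet sees behind a full bottleneck buffer.
    # Link rate and RTT are assumed, illustrative values.
    link_rate_bps = 10e6      # assumed 10 Mbit/s bottleneck
    rtt_s = 0.050             # assumed 50 ms round-trip time

    bdp_bytes = link_rate_bps / 8 * rtt_s           # bandwidth*latency rule
    queue_delay_s = bdp_bytes * 8 / link_rate_bps   # time to drain that buffer

    print(f"BDP buffer:      {bdp_bytes / 1024:.0f} KiB")
    print(f"Extra NTP delay: {queue_delay_s * 1e3:.0f} ms (1x BDP buffer)")
    print(f"Extra NTP delay: {4 * queue_delay_s * 1e3:.0f} ms (4x over-buffered)")

That delay also fluctuates with the TCP load, which is exactly what NTP sees as jitter.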

It would be more useful to discuss what happens with UDP flows since
that is what NTP uses.

For ntp, I suspect the required amount of buffering is (number of
peers)*(largest number of packets sent in burst modes), and probably
less in practice?

Much less: NTP, even on very busy S1/S2 servers, uses little bandwidth.
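A back-of-envelope on your estimate (the peer count and burst length are assumed for illustration) shows how small even the worst case is:

    # Sketch of (number of peers) * (largest burst) * (packet size).
    # Peer count and burst length are assumed; 90 bytes is a plain NTPv4
    # packet (48-byte payload) plus UDP/IP/Ethernet headers.
    peers = 100               # assumed
    burst_packets = 8         # burst/iburst send at most 8 packets
    packet_bytes = 90

    worst_case = peers * burst_packets * packet_bytes
    print(f"Worst-case buffering: {worst_case / 1024:.0f} KiB")   # ~70 KiB

And since burst/iburst packets are paced about two seconds apart, they never actually sit in a queue together, so the real requirement is a handful of packets.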

On my home NTP/GPS server, the symmetric 30 Mbit/s fiber is sufficient that I never notice the NTP traffic. :-)
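For what it's worth, a rough calculation (the request rate below is a made-up example, not a measurement) shows why:

    # Why NTP traffic is invisible on a 30 Mbit/s link.
    # The request rate is an assumed example figure.
    requests_per_s = 2000     # assumed
    packet_bytes = 90         # NTPv4 packet incl. UDP/IP/Ethernet headers

    load_bps = requests_per_s * packet_bytes * 8
    print(f"NTP load: {load_bps / 1e6:.2f} Mbit/s each way")      # ~1.4 Mbit/s

Well under 5% of the link, even at a rate a home server will rarely see.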

Terje

--
- <Terje.Mathisen at tmsw.no>
"almost all programming can be viewed as an exercise in caching"
