Re: [ntp:questions] Detecting bufferbloat via ntp?

2011-02-16 Thread Terje Mathisen

Rick Jones wrote:

Kevin Oberman ober...@es.net  wrote:


No, you probably won't. Both theoretical and empirical information
shows that overly large windows are not a good thing. This is the
reason all modern network stacks have implemented dynamic window
sizing.



As far as I know, Linux, MacOS (I think), Windows, and BSD (at least
FreeBSD) all do this and do it better than it is possible to do
manually. N.B. Windows XP probably does not qualify as modern.


Sadly, I see Linux's dynamic window sizing take the window to 4MB when
128KB would do.  I'm not familiar with the behaviour of the other
stacks'


There's a huge difference between the window sizes at the ends of a link 
and those employed at the various nodes in between:


The end points need at least bandwidth*latency buffers simply to keep 
the flow going, while routers in between should have very little buffer 
space, simply because that will allow the end points to discover the 
real channel capacity much sooner.
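The bandwidth*latency rule of thumb above works out directly; a minimal sketch (the link speed and round-trip time here are made-up example numbers, not from the thread):

```python
def bdp_bytes(bandwidth_bps, rtt_seconds):
    """Bandwidth-delay product: the minimum socket buffer an endpoint
    needs to keep a single flow running at full line rate."""
    return int(bandwidth_bps / 8 * rtt_seconds)

# Example: a 100 Mbit/s path with a 50 ms round-trip time.
print(bdp_bytes(100e6, 0.050))  # 625000 bytes, roughly 610 KB
```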


You might claim that a little intermediate buffer space is a good thing, 
in that it can allow a short-term burst of packets to get through 
without having to discard other useful stuff, but only as long as most 
links have spare capacity most of the time.


Terje
--
- Terje.Mathisen at tmsw.no
almost all programming can be viewed as an exercise in caching

_______________________________________________
questions mailing list
questions@lists.ntp.org
http://lists.ntp.org/listinfo/questions


Re: [ntp:questions] Detecting bufferbloat via ntp?

2011-02-16 Thread David Malone
Terje Mathisen terje.mathisen at tmsw.no writes:

The end points need at least bandwidth*latency buffers simply to keep 
the flow going, while routers in between should have very little buffer 
space, simply because that will allow the end points to discover the 
real channel capacity much sooner.

For traditional TCP (single flow), you need bandwidth*latency as
sockbuf at both ends plus the same at the bottleneck router. Some
of the new TCP congestion control systems can do with less, and
still fill the link if they are the only flow.

You might claim that a little intermediate buffer space is a good thing, 
in that it can allow a short-term burst of packets to get through 
without having to discard other useful stuff, but only as long as most 
links have spare capacity most of the time.

There was some work a few years ago that suggested that you needed
about bandwidth*latency/sqrt(n) buffering at a link with n bottlenecked
TCP flows, in order to make sure that the flows could actually use
the link. There was also a suggestion that you could get away with
less, but that seemed to require a quite large n in practice.
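The sqrt(n) rule David mentions (likely the Stanford "Sizing Router Buffers" work) can be sketched numerically; the figures below are illustrative assumptions, not measurements:

```python
import math

def router_buffer_bytes(bandwidth_bps, rtt_seconds, n_flows):
    """Bottleneck buffer needed for n desynchronized TCP flows to keep
    the link busy, per the bandwidth*latency/sqrt(n) rule of thumb."""
    return int(bandwidth_bps / 8 * rtt_seconds / math.sqrt(n_flows))

# One flow needs the full BDP; 10000 flows need only 1% of it.
print(router_buffer_bytes(10e9, 0.1, 1))      # 125000000 bytes (125 MB)
print(router_buffer_bytes(10e9, 0.1, 10000))  # 1250000 bytes (1.25 MB)
```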

David.



Re: [ntp:questions] Detecting bufferbloat via ntp?

2011-02-16 Thread Danny Mayer
On 2/16/2011 7:01 AM, David Malone wrote:
 Terje Mathisen terje.mathisen at tmsw.no writes:
 
 The end points need at least bandwidth*latency buffers simply to keep 
 the flow going, while routers in between should have very little buffer 
 space, simply because that will allow the end points to discover the 
 real channel capacity much sooner.
 
 For traditional TCP (single flow), you need bandwidth*latency as
 sockbuf at both ends plus the same at the bottleneck router. Some
 of the new TCP congestion control systems can do with less, and
 still fill the link if they are the only flow.

Since NTP only uses UDP, the packet handling will be different. I'm not
sure why you are talking about TCP here.
 
 You might claim that a little intermediate buffer space is a good thing, 
 in that it can allow a short-term burst of packets to get through 
 without having to discard other useful stuff, but only as long as most 
 links have spare capacity most of the time.
 
 There was some work a few years ago that suggested that you needed
 about bandwidth*latency/sqrt(n) buffering at a link with n bottlenecked
 TCP flows, in order to make sure that the flows could actually use
 the link. There was also a suggestion that you could get away with
 less, but that seemed to require a quite large n in practice.

It would be more useful to discuss what happens with UDP flows since
that is what NTP uses.

Danny


Re: [ntp:questions] Detecting bufferbloat via ntp?

2011-02-16 Thread David Malone
Danny Mayer ma...@ntp.org writes:

 For traditional TCP (single flow), you need bandwidth*latency as
 sockbuf at both ends plus the same at the bottleneck router. Some
 of the new TCP congestion control systems can do with less, and
 still fill the link if they are the only flow.

Since NTP only uses UDP, the packet handling will be different. I'm not
sure why you are talking about TCP here.

Oh - I thought we'd drifted onto the topic of how much buffering was
sensible in a network. The bandwidth*latency rule of thumb, which
Terje mentioned, is basically derived from the amount of buffering
required for a TCP flow to fill a link. I agree this has nothing
to do with ntp, except that NTP packets will often share a buffer
with TCP packets.

It would be more useful to discuss what happens with UDP flows since
that is what NTP uses.

For ntp, I suspect the required amount of buffering is (number of
peers)*(largest number of packets sent in burst modes), and probably
less in practice?
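David's back-of-the-envelope rule for NTP is easy to put in numbers; a hedged sketch (a burst of 8 packets matches ntpd's burst/iburst behaviour as I understand it, and 76 bytes assumes a 48-byte NTP payload plus IPv4/UDP headers):

```python
def ntp_buffer_estimate(n_peers, burst_packets=8, packet_bytes=76):
    """Worst-case queue from the rule above: every peer bursting at
    once. packet_bytes is an assumed on-the-wire size (48-byte NTP
    payload + 28 bytes of IPv4/UDP headers)."""
    return n_peers * burst_packets * packet_bytes

# Even 100 peers all in burst mode at once need well under 64 KB.
print(ntp_buffer_estimate(100))  # 60800 bytes
```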

David.



Re: [ntp:questions] Detecting bufferbloat via ntp?

2011-02-16 Thread Dave Täht
dwmal...@maths.tcd.ie (David Malone) writes:

 Terje Mathisen terje.mathisen at tmsw.no writes:

The end points need at least bandwidth*latency buffers simply to keep 
the flow going, while routers in between should have very little buffer 
space, simply because that will allow the end points to discover the 
real channel capacity much sooner.

 For traditional TCP (single flow), you need bandwidth*latency as
 sockbuf at both ends plus the same at the bottleneck router. Some
 of the new TCP congestion control systems can do with less, and
 still fill the link if they are the only flow.

You might claim that a little intermediate buffer space is a good thing, 
in that it can allow a short-term burst of packets to get through 
without having to discard other useful stuff, but only as long as most 
links have spare capacity most of the time.

 There was some work a few years ago that suggested that you needed
 about bandwidth*latency/sqrt(n) buffering at a link with n bottlenecked
 TCP flows, in order to make sure that the flows could actually use
 the link. There was also a suggestion that you could get away with
 less, but that seemed to require a quite large n in practice.

Outer bound according to Kleinrock. 

I think everybody is on the same page here, but at the risk of repeating
myself, TCP sockbuf sizing (as per the above) is in a different place
than software tx queues, dma tx queues, and device specific tx queues. 

Receive buffers can be large. TX, not so much.

The end points need at least bandwidth*latency buffers simply to keep 
the flow going, while routers in between should have very little buffer 
space, simply because that will allow the end points to discover the 
real channel capacity much sooner.

Or before a bufferbloated cascade forces a TCP reset.



   David.

-- 
Dave Taht
http://nex-6.taht.net



Re: [ntp:questions] Detecting bufferbloat via ntp?

2011-02-16 Thread Dave Täht
Terje Mathisen terje.mathisen at tmsw.no writes:

 Rick Jones wrote:
 Kevin Oberman ober...@es.net  wrote:

 No, you probably won't. Both theoretical and empirical information
 shows that overly large windows are not a good thing. This is the
 reason all modern network stacks have implemented dynamic window
 sizing.

 As far as I know, Linux, MacOS (I think), Windows, and BSD (at least
 FreeBSD) all do this and do it better than it is possible to do
 manually. N.B. Windows XP probably does not qualify as modern.

 Sadly, I see Linux's dynamic window sizing take the window to 4MB when
 128KB would do.  I'm not familiar with the behaviour of the other
 stacks'

I did a little testing with rick a couple days ago. It turned out his
problem was not in his end nodes, but somewhere in his path between his
two sites is something rather bloated.

I suggested he try tcp vegas or veno as those attempt to deal with
buffering in their own ways. Vegas is actually sort of malfunctioning
nowadays in that it was designed to cope with sane levels of buffering,
not what we are seeing today.
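On Linux the congestion control algorithm can be selected per socket rather than system-wide; a hedged sketch (socket.TCP_CONGESTION is Linux-only, and vegas/veno must be built or loaded as kernel modules, so failure is expected elsewhere):

```python
import socket

def set_congestion_control(sock, algo="vegas"):
    """Try to select a delay-based algorithm such as vegas on one
    TCP socket. Returns True on success, False where unsupported
    (non-Linux, or the algorithm's module is not available)."""
    if not hasattr(socket, "TCP_CONGESTION"):
        return False
    try:
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION,
                        algo.encode())
        return True
    except OSError:
        return False

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print(set_congestion_control(s, "vegas"))
s.close()
```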

Actually finding the most bloated device in the path is something of a
hard problem...

 There's a huge difference between the window sizes at the ends of a
 link and those employed at the various nodes in between:

 The end points need at least bandwidth*latency buffers simply to keep
 the flow going, while routers in between should have very little
 buffer space, simply because that will allow the end points to
 discover the real channel capacity much sooner.

Exactly.  Yea! You get it.


 You might claim that a little intermediate buffer space is a good
 thing, in that it can allow a short-term burst of packets to get
 through without having to discard other useful stuff, but only as long
 as most links have spare capacity most of the time.

A *little* is just fine. Bloated buffers - containing hundreds,
thousands, tens of thousands of packets - which is what we are seeing
today - are not.


 Terje

-- 
Dave Taht
http://nex-6.taht.net



Re: [ntp:questions] Detecting bufferbloat via ntp?

2011-02-16 Thread Rob
Dave Täht d...@taht.net wrote:
 You might claim that a little intermediate buffer space is a good
 thing, in that it can allow a short-term burst of packets to get
 through without having to discard other useful stuff, but only as long
 as most links have spare capacity most of the time.

 A *little* is just fine. Bloated buffers - containing hundreds,
 thousands, tens of thousands of packets - which is what we are seeing
 today - are not.

So basically what we see is equipment designed by incompetent designers,
who probably have no experience with historic networks.

When everything was still very slow (my first experience with TCP/IP
was on amateur packet radio), the effects of bugs like this were very
apparent, and one could immediately see the effects of changing parameters
and implementation details.

After that, probably a lot of engineers entered the scene that never
saw a network slower than 100 Mbit ethernet, and made decisions without
knowledge of early research that went into the design of TCP and other
internet protocols.

It is unfortunate that this incompetence now apparently affects the
operation of the internet for everyone (although I have not recognized
any adverse effects in daily use myself).

On the network I manage myself, I always set a reasonable TCP window
instead of the OS vendor default.  Better a slight cap on the performance
of a single session, than a congestion collapse of the network as a
whole...

(on amateur packet radio we used a TCP window of 864 bytes :-)
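Pinning the window as described above comes down to setting the socket buffers explicitly, which (on Linux) opts the socket out of receive autotuning; a minimal sketch, with the 128 KB cap as an example value:

```python
import socket

def make_capped_socket(bufsize=128 * 1024):
    """Create a TCP socket with fixed send/receive buffers. Setting
    SO_RCVBUF before connect() disables Linux receive autotuning for
    this socket; the kernel may round or double the requested value."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, bufsize)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bufsize)
    return s

s = make_capped_socket()
# Linux typically reports about double the requested size
# (it accounts for bookkeeping overhead in the same number).
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
s.close()
```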



Re: [ntp:questions] Detecting bufferbloat via ntp?

2011-02-16 Thread Terje Mathisen

Dave Täht wrote:

Terje Mathisen terje.mathisen at tmsw.no  writes:

There's a huge difference between the window sizes at the ends of a
link and those employed at the various nodes in between:

The end points need at least bandwidth*latency buffers simply to keep
the flow going, while routers in between should have very little
buffer space, simply because that will allow the end points to
discover the real channel capacity much sooner.


Exactly.  Yea! You get it.


Thanks. I've been working with communication protocols and file transfer 
since around 1982, including a year-long sabbatical at Novell in 91-92.


Terje

--
- Terje.Mathisen at tmsw.no
almost all programming can be viewed as an exercise in caching



Re: [ntp:questions] Detecting bufferbloat via ntp?

2011-02-16 Thread Terje Mathisen

Rob wrote:

It is unfortunate that this incompetence now apparently affects the
operation of the internet for everyone (although I have not recognized
any adverse effects in daily use myself).

On the network I manage myself, I always set a reasonable TCP window
instead of the OS vendor default.  Better a slight cap on the performance
of a single session, than a congestion collapse of the network as a
whole...

(on amateur packet radio we used a TCP window of 864 bytes :-)


I've been a ham since 1978 (la8nw), but never got into packet radio. :-(

Terje

--
- Terje.Mathisen at tmsw.no
almost all programming can be viewed as an exercise in caching



Re: [ntp:questions] Detecting bufferbloat via ntp?

2011-02-16 Thread Terje Mathisen

David Malone wrote:

Danny Mayer ma...@ntp.org  writes:


For traditional TCP (single flow), you need bandwidth*latency as
sockbuf at both ends plus the same at the bottleneck router. Some
of the new TCP congestion control systems can do with less, and
still fill the link if they are the only flow.



Since NTP only uses UDP, the packet handling will be different. I'm not
sure why you are talking about TCP here.


Oh - I thought we'd drifted onto the topic of how much buffering was
sensible in a network. The bandwidth*latency rule of thumb, which
Terje mentioned, is basically derived from the amount of buffering
required for a TCP flow to fill a link. I agree this has nothing
to do with ntp, except that NTP packets will often share a buffer
with TCP packets.


This is the key here: As long as NTP has to share the same transmit 
queues as all the TCP packets, any (excessive) intermediate buffering 
will show up as increased latency for the NTP packets.
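That increased latency is observable from the client side: sending NTP-format probes and watching the round-trip time grow while a bulk TCP transfer shares the path is exactly the bufferbloat signature. A hedged sketch of such a probe (a hypothetical helper, not ntpd code; it builds a plain SNTP version-3 client request):

```python
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between the 1900 and 1970 epochs

def build_request():
    """48-byte SNTP request: LI=0, version 3, mode 3 (client)."""
    return b"\x1b" + 47 * b"\x00"

def ntp_delay(server, timeout=2.0):
    """Send one request and return (rtt, rough_offset) in seconds.
    A steadily rising rtt while the link is loaded indicates queueing
    somewhere in the path."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    try:
        t1 = time.time()
        s.sendto(build_request(), (server, 123))
        data, _ = s.recvfrom(512)
        t4 = time.time()
    finally:
        s.close()
    secs, frac = struct.unpack("!II", data[40:48])  # transmit timestamp
    server_tx = secs - NTP_EPOCH_OFFSET + frac / 2**32
    # Simplified offset: assumes symmetric path delay.
    return t4 - t1, server_tx - (t1 + t4) / 2
```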



It would be more useful to discuss what happens with UDP flows since
that is what NTP uses.


For ntp, I suspect the required amount of buffering is (number of
peers)*(largest number of packets sent in burst modes), and probably
less in practice?


Much less: NTP, even on very busy S1/S2 servers, uses little bandwidth.

On my home NTP/GPS server, the symmetric 30 Mbit/s fiber is sufficient 
that I never notice the NTP traffic. :-)


Terje

--
- Terje.Mathisen at tmsw.no
almost all programming can be viewed as an exercise in caching



Re: [ntp:questions] Detecting bufferbloat via ntp?

2011-02-16 Thread Rick Jones
Terje Mathisen terje.mathisen at tmsw.no wrote:
 On my home NTP/GPS server, the symmetric 30 Mbit/s fiber is sufficient 
 that I never notice the NTP traffic. :-)

Clearly more of us need to try to get time from your home server :)

rick jones
-- 
The glass is neither half-empty nor half-full. The glass has a leak.
The real question is "Can it be patched?"
these opinions are mine, all mine; HP might not want them anyway... :)
feel free to post, OR email to rick.jones2 in hp.com but NOT BOTH...



Re: [ntp:questions] ntp-4.2.6p3-1.el5 - minpoll local PPS source

2011-02-16 Thread Q

Miroslav Lichvar mlich...@redhat.com wrote in message 
news:20110202133307.GM2248@localhost...
 On Sun, Jan 30, 2011 at 07:11:07PM -, Q wrote:
 My local PPS source is set for 'minpoll 4' (16 sec) this has had the 
 knock
 on effect that the other network based servers have all decided to poll 
 at
 64sec intervals.

 I can confirm this with ntp-4.2.6p3 and a recent ntp-dev. But it seems
 to be a design decision to use a similar polling interval for all
 sources, even when they have very different jitter.

 As others have said, a workaround is to set minpoll to 10 for the NTP
 sources.

Alas sorry for the late reply.

The box died just after I made that post and I've only just got it back up 
and running.

At the moment it's still with its default config - I'm just waiting for it to 
calm down a little before I start changing things again. Once I've changed 
it I'll let you know what happens & how it behaves.


Cheers 




Re: [ntp:questions] GPX18x LVC 3.50 firmware - high serial delay problem workround

2011-02-16 Thread Q

David J Taylor david-tay...@blueyonder.co.uk.invalid wrote in message 
news:igu5i1$hn9$1...@news.eternal-september.org...
 unruh un...@wormhole.physics.ubc.ca wrote in message 
 news:slrnij3r1n.a4g.un...@wormhole.physics.ubc.ca...
 []
 Your referent is somewhat unclear.
 If you are saying that your unit is out of spec, then return it.

 When operated with earlier firmware, the unit is in spec, but may be out 
 of spec with the V3.50 firmware.  Rather than return the unit, I am hoping 
 to work with Garmin to produce a better firmware for all users.

V3.60 was out at the start of Jan I think - but there is nothing in the 
notes to say this issue is resolved. Do we need to chase this with Garmin 
UK? 




Re: [ntp:questions] Detecting bufferbloat via ntp?

2011-02-16 Thread Rick Jones
Dave Täht d...@taht.net wrote:
 Terje Mathisen terje.mathisen at tmsw.no writes:

  Rick Jones wrote:
   Kevin Oberman ober...@es.net  wrote:
 
  No, you probably won't. Both theoretical and empirical information
  shows that overly large windows are not a good thing. This is the
  reason all modern network stacks have implemented dynamic window
  sizing.
 
  As far as I know, Linux, MacOS (I think), Windows, and BSD (at least
   FreeBSD) all do this and do it better than it is possible to do
  manually. N.B. Windows XP probably does not qualify as modern.
 
  Sadly, I see Linux's dynamic window sizing take the window to 4MB when
  128KB would do.  I'm not familiar with the behaviour of the other
  stacks'

 I did a little testing with rick a couple days ago. It turned out his
 problem was not in his end nodes, but somewhere in his path between his
 two sites is something rather bloated.

Or rather, that even after setting the tx queue lengths to 32 packets,
a test between that system and one 7ms away still resulted in 4MB
socket buffers by the end of the test, i.e. confirming that the Linux
autotuning code was still willing to grow the windows larger than
necessary.
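The ceiling Rick saw comes from the autotuning limits Linux exposes under /proc; a hedged sketch for reading them (the path is Linux-specific, and the 4 MB-ish maximum is a common distribution default rather than a universal value):

```python
def tcp_autotune_limits(path="/proc/sys/net/ipv4/tcp_rmem"):
    """Return the (min, default, max) receive-buffer sizes in bytes
    that Linux TCP autotuning works within, or None where the /proc
    file does not exist (non-Linux systems)."""
    try:
        with open(path) as f:
            return tuple(int(x) for x in f.read().split())
    except OSError:
        return None

# e.g. (4096, 131072, 6291456) on a typical modern distribution.
print(tcp_autotune_limits())
```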

 A *little* is just fine. Bloated buffers - containing hundreds,
 thousands, tens of thousands of packets - which is what we are seeing
 today - is not.

Well, the BDP of a 10GbE link might actually be measured in thousands
of packets or more...  if my systems 7 ms apart were joined by a 10
GbE link, that would be a bit more than 5800 1500-byte packets.  I'm
thinking that while we may have to configure queues in terms of number
of packets, we shouldn't think of them that way, but as length of
time.
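Rick's suggestion of sizing queues by time rather than packet count converts directly; a small sketch with illustrative numbers (the 10 ms delay budget is an assumed target, not a recommendation from the thread):

```python
def queue_packets_for_delay(bandwidth_bps, target_delay_s,
                            packet_bytes=1500):
    """How many full-size packets a queue may hold before it adds
    more than target_delay_s of latency at the given line rate."""
    return int(bandwidth_bps / 8 * target_delay_s / packet_bytes)

# The same 10 ms budget is 8 packets at 10 Mbit/s but over 8000 at
# 10 GbE - which is why a fixed packet count cannot suit both links.
print(queue_packets_for_delay(10e6, 0.010))  # 8
print(queue_packets_for_delay(10e9, 0.010))  # 8333
```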

rick jones
-- 
Process shall set you free from the need for rational thought. 
these opinions are mine, all mine; HP might not want them anyway... :)
feel free to post, OR email to rick.jones2 in hp.com but NOT BOTH...


Re: [ntp:questions] Detecting bufferbloat via ntp?

2011-02-16 Thread E-Mail Sent to this address will be added to the BlackLists
Rob wrote:
 So basically what we see is equipment designed by
  incompetent designers, who probably have no experience
  with historic networks.

Really?

 Which products do you perceive, don't appear to have the
  necessary capabilities to deal with the bufferbloat issue?


 As far as I can tell the products have all the knobs
   they need to be properly configured to deal with the issue.

  The ISPs / NOC staff / IT departments / ... just aren't
   taking the time, to properly / fully configure the products,
   rather than just the minimal configuration necessary to make it work.

-- 
E-Mail Sent to this address blackl...@anitech-systems.com
  will be added to the BlackLists.



Re: [ntp:questions] Detecting bufferbloat via ntp?

2011-02-16 Thread Dave Täht
Rick Jones rick.jon...@hp.com writes:

 Dave Täht d...@taht.net wrote:
 Terje Mathisen terje.mathisen at tmsw.no writes:

  Rick Jones wrote:
  Kevin Oberman ober...@es.net  wrote:
 
  No, you probably won't. Both theoretical and empirical information
  shows that overly large windows are not a good thing. This is the
  reason all modern network stacks have implemented dynamic window
  sizing.
 
  As far as I know, Linux, MacOS (I think), Windows, and BSD (at least
  FreeBSD) all do this and do it better than it is possible to do
  manually. N.B. Windows XP probably does not qualify as modern.
 
  Sadly, I see Linux's dynamic window sizing take the window to 4MB when
  128KB would do.  I'm not familiar with the behaviour of the other
  stacks'

 I did a little testing with rick a couple days ago. It turned out his
 problem was not in his end nodes, but somewhere in his path between his
 two sites is something rather bloated.

 Or rather, that even after setting the tx queue lengths to 32 packets,
 a test between that system and one 7ms away still resulted in 4MB
 socket buffers by the end of the test, i.e. confirming that the Linux
 autotuning code was still willing to grow the windows larger than
 necessary.

Better said. 

The evidence of bloat is not conclusive. Did you give vegas
a shot?


 A *little* is just fine. Bloated buffers - containing hundreds,
 thousands, tens of thousands of packets - which is what we are seeing
 today - is not.

 Well, the BDP of a 10GbE link might actually be measured in thousands
 of packets or more...  if my systems 7 ms apart were joined by a 10
 GbE link, that would be a bit more than 5800 1500-byte packets.  I'm
 thinking that while we may have to configure queues in terms of number
 of packets, we shouldn't think of them that way, but as length of
 time.

I agree, the dynamic range of today's devices presents a problem.


 rick jones

-- 
Dave Taht
http://nex-6.taht.net


Re: [ntp:questions] Detecting bufferbloat via ntp?

2011-02-16 Thread David Woolley

Danny Mayer wrote:


It would be more useful to discuss what happens with UDP flows since
that is what NTP uses.


UDP tends to rely on TCP dominating the traffic, so that there is 
something that does respond to congestion control mechanisms.  TCP tends 
to be sacrificed in favour of UDP.




[ntp:questions] Getting PPS to work with Oncore ref clock

2011-02-16 Thread Chris Albertson
I have to admit I know nothing about Linux serial PPS.   My guess is I
need to somehow set this up before I try to get it to work with NTP.
My clockstats file is filled with the messages quoted below.   Is
there something I can read?   I've built ntpd with the required pps
support, have the correct Linux kernel and header files, and made links
in /dev.  Hard to know what's missing.  The error messages are not
very helpful.

55609 21559.296 127.127.30.0 ONCORE[0]: ONCORE: oncore_get_timestamp,
error serial pps
55609 21589.292 127.127.30.0 ONCORE[0]: ONCORE: oncore_get_timestamp,
error serial pps
55609 21619.312 127.127.30.0 ONCORE[0]: ONCORE: oncore_get_timestamp,
error serial pps


-- 
=
Chris Albertson
Redondo Beach, California