Bill,

Hey, somebody mentioned LRD (long-range dependency aka heavy-tail), 
which is the tendency for infrequent events to take unexpectedly long 
times. I first noticed this in the ARPAnet, which showed occasional 
delays up to thirty seconds. I could not account for where in the 
network a packet could get held up that long. That Internet traffic 
delays are indeed LRD has been demonstrated in several papers.

Suspecting such could be the case between NTP servers and clients, I 
designed an experiment to detect such things and found small but 
significant LRD effects with lags up to TWO WEEKS! At short lags up to 
several network turns this can be explained by packet lengths, 
buffering, retransmissions, etc., but at much longer lags you have to 
look for routing flaps, provisioning changes, etc.

Somewhere buried in the NTP project collection is a briefing on the 
results. Briefly put, and without the gory math, the Hurst parameter 
defines the influence of LRD - at 0.5 the distribution is exponential; 
at 1.0 it is a pure random walk. I found most NTP paths had Hurst 
parameters in the 0.7 range. What is unclear is whether the effects are 
due to the network itself or the computer oscillators.
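
If you want to poke at your own delay data, here is a minimal
rescaled-range (R/S) sketch of a Hurst-parameter estimate in Python.
It is only an illustration of the idea, not the method used for the
briefing, and the names are mine; it assumes 'delays' is a plain
array of one-way delay samples taken at a roughly constant rate.

    import numpy as np

    def hurst_rs(x, min_block=8):
        # Rescaled-range (R/S) estimate of the Hurst parameter H.
        x = np.asarray(x, dtype=float)
        sizes, rs_vals = [], []
        n = min_block
        while n <= len(x) // 2:
            rs = []
            for i in range(len(x) // n):
                seg = x[i * n:(i + 1) * n]
                dev = np.cumsum(seg - seg.mean())  # mean-adjusted cumulative sum
                r = dev.max() - dev.min()          # range of cumulative deviations
                s = seg.std()
                if s > 0:
                    rs.append(r / s)
            if rs:
                sizes.append(n)
                rs_vals.append(np.mean(rs))
            n *= 2
        # H is the slope of log(R/S) against log(block size).
        h, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
        return h

H near 0.5 means essentially uncorrelated samples; values around 0.7,
as I saw on most paths, point to long-range dependence.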

Dave

Bill Unruh wrote:

> On Wed, 30 Apr 2008, Greg Dowd wrote:
> 
> 
>>PTP default profile operation is 1 way sync transmissions from the
>>grandmaster to all slaves (via multicast) with an implicit "occasional"
>>delay request/response for ranging from each slave.  The work we are
>>doing in the telecom space (through ITU and IETF) defines a new
>>application profile for PTP which is unicast based and has a 1:1
>>correspondence between sync and delay request/response.  It also allows
>>higher sync and delay request rates.
>>
>>So, while NTP and PTP essentially have a like set of timestamps and
>>fundamental assumptions, I wouldn't say they do the same thing.  The
>>PTP default profile is optimized for operation on the small LAN, while
>>NTP targets everything from a single LAN segment to the big, hairy,
>>scary Internet (along with lousy oscillators).
>>
>>On a small LAN, with light traffic, it is likely all moot.
>>
>>I'm not sure we agree on the effectiveness of the clock servo in ntp.
> 
> 
> We probably do.
> 
> 
>>First, there is no protocol requirement to only use 8 taps, it could be
> 
> 
> It is not 8 taps. The clock filter is a "smallest delay amongst the last 8
> samples" filter, whic throws away about 85% of the samples, which I find a
> profligate use of data.
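
For readers following along, the selection Bill describes amounts to
something like the sketch below (illustrative Python, not ntpd's actual
code): keep the last eight (offset, delay) pairs and use only the one
with the smallest round-trip delay.

    from collections import deque

    class ClockFilter:
        def __init__(self, depth=8):
            self.samples = deque(maxlen=depth)  # last eight (offset, delay) pairs

        def add(self, offset, delay):
            self.samples.append((offset, delay))
            # Use only the minimum-delay sample; once the window is full,
            # roughly seven of every eight samples never get used.
            return min(self.samples, key=lambda s: s[1])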
> 
> 
>>changed.  I've checked :-) IIRC, there are 20 taps in the ACTS reference
>>clock driver.  However, since the network client filter is 8 entries
>>deep, and the poll interval can climb to 1024 seconds, I wonder if you
>>feel like the frequency stability of a pc is useful out at observation
>>intervals in the 10k seconds range?  My guess would be that
> 
> 
> I agree. This is what makes ntp so bad on most computers (i.e., much worse
> response than optimal given the data collected) and so slow to respond to
> changes.
> 
> 
>>environmentals would stomp on those samples.  Also, wander can be
>>introduced by the LRD characteristics of network traffic.
> 
> 
> LRD?
> 
> 
>>However, moving closer in, I still think the higher update rate has
>>value.  If you start using higher quality oscillators and hardware
>>timestamping, the dominant noise source becomes the delay variation in
>>the network.  Since the remote clock can't average this (it's not
>>uniform or Gaussian), it needs to use some intelligent filtering.
>>Higher packet rates mean that there are more samples to pick from.
> 
> 
> Sure, but one of the goals of ntp is to minimize the impact on the servers.
> 
> 
> 
>>Also, one thing a lot of these discussions miss is the natural tradeoff
>>between trying to be the most accurate vs trying to be the most stable.
> 
> 
> Of course. If you average over a month, you will be very stable, but
> probably very inaccurate. If you try to respond to each and every
> fluctuation, the opposite will occur. Somewhere in there is the optimum,
> and that optimum depends on the exact character of the noise, both phase
> and frequency. NTP's assumption that there is an Allan optimum point, fixed
> for all situations, is a poor approximation, both because that point varies
> greatly with the exact network connectivity and because the frequency
> fluctuations are not 1/f noise but dominantly environmental, which has
> strong periodicities (day/night).
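
The tradeoff Bill describes is usually pictured with the Allan
deviation as a function of averaging time; a rough sketch of the
(non-overlapping) calculation is below. It is only an illustration,
assuming 'y' holds fractional-frequency samples taken every tau0
seconds; it is not what ntpd computes internally.

    import numpy as np

    def allan_deviation(y, m):
        # Allan deviation at averaging factor m (tau = m * tau0).
        y = np.asarray(y, dtype=float)
        n = len(y) // m
        ybar = y[:n * m].reshape(n, m).mean(axis=1)  # averages over tau
        avar = 0.5 * np.mean(np.diff(ybar) ** 2)     # Allan variance
        return np.sqrt(avar)

Sweeping m and plotting the result typically shows a minimum; that
minimum is the "Allan intercept" whose fixed placement is being
disputed here.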
> 
> 
> 
>>These tradeoffs, as well as differences in noise processes, mean there
>>is no one "correct" servo.   Having spent some time studying networks
>>with multiple network load generators connected through and across
>>networks where sync is transferred (mostly for wireless backhaul), I've
>>developed a healthy respect for many of the various sampling and
>>filtering functions in ntp.
>>
>>
>>
>>
>>
>>Greg Dowd
>>gdowd at symmetricom dot com (antispam format)
>>Symmetricom, Inc.
>>www.symmetricom.com
>>"Everything should be made as simple as possible, but no simpler" Albert
>>Einstein
> 
> 
> 
> And I think ntp is too simple.
> 
> 
> 
>>-----Original Message-----
>>From: Bill Unruh [mailto:[EMAIL PROTECTED]
>>Sent: Wednesday, April 30, 2008 3:21 PM
>>To: Greg Dowd
>>Cc: questions@lists.ntp.org
>>Subject: RE: [ntp:questions] frequency adjusting only
>>
>>On Wed, 30 Apr 2008, Greg Dowd wrote:
>>
>>
>>>As noted, these are really stability measurements of the difference
>>>between two clocks.  The absolute accuracies, particularly once you
>>>reach the submillisecond domain, are impacted by the sum of all biases
>>>in the measurement system, os, stack, driver, dma controller, bus,
>>>mac, phy, physical layer, switching/routing matrix and protocols
>>>(ARP/STP/QoS) and phy, mac, bus, driver, stack, os, app on the other end.
>>>Not just jitter and delay variation, but biases. Sometimes the biases
>>>are complementary and cancel and sometimes they don't.
>>
>>Agreed. However, ntp and PTP in software do almost the same thing
>>(unless ptp really uses broadcast, in which case it is much worse than
>>ntp -- broadcast is horrible since it cannot see those sudden increases
>>in delays due to congestion, etc.). NTP is far too aggressive in throwing
>>away packets -- keeping only about 1/8 of the packets due to the clock
>>filter algorithm. But ptp, if what you say is correct, is much worse,
>>since broadcast mode is really only good to ms due to those variable
>>delays.
>>
>>
>>
>>>There is a real difference available, which is the follow-up message.
>>>It is possible to have the system record the timestamp of actual
>>>transmission and send it in a follow-up in ptp.  I did some testing
>>>with this a few years ago and achieved the same results in timestamp
>>>transmission with both protocols.  Having said that, I presume that
>>>one REAL benefit for time transfer is that PTP can, and does, run at a
>>>higher sync rate than ntp.  It is also synchronizing to a single
>>>clock.
>>
>>The higher sync rate can be a benefit. It can also be bad because the
>>Markovian clock discipline means that no use can be made of long time
>>baselines to get better clock frequency accuracy (one of the great
>>advantages of chrony in situations where the phase noise dominates).
>>ntp's handling is a kludge.
>>
>>
>>
>>>Also, the default ptp app is using multicast "broadcasts" with ttl 1
>>>and the client uses a slightly funky "point to point" multicast
>>>transmission as a ranging request to calculate propagation delay.  The
>>>delay is then added to sync to arrive at a value for local clock
>>>comparison.  However, I don't think that there is a multi-tap filter.
>>>In fact, in the open source ptp, I think the servo is just pretty much
>>>a jam hack.  The point was to show the protocol.
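
For reference, the ranging arithmetic Greg describes uses the usual
two-way timestamps: t1 the master's sync send time, t2 the slave's
receive time, t3 the slave's delay-request send time, t4 the master's
receive time. A minimal sketch (variable names are mine, not ptpd's),
which assumes the path is symmetric:

    def ptp_offset_and_delay(t1, t2, t3, t4):
        # Mean path delay from the two one-way measurements.
        path_delay = ((t2 - t1) + (t4 - t3)) / 2.0
        # Slave clock minus master clock, after removing the delay.
        offset = (t2 - t1) - path_delay
        return offset, path_delay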
>>
>>It looked like it. But both ntp and ptp simply use Markovian response
>>filters. They preserve no memory, which is silly.
>>
>>
>>
>>>All of this is good dialogue but it is VERY important to note that
>>>what you test in a small LAN has very little bearing on the
>>>performance possible in various types of real networks of greater
>>>scale.
>>
>>Agreed.
>>But the OP wanted to use it in a small lan.
>>
>>
>>>
>>>Greg Dowd
>>>gdowd at symmetricom dot com (antispam format) Symmetricom, Inc.
>>>www.symmetricom.com
>>>"Everything should be made as simple as possible, but no simpler"
>>>Albert Einstein
>>>
>>>-----Original Message-----
>>>From: [EMAIL PROTECTED]
>>>[mailto:[EMAIL PROTECTED] On
>>>Behalf Of Bill Unruh
>>>Sent: Wednesday, April 30, 2008 1:20 PM
>>>To: questions@lists.ntp.org
>>>Subject: Re: [ntp:questions] frequency adjusting only
>>>
>>>[EMAIL PROTECTED] (maxime louvel) writes:
>>>
>>>
>>>>On Tue, Apr 29, 2008 at 6:27 PM, Unruh <[EMAIL PROTECTED]>
>>>
>>>wrote:
>>>
>>>
>>>>>[EMAIL PROTECTED] (maxime louvel) writes:
>>>>>
>>>>>
>>>>>>Hi,
>>>>>
>>>>>>I have know run a lot of tests.
>>>>>>Just to let you know what I've got so far.
>>>>>>I have tried NTP, and NTP + PTP (Precision Time Protocol).
>>>>>>I haven't tried Chrony nor TSClock.
>>>>>>I have used the software only implementation of PTP (ptpd).
>>>>>
>>>>>>With NTP only I have got an accuracy around 1ms,
>>>
>>>Actually, I have no idea what the difference is between the "software
>>>implementation" of PTP and standard NTP. The advantage of PTP is
>>>the HARDWARE timestamping of the packets as they come into the
>>>ethernet card (special-purpose ethernet cards with clocks on board)
>>>and possibly PTP-aware switches which race the PTP packets through
>>>without delay.
>>>
>>>Software only means
>>>that PTP uses exactly the same kernel routines, etc. to read the
>>>computer clock as does ntp, I assume. I cannot see how it can be better
>>>unless there are some severe bugs in NTP.
>>>What version of NTP are you running?
>>>
>>>
>>>
>>
>>
> 

_______________________________________________
questions mailing list
questions@lists.ntp.org
https://lists.ntp.org/mailman/listinfo/questions
