First some background information:

The following behaviour has been observed on an HP ProLiant DL360 G7 running Oracle Linux Server release 6.3 with ntp 4.2.4p8.

The following ntpd configuration is used; the NTP servers are all dedicated local stratum 1 servers.

driftfile /var/lib/ntp/drift
server ntp1 version 4 iburst minpoll 5 maxpoll 5 prefer
server ntp2 version 4 iburst minpoll 5 maxpoll 5
server ntp3 version 4 iburst minpoll 5 maxpoll 5
peer peer1 version 4 iburst minpoll 5 maxpoll 5

The ntpd daemon is started with the following command line:

  ntpd -u ntp:ntp -p /var/run/ntpd.pid -g

We have an application running on the client which needs synchronised time. The application is only started once ntpd reports that the local clock is synchronised and the offset is within 300 ms. For safety reasons, the application is terminated when ntpd reports an offset larger than 300 ms.
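
For reference, our check is roughly equivalent to the following sketch (the field parsing assumes the "ntpq -c 'rv 0'" output format of ntp 4.2.4, where leap=11 means unsynchronised and offset is the combined clock offset in milliseconds; the actual application logic is more involved):

  #!/bin/sh
  # Sketch: decide whether ntpd considers the clock usable.
  vars=$(ntpq -c 'rv 0' | tr ',' '\n')
  leap=$(echo "$vars" | sed -n 's/.*leap=\([0-9]*\).*/\1/p')
  offset=$(echo "$vars" | sed -n 's/.*offset=\(-\{0,1\}[0-9.]*\).*/\1/p')
  if [ "$leap" != "11" ] && [ -n "$offset" ]; then
      # compare |offset| against the 300 ms limit
      ok=$(echo "$offset" | awk '{ a = ($1 < 0) ? -$1 : $1;
                                   if (a < 300) print "yes"; else print "no" }')
      if [ "$ok" = "yes" ]; then
          echo "synchronised, offset ${offset} ms"
          exit 0
      fi
  fi
  echo "not usable yet"
  exit 1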

Now the actual problem description:

To speed up initial synchronisation after a system reboot, we are using iburst. According to the documentation, the iburst option is supposed to send a burst of eight packets when the server is unreachable.

This does not seem to be true; I have only ever observed four packets being sent. The behaviour looks more like ntpd may send up to eight packets, but stops the burst as soon as it has synchronised to a server.
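
A capture along the following lines is one way to observe this (the interface name is just an example, adjust as needed):

  # count the client packets ntpd sends to one of the configured servers
  # during startup
  tcpdump -n -i eth0 'udp port 123 and host ntp1'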

Is the documentation incorrect here?

ntpd reports successful synchronisation about 7 seconds after it starts, which is pretty good.

But quite frequently the system clock is off by more than 128 ms after a reboot, causing ntpd to perform a clock reset (step) immediately after becoming synchronised and to report the clock as unsynchronised again. This reset also seems to discard all data samples accumulated in the clock filter.
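
For what it is worth, the 128 ms figure matches what I understand to be ntpd's default step threshold, which the tinker directive can change; but raising it would only trade the step for a long slew, so it does not look like a fix (the value below is just an illustration, not something I have verified on 4.2.4p8):

  # raise the step threshold from its 128 ms default to 300 ms so that a
  # typical post-reboot offset would be slewed rather than stepped
  tinker step 0.3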

Only about three poll intervals later (3 × 32 s, i.e. roughly 96 s at the fixed poll interval configured above) is the clock reported as synchronised again.

I have experimented with running ntpdate (or ntpd -g -q) once before starting ntpd. This indeed helps, as it brings the system time close enough to the correct time to avoid the clock reset when ntpd is started as a daemon. However, according to http://support.ntp.org/bin/view/Dev/DeprecatingNtpdate, ntpdate is deprecated, and it should not be necessary to run it before starting ntpd anyway, so this is probably not the best solution.
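
For clarity, the startup sequence I experimented with looks roughly like this (simplified from the actual init script):

  # one-shot mode: -q makes ntpd exit after setting the clock once,
  # -g allows the large initial offset
  ntpd -g -q
  # then start the daemon as usual
  ntpd -u ntp:ntp -p /var/run/ntpd.pid -g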

So is there some way to speed up the second waiting period (after the clock reset) with a suitable ntpd configuration? Or can ntpd be configured to send another burst of packets after the clock has been reset?

Markus
