On Tue, Mar 25, 2014 at 7:30 AM,  <[email protected]> wrote:
> http://docs.fedoraproject.org/en-US/Fedora/13/html/Virtualization_Guide/chap-Virtualization-KVM_guest_timing_management.html
>
> The problem is intrinsic. With "full" virtual systems the kernel of the
> guest can never monopolize the CPU and therefore timings will skew dependent
> on the particular load on the host. The hardware clock doesn't matter to
> much for ntp because it is only read/set occasionally. The problem lies in
> the detection of jitter/skew which ntp tries to do which *can* fail badly
> inside a virtual guest. With low loaded systems you will never see a
> problem.

I'll agree that heavy CPU contention and/or the lack of constant_tsc
are problems... fortunately, I tend to have good luck with low CPU
contention.  But looking at my collected data, I do have occasional
(~weekly) spikes in the error estimate (with corresponding ringing in
PLL offset) that seem to correspond to increased CPU contention as
measured by steal%, but the worst it typically gets is about 4 ms off,
from a baseline of ~180 us.
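
For anyone who wants to do the same correlation without a full
monitoring stack: here's a minimal sketch of computing steal% from two
/proc/stat samples.  The field order follows proc(5) (user nice system
idle iowait irq softirq steal ...), and the counter values below are
made up for illustration, not from my hosts:

```python
def steal_percent(sample1, sample2):
    """Return steal time as a percentage of total CPU time between
    two /proc/stat "cpu" lines (counters are cumulative jiffies)."""
    f1 = [int(x) for x in sample1.split()[1:]]
    f2 = [int(x) for x in sample2.split()[1:]]
    total = sum(f2) - sum(f1)
    steal = f2[7] - f1[7]  # 8th field is steal, per proc(5)
    return 100.0 * steal / total if total else 0.0

# Illustrative counter values (not real measurements):
before = "cpu  1000 0 500 8000 100 0 0 40 0 0"
after_ = "cpu  1100 0 550 8700 110 0 0 80 0 0"
print("steal%%: %.1f" % steal_percent(before, after_))  # steal%: 4.4
```

Sample the line periodically, log the result next to ntpd's error
estimate, and the spikes line up (or don't) on their own.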

This is still better than what my furnace's programmable thermostat
does to my home computers.  It's worse than what you'd get from "real"
dedicated timekeeping hardware, or even a well-deployed NTP server in
a temperature-controlled environment, but it's not terrible,
especially in the context of a network service.

The important thing, though, is to monitor it.  munin has plugins to
track ntpd, and other tools probably do as well.  Past performance is
not a guarantee of future results.  :-)
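
If you'd rather roll your own check than deploy Munin, a sketch like
the following works: parse the comma-separated variable list that
`ntpq -c rv` prints and alert on the offset.  The sample line is
illustrative (not captured from a real host), and the 4 ms threshold
just echoes the worst spike I mentioned above:

```python
def parse_rv(text):
    """Parse ntpq readvar-style output ("a=1, b=2, ...") into a dict."""
    pairs = {}
    for chunk in text.replace("\n", " ").split(","):
        if "=" in chunk:
            key, _, val = chunk.partition("=")
            pairs[key.strip()] = val.strip()
    return pairs

# Illustrative readvar output; ntpq reports offset in milliseconds.
sample = "offset=0.183, frequency=-3.502, sys_jitter=0.120, clk_jitter=0.095"
rv = parse_rv(sample)
offset_ms = abs(float(rv["offset"]))
status = "WARN" if offset_ms > 4.0 else "OK"
print("%s: clock offset %.3f ms" % (status, offset_ms))
```

Feed the real output of `ntpq -c rv` into parse_rv() on a cron
schedule and you have a crude but serviceable alarm.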

Thanks for the response... I actually hadn't thought to correlate
steal% and error estimate before, so it was a nice trip through my
graphs.  -rt

-- 
Ryan Tucker <[email protected]>
_______________________________________________
pool mailing list
[email protected]
http://lists.ntp.org/listinfo/pool