For i386 with TSC, the kernel calibrates how many CPU cycles fit 
between two timer interrupts. Ideally, that value corresponds to 
10000 microseconds.

In practice, however, the timer interrupts do not happen exactly every 
10000 us (for hardware reasons). That calibration value is used when 
interpolating time between ticks.
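
To illustrate what I mean, here is a user-space sketch of the 
interpolation only (not the actual kernel code; the names and the 
example CPU speed are made up):

#include <stdint.h>
#include <stdio.h>

#define TICK_USEC 10000UL                /* nominal tick length */

static uint64_t cycles_per_tick;         /* from calibration at boot */
static uint64_t tsc_at_last_tick;        /* TSC saved in the tick handler */
static uint64_t usec_at_last_tick;       /* wall time at the last tick, in us */

static inline uint64_t rdtsc(void)
{
        uint32_t lo, hi;
        __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
        return ((uint64_t)hi << 32) | lo;
}

/* Interpolated time in microseconds: last tick plus elapsed cycles,
 * scaled back to microseconds with the calibrated cycles-per-tick. */
static uint64_t current_usec(void)
{
        uint64_t delta = rdtsc() - tsc_at_last_tick;

        return usec_at_last_tick + delta * TICK_USEC / cycles_per_tick;
}

int main(void)
{
        cycles_per_tick = 5000000;       /* example: 500 MHz CPU, 10 ms tick */
        tsc_at_last_tick = rdtsc();      /* pretend a tick just happened */
        usec_at_last_tick = 0;

        printf("interpolated: %llu us past the last tick\n",
               (unsigned long long)current_usec());
        return 0;
}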

When using NTP (or adjusting tick manually), the value added every tick 
may differ from 10000 us.

If that value is larger, time appears to jump ahead at the beginning 
of each tick; if it is smaller, time may appear to get stuck, run 
slow, or jump back at the beginning of a new tick.
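
A small example with made-up numbers shows the size of such a step:

#include <stdio.h>

int main(void)
{
        unsigned long calibrated_usec = 10000;  /* used for interpolation */
        unsigned long adjusted_usec   = 9990;   /* actually added per tick */

        /* Just before the next interrupt, interpolation has advanced
         * by a full 10000 us, but the tick handler then moves wall
         * time forward by only 9990 us ... */
        printf("apparent step at the tick: %ld us backwards\n",
               (long)(calibrated_usec - adjusted_usec));
        return 0;
}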

I have therefore added experimental code that scales the value used for 
tick interpolation according to these corrections. As far as I can 
tell, the clock quality improves, and the performance penalty only 
appears when the correction value changes.
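
Roughly, the idea is something like this (again only a sketch with 
invented names, not the patch itself):

#include <stdint.h>

static uint64_t cycles_per_tick = 5000000;       /* calibrated at boot */
static uint64_t scaled_cycles_per_usec = 500;    /* what interpolation divides by */
static unsigned long last_tick_usec = 10000;     /* last seen per-tick increment */

/* Called whenever NTP (or a manual tick adjustment) changes the
 * per-tick increment, so interpolation and the tick handler agree
 * again.  The division only happens when the value changes, which is
 * why the penalty only shows up on changes. */
void update_interpolation_scale(unsigned long tick_usec)
{
        if (tick_usec == last_tick_usec)
                return;                  /* fast path: nothing changed */

        /* cycles per microsecond of the *adjusted* tick
         * (integer sketch; real code needs more precision,
         * especially for nanoseconds) */
        scaled_cycles_per_usec = cycles_per_tick / tick_usec;
        last_tick_usec = tick_usec;
}

The interpolation path would then divide the elapsed cycles by 
scaled_cycles_per_usec instead of using the raw calibration value.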

I haven't done the non-TSC case or other architectures. At microsecond 
resolution the effect may seem negligible, but not at nanosecond 
resolution.

If anybody has an interesting opinion on this, please mail me.

Regards,
Ulrich
P.S. Not subscribed here.
