On Wed, 20 Jan 2016, Peter Zijlstra wrote:

> On Wed, Jan 20, 2016 at 05:00:33PM +0100, Daniel Lezcano wrote:
> > +static void sched_irq_timing_handler(unsigned int irq, ktime_t timestamp,
> > +				      void *dev_id)
> > +{
> > +	u32 diff;
> > +	unsigned int cpu = raw_smp_processor_id();
> > +	struct wakeup *w = per_cpu(wakeups[irq], cpu);
> > +
> > +	/*
> > +	 * It is the first time the interrupt occurs of the series, we
> > +	 * can't do any stats as we don't have an interval, just store
> > +	 * the timestamp and exit.
> > +	 */
> > +	if (ktime_equal(w->timestamp, ktime_set(0, 0))) {
> > +		w->timestamp = timestamp;
> > +		return;
> > +	}
> > +
> > +	/*
> > +	 * Microsec resolution is enough for our purpose.
> > +	 */
> 
> It is also a friggin pointless /1000. The cpuidle code also loves to do
> this, and its silly, u64 add/sub are _way_ cheaper than u64 / 1000.
For the purpose of this code, nanoseconds simply provide more bits than
we care about, and computing the variance implies squaring those values.
*However* we can simply do

	diff = (timestamp - w->timestamp) >> 10

instead. No need to have an exact microseconds base.

> > +	ktime_t now = ktime_get();
> 
> Why !?! do we care about NTP correct timestamps?

Not at all. Using sched_clock() instead would be more than good enough
indeed.


Nicolas
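
Putting the two suggestions above together, here is a rough and completely
untested sketch of what the handler could look like. It assumes that
struct wakeup's timestamp field becomes a plain u64 holding a
sched_clock() value (so the ktime_t argument is no longer needed), and it
reuses the names from Daniel's patch quoted above:

/*
 * Untested sketch only: assumes w->timestamp is a plain u64 holding a
 * sched_clock() value instead of a ktime_t.
 */
static void sched_irq_timing_handler(unsigned int irq, void *dev_id)
{
	unsigned int cpu = raw_smp_processor_id();
	struct wakeup *w = per_cpu(wakeups[irq], cpu);
	u64 now = sched_clock();	/* local clock, no NTP correctness needed */
	u32 diff;

	/*
	 * First interrupt of the series: no interval yet, so no stats
	 * can be made.  Just record the timestamp and exit.
	 */
	if (!w->timestamp) {
		w->timestamp = now;
		return;
	}

	/*
	 * Roughly microsecond resolution is plenty: shift by 10 instead
	 * of dividing by 1000.  The unit becomes 1024ns, but the exact
	 * time base doesn't matter for the variance.
	 */
	diff = (now - w->timestamp) >> 10;
	w->timestamp = now;

	/* ... feed 'diff' into the interval statistics ... */
}

The shift divides by 1024 rather than 1000, but since only the relative
spread of the intervals matters for the statistics, the ~2% difference in
time base is irrelevant, and u64 subtract plus a shift stays cheap.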