On Wed, Jan 20, 2016 at 02:17:57PM -0500, Nicolas Pitre wrote:
> > It is also a friggin pointless /1000. The cpuidle code also loves to do
> > this, and it's silly; u64 add/sub are _way_ cheaper than u64 / 1000.
> 
> For the purpose of this code, nanoseconds simply provide too many bits 
> for what we care about.  Computing the variance implies squared values.
> 
> *However* we can simply do diff = (timestamp - w->timestamp) >> 10 
> instead.  No need to have an exact microsecs base.

Right, you could also reduce bits at the variance computation, but yes.
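
Concretely, something like the sketch below (plain C; the struct, helper and
field names other than w->timestamp are hypothetical, just to illustrate the
shift) is what this would look like:

    #include <stdint.h>

    typedef uint64_t u64;           /* stand-in for the kernel's u64 */

    struct waker_stats {            /* hypothetical, for illustration only */
            u64 timestamp;          /* last event time, in nanoseconds */
            u64 sum;                /* running sum of scaled intervals */
            u64 sum_sq;             /* running sum of squared scaled intervals */
            u64 count;              /* number of intervals accumulated */
    };

    static void waker_update(struct waker_stats *w, u64 now)
    {
            /* >> 10 scales ns down by 1024 (~usecs); cheaper than u64 / 1000 */
            u64 diff = (now - w->timestamp) >> 10;

            w->timestamp = now;
            w->sum      += diff;
            w->sum_sq   += diff * diff; /* squaring is why ns has too many bits */
            w->count++;
    }

The base then is 1/1024th of a microsecond-ish rather than exact microseconds,
which shouldn't matter as long as everything derived from it uses the same scale.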
