On Tue, 2013-11-26 at 09:52 +0100, Peter Zijlstra wrote:
> On Tue, Nov 26, 2013 at 12:12:31AM -0800, Davidlohr Bueso wrote:
> 
> > I am becoming hesitant about this approach. The following are some
> > results, from my quad-core laptop, measuring the latency of nthread
> > wakeups (1 at a time). In addition, failed wait calls never occur -- so
> > we don't end up including the (otherwise minimal) overhead of the list
> > queue+dequeue, and are only measuring the smp_mb() usage, since the
> > !empty list case never occurs.
> > 
> > +---------+--------------------+--------+-------------------+--------+----------+
> > | threads | baseline time (ms) | stddev | patched time (ms) | stddev | overhead |
> > +---------+--------------------+--------+-------------------+--------+----------+
> > |     512 | 4.2410             | 0.9762 | 12.3660           | 5.1020 | +191.58% |
> > |     256 | 2.7750             | 0.3997 | 7.0220            | 2.9436 | +153.04% |
> > |     128 | 1.4910             | 0.4188 | 3.7430            | 0.8223 | +151.03% |
> > |      64 | 0.8970             | 0.3455 | 2.5570            | 0.3710 | +185.06% |
> > |      32 | 0.3620             | 0.2242 | 1.1300            | 0.4716 | +212.15% |
> > +---------+--------------------+--------+-------------------+--------+----------+
> > 
> 
> Whee, this is far more overhead than I would have expected... pretty
> impressive really for a simple mfence ;-)
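
For context, the barrier in question is the classic store-buffer
(Dekker-style) pairing that makes a lockless "any waiters queued?" check
safe. Roughly, using hypothetical names and C11 fences rather than the
actual kernel code:

/*
 * Illustrative only -- hypothetical names, not the futex patch itself.
 * Waiter: publish that it is queued, then re-check the futex value.
 * Waker:  store the new value, then check whether anyone is queued.
 * The two seq_cst fences (smp_mb(), i.e. mfence on x86) guarantee that
 * either the waker sees nr_waiters > 0 or the waiter sees the new
 * value; without them a wakeup could be lost.
 */
#include <stdatomic.h>
#include <stdbool.h>

static atomic_int futex_val;   /* value being waited on                */
static atomic_int nr_waiters;  /* bumped before a thread goes to sleep */

static bool waiter_should_block(int expected)
{
	atomic_fetch_add_explicit(&nr_waiters, 1, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst);        /* smp_mb() */
	return atomic_load_explicit(&futex_val,
				    memory_order_relaxed) == expected;
}

static bool waker_must_wake(int new_val)
{
	atomic_store_explicit(&futex_val, new_val, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst);        /* smp_mb() */
	return atomic_load_explicit(&nr_waiters,
				    memory_order_relaxed) > 0;
}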

*sigh* I just realized I had some extra debugging options in the .config
I ran for the patched kernel. This probably explains the huge overhead.
I'll rerun and report shortly.
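
For reference, the measurement is roughly of this shape -- nthreads
blocked in FUTEX_WAIT, then woken with one FUTEX_WAKE(1) call at a time
while the total wall time is taken. The sketch below uses illustrative
names (NTHREADS, sys_futex(), gettimeofday() timing) and is not the
exact benchmark used:

/*
 * Illustrative sketch: NTHREADS block in FUTEX_WAIT, then are woken
 * one at a time with FUTEX_WAKE(1) while the wall time is measured.
 */
#include <linux/futex.h>
#include <sys/syscall.h>
#include <sys/time.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

#define NTHREADS 512

static int futex_word;          /* all waiters block on this word       */
static atomic_int nr_queued;    /* how many threads have reached wait() */

static long sys_futex(int *uaddr, int op, int val)
{
	return syscall(SYS_futex, uaddr, op, val, NULL, NULL, 0);
}

static void *waiter(void *arg)
{
	(void)arg;
	atomic_fetch_add(&nr_queued, 1);
	/* retry on spurious/failed wakeups until the waker flips the word */
	while (__atomic_load_n(&futex_word, __ATOMIC_ACQUIRE) == 0)
		sys_futex(&futex_word, FUTEX_WAIT, 0);
	return NULL;
}

int main(void)
{
	pthread_t tid[NTHREADS];
	struct timeval start, end;
	int i;

	for (i = 0; i < NTHREADS; i++)
		pthread_create(&tid[i], NULL, waiter, NULL);

	/* give every thread a chance to actually block in FUTEX_WAIT */
	while (atomic_load(&nr_queued) != NTHREADS)
		usleep(1000);
	usleep(10000);

	gettimeofday(&start, NULL);
	__atomic_store_n(&futex_word, 1, __ATOMIC_RELEASE);
	for (i = 0; i < NTHREADS; i++)
		sys_futex(&futex_word, FUTEX_WAKE, 1);   /* 1 at a time */
	gettimeofday(&end, NULL);

	for (i = 0; i < NTHREADS; i++)
		pthread_join(tid[i], NULL);

	printf("%d wakeups took %.4f ms\n", NTHREADS,
	       (end.tv_sec - start.tv_sec) * 1000.0 +
	       (end.tv_usec - start.tv_usec) / 1000.0);
	return 0;
}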
