SPINLOCKS         HOLD            WAIT
   UTIL  CON    MEAN(  MAX )   MEAN(  MAX )(% CPU)     TOTAL NOWAIT SPIN RJECT  NAME

   7.4%  2.8%  0.1us( 143us)  3.3us( 147us)( 1.4%)  75262432 97.2%  2.8%    0%  lock_sock_nested+0x30
  29.5%  6.6%  0.5us( 148us)  0.9us( 143us)(0.49%)  37622512 93.4%  6.6%    0%  tcp_v4_rcv+0xb30
   3.0%  5.6%  0.1us( 142us)  0.9us( 143us)(0.14%)  13911325 94.4%  5.6%    0%  release_sock+0x120
   9.6% 0.75%  0.1us( 144us)  0.7us( 139us)(0.08%)  75262432 99.2% 0.75%    0%  release_sock+0x30
...
Still, does this look like something worth pursuing?  In a past life/OS,
when one was able to eliminate one percentage point of spinlock
contention, two percentage points of improvement ensued.


Rick, this looks like good stuff; we're seeing more and more issues
like this as systems become more multi-core and have more interrupts
per NIC (think MSI-X).

MSI-X - haven't even gotten to that. Discussion of that probably
overlaps with some "pci" mailing list, right?

Let me know if there is something I can do to help.

I suppose one good step would be to reproduce the results on some other
platform. After that, I need to understand what those routines are doing
much better than I currently do, particularly from an "architecture"
perspective. I think it may involve all the prequeue / "do the TCP
processing on the user's stack" machinery, but I'm _far_ from certain.

rick jones
