I measure a huge slope, however. Starting at 1 usec for back-to-back system
calls, the time per call rises to 2 usec once each call is interleaved with
a counting loop running to 20 million, and hits 4 usec after 110 million.
The graph, with semi-scientific error bars, is at
http://ds9a.nl/tmp/recvfrom-usec-vs-wait.png
The code that generated it is at:
http://ds9a.nl/tmp/recvtimings.c
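For reference, here is a minimal sketch of the kind of measurement loop
described above. The real implementation is in recvtimings.c at the URL;
the socket setup, port number, loop bounds, and use of MSG_DONTWAIT here
are illustrative assumptions only:

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <sys/time.h>
#include <unistd.h>

static double now_usec(void)
{
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec * 1e6 + tv.tv_usec;
}

int main(void)
{
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in sin;
        char buf[1500];
        volatile unsigned long spin;
        unsigned long delay, i;

        memset(&sin, 0, sizeof(sin));
        sin.sin_family = AF_INET;
        sin.sin_port = htons(5000);        /* assumed port */
        bind(fd, (struct sockaddr *)&sin, sizeof(sin));

        /* for each delay count, spin that long, then time one recvfrom() */
        for (delay = 0; delay <= 110000000UL; delay += 10000000UL) {
                double t0, t1;

                for (spin = 0, i = 0; i < delay; i++)
                        spin++;            /* interleaved counting loop */

                /* with no data queued this returns EAGAIN immediately,
                 * which still exercises the system-call path; the
                 * original test presumably also had a sender */
                t0 = now_usec();
                recvfrom(fd, buf, sizeof(buf), MSG_DONTWAIT, NULL, NULL);
                t1 = now_usec();
                printf("%lu %.3f\n", delay, t1 - t0);
        }
        return 0;
}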
I'm investigating this further for other system calls. It might be that my
measurements are off, but it appears that even a slight delay between calls
incurs a large penalty.
The slope appears to flatten out the farther right the graph goes. Perhaps
that is the length of time it takes to incur all the requisite cache
misses.
Some judicious use of hardware perf counters might be in order, via, say,
PAPI or pfmon. Otherwise, you could try a test where you don't delay, but
instead try to blow out the cache(s) between recvfrom() calls, as sketched
below. If the per-call time there starts to match the times at the right
of the graph, it would suggest that it is indeed cache effects.
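A hedged sketch of that cache-blowing variant. The 16 MB buffer size and
the 64-byte line size are assumptions, not measured values for your
machine:

#include <stddef.h>

#define TRASH_SIZE (16 * 1024 * 1024)   /* assumption: > last-level cache */

static char trash[TRASH_SIZE];

/* Touch one byte per (assumed) 64-byte cache line so that most of the
 * cache contents are evicted between recvfrom() calls. */
static void blow_cache(void)
{
        size_t i;

        for (i = 0; i < sizeof(trash); i += 64)
                trash[i]++;
}

Calling blow_cache() where the counting loop used to be, with everything
else unchanged, should show whether evictions alone reproduce the
right-hand side of the graph.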
rick jones