On 10/09/2012 12:35 PM, Toke Høiland-Jørgensen wrote:
> Rick Jones <rick.jon...@hp.com> writes:
> Hi Rick
> Thanks for your feedback.
>> The script looks reasonable. Certainly cleaner than any Python I've
>> yet written :) I might be a little worried about skew error though
>> (assuming I've not mis-read the script and example ini file). That is
>> why I use the "demo mode" of netperf in
>> http://www.netperf.org/svn/netperf2/trunk/doc/examples/bloat.sh, though
>> it does make the post-processing rather more involved.
> Ah, I see. I'll look into that. As far as I can tell from that script,
> you're basically running it with the demo mode enabled, and graphing the
> results with each line as a data point?
Mostly. The smallest step size in rrdtool is one second, so I create
RRDs with that as the step size, but I try for sub-second samples from
the interim results to mitigate (or try to) some of rrdtool's averaging.
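In rough Python terms, the rrd side of it amounts to something like this
(just a sketch, not what bloat.sh actually does; the file name,
data-source name and retention are arbitrary):

import subprocess

RRD_FILE = "tcp_stream.rrd"   # placeholder name

# Create an RRD with the one-second minimum step and an hour's worth of
# one-second averages.
subprocess.check_call([
    "rrdtool", "create", RRD_FILE,
    "--step", "1",
    "DS:mbps:GAUGE:5:0:U",
    "RRA:AVERAGE:0.5:1:3600",
])

def add_sample(timestamp, mbps):
    # Feed one interim netperf result into the RRD. Recent rrdtool
    # versions accept fractional timestamps; older ones may need int().
    subprocess.check_call(
        ["rrdtool", "update", RRD_FILE, "%.3f:%f" % (timestamp, mbps)])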
> There's a comment about using negative values for -D to increase
> accuracy at a cost of performance. Is this cost significant? And if it
> is, would there be any reason why it wouldn't be possible to just use
> positive values and then fix the error by interpolating values to be at
> fixed intervals when graphing?
When the demo mode was introduced into netperf, a gettimeofday() call
was still relatively expensive, and I wanted to mitigate the effect of
demo mode on overall performance. So, what the code does is guess how
many units of work will complete within the desired output interval.
Then, once that quantity of work has been completed, netperf makes the
gettimeofday() call to see if it is time to emit an interim result,
adjusting the guesstimate for units of work per time interval
accordingly. However, if things slow down considerably, that can lead
to a rather long interval between interim results.
So, now that at least some platforms have an "inexpensive"
gettimeofday() call, if a negative value is used, that signals netperf
to make the gettimeofday() call after each unit of work (each send or
recv call, or pair in the case of an RR test). That should result in
hitting the desired interval more consistently, save for when a single
one of those calls takes longer than the interval itself.
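In rough pseudo-Python, the two behaviours described above look
something like this (a sketch of the logic only, not netperf's actual C
code):

import time

def run_demo_mode(do_unit_of_work, demo_interval, duration):
    # demo_interval > 0: guess how many units of work fit into one
    # reporting interval and only look at the clock after that many.
    # demo_interval < 0: look at the clock after every unit of work.
    interval = abs(demo_interval)
    units_per_check = 1            # guesstimate of units of work per interval
    start = time.time()
    last_check = last_report = start
    units_done = 0

    while time.time() - start < duration:
        do_unit_of_work()          # one send/recv, or a request/response pair
        units_done += 1
        if demo_interval < 0 or units_done >= units_per_check:
            now = time.time()
            if now - last_report >= interval:
                print("interim result at t=%.3f" % (now - start))
                last_report = now
            if demo_interval > 0 and now > last_check:
                # adjust the guesstimate based on how long that batch took
                units_per_check = max(
                    1, int(units_done * interval / (now - last_check)))
            last_check = now
            units_done = 0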
I used rrdtool so I could get all the tests "snapped" to the same set of
one second intervals starting on one second boundaries.
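If you wanted to avoid rrdtool altogether, the same sort of snapping
could be done in the graphing script itself - a quick sketch, assuming
the interim lines have already been parsed into (timestamp, value)
pairs:

from collections import defaultdict

def snap_to_seconds(samples):
    # Average (unix_timestamp, value) samples into one-second buckets so
    # that several tests line up on the same one-second boundaries.
    buckets = defaultdict(list)
    for ts, value in samples:
        buckets[int(ts)].append(value)
    return sorted((sec, sum(vals) / len(vals))
                  for sec, vals in buckets.items())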
>> I see you are running the TCP_RR test for less time than the
>> TCP_STREAM/TCP_MAERTS test. What do you then do to show the latency
>> without the bulk transfer load?
> I ran the TCP_RR test by itself to get a baseline result. The idea with
> the different lengths is to start the streams, wait two seconds, and
> then run the roundtrip test so that it finishes two seconds before the
> streams do (i.e. the roundtrip test is only running while the streams are).
> This is for my test setup, which is just a few computers connected with
> Ethernet, so no sudden roundtrip variances should occur. I can see how
> it would be useful to get something that can be graphed over time; I'll
> try to look into getting that working.
You might give the two bulk transfer tests a bit longer to get going -
say 15 seconds or so. In at least some of my runs of bloat.sh I've seen
the throughput take a while to build up. That is perhaps part of the
reason why Dave Taht is calling for long RTT tests?
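Something along these lines would do the staggering (untested sketch;
the host name, test lengths and warm-up period are just placeholders,
using the stock -H/-t/-l/-D options):

import subprocess
import time

TARGET = "netperf.server.example"   # placeholder for the netserver host
STREAM_LEN = 70                     # seconds for the two bulk tests
WARMUP = 15                         # let the streams build up first
RR_LEN = STREAM_LEN - 2 * WARMUP    # so TCP_RR also ends before the streams

def start_netperf(test, length):
    # -D 0.5 asks for interim results every half second (a negative value
    # would select the per-unit-of-work clock checks discussed above)
    return subprocess.Popen(
        ["netperf", "-H", TARGET, "-t", test, "-l", str(length), "-D", "0.5"])

streams = [start_netperf("TCP_STREAM", STREAM_LEN),
           start_netperf("TCP_MAERTS", STREAM_LEN)]
time.sleep(WARMUP)
rr = start_netperf("TCP_RR", RR_LEN)

for proc in streams + [rr]:
    proc.wait()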
>> I was thinking of trying to write a version of bloat.sh in python, but
>> before I did I wanted to know if python was sufficiently available in
>> most folks' bufferbloat testing environments. I figure in
>> "full-featured" *nix systems that isn't an issue, but what about in
>> the routers?
> Is there any reason why it wouldn't be possible to run the python script
> on a full-featured machine and run the netperf instances via an ssh
> tunnel on the router (on a different interface than the one being
> tested, of course)?
Not really, apart from script complexity and brittleness. The
netperf_by_flavor.py script does something along those lines for
OpenStack Nova instances. (It is one of only two Python scripts I've
written thus far, the second being post_proc.py, which it uses to
post-process the results of its run of the runemomniaggdemo.sh script.)
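For the ssh route, the core of it could be as simple as this (sketch
only; it assumes netperf is installed on the router, passwordless ssh is
set up, and the host names are placeholders):

import subprocess

ROUTER = "root@router.lan"     # management address, not the interface under test
TARGET = "192.168.1.100"       # netserver on the far side of the router

def remote_netperf(test, length, interval=0.5):
    # Run netperf on the router over ssh and return its raw output for
    # post-processing on the full-featured machine.
    cmd = ["netperf", "-H", TARGET, "-t", test,
           "-l", str(length), "-D", str(interval)]
    return subprocess.check_output(["ssh", ROUTER] + cmd,
                                   universal_newlines=True)

print(remote_netperf("TCP_STREAM", 60))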
happy benchmarking,
rick