aggregate _RR/packets per second for many VMs on
the same system would be in order.
happy benchmarking,
rick jones
will be very much
involved in matters of congestion window and such. I suppose it is even
possible that, if the packet trace is taken on a VM receiver, delays
in getting the VM running could mean that GRO ends up pushing large
segments up the stack.
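If there is any question about GRO's involvement, it can be checked and
toggled on the receiver with ethtool (eth0 is an assumed interface name
here), and the trace retaken with it off:

  ethtool -k eth0 | grep generic-receive-offload
  ethtool -K eth0 gro off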
happy benchmarking,
rick jones
:
netperf -H otherguy -c -C -l 30 -i 30,3 -t UDP_RR -- -r 512
(I was guessing as to what netperf options you may have been using already)
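(For reference: -H names the remote system, -c and -C ask for local and
remote CPU utilization, -l 30 makes each iteration 30 seconds long,
-i 30,3 runs between 3 and 30 iterations chasing the confidence interval,
-t UDP_RR selects the UDP request/response test, and the test-specific
-r 512 sets the request size, with an optional second comma-separated
value for the response size.)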
happy benchmarking,
rick jones
make a
setsockopt(TCP_NODELAY) call.
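If the point is to mimic an application that sets TCP_NODELAY, netperf
can make that same setsockopt() call itself via the test-specific -D
option, e.g. (target host merely illustrative):

  netperf -H otherguy -t TCP_RR -- -r 512 -D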
happy benchmarking,
rick jones
If the transport is slow, the TCP stack will automatically collapse several
writes into a single skb (assuming TSO or GSO is on), and you'll see big
GSO packets with tcpdump [1]. So TCP will help you get less overhead
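A quick way to see those large GSO segments is to trace on the sender
while the test runs (interface name is an assumption here) and look for
frames reported with lengths well above the MTU:

  tcpdump -i eth0 -nn -s 96 tcp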
sessions from 10?
rick jones
Netperf Local VM to VM test:
- VM1 and its vcpu/vhost thread in numa node 0
- VM2 and its vcpu/vhost thread in numa node 1
- a script is used to launch netperf in demo mode and do the post-processing
to measure the aggregate result with the help of the timestamps (a sketch follows)
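A minimal sketch of such a script, assuming netperf's demo mode (the
global -D option, which emits timestamped interim results) and a
placeholder peer address:

  # launch each netperf session in demo mode; the timestamped
  # interim results let the post-processing compute the aggregate
  for i in 1 2; do
      netperf -H $PEER_IP -t TCP_STREAM -l 60 -D 1 > session_$i.log &
  done
  wait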
to at present)?
happy benchmarking,
rick jones
the queue which
built up to be in the VM itself and would more accurately represent what
a real NIC of that bandwidth would do.
happy benchmarking,
rick jones
things should be run rather than
the networking code.
rick jones
On 07/08/2012 08:23 PM, Jason Wang wrote:
On 07/07/2012 12:23 AM, Rick Jones wrote:
On 07/06/2012 12:42 AM, Jason Wang wrote:
Which mechanism did you use to address skew error? The netperf manual
describes more than one:
These mechanisms are missing from my test; I will add them to my test scripts.
http
would help confirm/refute any
non-trivial change in (effective) path length between the two cases.
Yes, I will test this, thanks.
Excellent.
happy benchmarking,
rick jones
cases.
happy benchmarking,
rick jones
On 04/30/2012 02:12 AM, Michael S. Tsirkin wrote:
On Tue, Jun 28, 2011 at 10:19:48AM -0700, Rick Jones wrote:
one of these days I'll have to find a good way to get accurate
overall CPU utilization from within a guest and teach netperf about
it.
I think the cleanest way would be to run another
if they were 2 vCPU VMs,
but that is just my gut talking. Certainly looking at the summary table
I'm wondering where between 4 and 12 VMs the curve starts its downward
trend. Do 12 and 24 2-vCPU VMs force more moving-around than say 16
or 32 would?
happy benchmarking,
rick jones
happy benchmarking,
rick jones
of the changes on a TCP_RR test
would probably be goodness as well.
happy benchmarking,
rick jones
one of these days I'll have to find a good way to get accurate overall
CPU utilization from within a guest and teach netperf about it.
The impact is that every skb allocation consumes one more pointer in the skb
, that is one of the reasons I added the burst mode to the _RR
test - because it could take a Very Large Number of concurrent netperfs
to take a link to saturation, at which point it might have been just as
much a context switching benchmark as anything else :)
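A configure-time option enables burst mode; a sketch, with the burst
size and the target host ("otherguy", as elsewhere in these messages)
merely illustrative:

  ./configure --enable-burst && make
  # keep up to 16 transactions in flight per connection
  netperf -H otherguy -t TCP_RR -l 30 -- -r 1,1 -b 16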
happy benchmarking,
rick jones
specify the destination IP and port for the data
connection explicitly via the test-specific options. In that mode the only
stats reported are those local to netperf rather than netserver.
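Something along these lines, with placeholder names and port, and
assuming an omni-enabled netperf where the test-specific -H and -P
select the data connection's destination and ports:

  netperf -H control.host -t TCP_RR -- -H data.host -P ,50000 -r 512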
happy benchmarking,
rick jones
Michael S. Tsirkin wrote:
On Mon, Jan 24, 2011 at 10:27:55AM -0800, Rick Jones wrote:
Just to block netperf you can send it SIGSTOP :)
Clever :) One could I suppose achieve the same result by making the
remote receive socket buffer size smaller than the UDP message size
and then not worry about how much the rate varied.
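In concrete terms (process selection and sizes merely illustrative):

  kill -STOP $(pgrep netperf)   # freeze the sender
  kill -CONT $(pgrep netperf)   # let it run again
  # or make the remote receive buffer smaller than the message:
  netperf -H otherguy -t UDP_STREAM -- -m 1472 -S 1024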
rick jones
file. Currently that is the one for human output, which has
a four line restriction. I will try to make it smarter as I go.
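For instance, with an omni-enabled netperf the human-readable output
selectors can be given directly on the command line (selector list
abbreviated here):

  netperf -H otherguy -t omni -- -O "THROUGHPUT,THROUGHPUT_UNITS,LOCAL_CPU_UTIL"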
happy benchmarking,
rick jones
happy benchmarking,
rick jones
PS - the enhanced latency statistics from -j are only available in the omni
version of the TCP_RR test. To get that add a --enable-omni to the ./configure
- and in this case both netperf and netserver have to be recompiled. For very
basic output one can peruse the output of:
src
Michael S. Tsirkin wrote:
On Tue, Jan 18, 2011 at 11:41:22AM -0800, Rick Jones wrote:
PS - the enhanced latency statistics from -j are only available in
the omni version of the TCP_RR test. To get that add a
--enable-omni to the ./configure - and in this case both netperf and
netserver have
the same.
happy benchmarking,
rick jones
Sridhar Samudrala wrote:
On Thu, 2010-04-08 at 17:14 -0700, Rick Jones wrote:
Here are the results with netperf TCP_STREAM 64K guest to host on an
8-cpu Nehalem system.
I presume you mean 8 core Nehalem-EP, or did you mean 8 processor Nehalem-EX?
Yes. It is a 2-socket quad-core Nehalem. so
to run things like single-instance TCP_RR
and multiple-instance, multiple transaction (./configure --enable-burst)
TCP_RR tests, particularly when concerned with scaling issues.
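A rough sketch of the multiple-instance flavor, with the instance count
and target host purely illustrative:

  for i in $(seq 1 8); do
      netperf -H otherguy -t TCP_RR -l 30 -- -r 1,1 -b 16 &
  done
  wait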
happy benchmarking,
rick jones
It shows cumulative bandwidth in Mbps and host
CPU utilization.
Current default single