Re: [PATCH net-next RFC 2/2] vhost_net: basic polling support

2015-10-22 Thread Rick Jones
aggregate _RR/packets per second for many VMs on the same system would be in order. happy benchmarking, rick jones -- To unsubscribe from this list: send the line "unsubscribe kvm" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org

Re: [QA-TCP] How to send tcp small packages immediately?

2014-10-24 Thread Rick Jones
will be very much involved in matters of congestion window and such. I suppose it is even possible that if the packet trace is on a VM receiver that some delays in getting the VM running could mean that GRO would end-up making large segments being pushed up the stack. happy benchmarking, rick jones

Re: 8% performance improved by change tap interact with kernel stack

2014-01-28 Thread Rick Jones
: netperf -H otherguy -c -C -l 30 -i 30,3 -t UDP_RR -- -r 512 (I was guessing as to what netperf options you may have been using already) happy benchmarking, rick jones
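The netperf invocation quoted above can be unpacked option by option; this is a hedged annotation, and "otherguy" is a placeholder hostname from the original message, not a real target:

```shell
# Annotated version of the netperf command quoted in the thread.
# A netserver must already be running on the remote host.
#   -H otherguy   remote host running netserver ("otherguy" is a placeholder)
#   -c -C         measure local (-c) and remote (-C) CPU utilization
#   -l 30         run each iteration for 30 seconds
#   -i 30,3       up to 30 (at least 3) iterations to hit confidence intervals
#   -t UDP_RR     UDP request/response test
#   -- -r 512     test-specific option: 512-byte requests and responses
netperf -H otherguy -c -C -l 30 -i 30,3 -t UDP_RR -- -r 512
```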

Re: TCP small packets throughput and multiqueue virtio-net

2013-03-08 Thread Rick Jones
make a setsockopt(TCP_NODELAY) call. happy benchmarking, rick jones If the transport is slow, TCP stack will automatically collapse several write into single skbs (assuming TSO or GSO is on), and you'll see big GSO packets with tcpdump [1]. So TCP will help you to get less overhead

Re: [rfc net-next v6 0/3] Multiqueue virtio-net

2012-10-30 Thread Rick Jones
sessions from 10? rick jones Netperf Local VM to VM test: - VM1 and its vcpu/vhost thread in numa node 0 - VM2 and its vcpu/vhost thread in numa node 1 - a script is used to launch the netperf with demo mode and do the postprocessing to measure the aggregate result with the help of timestamp

Re: NIC emulation with built-in rate limiting?

2012-09-17 Thread Rick Jones
to at present)? happy benchmarking, rick jones

NIC emulation with built-in rate limiting?

2012-09-11 Thread Rick Jones
the queue which built-up to be in the VM itself and would more accurately represent what a real NIC of that bandwidth would do. happy benchmarking, rick jones
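Absent built-in rate limiting in the emulated NIC, one common host-side approximation is a token-bucket qdisc on the guest's tap device. The device name and rates below are illustrative, and, as the message above points out, this queues in the host rather than in the VM:

```shell
# Hypothetical sketch: cap a guest's tap interface at 100 Mbit/s with tc's
# token bucket filter. "tap0" and the numbers are placeholders.
# Note the backlog builds in the host qdisc, not inside the VM -- which is
# exactly the fidelity concern raised in the thread.
tc qdisc add dev tap0 root tbf rate 100mbit burst 32kbit latency 400ms
```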

Re: [PATCHv4] virtio-spec: virtio network device multiqueue support

2012-09-10 Thread Rick Jones
things should be run rather than the networking code. rick jones

Re: [net-next RFC V5 0/5] Multiqueue virtio-net

2012-07-09 Thread Rick Jones
On 07/08/2012 08:23 PM, Jason Wang wrote: On 07/07/2012 12:23 AM, Rick Jones wrote: On 07/06/2012 12:42 AM, Jason Wang wrote: Which mechanism to address skew error? The netperf manual describes more than one: This mechanism is missed in my test, I would add them to my test scripts. http

Re: [net-next RFC V5 0/5] Multiqueue virtio-net

2012-07-06 Thread Rick Jones
would help confirm/refute any non-trivial change in (effective) path length between the two cases. Yes, I would test this thanks. Excellent. happy benchmarking, rick jones

Re: [net-next RFC V5 0/5] Multiqueue virtio-net

2012-07-05 Thread Rick Jones
cases. happy benchmarking, rick jones

Re: getting host CPU utilization (was Re: [PATCH V7 2/4 net-next] skbuff: Add userspace zero-copy buffers in skb)

2012-04-30 Thread Rick Jones
On 04/30/2012 02:12 AM, Michael S. Tsirkin wrote: On Tue, Jun 28, 2011 at 10:19:48AM -0700, Rick Jones wrote: one of these days I'll have to find a good way to get accurate overall CPU utilization from within a guest and teach netperf about it. I think the cleanest way would be to run another

Re: [RFC PATCH 1/1] NUMA aware scheduling per cpu vhost thread

2012-03-23 Thread Rick Jones
if they were 2 vCPU VMs, but that is just my gut talking. Certainly looking at the summary table I'm wondering where between 4 and 12 VMs the curve starts its downward trend. Does 12 and 24, 2vCPU VMs force moving around more than say 16 or 32 would? happy benchmarking, rick jones

Re: [RFC PATCH 1/1] NUMA aware scheduling per cpu vhost thread

2012-03-23 Thread Rick Jones
benchmarking, rick jones

Re: [PATCH V7 2/4 net-next] skbuff: Add userspace zero-copy buffers in skb

2011-06-28 Thread Rick Jones
of the changes on a TCP_RR test would probably be goodness as well. happy benchmarking, rick jones one of these days I'll have to find a good way to get accurate overall CPU utilization from within a guest and teach netperf about it. The impact is every skb allocation consumed one more pointer in skb

Re: Network performance with small packets - continued

2011-03-09 Thread Rick Jones
, that is one of the reasons I added the burst mode to the _RR test - because it could take a Very Large Number of concurrent netperfs to take a link to saturation, at which point it might have been just as much a context switching benchmark as anything else :) happy benchmarking, rick jones

Re: Flow Control and Port Mirroring Revisited

2011-01-24 Thread Rick Jones
specify the destination IP and port for the data connection explicitly via the test-specific options. In that mode the only stats reported are those local to netperf rather than netserver. happy benchmarking, rick jones

Re: Flow Control and Port Mirroring Revisited

2011-01-24 Thread Rick Jones
Michael S. Tsirkin wrote: On Mon, Jan 24, 2011 at 10:27:55AM -0800, Rick Jones wrote: Just to block netperf you can send it SIGSTOP :) Clever :) One could I suppose achieve the same result by making the remote receive socket buffer size smaller than the UDP message size and then not worry

Re: Flow Control and Port Mirroring Revisited

2011-01-21 Thread Rick Jones
by how much the rate varied. rick jones

Re: Flow Control and Port Mirroring Revisited

2011-01-20 Thread Rick Jones
file. Currently that is the one for human output, which has a four line restriction. I will try to make it smarter as I go. happy benchmarking, rick jones

Re: Flow Control and Port Mirroring Revisited

2011-01-18 Thread Rick Jones
, rick jones PS - the enhanced latency statistics from -j are only available in the omni version of the TCP_RR test. To get that add a --enable-omni to the ./configure - and in this case both netperf and netserver have to be recompiled. For very basic output one can peruse the output of: src
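Putting the instructions above together, a hedged sketch of building netperf with omni support and requesting the enhanced latency statistics; the hostname and the exact output selectors are illustrative:

```shell
# Rebuild both netperf and netserver with the omni tests enabled,
# as the message above requires.
./configure --enable-omni
make && make install

# -j keeps the additional per-transaction statistics; the omni output
# selectors after -o pick which columns to report ("otherguy" is a
# placeholder host running a matching netserver).
netperf -t TCP_RR -H otherguy -j -- \
        -o MIN_LATENCY,MEAN_LATENCY,P99_LATENCY,MAX_LATENCY
```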

Re: Flow Control and Port Mirroring Revisited

2011-01-18 Thread Rick Jones
Michael S. Tsirkin wrote: On Tue, Jan 18, 2011 at 11:41:22AM -0800, Rick Jones wrote: PS - the enhanced latency statistics from -j are only available in the omni version of the TCP_RR test. To get that add a --enable-omni to the ./configure - and in this case both netperf and netserver have

Re: [PATCH] vhost: Make it more scalable by creating a vhost thread per device.

2010-04-12 Thread Rick Jones
the same. happy benchmarking, rick jones

Re: [PATCH] vhost: Make it more scalable by creating a vhost thread per device.

2010-04-09 Thread Rick Jones
Sridhar Samudrala wrote: On Thu, 2010-04-08 at 17:14 -0700, Rick Jones wrote: Here are the results with netperf TCP_STREAM 64K guest to host on a 8-cpu Nehalem system. I presume you mean 8 core Nehalem-EP, or did you mean 8 processor Nehalem-EX? Yes. It is a 2 socket quad-core Nehalem. so

Re: [PATCH] vhost: Make it more scalable by creating a vhost thread per device.

2010-04-08 Thread Rick Jones
to run things like single-instance TCP_RR and multiple-instance, multiple transaction (./configure --enable-burst) TCP_RR tests, particularly when concerned with scaling issues. happy benchmarking, rick jones It shows cumulative bandwidth in Mbps and host CPU utilization. Current default single
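The two test styles recommended above can be sketched as follows; the hostname and burst size are illustrative, and `-b` only exists when netperf was configured with `--enable-burst`:

```shell
# Single-instance TCP_RR: one transaction in flight, latency-oriented.
netperf -t TCP_RR -H otherguy -l 30

# Burst-mode TCP_RR (requires ./configure --enable-burst): -b 32 keeps
# 32 additional transactions in flight on one connection, and -D sets
# TCP_NODELAY so small requests are not coalesced. Run several of these
# concurrently for an aggregate, scaling-oriented measurement.
netperf -t TCP_RR -H otherguy -l 30 -- -b 32 -D
```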