Re: [kvm-devel] PV network performance comparison
Zhao Forrest wrote:
> When running KVM (kvm.rtl) and xen-HVM (xen.um) on the same machine, I
> feel that the guest OS on top of KVM is much more responsive than the
> one on top of xen-HVM. But this test result showed that xen-HVM is more
> responsive than KVM. Weird. I once tried KVM-36 and xen-3.0.1 and got
> that impression.

KVM certainly has an edge in latency because there are fewer layers and
schedulers involved. Regarding throughput, the numbers for kvm.rtl look
lower than expected, while xen.um's numbers are unrealistically high. The
test needs to be done more carefully (using a recent kvm, too).

-- 
error compiling committee.c: too many arguments to function

___
kvm-devel mailing list
kvm-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/kvm-devel
Re: [kvm-devel] PV network performance comparison
On 10/12/07, James Dykman [EMAIL PROTECTED] wrote:
> Dor,
>
> I ran some netperf tests with your PV virtio drivers, along with some
> Xen PV cases and a few others for comparison. I thought you (and the
> list) might be interested in the numbers.
>
> I am going to start looking for bottlenecks, unless you need help with
> the new hypercall updates. I'll re-run when that is available.
>
> Jim
>
> Tests were run with netperf-2.4.3; TCP socket buffers were 256k. All of
> the tests were run with netserver in the guest and netperf in the
> host/dom0. No bridge was used.
>
> Hardware: IBM HS21 blade, dual Xeon w/HT @ 1.6GHz, 4GB
>
> The host/dom0 configurations:
>   kvm.*: Host is 32-bit Ubuntu 7.04 server running Dor's 2.6.22-rc3 kernel.
>   xen.*: Dom0 is 32-bit Ubuntu 7.04 server running the 2.6.18 kernel from xen3.1.
>
> The guest configurations (all guests/domUs are 512MB, 1 CPU):
>   kvm.rtl: (KVM with emulated RTL8029) Fedora 7 32-bit guest, standard 2.6.21-1.3194.fc7 kernel
>   kvm.pv:  (KVM w/Dor's paravirt drivers) Fedora 7 32-bit guest running Dor's 2.6.22-rc3 kernel
>   xen.pv:  (Xen paravirt) Ubuntu 7.04 server w/2.6.18-xen kernel
>   xen.um:  (Xen HVM with unmodified drivers) Ubuntu 7.04 server w/2.6.18-xen kernel, unmodified drivers compiled from xen3.1
>   kvm.lo:  (Host loopback)
>
> TCP REQUEST/RESPONSE (transaction rate per sec)
>
> size    kvm.rtl   kvm.pv    xen.pv     xen.um     kvm.lo
> 1       2191.47   9533.74   18052.37   13593.58   42400.73
> 64      2184.30   9518.13   17979.93   13557.98   42260.53
> 128     2177.52   9482.45   17940.08   13588.54   40983.90
> 256     2160.49   9465.97   17788.21   13492.42   41170.45
> 512     2130.99   9403.33   17655.11   13489.64   40765.26
> 1024    2074.85   9204.90   17293.06   13572.01   39437.78
> 2048    416.18    4750.41   12907.57   11571.07   37252.42
> 4096    265.22    3691.90   10990.67   9943.64    31905.03
> 8192    116.80    1892.25   8439.83    6604.64    24397.95
> 16384   92.06     1004.58   4535.86    3924.68    17460.30
>
> TCP STREAM (throughput, 10^6 bits/sec)
>
> size     kvm.rtl   kvm.pv   xen.pv    xen.um    kvm.lo
> 2048     33.06     507.21   555.94    1442.38   5409.73
> 4096     33.16     526.75   848.26    2359.42   6152.48
> 8192     33.13     527.99   997.69    2418.87   7267.73
> 16384    33.08     525.95   1107.64   2379.50   8434.29
> 32768    33.13     525.38   1199.08   2375.81   8857.09
> 65536    33.20     523.39   1255.33   2473.92   9248.35
> 131072   33.11     520.87   1292.54   2605.49   8559.21

When running KVM (kvm.rtl) and xen-HVM (xen.um) on the same machine, I
feel that the guest OS on top of KVM is much more responsive than the one
on top of xen-HVM. But this test result showed that xen-HVM is more
responsive than KVM. Weird. I once tried KVM-36 and xen-3.0.1 and got
that impression.

Thanks,
Forrest
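To relate the responsiveness discussion to the numbers, note that the TCP_RR transaction rate can be inverted to give an approximate round-trip time per transaction. This is a derived illustration using figures from the table above, not part of the original posts:

```python
# Approximate round-trip latency from the 1-byte TCP_RR rates in the
# table above (transactions/sec -> microseconds per transaction).
rr_rates = {
    "kvm.rtl": 2191.47,
    "kvm.pv": 9533.74,
    "xen.pv": 18052.37,
    "xen.um": 13593.58,
    "kvm.lo": 42400.73,
}

def latency_us(rate_per_sec):
    """One transaction is one request plus one response, so 1/rate is
    the round-trip time; scale to microseconds."""
    return 1e6 / rate_per_sec

for name, rate in rr_rates.items():
    print(f"{name}: {latency_us(rate):.0f} us/transaction")
```

By this measure xen.pv's round trip at size 1 is roughly 55 us against xen.um's roughly 74 us, which is consistent with Avi's observation that the xen-HVM responsiveness result is surprising.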
Re: [kvm-devel] PV network performance comparison
James Dykman wrote:
> Dor,
>
> I ran some netperf tests with your PV virtio drivers, along with some
> Xen PV cases and a few others for comparison. I thought you (and the
> list) might be interested in the numbers.

Thanks for the tests, it is indeed interesting. Actually, except for a
small optimization (receiving several msgs from the tap and sending a
single irq), I haven't had the time to optimize the code. It's also
interesting to check what lguest is doing, since the qemu path is not
polished; lguest also has newer virtio drivers.

> I am going to start looking for bottlenecks, unless you need help with
> the new hypercall updates. I'll re-run when that is available.

Any help would be great. I also need to move towards the latest virtio
patch, which includes a change in the shared memory and a PCI-like
config space. I planned on starting this mid next week.

W.r.t. performance, the following can improve things:
- Avi's shorten-latency tap patch
- Using scatter-gather in the qemu tap; that's why using bigger packets
  doesn't help performance
- Minimizing guest tx hypercalls
- Running oprofile
- A host-side kernel driver

Thanks,
Dor.
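The "several msgs from the tap, single irq" optimization Dor mentions is an interrupt-coalescing idea: drain the tap while packets are available, then notify the guest once for the whole batch. A minimal, hypothetical sketch of that logic follows; the names (`try_read`, `deliver_rx`, `notify_guest`) are illustrative and do not come from the actual kvm/qemu code:

```python
# Hypothetical sketch of rx batching: drain everything currently readable
# from the tap, queue it for the guest, then raise a single notification,
# instead of one interrupt per packet.
def deliver_rx(tap, guest_queue, notify_guest, max_batch=64):
    delivered = 0
    while delivered < max_batch:
        pkt = tap.try_read()   # non-blocking read; None when drained
        if pkt is None:
            break
        guest_queue.append(pkt)
        delivered += 1
    if delivered:
        notify_guest()         # a single irq covers the whole batch
    return delivered
```

The `max_batch` cap keeps one busy tap from starving other work; the payoff is that a burst of N packets costs one guest interrupt rather than N.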