> -----Original Message-----
> From: Abel Gordon [mailto:ab...@il.ibm.com]
> Sent: Monday, April 08, 2013 7:04 PM
> To: Zhangleiqiang
> Cc: anth...@codemonkey.ws; Luohao (brian); Haofeng; 张磊强;
> qemu-devel@nongnu.org; Stefan Hajnoczi; Stefan Hajnoczi
> Subject: Re: Re: [Qemu-devel] Re: Re: Re: question about
> performance of dataplane
>
> Zhangleiqiang <zhangleiqi...@huawei.com> wrote on 08/04/2013 12:06:17
> PM:
>
> > I think maybe Anthony is right. In previous benchmarks, maybe the
> > non-dataplane setup already reached the physical disk's IOPS upper
> > limit.
>
> Yep, agree. Try to run the same benchmark in the host to see what the
> bare-metal performance of your system is (the upper limit) and how far
> dataplane and non-dataplane are from this value.
> Note that you are currently focusing on throughput, but you should
> also consider latency and CPU utilization.
>
> > So I did another benchmark which ensures the vcpus are fewer than
> > the host's cores, but also generates continuous IO pressure from
> > one VM while testing in the other VM. The results showed that
> > dataplane did have some advantage over non-dataplane.
> >
> > 1. IO Pressure Mode: 8 workers, 16K IO size, 25% read, 100% random,
> >    and 50 outstanding IOs
> > 2. Benchmark Mode: 8 workers, 16K IO size, 0% read, 100% random,
> >    and 50 outstanding IOs
> > 3. Testing Results:
> >    a) IOPS: 178.324867 (non-dataplane) vs 230.956328 (dataplane)
> >    b) MBPS: 2.786326 (non-dataplane) vs 3.608693 (dataplane)
>
> Note that running another VM just to "synthetically" degrade the
> performance of the system may cause side effects and confuse the
> results (e.g. the "other" VM may stress the system differently and
> put more pressure on it when you use dataplane than when you don't).
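As an aside, the two workload modes above map fairly directly onto fio
parameters. This is only a sketch of an equivalent load generator, not
the tool I actually used; the guest device path /dev/vdb and the 60s
runtime are assumptions:

    #!/usr/bin/env python
    # Sketch: drive fio jobs matching the two modes described above.
    # Assumptions (not from this thread): fio is installed in the
    # guests and /dev/vdb is the data disk under test.
    import subprocess

    COMMON = ["fio", "--filename=/dev/vdb", "--ioengine=libaio",
              "--direct=1", "--bs=16k", "--numjobs=8", "--iodepth=50",
              "--time_based", "--runtime=60", "--group_reporting"]

    def io_pressure():
        # Pressure VM: 25% read / 75% write, 100% random.
        subprocess.check_call(COMMON + ["--name=pressure",
                                        "--rw=randrw",
                                        "--rwmixread=25"])

    def benchmark():
        # Measured VM: 0% read, i.e. pure random write.
        subprocess.check_call(COMMON + ["--name=bench",
                                        "--rw=randwrite"])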
I think running the benchmark multiple times under the same conditions
and averaging the results will eliminate those "side effects".

> Last thing, IMHO, you should also evaluate scalability:
> how do dataplane and non-dataplane perform when you run multiple VMs?
>
> For example,
> first 1 VM with 2 VCPUs
> then 2 VMs with 2 VCPUs each
> then 3 VMs with 2 VCPUs each
> ...
> up to 12 VMs with 2 VCPUs each
>
> It seems like you unintentionally tested what happens with 2 VMs when
> you added the "other" VM to create I/O pressure.

Indeed, the reason I used 2 VMs in the previous benchmark was to ensure
that the total number of vcpus was less than the host's cores, e.g.
each VM had 8 vcpus. Thanks for your advice, I will evaluate the
scalability, roughly along the lines of the sketch below. :)
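For the scalability sweep I am thinking of a driver like this;
start_vm() and run_benchmark_iops() are hypothetical placeholders for
the real QEMU/libvirt launch scripts and the fio result parsing, and
repeating each point and averaging should smooth out the run-to-run
noise discussed above:

    #!/usr/bin/env python
    # Sketch of the suggested scalability sweep: 1..12 VMs with 2
    # VCPUs each, several benchmark repetitions per point, averaged
    # aggregate IOPS reported. The two helpers below are hypothetical
    # stubs, not real QEMU/libvirt APIs.

    REPEATS = 5  # repetitions per point, averaged to reduce noise

    def start_vm(index, vcpus=2):
        raise NotImplementedError("launch VM #index with vcpus VCPUs")

    def run_benchmark_iops(vm):
        raise NotImplementedError("run the fio job in vm, return IOPS")

    def sweep(max_vms=12):
        vms = []
        for n in range(1, max_vms + 1):
            vms.append(start_vm(n))  # grow the pool: now n VMs running
            samples = []
            for _ in range(REPEATS):
                # A real test would drive all VMs concurrently; shown
                # sequentially here to keep the sketch short.
                samples.append(sum(run_benchmark_iops(vm)
                                   for vm in vms))
            avg = sum(samples) / len(samples)
            print("%d VMs: avg aggregate IOPS = %.2f" % (n, avg))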