Hi, 

Thanks for your reply.

I did some more tests here and things got even stranger: now I can only get about
4k IOPS in the VM.
1. Using fio with the rbd ioengine to test the volume on the physical machine:
[global]
ioengine=rbd
clientname=admin
pool=vol_ssd
rbdname=volume-4f4f9789-4215-4384-8e65-127a2e61a47f
rw=randwrite
bs=4k
group_reporting=1

[rbd_job1]
iodepth=32
[rbd_job2]
iodepth=32
[rbd_job3]
iodepth=32
[rbd_job4]
iodepth=32

This achieves about 18k IOPS.
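
For reference, I run the job file above directly with fio on the physical machine
(the file name below is just what I call it locally; fio has to be built with rbd
engine support):

# run the rbd-engine randwrite job against the volume
fio rbd-randwrite.fio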

2. Testing the same volume inside the VM achieves only about 4.3k IOPS:
[global]
rw=randwrite
bs=4k
ioengine=libaio
#ioengine=sync
iodepth=128
direct=1
group_reporting=1
thread=1
filename=/dev/vdb

[task1]
iodepth=32
[task2]
iodepth=32
[task3]
iodepth=32
[task4]
iodepth=32
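
The same test can also be expressed as a single command line using numjobs instead
of separate job sections (roughly equivalent to the job file above; note it writes
directly to /dev/vdb, so only run it on a scratch volume):

# command-line form of the in-VM randwrite test
fio --name=task --filename=/dev/vdb --rw=randwrite --bs=4k --ioengine=libaio \
    --iodepth=32 --numjobs=4 --direct=1 --thread --group_reporting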

Using ceph osd perf to check the OSD latencies: they are all less than 1 ms.
Using iostat to check the OSD %util: it is only around 10% during the test in case 2.
Using dstat to check the VM status:
----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai hiq siq| read  writ| recv  send|  in   out | int   csw 
  2   4  51  43   0   0|   0    17M| 997B 3733B|   0     0 |3476  6997 
  2   5  51  43   0   0|   0    18M| 714B 4335B|   0     0 |3439  6915 
  2   5  50  43   0   0|   0    17M| 594B 3150B|   0     0 |3294  6617 
  1   3  52  44   0   0|   0    18M| 648B 3726B|   0     0 |3447  6991 
  1   5  51  43   0   0|   0    18M| 582B 3208B|   0     0 |3467  7061 
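
For reference, the checks above were done with something like the following
commands (the exact options may differ slightly from what I actually ran):

# per-OSD commit/apply latency as reported by the cluster
ceph osd perf

# per-device utilisation on the OSD hosts (%util column)
iostat -x 1

# CPU / disk / network overview inside the VM while fio is running
dstat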

Finally, using iptraf to check the packet sizes in the VM: almost all packets are
in the 1 to 70 and 71 to 140 byte ranges. That is different from the physical machine.

But maybe iptraf inside the VM can't prove anything, so I also checked the physical
machine that the VM is located on.
Nothing there seems abnormal.
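
As a cross-check of what iptraf reports, the packet sizes can also be captured with
tcpdump on both the VM and the host (the interface name here is just an example):

# print link-level frame lengths for a short capture
tcpdump -i eth0 -e -n -c 1000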

BTW, my VM is located on a Ceph storage node.

Can anyone give me more suggestions?

Thanks!



hzwuli...@gmail.com
 
From: Alexandre DERUMIER
Date: 2015-10-20 19:36
To: hzwulibin
CC: ceph-users
Subject: Re: [ceph-users] [performance] rbd kernel module versus qemu librbd
Hi,
 
I'm able to reach around the same performance with qemu-librbd vs qemu-krbd,
when I compile qemu with jemalloc
(http://git.qemu.org/?p=qemu.git;a=commit;h=7b01cb974f1093885c40bf4d0d3e78e27e531363)
 
In my tests, librbd with jemalloc still uses 2x more CPU than krbd,
so the CPU could be a bottleneck too.
 
With fast CPUs (3.1GHz), I'm able to reach around 70k 4k IOPS on an RBD volume,
both with krbd and librbd.
 
 
----- Original Message -----
From: hzwuli...@gmail.com
To: "ceph-users" <ceph-us...@ceph.com>
Sent: Tuesday, 20 October 2015 10:22:33
Subject: [ceph-users] [performance] rbd kernel module versus qemu librbd
 
Hi, 
I have a question about IOPS performance on a physical machine versus a virtual
machine.
Here is my test setup:
1. SSD pool (9 OSD servers with 2 OSDs on each server, 10Gb networks for public
& cluster)
2. volume1: use rbd to create a 100G volume from the SSD pool and map it to the
physical machine
3. volume2: use Cinder to create a 100G volume from the SSD pool and attach it to
a guest host
4. disable the RBD cache
5. run fio on the two volumes:
[global] 
rw=randwrite 
bs=4k 
ioengine=libaio 
iodepth=64 
direct=1 
size=64g 
runtime=300s 
group_reporting=1 
thread=1 
 
volume1 got about 24k IOPS and volume2 got about 14k IOPS.
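
For reference, steps 2-4 above were done roughly like this (the pool, volume and
volume-type names are just what I used here, so treat them as examples):

# create and map the volume for the physical machine (size in MB)
rbd create vol_ssd/volume1 --size 102400
rbd map vol_ssd/volume1

# create the Cinder volume that gets attached to the guest
cinder create 100 --volume-type ssd --display-name volume2

# and in ceph.conf on the client side, to disable the RBD cache:
[client]
rbd cache = false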
 
We can see that the performance of volume2 is not good compared to volume1, so is
this normal behavior for a guest?
If not, what might be the problem?
 
Thanks! 
 
hzwuli...@gmail.com 
 
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
