Hi,

I'm able to reach around the same performance with qemu-librbd as with qemu-krbd when I compile qemu with jemalloc (http://git.qemu.org/?p=qemu.git;a=commit;h=7b01cb974f1093885c40bf4d0d3e78e27e531363).
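If it helps anyone reproduce this, the rebuild is roughly the following; the package name and target list are assumptions for a Debian-like build host, and the --enable-jemalloc switch is the one added by the commit above:

# assumes libjemalloc-dev (or equivalent) is installed and the QEMU tree
# already contains the commit linked above
git clone git://git.qemu.org/qemu.git && cd qemu
./configure --target-list=x86_64-softmmu --enable-rbd --enable-jemalloc
make -j"$(nproc)"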
In my tests, librbd with jemalloc still uses about 2x more CPU than krbd, so CPU could be a bottleneck too. With fast CPUs (3.1 GHz) I'm able to reach around 70k 4k IOPS on an rbd volume, with both krbd and librbd.

----- Original Message -----
From: hzwuli...@gmail.com
To: "ceph-users" <ceph-us...@ceph.com>
Sent: Tuesday, 20 October 2015 10:22:33
Subject: [ceph-users] [performance] rbd kernel module versus qemu librbd

Hi,

I have a question about IOPS performance on a real machine versus a virtual machine. Here is my test setup:

1. ssd pool (9 OSD servers with 2 OSDs on each server, 10Gb networks for public & cluster networks)
2. volume1: use rbd to create a 100G volume from the ssd pool and map it to the real machine
3. volume2: use cinder to create a 100G volume from the ssd pool and attach it to a guest host
4. disable rbd cache
5. run this fio test on the two volumes:

[global]
rw=randwrite
bs=4k
ioengine=libaio
iodepth=64
direct=1
size=64g
runtime=300s
group_reporting=1
thread=1

volume1 got about 24k IOPS and volume2 got about 14k IOPS. The performance of volume2 is not good compared to volume1, so is this normal behavior for a guest host? If not, what might be the problem?

Thanks!

hzwuli...@gmail.com
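For reference, the volume1 (krbd) side of the comparison amounts to roughly the following; the pool name, image name and /dev/rbd0 device below are placeholders, not taken from the original mail:

# create and map a 100G test image from the ssd pool
rbd create ssd/testvol --size 102400
rbd map ssd/testvol              # typically shows up as /dev/rbd0

# run the fio job from the original mail against the mapped device
fio --name=randwrite-4k --filename=/dev/rbd0 \
    --rw=randwrite --bs=4k --ioengine=libaio --iodepth=64 \
    --direct=1 --size=64g --runtime=300 --group_reporting --thread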