Hi Zoltan,
you are right (but these were two running systems...).

I also spotted a big mistake: "--filename=/mnt/test.bin" (I simply
copy/pasted it without thinking too much :-( ).
The root filesystem is not on ceph (on both servers), so the benchmark
file was never written to ceph at all.
So my measurements are not valid!!

I will redo the measurements cleanly tomorrow.
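For the clean run, something like the following should avoid both problems. The mount point /mnt/ceph is only an example; fio also warned that --time_based requires a runtime, so I'd add --runtime as well:

```shell
# First verify the target path is really on the ceph-backed device,
# not on the root filesystem
df -h /mnt/ceph

# Same benchmark as before, but with an explicit runtime (fio warned
# that --time_based requires a runtime/timeout setting) and the test
# file placed on the ceph filesystem
fio --time_based --runtime=300 --name=benchmark --size=4G \
    --filename=/mnt/ceph/test.bin --ioengine=libaio --randrepeat=0 \
    --iodepth=128 --direct=1 --invalidate=1 --verify=0 --verify_fatal=0 \
    --numjobs=4 --rw=randwrite --blocksize=4k --group_reporting
```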


Udo


On 22.11.2015 14:29, Zoltan Arnold Nagy wrote:
> It would have been more interesting if you had tweaked only one option,
> as now we can’t be sure which change had what impact… :-)
>
>> On 22 Nov 2015, at 04:29, Udo Lembke <ulem...@polarzone.de
>> <mailto:ulem...@polarzone.de>> wrote:
>>
>> Hi Sean,
>> Haomai is right that qemu can make a huge performance difference.
>>
>> I have done two tests against the same ceph-cluster (different pools,
>> but that should not make any difference).
>> One test with proxmox ve 4 (qemu 2.4, iothread for device, and
>> cache=writeback) gives 14856 iops
>> Same test with proxmox ve 3.4 (qemu 2.2.1, cache=writethrough) gives
>> 5070 iops only.
>>
>> Here the results in long:
>> ############### proxmox ve 3.x ###############
>> kvm --version
>> QEMU emulator version 2.2.1, Copyright (c) 2003-2008 Fabrice Bellard
>>
>> VM:
>> virtio2: ceph_file:vm-405-disk-1,cache=writethrough,backup=no,size=4096G
>>
>> root@fileserver:/daten/support/test# fio --time_based
>> --name=benchmark --size=4G --filename=/mnt/test.bin --ioengine=libaio
>> --randrepeat=0 --iodepth=128 --direct=1 --invalidate=1 --verify=0
>> --verify_fatal=0 --numjobs=4 --rw=randwrite --blocksize=4k
>> --group_reporting
>> fio: time_based requires a runtime/timeout setting
>> benchmark: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K,
>> ioengine=libaio, iodepth=128
>> ...
>> fio-2.1.11
>> Starting 4 processes
>> benchmark: Laying out IO file(s) (1 file(s) / 4096MB)
>> Jobs: 1 (f=1): [_(1),w(1),_(2)] [100.0% done] [0KB/40024KB/0KB /s]
>> [0/10.6K/0 iops] [eta 00m:00s]
>> benchmark: (groupid=0, jobs=4): err= 0: pid=7821: Sun Nov 22 04:07:47
>> 2015
>>   write: io=16384MB, bw=20282KB/s, iops=5070, runt=827178msec
>>     slat (usec): min=0, max=2531.7K, avg=778.68, stdev=12757.26
>>     clat (usec): min=508, max=2755.2K, avg=99980.14, stdev=146967.17
>>      lat (msec): min=1, max=2755, avg=100.76, stdev=147.54
>>     clat percentiles (msec):
>>      |  1.00th=[   10],  5.00th=[   14], 10.00th=[   19], 20.00th=[  
>> 28],
>>      | 30.00th=[   36], 40.00th=[   43], 50.00th=[   51], 60.00th=[  
>> 63],
>>      | 70.00th=[   81], 80.00th=[  128], 90.00th=[  237], 95.00th=[ 
>> 367],
>>      | 99.00th=[  717], 99.50th=[  889], 99.90th=[ 1516], 99.95th=[
>> 1713],
>>      | 99.99th=[ 2573]
>>     bw (KB  /s): min=    4, max=30726, per=26.90%, avg=5456.84,
>> stdev=3014.45
>>     lat (usec) : 750=0.01%, 1000=0.01%
>>     lat (msec) : 2=0.01%, 4=0.01%, 10=1.11%, 20=10.18%, 50=37.74%
>>     lat (msec) : 100=26.45%, 250=15.22%, 500=6.66%, 750=1.74%, 1000=0.55%
>>     lat (msec) : 2000=0.29%, >=2000=0.03%
>>   cpu          : usr=0.36%, sys=2.31%, ctx=1148702, majf=0, minf=30
>>   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%,
>> >=64=100.0%
>>      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%,
>> >=64=0.0%
>>      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%,
>> >=64=0.1%
>>      issued    : total=r=0/w=4194304/d=0, short=r=0/w=0/d=0
>>      latency   : target=0, window=0, percentile=100.00%, depth=128
>>
>> Run status group 0 (all jobs):
>>   WRITE: io=16384MB, aggrb=20282KB/s, minb=20282KB/s, maxb=20282KB/s,
>> mint=827178msec, maxt=827178msec
>>
>> Disk stats (read/write):
>>     dm-0: ios=0/4483641, merge=0/0, ticks=0/104928824,
>> in_queue=105927128, util=100.00%, aggrios=1/4469640,
>> aggrmerge=0/14788, aggrticks=64/103711096, aggrin_queue=104165356,
>> aggrutil=100.00%
>>   vda: ios=1/4469640, merge=0/14788, ticks=64/103711096,
>> in_queue=104165356, util=100.00%
>>
>> ##############################################
>>
>> ############### proxmox ve 4.x ###############
>> kvm --version
>> QEMU emulator version 2.4.0.1 pve-qemu-kvm_2.4-12, Copyright (c)
>> 2003-2008 Fabrice Bellard
>>
>> grep ceph /etc/pve/qemu-server/102.conf
>> virtio1: ceph_test:vm-102-disk-1,cache=writeback,iothread=on,size=100G
>>
>> root@fileserver-test:/daten/tv01/test# fio --time_based
>> --name=benchmark --size=4G --filename=/mnt/test.bin --ioengine=libaio
>> --randrepeat=0 --iodepth=128 --direct=1 --invalidate=1 --verify=0
>> --verify_fatal=0 --numjobs=4 --rw=randwrite --blocksize=4k
>> --group_reporting
>> fio: time_based requires a runtime/timeout setting
>>
>> benchmark: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K,
>> ioengine=libaio, iodepth=128
>> ...
>>
>> fio-2.1.11
>> Starting 4 processes
>> Jobs: 4 (f=4): [w(4)] [99.6% done] [0KB/56148KB/0KB /s] [0/14.4K/0
>> iops] [eta 00m:01s]
>> benchmark: (groupid=0, jobs=4): err= 0: pid=26131: Sun Nov 22
>> 03:51:04 2015
>>   write: io=0B, bw=59425KB/s, iops=14856, runt=282327msec
>>     slat (usec): min=6, max=216925, avg=261.78, stdev=1802.78
>>     clat (msec): min=1, max=330, avg=34.04, stdev=27.78
>>      lat (msec): min=1, max=330, avg=34.30, stdev=27.87
>>     clat percentiles (msec):
>>      |  1.00th=[   10],  5.00th=[   13], 10.00th=[   14], 20.00th=[  
>> 16],
>>      | 30.00th=[   18], 40.00th=[   19], 50.00th=[   21], 60.00th=[  
>> 24],
>>      | 70.00th=[   33], 80.00th=[   62], 90.00th=[   81], 95.00th=[  
>> 87],
>>      | 99.00th=[   95], 99.50th=[  100], 99.90th=[  269], 99.95th=[ 
>> 277],
>>      | 99.99th=[  297]
>>     bw (KB  /s): min=    3, max=42216, per=25.10%, avg=14917.03,
>> stdev=2990.50
>>     lat (msec) : 2=0.01%, 4=0.01%, 10=1.13%, 20=45.52%, 50=28.23%
>>     lat (msec) : 100=24.61%, 250=0.35%, 500=0.16%
>>   cpu          : usr=2.20%, sys=14.42%, ctx=2462199, majf=0, minf=40
>>   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%,
>> >=64=100.0%
>>      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%,
>> >=64=0.0%
>>      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%,
>> >=64=0.1%
>>      issued    : total=r=0/w=4194304/d=0, short=r=0/w=0/d=0
>>      latency   : target=0, window=0, percentile=100.00%, depth=128
>>
>> Run status group 0 (all jobs):
>>   WRITE: io=16384MB, aggrb=59424KB/s, minb=59424KB/s, maxb=59424KB/s,
>> mint=282327msec, maxt=282327msec
>>
>> Disk stats (read/write):
>>     dm-0: ios=0/4192044, merge=0/0, ticks=0/35093432,
>> in_queue=35116888, util=99.70%, aggrios=0/4194626, aggrmerge=0/14,
>> aggrticks=0/34902692, aggrin_queue=34903976, aggrutil=99.65%
>>   vda: ios=0/4194626, merge=0/14, ticks=0/34902692,
>> in_queue=34903976, util=99.65%
>> ##############################################
>>
>> regards
>>
>> Udo

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
