Attempting to send 256 concurrent 4 MiB writes via librbd puts roughly
1 GiB in flight, which very quickly hits the default
"objecter_inflight_op_bytes = 100 MiB" throttle and drastically slows
(effectively stalls) librados. I would recommend re-testing librbd with a
much higher throttle override.
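
For example, a rough sketch of such an override (the 1 GiB value below is
just an assumption sized to cover 256 x 4 MiB in flight, not a tuned
recommendation), placed in the ceph.conf that fio's rbd engine reads:

  [client]
      # raise the objecter byte throttle from its 100 MiB default so the
      # 256 concurrent 4 MiB writes are not queued behind it
      objecter_inflight_op_bytes = 1073741824
      # op-count throttle; the 1024 default already covers iodepth=256,
      # raise it only if deeper queues are tested
      objecter_inflight_ops = 2048
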
On Thu, Nov 15, 2018 at 11:34 AM 赵赵贺东 <zhaohed...@gmail.com> wrote:
>
> Thank you for your attention.
>
> Our tests are run on physical machines.
>
> Fio for KRBD:
> [seq-write]
> description="seq-write"
> direct=1
> ioengine=libaio
> filename=/dev/rbd0
> numjobs=1
> iodepth=256
> group_reporting
> rw=write
> bs=4M
> size=10T
> runtime=180
>
> * /dev/rbd0 is mapped from rbd_pool/image2, so the KRBD & librbd fio tests
> use the same image.
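>
> (For reference, a sketch of how the device is assumed to have been mapped;
> the device name can differ:
>
>     sudo rbd map rbd_pool/image2    # kernel RBD client, appears as /dev/rbd0
>     rbd showmapped                  # confirm which image backs /dev/rbd0
> )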
>
> Fio for librbd:
> [global]
> direct=1
> numjobs=1
> ioengine=rbd
> clientname=admin
> pool=rbd_pool
> rbdname=image2
> invalidate=0    # mandatory
> rw=write
> bs=4M
> size=10T
> runtime=180
>
> [rbd_iodepth32]
> iodepth=256
>
>
> Image info:
> rbd image 'image2':
> size 50TiB in 13107200 objects
> order 22 (4MiB objects)
> data_pool: ec_rbd_pool
> block_name_prefix: rbd_data.8.148bb6b8b4567
> format: 2
> features: layering, data-pool
> flags:
> create_timestamp: Wed Nov 14 09:21:18 2018
>
> * data_pool is an EC pool
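>
> (A sketch of how such an image is assumed to have been created, with
> metadata in the replicated pool and data in the EC pool; the size and
> feature flags shown are taken from the rbd info output above:
>
>     rbd create rbd_pool/image2 --size 50T \
>         --data-pool ec_rbd_pool --image-feature layering
> )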
>
> Pool info:
> pool 8 'rbd_pool' replicated size 2 min_size 1 crush_rule 0 object_hash 
> rjenkins pg_num 256 pgp_num 256 last_change 82627 flags hashpspool 
> stripe_width 0 application rbd
> pool 9 'ec_rbd_pool' erasure size 6 min_size 5 crush_rule 4 object_hash 
> rjenkins pg_num 256 pgp_num 256 last_change 82649 flags 
> hashpspool,ec_overwrites stripe_width 16384 application rbd
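>
> (A sketch of the pool setup this output implies; the erasure-code profile
> name and the k=4/m=2 split are assumptions inferred from size 6 /
> min_size 5:
>
>     ceph osd erasure-code-profile set rbd_ec_profile k=4 m=2
>     ceph osd pool create ec_rbd_pool 256 256 erasure rbd_ec_profile
>     ceph osd pool set ec_rbd_pool allow_ec_overwrites true
>     ceph osd pool application enable ec_rbd_pool rbd
> )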
>
>
> RBD cache: off (because with tcmu the RBD cache is forced off anyway, and
> our cluster will export disks via iSCSI in the future.)
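>
> (A sketch of the assumed client-side setting that keeps the librbd cache
> disabled for the fio runs:
>
>     [client]
>         rbd cache = false
> )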
>
>
> Thanks!
>
>
> On Nov 15, 2018, at 1:22 PM, Gregory Farnum <gfar...@redhat.com> wrote:
>
> You'll need to provide more data about how your test is configured and run 
> for us to have a good idea. IIRC librbd is often faster than krbd because it 
> can support newer features and things, but krbd may have less overhead and is 
> not dependent on the VM's driver configuration in QEMU...
>
> On Thu, Nov 15, 2018 at 8:22 AM 赵赵贺东 <zhaohed...@gmail.com> wrote:
>>
>> Hi cephers,
>>
>>
>> All the OSDs in our cluster are deployed on armhf.
>> Could someone say what a reasonable performance ratio for librbd vs. KRBD
>> is? Or a reasonable range for the performance loss when we use librbd
>> compared to KRBD?
>> I googled a lot, but I could not find a solid criterion.
>> In fact, it has confused me for a long time.
>>
>> About our tests:
>> In a small cluster (12 OSDs), 4M seq write performance for librbd vs. KRBD
>> is about 0.89 : 1 (177 MB/s : 198 MB/s).
>> In a big cluster (72 OSDs), 4M seq write performance for librbd vs. KRBD
>> is about 0.38 : 1 (420 MB/s : 1080 MB/s).
>>
>> We expect that even as the OSD count increases, librbd performance can
>> stay close to KRBD.
>>
>> PS: librbd performance was tested both with the fio rbd engine and with
>> iSCSI (tcmu + librbd).
>>
>> Thanks.
>>
>>
>>
>>
>
>



-- 
Jason
