You'll need to provide more data about how your test is configured and run
for us to have a good idea. IIRC librbd is often faster than krbd because it
supports newer RBD features that the kernel client hasn't implemented yet,
but krbd may have less overhead and isn't dependent on the VM's driver
configuration in QEMU...
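
For example, to compare the two paths under comparable conditions, I'd run
something roughly like the following (the pool/image/client names and queue
depth here are just placeholders, adjust them to match your actual setup):

    # librbd path: fio's rbd engine talks to the cluster through librbd
    fio --name=librbd-seq --ioengine=rbd --clientname=admin \
        --pool=rbd --rbdname=testimg \
        --rw=write --bs=4M --iodepth=16 --numjobs=1

    # krbd path: map the image, then run the same job against the block device
    rbd map rbd/testimg            # e.g. shows up as /dev/rbd0
    fio --name=krbd-seq --ioengine=libaio --direct=1 --filename=/dev/rbd0 \
        --rw=write --bs=4M --iodepth=16 --numjobs=1

Knowing the iodepth/numjobs you actually used, and the QEMU or tcmu settings
on the librbd side, would help narrow down where the difference comes from.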

On Thu, Nov 15, 2018 at 8:22 AM 赵赵贺东 <zhaohed...@gmail.com> wrote:

> Hi cephers,
>
>
> All our cluster OSDs are deployed on armhf.
> Could someone say something about what a reasonable performance ratio for
> librbd vs KRBD is?
> Or what a reasonable range of performance loss is when we use librbd
> compared to KRBD?
> I googled a lot, but I could not find a solid criterion.
> In fact, it has confused me for a long time.
>
> About our tests:
> In a small cluster (12 OSDs), 4M seq write performance for librbd vs KRBD
> is about 0.89 : 1 (177 MB/s : 198 MB/s).
> In a big cluster (72 OSDs), 4M seq write performance for librbd vs KRBD is
> about 0.38 : 1 (420 MB/s : 1080 MB/s).
>
> We expected that even as OSD numbers increase, librbd performance would
> stay close to KRBD.
>
> PS: librbd performance was tested both with the fio rbd engine and via
> iSCSI (tcmu + librbd).
>
> Thanks.
>
>
>
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
