Thanks for verifying at your end, Jason.

It’s pretty weird that the difference is more than 10x: with 
"rbd_cache_writethrough_until_flush = true" I see ~400 IOPS, whereas with 
"rbd_cache_writethrough_until_flush = false" I see ~6000 IOPS.

The QEMU cache setting is "none" for all of the rbd drives (attached 
roughly as in the snippet below). On that note, would older librbd 
versions (like Hammer) have any caching issues when talking to Jewel 
clusters?
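For completeness, each drive is attached along these lines (pool name, 
image name, and client id are placeholders, not my exact invocation):

    qemu-system-x86_64 ... \
        -drive file=rbd:rbd/vm-disk1:id=admin,format=raw,if=virtio,cache=none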

Thanks,
-Pavan.

On 10/21/16, 8:17 PM, "Jason Dillaman" <jdill...@redhat.com> wrote:

    QEMU cache setting for the rbd drive?
    
    
