From my VMs that have Cinder-provisioned volumes, I ran dd / fio (like below) 
and found the IOPS to be lower than expected; a sync before the runs didn't 
help either. The same runs with rbd cache writethrough until flush set to 
false yield better results.

Both the clients and the cluster are running 10.2.3; perhaps the only 
difference is that the clients are on Trusty while the cluster is on Xenial.

dd if=/dev/zero of=/dev/vdd bs=4K count=1000 oflag=direct

fio -name iops -rw=write -bs=4k -direct=1 -runtime=60 -iodepth 1 -filename /dev/vde -ioengine=libaio
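
For reference, one way to confirm which mode the cache is actually in is to 
query the librbd admin socket on the hypervisor. This assumes an admin socket 
path is configured in the [client] section of ceph.conf; the socket name and 
client id below are illustrative:

# In ceph.conf on the hypervisor (values illustrative):
#   [client]
#   admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
#   rbd cache = true
#   rbd cache writethrough until flush = true

# With the VM running, inspect the effective setting:
ceph --admin-daemon /var/run/ceph/ceph-client.cinder.12345.asok \
    config get rbd_cache_writethrough_until_flush

# The librbd perf counters should show cache hits accumulating once the
# cache flips to writeback after the first flush:
ceph --admin-daemon /var/run/ceph/ceph-client.cinder.12345.asok perf dump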

Thanks,
-Pavan.

On 10/21/16, 6:15 PM, "Jason Dillaman" <jdill...@redhat.com> wrote:

    It's in the build and has tests to verify that it is properly being
    triggered [1].
    
    $ git tag --contains 5498377205523052476ed81aebb2c2e6973f67ef
    v10.2.3
    
    What are your tests that say otherwise?
    
    [1] https://github.com/ceph/ceph/pull/10797/commits/5498377205523052476ed81aebb2c2e6973f67ef
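
    (It is also worth double-checking that the clients are actually running
    the fixed librbd. On the Trusty hypervisors, assuming the stock Ubuntu
    packages and that ceph-common is installed, something like:

        dpkg -l librbd1    # should report 10.2.3
        ceph --version

    would confirm the client bits match the cluster.)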
    
    On Fri, Oct 21, 2016 at 7:42 AM, Pavan Rallabhandi
    <prallabha...@walmartlabs.com> wrote:
    > I see the fix for the writeback cache not getting turned on after flush
    > has made it into Jewel 10.2.3 (http://tracker.ceph.com/issues/17080),
    > but our testing says otherwise.
    >
    > The cache is still behaving as if it's writethrough, though the setting
    > is set to true. I wanted to check whether it's still broken in Jewel
    > 10.2.3, or am I missing something here?
    >
    > Thanks,
    > -Pavan.
    >
    
    
    
    -- 
    Jason
    

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
