On 03/09/2015 11:15 AM, Nick Fisk wrote:
> Hi Mike,
> 
> I was using bs_aio with the krbd and still saw a small caching effect. I'm
> not sure if it was on the ESXi or tgt/krbd page cache side, but I was
> definitely seeing the IOs being coalesced into larger ones on the krbd

I am not sure what you mean here. By coalescing do you mean merging? That
is not the same as caching. Coalescing/merging is expected with both aio
and rdwr.


> device in iostat. Either way, it would make me potentially nervous to run it
> like that in a HA setup.
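
If you want to confirm that what you saw was just merging and not caching,
iostat's extended output shows the merge counters directly. Something like
this (the device name is only an example):

iostat -x 1 /dev/rbd0

The rrqm/s and wrqm/s columns are the read/write requests merged per
second, and avgrq-sz is the resulting average request size. Requests
getting merged there does not mean the page cache is involved.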
> 
> 
>> tgt itself does not do any type of caching, but depending on how you have
>> tgt access the underlying block device you might end up using the normal old
>> linux page cache like you would if you did
>>
>> dd if=/dev/rbd0 of=/dev/null bs=4K count=1
>> dd if=/dev/rbd0 of=/dev/null bs=4K count=1
>>
>> This is what Ronnie meant in that thread when he was saying there might be
>> caching in the underlying device.
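
A quick way to check whether the page cache is what you are hitting is to
drop the caches (as root) and compare buffered reads with a direct one.
The rbd device path here is only an example:

echo 3 > /proc/sys/vm/drop_caches
dd if=/dev/rbd0 of=/dev/null bs=4K count=1
dd if=/dev/rbd0 of=/dev/null bs=4K count=1
dd if=/dev/rbd0 of=/dev/null bs=4K count=1 iflag=direct

The second buffered run should come back much faster because it is served
from the page cache, while the iflag=direct run bypasses the cache and hits
the rbd device every time.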
>>
>> If you use tgt bs_rdwr.c (--bstype=rdwr) with the default settings and with
>> krbd then you will end up doing caching, because the krbd's block device will
>> be accessed like in the dd example above (no direct bits set).
>>
>> You can tell tgt bs_rdwr devices to use O_DIRECT or O_SYNC. When you
>> create the lun, pass in "--bsoflags {direct | sync}". Here is an example
>> from the man page:
>>
>> tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 --bsoflags="sync" --backing-store=/data/100m_image.raw
>>
>>
>> If you use bs_aio.c then we always set O_DIRECT when opening the krbd
>> device, so no page caching is done. I think linux aio might require this or
>> at least it did at the time it was written.
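
If you want to double check which flags tgt actually opened the device
with, lsof can show them, assuming your lsof supports the +f g option and
that there is a single tgtd pid:

lsof +f g -p $(pidof tgtd) | grep rbd

The FILE-FLAG column should include DIR when O_DIRECT is set and SYN for
O_SYNC.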
>>
>> Also, the cache settings exported to the other OS's initiator with that
>> mode page command might affect performance too. They might change
>> how that OS does writes, e.g. sending cache syncs down or using some sort
>> of barrier or FUA.
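
On a Linux initiator you can see what cache bits the target is advertising
with sdparm (the device name is just an example for whatever the lun shows
up as):

sdparm --get=WCE /dev/sdb
sdparm --get=RCD /dev/sdb

WCE is the write cache enable bit and RCD is the read cache disable bit
from the caching mode page; those are what the initiator looks at when
deciding whether it needs to send cache syncs or FUA writes.
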
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com