Makes sense - it does make the case for EC pools smaller, though.

Jesper



Sunday, 9 June 2019, 17.48 +0200 from paul.emmer...@croit.io <paul.emmer...@croit.io>:
>Caching is handled in BlueStore itself; erasure coding happens at a
>higher layer.
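>
>A minimal sketch of how that per-OSD BlueStore cache is sized, using the
>standard "ceph config" CLI; the byte values below are purely
>illustrative, not recommendations:
>
>  # BlueStore autotunes its caches within a per-OSD memory budget
>  ceph config set osd osd_memory_target 8589934592      # 8 GiB, example
>
>  # Alternatively, disable autotuning and pin a fixed cache size
>  ceph config set osd bluestore_cache_autotune false
>  ceph config set osd bluestore_cache_size 4294967296   # 4 GiB, example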
>
>
>Paul
>
>-- 
>Paul Emmerich
>
>Looking for help with your Ceph cluster? Contact us at  https://croit.io
>
>croit GmbH
>Freseniusstr. 31h
>81247 München
>www.croit.io
>Tel:  +49 89 1896585 90
>
>On Sun, Jun 9, 2019 at 8:43 AM <jes...@krogh.cc> wrote:
>>Hi.
>>
>>I just changed some of my data on CephFS to go to the EC pool instead
>>of the 3x replicated pool. The data is "write rare / read heavy" and is
>>served to an HPC cluster.
>>
>>To my surprise, it looks like the OSD memory caching is done at the
>>"split object" (chunk) level, not at the "assembled object" level. As a
>>consequence, even though the dataset is fully cached in memory, reads
>>still generate very heavy cross-OSD network traffic to reassemble the
>>objects.
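>>
>>As a rough illustration (assuming a k=4, m=2 EC profile and the default
>>4 MB CephFS object size): each object is striped into four 1 MB data
>>chunks plus two parity chunks, each chunk on a different OSD. A
>>full-object read served by the primary OSD then needs the three data
>>chunks it does not hold locally, i.e. roughly 3 MB of cross-OSD traffic
>>per 4 MB read - even when every chunk already sits in some OSD's memory
>>cache.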
>>
>>Since (as far as I understand) no changes can reach the underlying
>>object without going through the primary PG, caching could be done more
>>effectively at that level.
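>>
>>(For reference, the acting set and primary OSD for a given object can
>>be inspected with "ceph osd map <pool-name> <object-name>"; the pool
>>and object names are placeholders.)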
>>
>>The caching on the 3x replicated pool does not retrieve all 3 copies to
>>compare and verify on a read request (or at least I cannot see any
>>network traffic suggesting that it does).
>>
>>Is the above configurable? Or would that be a feature/performance request?
>>
>>Jesper
>>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
