Hi.

I just changed some of my data on CephFS to go to the EC pool instead
of the 3x replicated pool. The data is "write rare / read heavy" data
being served to an HPC cluster.

To my surprise, it looks like OSD memory caching is done at the
"split object" level, not at the "assembled object" level. As a
consequence, even though the dataset is fully cached in memory,
reads still generate very heavy cross-OSD network traffic to
reassemble the objects.
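To put a rough number on that effect, here is a back-of-envelope sketch
(plain Python, not Ceph code, and the function names are my own): assuming
chunk-level caching, the primary still has to pull k - 1 of the k data
chunks from other OSDs for every read, while a replicated read is served
whole from the primary's local copy.

```python
# Hypothetical model of cross-OSD bytes moved per cached read.
# Assumption: on an EC pool the primary reassembles each object from
# k data chunks, of which k - 1 live on other OSDs.

def ec_read_network_bytes(object_size: int, k: int) -> int:
    """Bytes the primary fetches from peer OSDs for one EC read."""
    chunk = object_size // k
    return chunk * (k - 1)

def replicated_read_network_bytes(object_size: int) -> int:
    """A replicated read needs no peer traffic: the primary's own
    copy is complete."""
    return 0

# Example: a 4 MiB object on a k=4 EC pool moves 3 MiB between OSDs
# per read, versus 0 bytes for the 3x replicated pool.
print(ec_read_network_bytes(4 * 1024 * 1024, k=4))     # 3145728
print(replicated_read_network_bytes(4 * 1024 * 1024))  # 0
```

So for a read-heavy workload the per-read network cost scales with the
object size regardless of caching, which matches what I am seeing.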

Since (as far as I understand) no change can reach the underlying
object without going through the primary PG, caching could be done
more effectively at that level.

Caching on the 3x replicated pool does not retrieve all 3 copies to
compare and verify on a read request (or at least I cannot see any
network traffic suggesting that it does).

Is the above configurable? Or would this be a feature/performance request?

Jesper

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com