Hi All,

We have been observing that if we let our MDS run for some time, the
bandwidth usage of the disks in the metadata pool starts increasing
significantly (whilst IOPS stays roughly constant), even though the
number of clients, the workloads, and everything else remain unchanged.

However, after restarting the MDS, the issue goes away for some time and
the same workloads require 1/10th of the metadata disk bandwidth whilst
doing the same IOPS.

We run our CephFS cluster in a cloud environment where disk throughput
capacity is quite expensive to increase, and we are hitting those
bandwidth limits even though we still have plenty of IOPS capacity
left.

We suspect that the journaling of the MDS somehow becomes more
extensive over time (i.e. larger journal updates for each operation),
but we couldn't pin down which parameter might affect this.
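
For reference, this is roughly how we have been watching the journal
write volume per event via the MDS admin socket. It's only a sketch:
"mds.a" is a placeholder for the actual daemon name, and the mds_log
counter names (evadd, wrpos) are what we see on our cluster and may
differ between releases:

    import json
    import subprocess
    import time

    MDS = "mds.a"  # placeholder for the actual MDS daemon name

    def mds_log_counters():
        # "ceph daemon <mds> perf dump" returns all perf counters as
        # JSON; the "mds_log" section covers the journaler.
        out = subprocess.check_output(
            ["ceph", "daemon", MDS, "perf", "dump"])
        return json.loads(out)["mds_log"]

    prev = mds_log_counters()
    while True:
        time.sleep(60)
        cur = mds_log_counters()
        events = cur["evadd"] - prev["evadd"]   # journal events added
        written = cur["wrpos"] - prev["wrpos"]  # write position advance (bytes)
        if events > 0:
            print(f"{written / events:.0f} bytes per journal event")
        prev = cur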

I attach a plot of how the bytes per operation (throughput in MB/s
divided by IOPS) evolve over time: when we restart the MDS, the ratio
drops to around 32 KB (even though the minimum block size for the
metadata pool OSDs is 4 KB in our settings) and then grows over time
to around 300 KB.
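
For what it's worth, the plotted ratio is just write throughput
divided by write IOPS on the metadata pool. A minimal sketch of how we
sample it, assuming our pool is named "cephfs_metadata" and using the
client_io_rate fields of "ceph osd pool stats -f json" as they appear
on our cluster:

    import json
    import subprocess

    POOL = "cephfs_metadata"  # placeholder for our metadata pool name

    stats = json.loads(subprocess.check_output(
        ["ceph", "osd", "pool", "stats", POOL, "-f", "json"]))
    io = stats[0].get("client_io_rate", {})
    wr_bytes = io.get("write_bytes_sec", 0)  # bytes written per second
    wr_ops = io.get("write_op_per_sec", 0)   # write ops per second
    if wr_ops:
        print(f"{wr_bytes / wr_ops / 1024:.0f} KB per write op")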

Any ideas on how to "fix" this and get significantly lower bandwidth
usage would be really appreciated!

Thank you and kind regards,

Andras