I would start by checking "ceph status", watching drive IO with "iostat -x 1
/dev/sd{a..z}", and monitoring the CPU/RAM usage of the active MDS. If "ceph
status" warns that the MDS cache is oversized, that may be an easy fix.
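A rough sketch of those steps as shell commands (assumes a working ceph CLI
on a Mimic-or-later cluster, and that sda..sdz cover the OSD data drives on
the host being checked; the 4 GiB cache limit is an illustrative value, not
a recommendation):

```shell
# Overall cluster health -- look for MDS cache / health warnings here.
ceph status

# Per-drive extended IO statistics, refreshed every second. High await
# and %util on the OSD drives would point at disk-side latency.
iostat -x 1 /dev/sd{a..z}

# If "ceph status" reports the MDS cache is oversized, raising the cache
# memory limit may be the easy fix (value is in bytes; 4 GiB shown here,
# adjust for the RAM actually available to the MDS):
ceph config set mds mds_cache_memory_limit 4294967296
```

Watching "top" on the active MDS host at the same time shows whether the MDS
itself is CPU- or memory-bound while the slow opens happen.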

On Thu, Dec 26, 2019 at 7:33 AM renjianxinlover <renjianxinlo...@163.com>
wrote:

> hello,
>        recently, after deleting some fs data in a small-scale ceph
> cluster, some clients' IO performance became bad, especially latency. For
> example, opening a tiny text file in vim could take nearly twenty
> seconds. I am not clear about how to diagnose the cause; could anyone give
> some guidance?
>
> Brs
> renjianxinlover
> renjianxinlo...@163.com
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>