On Tue, Aug 8, 2023 at 1:18 AM Zhang Bao <lonsdale8...@gmail.com> wrote:
>
> Hi, thanks for your help.
>
> I am using ceph Pacific 16.2.7.
>
> Before my Ceph got stuck at `ceph fs status fsname`, one of my CephFS file
> systems became read-only.

Probably the ceph-mgr (specifically its "volumes" plugin) is stuck
somewhere talking to the read-only CephFS. That's not a scenario we've
tested well.
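
If you want to confirm it's the volumes plugin that is wedged, one thing
you could try (a sketch only: "mgr.a" is a placeholder for your active mgr
daemon, and the "finisher-volumes" counter name is my assumption based on
how the mgr names its per-module finisher counters) is to watch the
module's finisher queue:

    # A steadily growing queue_len for the volumes finisher would suggest
    # the module's work items are piling up behind a blocked call.
    ceph tell mgr.a perf dump | grep -A 3 '"finisher-volumes"'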

> The metadata pool of the read-only CephFS grew from 10GB to 3TB. Then I
> shut down the read-only MDS.

Your metadata pool grew from 10GB to 3TB in read-only mode? That's unbelievable!

We would need a lot more information to help figure out the cause, such
as the output of `ceph tell mds.X perf dump`, `ceph tell mds.X status`,
`ceph fs dump`, and `ceph tell mgr.X perf dump` while this is occurring.
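
For example (placeholders only: substitute your actual MDS and mgr daemon
names), capturing the output to files makes it easier to share here:

    ceph fs dump > fs_dump.txt
    ceph tell mds.<mds-name> status > mds_status.txt
    ceph tell mds.<mds-name> perf dump > mds_perf_dump.json
    ceph tell mgr.<mgr-name> perf dump > mgr_perf_dump.json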


--
Patrick Donnelly, Ph.D.
He / Him / His
Red Hat Partner Engineer
IBM, Inc.
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
