Thanks, I created a ticket. http://tracker.ceph.com/issues/40906
----------------
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1


On Mon, Jul 22, 2019 at 11:45 PM Yan, Zheng <uker...@gmail.com> wrote:

> please create a ticket at http://tracker.ceph.com/projects/cephfs and
> upload an mds log with debug_mds = 10
>
> On Tue, Jul 23, 2019 at 6:00 AM Robert LeBlanc <rob...@leblancnet.us>
> wrote:
> >
> > We have a Luminous cluster which has filled up to 100% multiple times,
> and each time this leaves inodes in a bad state. Doing anything to the
> affected files causes the client to hang, which requires evicting the client
> and failing over the MDS. Usually we move the parent directory out of the way
> and things are mostly okay. However, in this last fill-up we moved a
> significant amount of storage out of the way and we really need to reclaim
> that space. I can't delete the files around the bad one, as listing the
> directory causes a hang.
> >
> > We can get the bad inode from the logs/blocked_ops. How can we tell the
> MDS that the inode is lost and should simply be forgotten, without it doing
> any checks on the inode (checking the RADOS objects may be part of the
> problem)? Once the inode is out of CephFS, we can clean up the RADOS objects
> manually or leave them there to rot.
> >
> > Thanks,
> > Robert LeBlanc
> > ----------------
> > Robert LeBlanc
> > PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1
>
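
A minimal sketch of how the log Zheng asks for above could be captured on the
active MDS. The daemon name mds.a is only an example; substitute your own, and
on Luminous injectargs via tell is the usual way to change a running daemon's
debug level:

    # raise the MDS debug level on the running daemon
    ceph tell mds.a injectargs '--debug_mds 10'

    # reproduce the hang, then collect the MDS log (typically
    # /var/log/ceph/ceph-mds.a.log) and attach it to the ticket

    # drop the debug level back to the default afterwards
    ceph tell mds.a injectargs '--debug_mds 1/5'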
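
The client eviction and MDS failover described in the original message can be
done roughly as follows. This is only a sketch; the daemon name mds.a, session
id 4305, and rank 0 are placeholder examples, not values from this cluster:

    # list client sessions on the active MDS and find the hung client
    ceph tell mds.a client ls

    # evict that client by its session id
    ceph tell mds.a client evict id=4305

    # fail the active MDS for rank 0 so a standby takes over
    ceph mds fail 0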
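
And a sketch of how the bad inode and its backing RADOS objects can be tracked
down, assuming access to the admin socket on the active MDS host; the pool name
cephfs_data and the inode number 10000000001 are examples only:

    # dump the operations currently stuck on the MDS; the inode number
    # appears in the description of the blocked op
    ceph daemon mds.a dump_blocked_ops
    ceph daemon mds.a dump_ops_in_flight

    # CephFS data objects are named <inode-in-hex>.<block-in-hex>, so the
    # file's objects can be listed with:
    rados -p cephfs_data ls | grep '^10000000001\.'

    # and, once the inode is really gone from the MDS, removed manually:
    rados -p cephfs_data rm 10000000001.00000000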