Re: [ceph-users] Mark CephFS inode as lost

2019-07-23 Thread Robert LeBlanc
Thanks, I created a ticket: http://tracker.ceph.com/issues/40906

Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1

On Mon, Jul 22, 2019 at 11:45 PM Yan, Zheng wrote:
> please create a ticket at http://tracker.ceph.com/projects/cephfs and >

Re: [ceph-users] Mark CephFS inode as lost

2019-07-23 Thread Yan, Zheng
please create a ticket at http://tracker.ceph.com/projects/cephfs and upload the mds log with debug_mds = 10

On Tue, Jul 23, 2019 at 6:00 AM Robert LeBlanc wrote:
> We have a Luminous cluster which has filled up to 100% multiple times and
> this causes an inode to be left in a bad state. Doing
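[Editor's note: raising the MDS debug level as requested above can be done at runtime. A minimal sketch, assuming the MDS daemon name is `a` (a placeholder) and a standard log location; commands vary slightly by Ceph release, so check `ceph --help` on your version:]

```shell
# Raise MDS verbosity to 10 at runtime via the daemon's admin socket
# (run on the host where the active MDS lives; "a" is a placeholder name):
ceph daemon mds.a config set debug_mds 10

# Alternatively, inject the setting remotely through the monitors:
ceph tell mds.a injectargs '--debug_mds 10'

# Reproduce the problem, then collect the log for the tracker ticket.
# Default log path on most installs:
#   /var/log/ceph/ceph-mds.a.log

# Drop verbosity back to the default afterwards:
ceph daemon mds.a config set debug_mds 1
```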

[ceph-users] Mark CephFS inode as lost

2019-07-22 Thread Robert LeBlanc
We have a Luminous cluster which has filled up to 100% multiple times, and this leaves an inode in a bad state. Doing anything to these files causes the client to hang, which requires evicting the client and failing over the MDS. Usually we move the parent directory out of the way and
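[Editor's note: the evict-and-failover recovery described above can be sketched with standard Ceph admin commands. `mds.a`, rank `0`, and the client id are placeholders; exact syntax differs a little across releases:]

```shell
# List connected CephFS clients on the active MDS to find the hung one
# ("a" is a placeholder daemon name):
ceph tell mds.a client ls

# Evict the hung client by the id reported above (placeholder 4305):
ceph tell mds.a client evict id=4305

# Fail the active MDS rank so a standby takes over:
ceph mds fail 0
```

Note that an evicted client is blocklisted by default, so it may need to be remounted (or un-blocklisted) before it can reconnect.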