Re: [ceph-users] cephfs automatic data pool cleanup

2017-12-14 Thread Yan, Zheng
On Thu, Dec 14, 2017 at 12:52 AM, Jens-U. Mozdzen wrote: > Hi Yan, > > Quoting "Yan, Zheng": >> >> [...] >> >> It's likely some clients had caps on unlinked inodes, which prevent >> the MDS from purging objects. When a file gets deleted, the MDS notifies all >> clients, and clients are supposed to drop the corresponding caps if possible. [...]

Re: [ceph-users] cephfs automatic data pool cleanup

2017-12-13 Thread Jens-U. Mozdzen
Hi Yan, Quoting "Yan, Zheng": [...] It's likely some clients had caps on unlinked inodes, which prevent the MDS from purging objects. When a file gets deleted, the MDS notifies all clients, and clients are supposed to drop the corresponding caps if possible. You may hit a bug in this area: some clients fail [...]
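
One way to check for this situation is to look at the MDS's stray counters: files that are deleted but not yet purged show up as stray dentries. Below is a minimal sketch, assuming the admin socket of an MDS named 'a' is reachable on the local host and that the Luminous-era counter names (num_strays, strays_created, strays_enqueued) apply; both the daemon name and the counter names may differ on other releases.

#!/usr/bin/env python3
# Query the local MDS admin socket for stray-dentry counters.
# Assumptions: daemon name 'a', Luminous-era perf counter names.
import json
import subprocess

MDS_NAME = 'a'  # assumed; check 'ceph fs status' for the active MDS

out = subprocess.check_output(
    ['ceph', 'daemon', 'mds.%s' % MDS_NAME, 'perf', 'dump'])
cache = json.loads(out).get('mds_cache', {})

# A large, non-shrinking num_strays after deletions is consistent with
# clients still holding caps on unlinked inodes.
for counter in ('num_strays', 'strays_created', 'strays_enqueued'):
    print('%-16s %s' % (counter, cache.get(counter)))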

Re: [ceph-users] cephfs automatic data pool cleanup

2017-12-13 Thread Yan, Zheng
On Wed, Dec 13, 2017 at 10:11 PM, Jens-U. Mozdzen wrote: > Hi *, > > over the last few weeks we noticed some strange behavior of our CephFS data > pool (not the metadata pool). As things have worked out over time, I'm just asking > here so that I can better understand what to look out for in the future. [...]

Re: [ceph-users] cephfs automatic data pool cleanup

2017-12-13 Thread Jens-U. Mozdzen
Hi Webert, Quoting Webert de Souza Lima: I have experienced delayed freeing of used space before, in Jewel, but it just stopped happening without any intervention. Thank you for letting me know. If none of the developers remembers fixing this issue, it might be a still-unresolved problem. [...]

Re: [ceph-users] cephfs automatic data pool cleanup

2017-12-13 Thread Jens-U. Mozdzen
Hi John, Quoting John Spray: On Wed, Dec 13, 2017 at 2:11 PM, Jens-U. Mozdzen wrote: [...] Then we had one of the nodes crash due to lack of memory (the MDS was > 12 GB, plus the new BlueStore OSD and probably the 12.2.1 BlueStore memory leak). We brought the node back online and at first had [...]

Re: [ceph-users] cephfs automatic data pool cleanup

2017-12-13 Thread Webert de Souza Lima
I have experienced delayed freeing of used space before, in Jewel, but it just stopped happening without any intervention. Back then, unmounting all clients' filesystems would make the space free up rapidly. I don't know if that's related. Regards, Webert Lima, DevOps Engineer at MAV Tecnologia, Belo Horizonte [...]
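
To see whether unmounting the clients actually releases space, one can watch the data pool's statistics while the clients are taken down. Here is a minimal sketch using the python-rados binding; the pool name 'cephfs_data', the default ceph.conf path, and the sampling interval are assumptions.

#!/usr/bin/env python3
# Poll object count and bytes of the CephFS data pool so the effect of
# unmounting clients (or of the purge catching up) becomes visible.
# Assumptions: pool name 'cephfs_data', default ceph.conf location.
import time
import rados

POOL = 'cephfs_data'

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx(POOL)
try:
    for _ in range(20):                     # roughly ten minutes of samples
        s = ioctx.get_stats()
        print('%s  objects=%d  bytes=%d'
              % (time.strftime('%H:%M:%S'), s['num_objects'], s['num_bytes']))
        time.sleep(30)
finally:
    ioctx.close()
    cluster.shutdown()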

Re: [ceph-users] cephfs automatic data pool cleanup

2017-12-13 Thread John Spray
On Wed, Dec 13, 2017 at 2:11 PM, Jens-U. Mozdzen wrote: > Hi *, > > over the last few weeks we noticed some strange behavior of our CephFS data > pool (not the metadata pool). As things have worked out over time, I'm just asking > here so that I can better understand what to look out for in the future. [...]

[ceph-users] cephfs automatic data pool cleanup

2017-12-13 Thread Jens-U. Mozdzen
Hi *, over the last few weeks we noticed some strange behavior of our CephFS data pool (not the metadata pool). As things have worked out over time, I'm just asking here so that I can better understand what to look out for in the future. This is on a three-node Ceph Luminous (12.2.1) cluster with [...]
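
For future reference, one quick consistency check is to compare the logical size CephFS reports for the whole tree (the recursive ceph.dir.rbytes xattr on a kernel mount) with the data pool's RADOS usage; if the pool stays far above the logical size long after deletions, purging is lagging. The sketch below assumes a kernel client mounted at /mnt/cephfs and a data pool named 'cephfs_data'; the two numbers will never match exactly because of object overhead and replication accounting.

#!/usr/bin/env python3
# Compare CephFS's logical usage (recursive rbytes of the root) with the
# data pool's stored bytes.  Mount point and pool name are assumptions.
import os
import rados

MOUNTPOINT = '/mnt/cephfs'   # assumed kernel-client mount point
POOL = 'cephfs_data'         # assumed data pool name

logical = int(os.getxattr(MOUNTPOINT, 'ceph.dir.rbytes').decode())

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx(POOL)
    try:
        pool_bytes = ioctx.get_stats()['num_bytes']
    finally:
        ioctx.close()
finally:
    cluster.shutdown()

print('CephFS logical bytes: %d' % logical)
print('data pool bytes     : %d' % pool_bytes)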