[ceph-users] Re: cephfs health warn

2023-10-04 Thread Ben
Hi Venky, thanks for the help on this. Will change to multimds with subtree pinning. For the moment, the items in the segments list need to go through the loop of expiring -> expired -> trimmed. It is observed that each problematic MDS has a few expiring segments stuck on the way to being trimmed. the segment
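For readers following the thread: subtree pinning assigns a directory subtree to a fixed MDS rank so the dynamic balancer stops migrating it between the active daemons. A minimal sketch using the `ceph.dir.pin` virtual extended attribute; the mount point and directory name are hypothetical:

```shell
# Pin the subtree under /mnt/cephfs/project-a to MDS rank 2.
# A value of -1 removes the pin and returns the subtree to the balancer.
setfattr -n ceph.dir.pin -v 2 /mnt/cephfs/project-a

# Verify the pin took effect.
getfattr -n ceph.dir.pin /mnt/cephfs/project-a
```

Pins are inherited by child directories unless a child sets its own pin, so a handful of pins at the top of the hierarchy is usually enough.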

[ceph-users] Re: cephfs health warn

2023-10-03 Thread Venky Shankar
Hi Ben, On Tue, Oct 3, 2023 at 8:56 PM Ben wrote: > > Yes, I am. 8 active + 2 standby, no subtree pinning. What if I restart the > mds with trimming issues? Trying to figure out what happens with restarting. We have come across instances in the past where multimds without subtree pinning can

[ceph-users] Re: cephfs health warn

2023-10-03 Thread Ben
Yes, I am. 8 active + 2 standby, no subtree pinning. What if I restart the mds with trimming issues? Trying to figure out what happens with restarting. Venky Shankar wrote on Tue, Oct 3, 2023 at 12:39: > Hi Ben, > > Are you using multimds without subtree pinning? > > On Tue, Oct 3, 2023 at 10:00 AM Ben
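On the restart question: the usual way to bounce a single MDS is to fail its rank, which promotes a standby that then replays the journal (and, on a successful replay, can trim it). A hedged sketch; the rank number here is an assumption:

```shell
# Fail rank 4 so a standby takes over that rank and replays the journal.
ceph mds fail 4

# Watch the takeover and the resulting health state.
ceph fs status
ceph health detail
```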

[ceph-users] Re: cephfs health warn

2023-10-02 Thread Venky Shankar
Hi Ben, Are you using multimds without subtree pinning? On Tue, Oct 3, 2023 at 10:00 AM Ben wrote: > > Dear cephers: > more log captures(see below) show the full segments list(more than 3 to > be trimmed stuck, growing over time). any ideas to get out of this? > > Thanks, > Ben > > > debug

[ceph-users] Re: cephfs health warn

2023-10-02 Thread Ben
Dear cephers: more log captures (see below) show the full segments list (more than 3 to be trimmed stuck, growing over time). Any ideas to get out of this? Thanks, Ben debug 2023-09-30T14:34:14.557+ 7f9c29bb1700 5 mds.4.log trim already expiring segment 195341004/893374309813, 180 events
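The per-daemon segment count behind this warning can also be read from the MDS perf counters rather than the debug log; a sketch, assuming you are on the MDS host with access to its admin socket, and `mds.a` is a hypothetical daemon name:

```shell
# The mds_log section of the perf counters includes "seg", the current
# journal segment count that the MDS_TRIM health check compares against
# mds_log_max_segments.
ceph daemon mds.a perf dump mds_log
```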

[ceph-users] Re: cephfs health warn

2023-09-28 Thread Ben
Hi Venky, and cephers Thanks for the reply. No config changes had been made before the issues occurred. It is suspected to be a client bug. Please see the following message about the accumulation of log segments to be trimmed. For the moment the problematic client nodes cannot be rebooted. Evicting the client will
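For reference, client eviction is done through the MDS admin interface; a sketch with a hypothetical rank and client id. Note that eviction blocklists the client by default, so the affected node's CephFS mount would need to be remounted afterwards, which is why it is a last resort here:

```shell
# List sessions on rank 0 to identify the problematic client's id.
ceph tell mds.0 client ls

# Evict (and, by default, blocklist) the client with the given id.
ceph tell mds.0 client evict id=12345
```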

[ceph-users] Re: cephfs health warn

2023-09-27 Thread Venky Shankar
Hi Ben, On Tue, Sep 26, 2023 at 6:02 PM Ben wrote: > > Hi, > see below for details of warnings. > the cluster is running 17.2.5. the warnings have been around for a while. > one concern of mine is num_segments growing over time. Any config changes related to trimming that was done? A slow
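Venky's question about trim-related config changes can be answered by reading the effective values directly. A sketch of checking the journal-trimming knobs; the option names are real Ceph MDS options, but treat the expectation of what their defaults are in 17.2.x as an assumption:

```shell
# Effective journal-trimming settings for the MDS daemons.
# mds_log_max_segments caps the untrimmed segment count (MDS_TRIM fires
# well above it); mds_log_max_events is usually -1 (unlimited).
ceph config get mds mds_log_max_segments
ceph config get mds mds_log_max_events
```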

[ceph-users] Re: cephfs health warn

2023-09-27 Thread Ben
Some further investigation into the three MDS daemons with the trimming-behind problem: logs captured over two days show that some log segments are stuck in the trimming process. It looks like a bug in log segment trimming? Any thoughts? ==log capture 9/26: debug 2023-09-26T16:50:59.004+