[ceph-users] Ceph MDS WRN replayed op client.$id

2018-09-12 Thread Stefan Kooman
Hi, Once in a while, today a bit more often, the MDS is logging the following: mds.mds1 [WRN] replayed op client.15327973:15585315,15585103 used ino 0x19918de but session next is 0x1873b8b Nothing of importance is logged in the mds ("debug_mds_log": "1/5"). What does this warning message mean / indicate?
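A quick way to get more context around this warning is to raise the MDS log verbosity temporarily; a minimal sketch, assuming the daemon name mds1 from the message above and access to the admin socket on the MDS host (levels are illustrative):

    # raise MDS and MDS-journal debug levels at runtime (remember to revert)
    ceph tell mds.mds1 injectargs '--debug_mds 10 --debug_mds_log 10'

    # alternatively, via the daemon's admin socket on the MDS host
    ceph daemon mds.mds1 config set debug_mds_log 10

Higher levels (up to 20) log considerably more, so they are best used only for a short window while reproducing the warning.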

Re: [ceph-users] Ceph MDS WRN replayed op client.$id

2018-09-13 Thread Eugen Block
Hi Stefan, mds.mds1 [WRN] replayed op client.15327973:15585315,15585103 used ino 0x19918de but session next is 0x1873b8b Nothing of importance is logged in the mds ("debug_mds_log": "1/5"). What does this warning message mean / indicate? we face these messages on a regular basis. The

Re: [ceph-users] Ceph MDS WRN replayed op client.$id

2018-09-13 Thread John Spray
On Wed, Sep 12, 2018 at 2:59 PM Stefan Kooman wrote: > > Hi, > > Once in a while, today a bit more often, the MDS is logging the > following: > > mds.mds1 [WRN] replayed op client.15327973:15585315,15585103 used ino > 0x19918de but session next is 0x1873b8b > > Nothing of importance is lo
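For reference, the journal that gets replayed here can also be examined directly with cephfs-journal-tool; a minimal sketch, assuming a single filesystem with rank 0 and read-only inspection on an otherwise healthy cluster:

    # sanity-check the MDS journal
    cephfs-journal-tool journal inspect

    # summarize the events recorded in it
    cephfs-journal-tool event get summary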

Re: [ceph-users] Ceph MDS WRN replayed op client.$id

2018-09-13 Thread Stefan Kooman
Hi John, Quoting John Spray (jsp...@redhat.com): > On Wed, Sep 12, 2018 at 2:59 PM Stefan Kooman wrote: > > When replaying a journal (either on MDS startup or on a standby-replay > MDS), the replayed file creation operations are being checked for > consistency with the state of the replayed cli

Re: [ceph-users] Ceph MDS WRN replayed op client.$id

2018-09-13 Thread John Spray
On Thu, Sep 13, 2018 at 11:01 AM Stefan Kooman wrote: > > Hi John, > > Quoting John Spray (jsp...@redhat.com): > > > On Wed, Sep 12, 2018 at 2:59 PM Stefan Kooman wrote: > > > > When replaying a journal (either on MDS startup or on a standby-replay > > MDS), the replayed file creation operations

Re: [ceph-users] Ceph MDS WRN replayed op client.$id

2018-09-14 Thread Stefan Kooman
Quoting John Spray (jsp...@redhat.com): > On Thu, Sep 13, 2018 at 11:01 AM Stefan Kooman wrote: > We implement locking, and it's correct that another client can't gain > the lock until the first client is evicted. Aside from speeding up > eviction by modifying the timeout, if you have another mec
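For completeness, a minimal sketch of the session and eviction handling being discussed, assuming an MDS named mds1 and the client id from the warning above (exact commands may differ slightly between releases):

    # list current client sessions, including the ids seen in the warning
    ceph daemon mds.mds1 session ls

    # evict a specific client manually instead of waiting for the timeout
    ceph tell mds.mds1 client evict id=15327973

    # on Luminous the relevant timeouts are the mds_session_timeout and
    # mds_session_autoclose options (seconds), adjustable at runtime:
    ceph tell mds.mds1 injectargs '--mds_session_autoclose 120'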

Re: [ceph-users] Ceph MDS WRN replayed op client.$id

2018-09-17 Thread Eugen Block
Hi, from your response I understand that these messages are not expected if everything is healthy. We face them every now and then, three or four times a week, but there's no real connection to specific jobs or a high load in our cluster. It's a Luminous cluster (12.2.7) with 1 active, 1
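To confirm which daemon is active and which one is in standby-replay, something like the following should do, assuming the mgr status module is enabled on Luminous:

    # show ranks, states (active / standby-replay) and standby daemons
    ceph fs status

    # or the terser summary
    ceph mds stat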

Re: [ceph-users] Ceph MDS WRN replayed op client.$id

2018-09-17 Thread John Spray
On Mon, Sep 17, 2018 at 2:49 PM Eugen Block wrote: > > Hi, > > from your response I understand that these messages are not expected > if everything is healthy. I'm not 100% sure of that. It could be that there's a path through the code that's healthy, but just wasn't anticipated at the point tha

Re: [ceph-users] Ceph MDS WRN replayed op client.$id

2018-09-19 Thread Eugen Block
Hi John, I'm not 100% sure of that. It could be that there's a path through the code that's healthy, but just wasn't anticipated at the point that warning message was added. I wish I had a more unambiguous response to give! then I guess we'll just keep ignoring these warnings from the replay

Re: [ceph-users] Ceph MDS WRN replayed op client.$id

2018-09-19 Thread John Spray
On Wed, Sep 19, 2018 at 10:37 AM Eugen Block wrote: > > Hi John, > > > I'm not 100% sure of that. It could be that there's a path through > > the code that's healthy, but just wasn't anticipated at the point that > > warning message was added. I wish I had a more unambiguous response > > to give

Re: [ceph-users] Ceph MDS WRN replayed op client.$id

2018-09-19 Thread Eugen Block
Yeah, since we haven't knowingly done anything about it, it would be a (pleasant) surprise if it was accidentally resolved in mimic ;-) Too bad ;-) Thanks for your help! Eugen Quoting John Spray: On Wed, Sep 19, 2018 at 10:37 AM Eugen Block wrote: Hi John, > I'm not 100% sure of that.