[ceph-users] Re: Newer linux kernel cephfs clients is more trouble?

2023-05-29 Thread 胡 玮文
Hi Dan, We also experienced very high network usage and memory pressure with our machine learning workload. This patch [1] (currently in testing; it may be merged in 6.5) may fix it. See [2] for more details on my experiments with this issue. [1]:

[ceph-users] Re: Newer linux kernel cephfs clients is more trouble?

2023-05-29 Thread Dan van der Ster
Hi, Sorry for poking this old thread, but does this issue still persist in the 6.3 kernels? Cheers, Dan __ Clyso GmbH | https://www.clyso.com On Wed, Dec 7, 2022 at 3:42 AM William Edwards wrote: > > > > On 7 Dec 2022 at 11:59, Stefan Kooman wrote the following >

[ceph-users] Pacific - mds How to know how many sequences still to be replayed

2023-05-29 Thread Emmanuel Jaep
Hi, I just restarted one of our mds servers. I can find some "progress" in logs as below: mds.beacon.icadmin006 Sending beacon up:replay seq 461 mds.beacon.icadmin006 received beacon reply up:replay seq 461 rtt 0 How can I tell how long the sequence is (i.e., when the node will have finished replaying)?
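One way to estimate replay progress is to compare the journal's replay position against its end position. A rough sketch, assuming admin-socket access on the MDS host (the daemon name `mds.icadmin006` and filesystem name `cephfs` are taken from the example above; adapt to your setup):

```shell
# Journal counters: compare the read/replay position against the write
# (end) position to gauge how much of the journal remains to replay.
ceph daemon mds.icadmin006 perf dump mds_log

# Alternatively, inspect the journal's overall extent directly
# (rank 0 of filesystem "cephfs" assumed here):
cephfs-journal-tool --rank=cephfs:0 journal inspect
```

The beacon `seq` number in the log is just the beacon sequence, not a replay progress indicator, so the journal positions are the more useful signal.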

[ceph-users] Re: BlueStore fragmentation woes

2023-05-29 Thread Hector Martin
On 29/05/2023 22.26, Igor Fedotov wrote: > So fragmentation score calculation was improved recently indeed, see  > https://github.com/ceph/ceph/pull/49885 > > > And yeah one can see some fragmentation in allocations for the first two > OSDs. Doesn't look that dramatic as fragmentation scores

[ceph-users] Re: BlueStore fragmentation woes

2023-05-29 Thread Igor Fedotov
Hi Stefan, given that allocation probes include every allocation (including short 4K ones), your stats do look pretty high. However, you omitted the historic probes, so it's hard to tell whether there is a negative trend in them. As I mentioned in my reply to Hector, one might want to make further

[ceph-users] Re: BlueStore fragmentation woes

2023-05-29 Thread Igor Fedotov
So fragmentation score calculation was improved recently indeed, see https://github.com/ceph/ceph/pull/49885 And yeah, one can see some fragmentation in allocations for the first two OSDs. It doesn't look as dramatic as the fragmentation scores suggest, though. Additionally you might want to collect
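For readers following along, the fragmentation score discussed in this thread can be queried per OSD via the admin socket. A sketch, assuming `osd.0` as the target and access to its admin socket:

```shell
# Fragmentation score of the "block" device's allocator, in [0..1];
# higher values mean more fragmented free space.
ceph daemon osd.0 bluestore allocator score block

# Dump the free-extent list itself to see how free space is actually
# distributed (this is the raw data behind the score).
ceph daemon osd.0 bluestore allocator dump block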

[ceph-users] Re: Recoveries without any misplaced objects?

2023-05-29 Thread Hector Martin
On 29/05/2023 20.55, Anthony D'Atri wrote: > Check the uptime for the OSDs in question I restarted all my OSDs within the past 10 days or so. Maybe OSD restarts are somehow breaking these stats? > >> On May 29, 2023, at 6:44 AM, Hector Martin wrote: >> >> Hi, >> >> I'm watching a cluster

[ceph-users] Re: Troubleshooting "N slow requests are blocked > 30 secs" on Pacific

2023-05-29 Thread Milind Changire
An MDS-wide lock is acquired before the cache dump is done. After the dump is complete, the lock is released. So, the MDS freezing temporarily during the cache dump is expected. On Fri, May 26, 2023 at 12:51 PM Emmanuel Jaep wrote: > Hi Milind, > > I finally managed to dump the cache and find
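Since the freeze during the dump is expected, it helps to run the dump deliberately and write it to a file. A minimal sketch, assuming the MDS daemon name from earlier in this thread (`mds.icadmin006`) and admin-socket access on its host:

```shell
# Dump the MDS cache to a file; expect the MDS to pause while the
# MDS-wide lock is held for the duration of the dump.
ceph daemon mds.icadmin006 dump cache /tmp/mds-cache.txt
```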

[ceph-users] Recoveries without any misplaced objects?

2023-05-29 Thread Hector Martin
Hi, I'm watching a cluster finish a bunch of backfilling, and I noticed that quite often PGs end up with zero misplaced objects, even though they are still backfilling. Right now the cluster is down to 6 backfilling PGs:

  data:
    volumes: 1/1 healthy
    pools:   6 pools, 268 pgs
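To cross-check the cluster-wide counters against per-PG state, one can list the backfilling PGs directly. A sketch of how this might be observed:

```shell
# List only the PGs currently in the backfilling state; the MISPLACED
# column shows each PG's own misplaced-object count.
ceph pg ls backfilling

# Compare with the cluster-wide misplaced figure reported here:
ceph status
```

If a PG shows as backfilling with a MISPLACED count of zero, that is the per-PG view of the same discrepancy described above.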

[ceph-users] Re: [Ceph | Quency ]The scheduled snapshots are not getting created till we create a manual backup.

2023-05-29 Thread Sake Paulusma
Hi! I noticed the same: the snapshot scheduler seemed to do nothing, but after a manager failover the creation of snapshots started to work (including the retention rules). Best regards, Sake From: Lokendra Rathour Sent: Monday, May 29, 2023 10:11:54
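The manager failover described above can be triggered explicitly. A minimal sketch (the snap_schedule module runs inside the active mgr, so failing over restarts it on the standby):

```shell
# Fail the active manager; a standby takes over and mgr modules such as
# snap_schedule are restarted on the new active mgr.
ceph mgr fail

# Confirm which mgr is active afterwards.
ceph mgr stat
```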

[ceph-users] Re: Unexpected behavior of directory mtime after being set explicitly

2023-05-29 Thread Sandip Divekar
Hi Chris / Gregory, Did you get a chance to investigate this issue? Thanks and Regards Sandip Divekar From: Sandip Divekar Sent: Thursday, May 25, 2023 11:16 PM To: Chris Palmer ; ceph-users@ceph.io Cc: d...@ceph.io; Gavin Lucas ; Joseph Fernandes ; Simon Crosland Subject: RE:

[ceph-users] Re: Creating a bucket with bucket constructor in Ceph v16.2.7

2023-05-29 Thread Robert Hish
Ramin, I think you're still going to experience what Casey described. If your intent is to completely isolate bucket metadata/data in one zonegroup from another, then I believe you need multiple independent realms, each with its own endpoint. For instance: Ceph Cluster A: Realm1/zonegroup1/zone1
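The independent-realm layout suggested above might be set up along these lines. A sketch for Cluster A only; the realm/zonegroup/zone names come from the message, while the endpoint URL and port are illustrative assumptions:

```shell
# Cluster A: its own realm, with a master zonegroup and zone.
radosgw-admin realm create --rgw-realm=realm1 --default
radosgw-admin zonegroup create --rgw-zonegroup=zonegroup1 \
    --endpoints=http://rgw-a.example.com:8080 --master --default
radosgw-admin zone create --rgw-zonegroup=zonegroup1 --rgw-zone=zone1 \
    --endpoints=http://rgw-a.example.com:8080 --master --default
radosgw-admin period update --commit
```

Cluster B would repeat the same steps with its own realm name and endpoint, so neither realm shares metadata with the other.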

[ceph-users] [Ceph | Quency ]The scheduled snapshots are not getting created till we create a manual backup.

2023-05-29 Thread Lokendra Rathour
Hi Team,

*Problem:* Create scheduled snapshots of the ceph subvolume.
*Expected Result:* The scheduled snapshots should be created at the given scheduled time.
*Actual Result:* The scheduled snapshots are not getting created till we create a manual backup.
*Description:* *Ceph
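For context, a schedule like the one described is typically configured through the snap_schedule mgr module. A sketch with an hourly schedule; the subvolume path `/volumes/_nogroup/subvol1` is an illustrative assumption, not taken from the report:

```shell
# Schedule a snapshot of the subvolume path every hour.
ceph fs snap-schedule add /volumes/_nogroup/subvol1 1h

# Keep the last 24 hourly snapshots.
ceph fs snap-schedule retention add /volumes/_nogroup/subvol1 h 24

# Verify the schedule, including when it last ran and is next due --
# useful when diagnosing a scheduler that appears to do nothing.
ceph fs snap-schedule status /volumes/_nogroup/subvol1
```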