[ceph-users] Re: Unexpected behavior of directory mtime after being set explicitly

2023-06-05 Thread Xiubo Li
I have raised a PR to fix this; please see https://github.com/ceph/ceph/pull/51931. Thanks - Xiubo On 5/24/23 23:52, Sandip Divekar wrote: Hi Team, I'm writing to bring to your attention an issue we have encountered with the "mtime" (modification time) behavior for directories in the Ceph
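A minimal sketch of how the behaviour can be checked on a CephFS mount (the mount point /mnt/cephfs and the directory name are made up for illustration): set the directory mtime explicitly, then create an entry in the directory and compare the timestamps before and after.

  mkdir /mnt/cephfs/testdir
  touch -d "2000-01-01 00:00:00" /mnt/cephfs/testdir   # set the mtime explicitly
  stat -c '%y' /mnt/cephfs/testdir                     # shows the explicitly set mtime
  touch /mnt/cephfs/testdir/newfile                    # creating an entry should advance the directory mtime
  stat -c '%y' /mnt/cephfs/testdir                     # compare with the previous value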

[ceph-users] How to show used size of specific storage class in Radosgw?

2023-06-05 Thread Huy Nguyen
Hi, I'm not able to find information about the used size of a storage class: - bucket stats - usage show - user stats ... Does Radosgw support it? Thanks
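For reference, these are the admin commands mentioned in the post (bucket and user names are placeholders); they report totals per bucket or per user, which is why a per-storage-class breakdown is being asked about:

  radosgw-admin bucket stats --bucket=mybucket   # per-bucket object count and size
  radosgw-admin usage show --uid=myuser          # per-user operation and byte counters
  radosgw-admin user stats --uid=myuser          # per-user aggregated usage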

[ceph-users] RGW: bucket notification issue with Kafka

2023-06-05 Thread Huy Nguyen
Hi, In Ceph Radosgw 15.2.17, I get this issue when trying to create a push endpoint to Kafka. Here is the push-endpoint configuration: endpoint_args = 'push-endpoint=kafka://abcef:123456@kafka.endpoint:9093=true=/etc/ssl/certs/ca.crt' attributes = {nvp[0] : nvp[1] for nvp in
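The endpoint string above looks mangled by the archive (the attribute keys between the '=' signs are missing). For comparison, a Kafka push endpoint is usually created through the SNS-compatible topic API along these lines, with broker address, credentials and CA path as placeholders and attribute names as documented for RGW bucket notifications:

  aws --endpoint-url http://rgw.example.com:8000 sns create-topic --name mytopic \
      --attributes '{"push-endpoint": "kafka://user:password@kafka.example.com:9093", "use-ssl": "true", "ca-location": "/etc/ssl/certs/ca.crt"}'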

[ceph-users] Updating the Grafana SSL certificate in Quincy

2023-06-05 Thread Thorne Lawler
Hi everyone! I have a containerised (cephadm-built) 17.2.6 cluster where I have installed a custom commercial SSL certificate under the dashboard. Before I upgraded from 17.2 to 17.2.6, I successfully installed the custom SSL cert everywhere, including Grafana, but since the upgrade I am
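In a cephadm cluster the Grafana certificate is stored under its own config keys, separately from the dashboard certificate, so re-applying it after the upgrade might look roughly like this (paths are placeholders; the cephadm monitoring documentation has the authoritative steps, and newer releases also support per-host keys):

  ceph config-key set mgr/cephadm/grafana_crt -i /path/to/grafana.crt
  ceph config-key set mgr/cephadm/grafana_key -i /path/to/grafana.key
  ceph orch reconfig grafana   # or redeploy grafana, so it picks up the new certificate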

[ceph-users] Re: Reducing pg_num from 1024 to 32 takes a long time; is there a way to shorten it?

2023-06-05 Thread Janne Johansson
If you can stop the rgws, you can make a new pool with 32 PGs and then rados cppool this one over to the new one, then rename them so the new pool has the right name (and application), and start the rgws again. On Mon, 5 Jun 2023 at 16:43, Louis Koo wrote: > > ceph version is 16.2.13; > > The pg_num is
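A sketch of that procedure for the index pool named later in the thread (RGWs stopped first; pool flags and autoscaler settings may need to be carried over as well, and the old pool can be deleted once the new one is verified):

  ceph osd pool create .rgw.buckets.index.new 32 32
  rados cppool .rgw.buckets.index .rgw.buckets.index.new
  ceph osd pool rename .rgw.buckets.index .rgw.buckets.index.old
  ceph osd pool rename .rgw.buckets.index.new .rgw.buckets.index
  ceph osd pool application enable .rgw.buckets.index rgw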

[ceph-users] Re: PGs stuck undersized and not scrubbed

2023-06-05 Thread Nicola Mori
Dear Wes, thank you for your suggestion! I restarted OSDs 57 and 79 and the recovery operations restarted as well. In the log I found that a kernel issue had occurred for both of them, but they were not in an error state. They probably got stuck because of this. Thanks again for your help, Nicola

[ceph-users] Re: PGs stuck undersized and not scrubbed

2023-06-05 Thread Wesley Dillingham
When PGs are degraded they won't scrub; further, if an OSD is involved in recovery of another PG it won't accept scrubs either, so that is the likely explanation of your not-scrubbed-in-time issue. It's of low concern. Are you sure that recovery is not progressing? I see: "7349/147534197 objects
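Whether recovery is progressing can be checked by watching the degraded object count and the recovering PGs over time, for example:

  ceph -s | grep -E 'degraded|misplaced|recovering'   # the object counts should shrink over time
  ceph pg ls recovering                               # PGs currently being recovered
  ceph pg ls undersized                               # PGs still missing replicas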

[ceph-users] Re: CephFS metadata pool grows by two orders of magnitude while trimming (?) snapshots

2023-06-05 Thread Janek Bevendorff
That said, our MON store size has also been growing slowly from 900MB to 5.4GB. But we also have a few remapped PGs right now. Not sure if that would have an influence. On 05/06/2023 17:48, Janek Bevendorff wrote: Hi Patrick, hi Dan! I got the MDS back and I think the issue is connected to

[ceph-users] PGs stuck undersized and not scrubbed

2023-06-05 Thread Nicola Mori
Dear Ceph users, after an outage and recovery of one machine I have several PGs stuck in active+recovering+undersized+degraded+remapped. Furthermore, many PGs have not been (deep-)scrubbed in time. See below for status and health details. It's been like this for two days, with no recovery I/O

[ceph-users] Re: CephFS metadata pool grows by two orders of magnitude while trimming (?) snapshots

2023-06-05 Thread Janek Bevendorff
Hi Patrick, hi Dan! I got the MDS back and I think the issue is connected to the "newly corrupt dentry" bug [1]. Even though I couldn't see any particular reason for the SIGABRT at first, I then noticed one of these awfully familiar stack traces. I rescheduled the two broken MDS ranks on

[ceph-users] Reducing pg_num from 1024 to 32 takes a long time; is there a way to shorten it?

2023-06-05 Thread Louis Koo
ceph version is 16.2.13; The pg_num is 1024, and the target_pg_num is 32; there is no data in the ".rgw.buckets.index" pool, but it spends a lot of time reducing the pg num.
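pg_num is reduced gradually by the mgr rather than in one step, which is why it can take a while even when the pool is nearly empty. Progress can be followed with something like:

  ceph osd pool ls detail | grep rgw.buckets.index   # shows pg_num and pg_num_target
  ceph osd pool get .rgw.buckets.index pg_num
  ceph config get mgr target_max_misplaced_ratio     # throttle on how much data may be misplaced at once while merging (default 0.05)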

[ceph-users] Quincy release - Swift integration with Keystone

2023-06-05 Thread fsbiz
Hi folks, My ceph cluster with Quincy and Rocky9 is up and running, but I'm having issues with Swift authenticating with Keystone. I was wondering if I've missed anything in the configuration. From the debug logs below, it appears that radosgw is still trying to authenticate with Swift instead of
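For comparison, the Keystone-related options that typically have to be set for radosgw to hand Swift authentication over to Keystone look roughly like this, ceph.conf style (all values are placeholders; option names per the radosgw Keystone integration docs):

  [client.rgw.myhost]
  rgw_keystone_url = https://keystone.example.com:5000
  rgw_keystone_api_version = 3
  rgw_keystone_admin_user = rgw
  rgw_keystone_admin_password = secret
  rgw_keystone_admin_domain = Default
  rgw_keystone_admin_project = service
  rgw_keystone_accepted_roles = member, admin
  rgw_swift_account_in_url = true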

[ceph-users] How to disable S3 ACL in radosgw

2023-06-05 Thread Rasool Almasi
Hi, Is it possible to disable ACLs in favor of bucket policy (on a bucket or globally)? The goal is to forbid users from using any bucket/object ACLs and only allow bucket policies. There seems to be no documentation in that regard which applies to Ceph RGW. Apologies if I am sending this to the wrong
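One way to get close to this without a global switch is a bucket policy that denies the ACL-modifying calls themselves; a sketch, with bucket name and user ARN as placeholders and actions as listed in the RGW bucket policy documentation. Saved as no-acl-policy.json:

  {
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Deny",
      "Principal": {"AWS": ["arn:aws:iam:::user/someuser"]},
      "Action": ["s3:PutBucketAcl", "s3:PutObjectAcl"],
      "Resource": ["arn:aws:s3:::mybucket", "arn:aws:s3:::mybucket/*"]
    }]
  }

and applied with:

  aws --endpoint-url http://rgw.example.com:8000 s3api put-bucket-policy --bucket mybucket --policy file://no-acl-policy.json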

[ceph-users] Re: CephFS metadata pool grows by two orders of magnitude while trimming (?) snapshots

2023-06-05 Thread Janek Bevendorff
I just had the problem again: MDS were constantly reporting slow metadata IO and the pool was slowly growing. Hence I restarted the MDS, and now ranks 4 and 5 don't come up again. Every time they get to the resolve stage, they crash with a SIGABRT without an error message (not even at

[ceph-users] Re: Duplicate help statements in Prometheus metrics in 16.2.13

2023-06-05 Thread Konstantin Shalygin
Hi Andreas, > On 5 Jun 2023, at 14:57, Andreas Haupt wrote: > > after the update to CEPH 16.2.13 the Prometheus exporter is wrongly > exporting multiple metric help & type lines for ceph_pg_objects_repaired: > > [mon1] /root #curl -sS http://localhost:9283/metrics > # HELP

[ceph-users] Duplicate help statements in Prometheus metrics in 16.2.13

2023-06-05 Thread Andreas Haupt
Dear all, after the update to CEPH 16.2.13 the Prometheus exporter is wrongly exporting multiple metric help & type lines for ceph_pg_objects_repaired: [mon1] /root #curl -sS http://localhost:9283/metrics # HELP ceph_pg_objects_repaired Number of objects repaired in a pool Count # TYPE
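The duplication can be confirmed by counting the HELP lines for the metric; anything greater than 1 reproduces the problem described here:

  curl -sS http://localhost:9283/metrics | grep -c '^# HELP ceph_pg_objects_repaired'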

[ceph-users] [RGW] what is log_meta and log_data config in a multisite config?

2023-06-05 Thread Gilles Mocellin
Hi Cephers, In a multisite config, with one zonegroup and 2 zones, when I look at `radosgw-admin zonegroup get`, I see these two parameters by default: "log_meta": "false", "log_data": "true", Where can I find documentation on these? I can't find any. I set log_meta to
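For inspecting and changing these flags, the usual route is editing the zonegroup JSON, since there does not appear to be a dedicated radosgw-admin flag for them (the zonegroup name is a placeholder):

  radosgw-admin zonegroup get --rgw-zonegroup=default > zonegroup.json
  # edit "log_meta" / "log_data" in zonegroup.json, then:
  radosgw-admin zonegroup set --rgw-zonegroup=default --infile zonegroup.json
  radosgw-admin period update --commit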

[ceph-users] Re: 16.2.13: ERROR:ceph-crash:directory /var/lib/ceph/crash/posted does not exist; please create

2023-06-05 Thread Zakhar Kirpichenko
Any other thoughts on this, please? Should I file a bug report? /Z On Fri, 2 Jun 2023 at 06:11, Zakhar Kirpichenko wrote: > Thanks, Josh. The cluster is managed by cephadm. > > On Thu, 1 Jun 2023, 23:07 Josh Baergen wrote: >> Hi Zakhar, >> >> I'm going to guess that it's a permissions
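If the missing directory really is the only problem, one possible workaround is to create it by hand with the same ownership as its parent; with cephadm the crash directory lives under the cluster fsid on the host (shown here as a placeholder):

  mkdir -p /var/lib/ceph/<fsid>/crash/posted
  chown --reference=/var/lib/ceph/<fsid>/crash /var/lib/ceph/<fsid>/crash/posted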

[ceph-users] Re: Unexpected behavior of directory mtime after being set explicitly

2023-06-05 Thread Xiubo Li
Yeah, it's a bug. I have raised a tracker issue to follow this: https://tracker.ceph.com/issues/61584 And I have found the root cause; for more detail please see my comments on the above tracker. I am still going through the code to find a way to fix it. Thanks - Xiubo On 6/5/23 13:42,