[ceph-users] Re: quincy v17.2.7 QE Validation status

2023-10-17 Thread Nizamudeen A
dashboard approved! On Tue, Oct 17, 2023 at 12:22 AM Yuri Weinstein wrote: > Details of this release are summarized here: > > https://tracker.ceph.com/issues/63219#note-2 > Release Notes - TBD > > Issue https://tracker.ceph.com/issues/63192 appears to be failing several > runs. > Should it be

[ceph-users] Re: quincy v17.2.7 QE Validation status

2023-10-17 Thread Venky Shankar
On Tue, Oct 17, 2023 at 12:23 AM Yuri Weinstein wrote: > > Details of this release are summarized here: > > https://tracker.ceph.com/issues/63219#note-2 > Release Notes - TBD > > Issue https://tracker.ceph.com/issues/63192 appears to be failing several > runs. > Should it be fixed for this

[ceph-users] Re: Ceph 16.2.x mon compactions, disk writes

2023-10-17 Thread Zakhar Kirpichenko
Many thanks for this, Eugen! I very much appreciate yours and Mykola's efforts and insight! Another thing I noticed was a reduction of RocksDB store after the reduction of the total PG number by 30%, from 590-600 MB: 65M 3675511.sst 65M 3675512.sst 65M 3675513.sst 65M 3675514.sst
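For anyone reproducing this comparison, the store size can be tracked directly on the mon host; a rough sketch, assuming a default (non-cephadm) path layout; under cephadm the store lives below /var/lib/ceph/<fsid>/mon.<name>/ instead:

  # total size of the mon RocksDB store, before/after the PG reduction
  du -sh /var/lib/ceph/mon/ceph-$(hostname -s)/store.db
  # per-SST breakdown, as quoted above
  ls -lh /var/lib/ceph/mon/ceph-$(hostname -s)/store.db/*.sst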

[ceph-users] Re: Nautilus - Octopus upgrade - more questions

2023-10-17 Thread Tyler Stachecki
On Tue, Oct 17, 2023, 8:19 PM Dave Hall wrote: > Hello, > > I have a Nautilus cluster built using Ceph packages from Debian 10 > Backports, deployed with Ceph-Ansible. > > I see that Debian does not offer Ceph 15/Octopus packages. However, > download.ceph.com does offer such packages. > >

[ceph-users] Re: Fixing BlueFS spillover (pacific 16.2.14)

2023-10-17 Thread Chris Dunlop
Hi Igor, Thanks for the suggestions. You may have already seen my followup message where the solution was to use "ceph-bluestore-tool bluefs-bdev-migrate" to get the lingering 128KiB of data moved from the slow to the fast device. I wonder if your suggested "ceph-volume lvm migrate" would do
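For reference, a rough sketch of such a bluefs-bdev-migrate invocation (not the exact command from the follow-up message; the OSD id and paths are placeholders, the OSD must be stopped first, and a standard block/block.db layout is assumed):

  systemctl stop ceph-osd@<id>
  ceph-bluestore-tool bluefs-bdev-migrate \
      --path /var/lib/ceph/osd/ceph-<id> \
      --devs-source /var/lib/ceph/osd/ceph-<id>/block \
      --dev-target /var/lib/ceph/osd/ceph-<id>/block.db
  systemctl start ceph-osd@<id>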

[ceph-users] Nautilus - Octopus upgrade - more questions

2023-10-17 Thread Dave Hall
Hello, I have a Nautilus cluster built using Ceph packages from Debian 10 Backports, deployed with Ceph-Ansible. I see that Debian does not offer Ceph 15/Octopus packages. However, download.ceph.com does offer such packages. Question: Is it a safe upgrade to install the download.ceph.com
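For what it's worth, using the upstream packages usually just means pointing apt at download.ceph.com; a hedged sketch for Debian 10 (buster), untested here, and the usual upgrade order (mons, then mgrs, then OSDs) still applies:

  wget -qO- https://download.ceph.com/keys/release.asc | sudo apt-key add -
  echo "deb https://download.ceph.com/debian-octopus/ buster main" | sudo tee /etc/apt/sources.list.d/ceph.list
  sudo apt update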

[ceph-users] How to trigger scrubbing in Ceph on-demand ?

2023-10-17 Thread Jayjeet Chakraborty
Hi all, I am trying to trigger deep scrubbing in Ceph reef (18.2.0) on demand on a set of files that I randomly write to CephFS. I have tried both invoking deep-scrub on CephFS using ceph tell and just deep scrubbing a particular PG. Unfortunately, none of that seems to be working for me. I am
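For reference, one way to target specific data is to map an object to its PG and deep-scrub that PG directly; a minimal sketch, assuming the pool and object names are known:

  ceph osd map <pool> <object-name>        # prints the pg id, e.g. 3.1f
  ceph pg deep-scrub <pgid>
  ceph pg <pgid> query | grep -i scrub     # check last_deep_scrub_stamp afterwards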

[ceph-users] NFS - HA and Ingress completion note?

2023-10-17 Thread andreas
NFS - HA and Ingress: [ https://docs.ceph.com/en/latest/mgr/nfs/#ingress ] Referring to Note#2, is NFS high-availability functionality considered complete (and stable)?
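For context, the ingress-backed deployment that note refers to is created along these lines (cluster name, placement and virtual IP are placeholders; see the linked page for the authoritative syntax):

  ceph nfs cluster create mynfs "2 host1,host2" --ingress --virtual_ip 10.0.0.10/24
  ceph nfs cluster info mynfs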

[ceph-users] Re: quincy v17.2.7 QE Validation status

2023-10-17 Thread Radoslaw Zarzynski
+1. On Tue, Oct 17, 2023 at 1:18 AM Laura Flores wrote: > On behalf of @Radoslaw Zarzynski , rados approved. > > Summary of known failures here: > https://tracker.ceph.com/projects/rados/wiki/QUINCY#Quincy-v1727-validation > > On Mon, Oct 16, 2023 at 3:17 PM Ilya Dryomov wrote: > >> On Mon,

[ceph-users] Re: Ceph 16.2.x mon compactions, disk writes

2023-10-17 Thread Eugen Block
Hi Zakhar, I took a closer look into what the MONs really do (again with Mykola's help) and why manual compaction is triggered so frequently. With debug_paxos=20 I noticed that paxosservice and paxos triggered manual compactions. So I played with these values: paxos_service_trim_max =
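For anyone wanting to experiment with the same knobs, they can be changed at runtime through the centralized config; the values below are placeholders, not recommendations:

  ceph config set mon debug_paxos 20
  ceph config set mon paxos_service_trim_min 500
  ceph config set mon paxos_service_trim_max 1000
  ceph config set mon debug_paxos 1/5      # revert the debug level when done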

[ceph-users] Re: quincy v17.2.7 QE Validation status

2023-10-17 Thread Prashant Dhange
Hi Yuri, > Issue https://tracker.ceph.com/issues/63192 appears to be failing several > runs. > Should it be fixed for this release? These failures are related to Quincy PR#53042. I am reviewing the logs now. The smoke tests need fixing as we are yet to

[ceph-users] Re: Dashboard and Object Gateway

2023-10-17 Thread Tim Holloway
SOLVED! OK, there was some last-minute flailing around so I can't quite report a cookbook recipe, but it goes something like this: 1. ceph config set client.mousetech rgw_admin_entry admin Note: the standard example is for client.rgw, but I named my RGW "mousetech" to make it distinguishable

[ceph-users] Re: How do you handle large Ceph object storage cluster?

2023-10-17 Thread Wesley Dillingham
Well you are probably in the top 1% of cluster size. I would guess that trying to cut your existing cluster in half while not encountering any downtime as you shuffle existing buckets between old cluster and new cluster would be harder than redirecting all new buckets (or users) to a second

[ceph-users] Re: quincy v17.2.7 QE Validation status

2023-10-17 Thread Adam King
orch approved On Mon, Oct 16, 2023 at 2:52 PM Yuri Weinstein wrote: > Details of this release are summarized here: > > https://tracker.ceph.com/issues/63219#note-2 > Release Notes - TBD > > Issue https://tracker.ceph.com/issues/63192 appears to be failing several > runs. > Should it be fixed

[ceph-users] Re: Dashboard and Object Gateway

2023-10-17 Thread Casey Bodley
you're right that many docs still mention ceph.conf, even though the mimic release added a centralized config database to ceph-mon. you can read about the mon-based 'ceph config' commands in https://docs.ceph.com/en/reef/rados/configuration/ceph-conf/#commands to modify rgw_admin_entry for all radosgw
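Concretely, that would look something like the following; using the client.rgw section here is an assumption that should cover all radosgw daemons, and a more specific section (e.g. client.rgw.<name>) can be used instead:

  ceph config set client.rgw rgw_admin_entry admin
  ceph config get client.rgw rgw_admin_entry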

[ceph-users] Re: [EXTERNAL] [Pacific] ceph orch device ls do not returns any HDD

2023-10-17 Thread Johan
Which OS are you running? What is the outcome of these two tests? cephadm --image quay.io/ceph/ceph:v16.2.10-20220920 ceph-volume inventory cephadm --image quay.io/ceph/ceph:v16.2.11-20230125 ceph-volume inventory /Johan On 2023-10-16 at 08:25, 544463...@qq.com wrote: I encountered a

[ceph-users] Re: Dashboard and Object Gateway

2023-10-17 Thread Tim Holloway
Thanks, Casey! I'm not really certain where to set this option. While Ceph is very well-behaved once you know what to do, the nature of Internet-based documentation (and occasionally incompletely-updated manuals) is that stale information is often given equal weight to the current information.

[ceph-users] Re: quincy v17.2.7 QE Validation status

2023-10-17 Thread Casey Bodley
On Mon, Oct 16, 2023 at 2:52 PM Yuri Weinstein wrote: > > Details of this release are summarized here: > > https://tracker.ceph.com/issues/63219#note-2 > Release Notes - TBD > > Issue https://tracker.ceph.com/issues/63192 appears to be failing several > runs. > Should it be fixed for this

[ceph-users] Re: Unable to delete rbd images

2023-10-17 Thread Eugen Block
Hi, I would check the trash to see if the image has been moved there. If it is, try to restore it to check its watchers. If you're able to restore it, try blacklisting the specific client session, so something like this: # check trash rbd -p iscsi-images trash ls --all # try restoring
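Spelled out, that sequence could look roughly like this (image id, image name and client address are placeholders; newer releases use "blocklist" rather than "blacklist"):

  rbd -p iscsi-images trash ls --all
  rbd -p iscsi-images trash restore <image-id>
  rbd status iscsi-images/<image-name>        # lists current watchers
  ceph osd blacklist add <addr:port/nonce>
  rbd rm iscsi-images/<image-name>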

[ceph-users] Re: Dashboard and Object Gateway

2023-10-17 Thread Casey Bodley
hey Tim, your changes to rgw_admin_entry probably aren't taking effect on the running radosgws. you'd need to restart them in order to set up the new route there also seems to be some confusion about the need for a bucket named 'default'. radosgw just routes requests with paths starting with
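i.e. the gateways have to be restarted after changing rgw_admin_entry; in a cephadm deployment that could look roughly like this (service name is a placeholder; package-based installs would restart the ceph-radosgw systemd units instead):

  ceph orch ls rgw                       # find the rgw service name
  ceph orch restart rgw.<service-name>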

[ceph-users] Re: Ceph 16.2.14: how to set mon_rocksdb_options to enable RocksDB compression?

2023-10-17 Thread Zakhar Kirpichenko
Thanks for this, Eugen. I think I'll stick to adding the option to the config file, it seems like a safer way to do it. /Z On Tue, 17 Oct 2023, 15:21 Eugen Block, wrote: > Hi, > > I managed to get the compression setting into the MONs by using the > extra-entrypoint-arguments [1]: > > ceph01:~

[ceph-users] Re: Ceph 16.2.14: how to set mon_rocksdb_options to enable RocksDB compression?

2023-10-17 Thread Eugen Block
Hi, I managed to get the compression setting into the MONs by using the extra-entrypoint-arguments [1]: ceph01:~ # cat mon-specs.yaml service_type: mon placement: hosts: - ceph01 - ceph02 - ceph03 extra_entrypoint_args: -
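For readers, a filled-in sketch of such a spec (the actual argument is truncated above; the rocksdb option string below is an assumption for illustration, not the value from the original mail):

  service_type: mon
  placement:
    hosts:
    - ceph01
    - ceph02
    - ceph03
  extra_entrypoint_args:
  - "--mon-rocksdb-options=write_buffer_size=33554432,compression=kLZ4Compression"

applied with: ceph orch apply -i mon-specs.yaml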

[ceph-users] Re: Dashboard and Object Gateway

2023-10-17 Thread Tim Holloway
Thank you, Ondřej! Yes, I have the admin entry set to "default". It's just the latest result of failed attempts ("admin" didn't work for me either). I did say there were some horrors in there! If I got your sample URL pattern right, the results of a GET on "http://x.y.z/default" return 404,

[ceph-users] RGW: How to trigger to recalculate the bucket stats?

2023-10-17 Thread Huy Nguyen
Hi, For some reason, I need to recalculate the bucket stats. Is this possible? Thanks
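If the goal is to rebuild the per-bucket index stats, radosgw-admin can do that; a hedged sketch (bucket name is a placeholder, and --check-objects can take a while on large buckets):

  radosgw-admin bucket stats --bucket=<name>
  radosgw-admin bucket check --bucket=<name> --fix --check-objects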

[ceph-users] Re: stuck MDS warning: Client HOST failing to respond to cache pressure

2023-10-17 Thread Frank Schilder
Hi Stefan, probably. It's 2 compute nodes and there are jobs running. Our epilogue script will drop the caches, at which point I indeed expect the warning to disappear. We have no time limit on these nodes though, so this can be a while. I was hoping there was an alternative to that, say, a
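For reference, the cache drop the epilogue performs is essentially the standard kernel knob, which should also make the CephFS kernel client give up unused caps (run as root on the client node):

  sync
  echo 2 > /proc/sys/vm/drop_caches   # drop dentries/inodes; 3 also drops the page cache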

[ceph-users] Re: [EXTERNAL] [Pacific] ceph orch device ls do not returns any HDD

2023-10-17 Thread Johan
The problem appears in v16.2.11-20230125. I have no insight into the different commits. /Johan On 2023-10-16 at 08:25, 544463...@qq.com wrote: I encountered a similar problem on ceph 17.2.5, could you find which commit caused it?

[ceph-users] Re: stuck MDS warning: Client HOST failing to respond to cache pressure

2023-10-17 Thread Stefan Kooman
On 17-10-2023 09:22, Frank Schilder wrote: Hi all, I'm affected by a stuck MDS warning for 2 clients: "failing to respond to cache pressure". This is a false alarm as no MDS is under any cache pressure. The warning is stuck already for a couple of days. I found some old threads about cases

[ceph-users] stuck MDS warning: Client HOST failing to respond to cache pressure

2023-10-17 Thread Frank Schilder
Hi all, I'm affected by a stuck MDS warning for 2 clients: "failing to respond to cache pressure". This is a false alarm as no MDS is under any cache pressure. The warning is stuck already for a couple of days. I found some old threads about cases where the MDS does not update flags/triggers
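For anyone hitting the same warning, the offending client sessions can be identified like this (the MDS daemon name is a placeholder):

  ceph health detail                        # names the client IDs behind the warning
  ceph tell mds.<daemon-name> session ls    # maps client IDs to hosts and mounts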

[ceph-users] Re: Dashboard and Object Gateway

2023-10-17 Thread Ondřej Kukla
Hello Tim, I was also struggling with this when I was configuring the object gateway for the first time. There are a few things that you should check to make sure the dashboard will work. 1. You need to have the admin api enabled on all rgws with the rgw_enable_apis option. (As far as I know
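In other words, 'admin' has to appear in the enabled API list on every gateway. A hedged example of checking and setting it (the value shown is an illustration; keep whatever APIs your deployment already needs):

  ceph config get client.rgw rgw_enable_apis
  ceph config set client.rgw rgw_enable_apis "s3, s3website, swift, swift_auth, admin"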