[ceph-users] Ceph iSCSI GW is too slow when compared with Raw RBD performance

2023-06-22 Thread Work Ceph
Hello guys, we have a Ceph cluster that runs just fine with Ceph Octopus; we use RBD for some workloads, RadosGW (via S3) for others, and iSCSI for some Windows clients. We started noticing some unexpected performance issues with iSCSI. I mean, an SSD pool is reaching 100 MB/s of write speed for an
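A possible first step here (not from the thread itself) is to compare raw RBD throughput against what arrives through the iSCSI gateway for the same image; a minimal sketch, where ssd-pool/test-image is a placeholder:

    # raw RBD write throughput, measured directly against the cluster
    rbd bench --io-type write --io-size 4M --io-total 1G ssd-pool/test-image
    # then run a comparable write test against the mapped iSCSI LUN from the
    # Windows client (e.g. with diskspd) and compare the two numbers

If the raw RBD number is far higher, the bottleneck is likely in the gateway path (tcmu-runner, network) rather than in the pool itself.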

[ceph-users] changing crush map on the fly?

2023-06-22 Thread Angelo Höngens
Hey, Just to confirm my understanding: If I set up a 3-OSD cluster really fast with an EC42 pool, and I set the crush map to OSD failure domain, the data will be distributed among the OSDs, and of course there won't be protection against host failure. And yes, I know that's a bad idea, but I
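For reference, the failure domain of an EC pool lives in its CRUSH rule, so it can be switched later without recreating the pool; a minimal sketch, where the profile, pool, and rule names are placeholders:

    # initial setup with an OSD-level failure domain
    ceph osd erasure-code-profile set ec42-osd k=4 m=2 crush-failure-domain=osd
    ceph osd pool create ecpool erasure ec42-osd
    # later, with enough hosts: create a host-level rule and point the pool at it
    ceph osd erasure-code-profile set ec42-host k=4 m=2 crush-failure-domain=host
    ceph osd crush rule create-erasure ec42-host-rule ec42-host
    ceph osd pool set ecpool crush_rule ec42-host-rule

Switching the rule triggers a rebalance, and k=4 m=2 needs at least six hosts once the failure domain is host.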

[ceph-users] Re: Removing the encryption: (essentially decrypt) encrypted RGW objects

2023-06-22 Thread Casey Bodley
Hi Jayanth, I don't know that we have a supported way to do this. The S3-compatible method would be to copy the object onto itself without requesting server-side encryption. However, this wouldn't prevent default encryption if rgw_crypt_default_encryption_key was still enabled. Furthermore, rgw
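As an illustration of the copy-onto-itself approach Casey describes, using the AWS CLI against RGW (endpoint, bucket, and key are placeholders):

    aws --endpoint-url http://rgw.example.com s3api copy-object \
        --bucket mybucket --key mykey \
        --copy-source mybucket/mykey \
        --metadata-directive REPLACE
    # no --server-side-encryption is requested here, but if
    # rgw_crypt_default_encryption_key is still set, RGW will re-encrypt anyway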

[ceph-users] ceph orch host label rm : does not update label removal

2023-06-22 Thread Adiga, Anantha
Hi, not sure if the labels are really removed or the update is not working?
root@fl31ca104ja0201:/# ceph orch host ls
HOST             ADDR           LABELS                                            STATUS
fl31ca104ja0201  XX.XX.XXX.139  ceph clients mdss mgrs monitoring mons osds rgws
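One way to cross-check what cephadm actually has stored for the host, assuming the host and label names from the output above:

    ceph orch host label rm fl31ca104ja0201 rgws   # remove a single label
    ceph orch host ls --format json-pretty         # inspect the stored host spec, not just the table view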

[ceph-users] Re: Grafana service fails to start due to bad directory name after Quincy upgrade

2023-06-22 Thread Adiga, Anantha
Hi Eugen, Thank you so much for the details. Here is the update (comments in-line >>): Regards, Anantha -Original Message- From: Eugen Block Sent: Monday, June 19, 2023 5:27 AM To: ceph-users@ceph.io Subject: [ceph-users] Re: Grafana service fails to start due to bad directory name

[ceph-users] Re: How does a "ceph orch restart SERVICE" affect availability?

2023-06-22 Thread Mikael Öhman
Thank you Eugen! After finding what the target name actually was, it all worked like a charm. Best regards, Mikael On Wed, Jun 21, 2023 at 11:05 AM Eugen Block wrote: > Hi, > > > Will that try to be smart and just restart a few at a time to keep things > > up and available? Or will it just
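For anyone else landing on this thread, the two restart paths being compared are roughly the following (service name and fsid are placeholders):

    ceph orch ls                           # list service names as the orchestrator knows them
    ceph orch restart rgw.myrgw            # orchestrator restarts the daemons of that service
    # or, on a single host, the systemd target that groups the cluster's units:
    systemctl list-units 'ceph*'           # find the target name on the host
    systemctl restart ceph-<fsid>.target   # restarts every daemon of that cluster on this host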

[ceph-users] CephFS snapshots: impact of moving data

2023-06-22 Thread Kuhring, Mathias
Dear Ceph community, We want to restructure (i.e. move around) a lot of data (hundreds of terabytes) in our CephFS. And now I was wondering what happens within snapshots when I move data around within a snapshotted folder. I.e., do I need to account for a lot of increased storage usage due to older
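For context (not from the thread): a CephFS snapshot is just a mkdir inside the hidden .snap directory, and the recursive size of a directory can be read from a virtual xattr; /cephfs/data is a placeholder mount path:

    mkdir /cephfs/data/.snap/before-restructure   # snapshot the subtree before moving data
    getfattr -n ceph.dir.rbytes /cephfs/data      # recursive size of the live data under the directory

Whether moved files end up counted twice depends on how data referenced by older snapshots is retained, which is exactly the question of the thread.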

[ceph-users] Re: 1 PG stucked in "active+undersized+degraded for long time

2023-06-22 Thread Damian
Hi Siddhit, You need more OSDs. Please read: https://docs.ceph.com/en/quincy/rados/troubleshooting/troubleshooting-pg/#erasure-coded-pgs-are-not-active-clean Greetings, Damian On 2023-06-20 15:53, siddhit.ren...@nxtgen.com wrote: Hello All, Ceph version: 14.2.5-382-g8881d33957
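To check whether the pool simply needs more failure domains than the cluster has, compare the EC profile with the OSD tree (profile name and PG id below are placeholders):

    ceph osd erasure-code-profile get myprofile   # shows k, m and crush-failure-domain
    ceph osd tree                                 # count hosts/OSDs against k+m
    ceph pg 1.2f query                            # see which shards are missing and why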

[ceph-users] Re: Ceph Pacific bluefs enospc bug with newly created OSDs

2023-06-22 Thread Igor Fedotov
Quincy brings support for a 4K allocation unit but doesn't start using it immediately. Instead, it falls back to 4K when BlueFS is unable to allocate more space with the default size. And even this mode isn't permanent; BlueFS attempts to bring larger units back from time to time. Thanks, Igor
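For anyone wanting to check what their OSDs are configured with, the allocation unit discussed here is governed by bluefs_shared_alloc_size (64K by default); a quick look, assuming a cephadm-managed cluster and osd.0 as a placeholder daemon:

    ceph config get osd bluefs_shared_alloc_size        # configured value for OSDs
    ceph config show osd.0 bluefs_shared_alloc_size     # effective value on a running OSD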

[ceph-users] ceph quincy repo update to debian bookworm...?

2023-06-22 Thread Christian Peters
Hi ceph users/maintainers, I installed Ceph Quincy on Debian bullseye as a ceph client and now want to update to bookworm. I see that at the moment only bullseye is supported. https://download.ceph.com/debian-quincy/dists/bullseye/ Will there be an update of deb
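For reference, the repository line currently in use would look like this; the release codename is the part that would need a bookworm counterpart once/if it is published:

    # /etc/apt/sources.list.d/ceph.list
    deb https://download.ceph.com/debian-quincy/ bullseye main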

[ceph-users] Re: 1 PG stucked in "active+undersized+degraded for long time

2023-06-22 Thread Eugen Block
Hi, have you tried restarting the primary OSD (currently 343)? It looks like this PG is part of an EC pool; are there enough hosts available, assuming your failure domain is host? I assume that Ceph isn't able to recreate the shard on a different OSD. You could share your osd tree and
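A minimal sketch of the steps Eugen suggests, with osd.343 taken from the thread and the PG id as a placeholder:

    ceph pg 17.2a query                # check which OSDs the PG wants and what is blocking recovery
    ceph orch daemon restart osd.343   # restart the current primary
    ceph osd tree                      # share this to verify the failure domain has enough hosts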

[ceph-users] Re: OSDs cannot join cluster anymore

2023-06-22 Thread Stefan Kooman
On 6/21/23 11:20, Malte Stroem wrote: Hello Eugen, recovery and rebalancing were finished, however now all PGs show missing OSDs. Everything looks like the PGs are missing OSDs although it finished correctly. As if we shut down the servers immediately. But we removed the nodes the way it