[ceph-users] why does rgw generate large quantities of orphan objects?

2022-10-11 Thread 郑亮
Hi all, Description of problem: [RGW] Bucket/object deletion is causing large quantities of orphan rados objects. The cluster was running a cosbench workload; we then removed part of the data by deleting objects from the cosbench client, and then deleted all the buckets with the help of `s3cmd rb
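A rough sketch of how one might look for such orphans after the deletions (the pool name default.rgw.buckets.data is an assumption for a default RGW setup, and the tool output still needs manual review):
    # list RADOS objects in the data pool that no bucket index appears to reference
    rgw-orphan-list default.rgw.buckets.data
    # check whether RGW garbage collection still has pending work from the bucket removals
    radosgw-admin gc list --include-all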

[ceph-users] Re: Updating Git Submodules -- a documentation question

2022-10-11 Thread Brad Hubbard
For untracked files (e.g. src/pybind/cephfs/cephfs.c) all you need is 'git clean -fdx', which you ran last in this case. Just about everything can be solved by a combination of these commands: git submodule update --init --recursive; git clean -fdx; git submodule foreach git clean -fdx. If you have
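Written out one per line, with a note on what each step does, the combination referred to above is:
    # sync all submodules to the commits the superproject expects
    git submodule update --init --recursive
    # remove untracked and ignored files from the top-level working tree
    git clean -fdx
    # do the same inside every submodule
    git submodule foreach git clean -fdx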

[ceph-users] Re: Updating Git Submodules -- a documentation question

2022-10-11 Thread John Zachary Dover
The following console output, which is far too long to include in tutorial-style documentation that people are expected to read, shows the sequence of commands necessary to diagnose and repair submodules that have fallen out of sync with the submodules in the upstream repository. In this example,
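The console output itself is omitted here; as a condensed sketch (these particular commands are an assumption, not a quote from the omitted transcript), the diagnose-and-repair cycle looks roughly like:
    # diagnose: show which submodules are out of sync with the superproject
    git status
    git submodule status
    # repair: reset them to the recorded commits
    git submodule update --init --recursive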

[ceph-users] Re: encrypt OSDs after creation

2022-10-11 Thread Alexander E. Patrakov
Wed, 12 Oct 2022 at 00:32, Ali Akil : > > Hello folks, > > a couple of months ago I created a Quincy ceph cluster with cephadm. I > didn't encrypt the OSDs at that time. > What would be the process to encrypt these OSDs afterwards? > The documentation states only adding `encrypted: true` to the
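A minimal sketch of the recreate-one-OSD-at-a-time approach with the cephadm orchestrator (OSD id 0 and the spec file name are placeholders; this is one common route, not necessarily the exact recommendation in the reply):
    # drain the OSD, keep its id reserved for the replacement, and wipe the device
    ceph orch osd rm 0 --replace --zap
    # once the device is free, re-apply an OSD service spec that sets 'encrypted: true'
    ceph orch apply osd -i osd-spec.yaml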

[ceph-users] Re: How to force PG merging in one step?

2022-10-11 Thread Frank Schilder
Hi Eugen, thanks, that was a great hint! I have a strong déjà vu feeling; we discussed this before with increasing pg_num, didn't we? I just set it to 1 and it did exactly what I wanted. It's the same number of PGs backfilling, but pgp_num=1024, so while the rebalancing load is the same, I got
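A sketch of the two commands involved (pool name and target PG count are placeholders):
    # allow up to 100% misplaced objects so pgp_num is not stepped down in small increments
    ceph config set mgr target_max_misplaced_ratio 1
    # then reduce the PG count; since Nautilus, pgp_num follows automatically
    ceph osd pool set mypool pg_num 1024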

[ceph-users] Re: crush hierarchy backwards and upmaps ...

2022-10-11 Thread Dan van der Ster
Hi Chris, Just curious, does this rule make sense and help with the multi-level crush map issue? (Maybe it also results in zero movement, or at least less than the alternative you proposed?)
step choose indep 4 type rack
step chooseleaf indep 2 type chassis
Cheers, Dan On Tue, Oct
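Embedded in a full rule, that suggestion might look roughly like the following (rule name, id, and the 'default' root are assumptions; only the two step lines come from the mail):
    rule rack_chassis_ec {
        id 2
        type erasure
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default
        step choose indep 4 type rack
        step chooseleaf indep 2 type chassis
        step emit
    }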

[ceph-users] Re: crush hierarchy backwards and upmaps ...

2022-10-11 Thread Christopher Durham
Dan, Thank you. I did what you said regarding --test-map-pgs-dump and it wants to move 3 OSDs in every PG. Yuk. So before I do that, I tried this rule, after changing all my 'pod' bucket definitions to 'chassis', and compiling and injecting the new crushmap into an osdmap: rule mypoolname {
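For reference, a sketch of the compile/inject/test round trip being described (file names and the exact invocations are assumptions based on the flags mentioned):
    # grab the current osdmap and decompile its crush map
    ceph osd getmap -o osd.map
    osdmaptool osd.map --export-crush crush.bin
    crushtool -d crush.bin -o crush.txt
    # edit crush.txt (e.g. rename the 'pod' buckets to 'chassis'), then recompile
    crushtool -c crush.txt -o crush.new
    # inject into the local osdmap copy and see which PG mappings would change
    osdmaptool osd.map --import-crush crush.new
    osdmaptool osd.map --test-map-pgs-dump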

[ceph-users] encrypt OSDs after creation

2022-10-11 Thread Ali Akil
Hello folks, a couple of months ago I created a Quincy ceph cluster with cephadm. I didn't encrypt the OSDs at that time. What would be the process to encrypt these OSDs afterwards? The documentation only mentions adding `encrypted: true` to the OSD manifest, which works only at creation time.
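For context, a cephadm OSD spec carrying that flag might look roughly like this (service_id, placement, and device selection are placeholder values):
    service_type: osd
    service_id: encrypted_osds
    placement:
      host_pattern: '*'
    spec:
      data_devices:
        all: true
      encrypted: true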

[ceph-users] Re: Inherited CEPH nightmare

2022-10-11 Thread Tino Todino
Hi Janne, I've changed some elements of the config now and the results are much better, but still quite poor relative to what I would consider normal SSD performance. The osd_memory_target is now set to 12 GB for 3 of the 4 hosts (each of these hosts has 1.5 TB of RAM, so I can allocate loads if
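A sketch of how such a per-host memory target can be set and verified (the hostname, OSD id, and the byte value are placeholders; 12 GiB = 12884901888 bytes):
    # set a host-specific memory target (value in bytes)
    ceph config set osd/host:node1 osd_memory_target 12884901888
    # check what an individual OSD resolves it to
    ceph config get osd.0 osd_memory_target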

[ceph-users] Autoscaler stopped working after upgrade Octopus -> Pacific

2022-10-11 Thread Andreas Haupt
Dear all, we just upgraded our cluster from Octopus to Pacific (16.2.10). This introduced an error in the autoscaler:
2022-10-11T14:47:40.421+0200 7f3ec2d03700 0 [pg_autoscaler ERROR root] pool 17 has overlapping roots: {-4, -1}
2022-10-11T14:47:40.423+0200 7f3ec2d03700 0 [pg_autoscaler ERROR root]
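A commonly suggested direction for "overlapping roots" (a sketch only, not necessarily the fix for this cluster) is to make every pool use a crush rule bound to a single device class; rule and pool names below are placeholders:
    # see which crush rule each pool uses and what those rules take as their root
    ceph osd pool ls detail
    ceph osd crush rule dump
    # create a device-class-specific replicated rule and point the pool at it
    ceph osd crush rule create-replicated replicated_hdd default host hdd
    ceph osd pool set <pool> crush_rule replicated_hdd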

[ceph-users] Re: How to force PG merging in one step?

2022-10-11 Thread Eugen Block
Hi Frank, I don't think it's the autoscaler interfering here but the default 5% target_max_misplaced_ratio. I haven't tested the impact of increasing that to a much higher value, so be careful. Regards, Eugen Quoting Frank Schilder: Hi all, I need to reduce the number of PGs in a
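The throttle being referred to can be inspected, and later restored, like this (mgr is the section holding this option; 0.05 is the documented default):
    ceph config get mgr target_max_misplaced_ratio
    # restore the default after the merge has been kicked off
    ceph config set mgr target_max_misplaced_ratio 0.05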

[ceph-users] Re: Invalid crush class

2022-10-11 Thread Eugen Block
The only way I could reproduce this was by removing the existing class from an OSD and setting it:
---snip---
pacific:~ # ceph osd crush rm-device-class 0
done removing class of osd(s): 0
pacific:~ # ceph osd crush set-device-class jbod.hdd 0
set osd(s) 0 to class 'jbod.hdd'
pacific:~ # ceph
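For completeness, a sketch of how to inspect and undo such a class change (OSD id 0 and the class names follow the snippet above; 'hdd' as the restore target is an assumption):
    # list known device classes and the OSDs carrying the odd one
    ceph osd crush class ls
    ceph osd crush class ls-osd jbod.hdd
    # move the OSD back to a standard class
    ceph osd crush rm-device-class 0
    ceph osd crush set-device-class hdd 0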