[ceph-users] Re: Does CEPH limit the pgp_num which it will increase in one go?

2022-02-15 Thread Dan van der Ster
Hi Maarten,

With `ceph osd pool ls detail` does it have pgp_num_target set to 2248? If so, yes, it's moving gradually to that number.

Cheers, Dan

> On 02/15/2022 8:55 AM Maarten van Ingen wrote:
> Hi,
> After enabling the balancer (and set to upmap) on our environment it’s time to g
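
For reference, the split progress can be read directly from the pool listing (pool name "foo" is just a placeholder and the output is trimmed):

    # ceph osd pool ls detail | grep foo
    pool 11 'foo' replicated size 3 ... pg_num 4096 pgp_num 2108 pgp_num_target 2248 ...

While pgp_num is still below pgp_num_target, the cluster keeps nudging it upwards.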

[ceph-users] Re: Does CEPH limit the pgp_num which it will increase in one go?

2022-02-15 Thread Maarten van Ingen
Hi Dan,

Thanks for your (very) prompt response.

pg_num 4096 pgp_num 2108 pgp_num_target 2248

Also I see this:

# ceph balancer eval
current cluster score 0.068634 (lower is better)

# ceph balancer status
{
    "last_optimize_duration": "0:00:00.025029",
    "plans": [],
    "mode": "upmap",

[ceph-users] Re: Does CEPH limit the pgp_num which it will increase in one go?

2022-02-15 Thread Dan van der Ster
Hi,

You're confused: the `ceph balancer` is not related to pg splitting. The balancer is used to move PGs around to achieve a uniform distribution. What you're doing now by increasing pg_num and pgp_num is splitting --> large PGs are split into smaller ones. This is achieved through backfilling.
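
A simple way to watch that split progress (pool name "foo" is a placeholder):

    # ceph osd pool get foo pgp_num
    # ceph -s | grep -E 'misplaced|backfill'

pgp_num creeps up towards pgp_num_target as each batch of backfills completes.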

[ceph-users] Re: Does CEPH limit the pgp_num which it will increase in one go?

2022-02-15 Thread Janne Johansson
On Tue, 15 Feb 2022 at 08:56, Maarten van Ingen wrote:
> Hi,
> After enabling the balancer (and set to upmap) on our environment it’s time to get the pgp_num on one of the pools on par with the pg_num.
> This pool has pg_num set to 4096 and pgp_num to 2048 (by our mistake).
> I just set the pgp
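
For the record, the commands being discussed are along these lines (pool name "foo" is a placeholder):

    # ceph osd pool set foo pg_num 4096
    # ceph osd pool set foo pgp_num 4096

On recent releases the mgr applies the pgp_num change gradually, which is exactly the pgp_num_target behaviour seen above.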

[ceph-users] Re: Does CEPH limit the pgp_num which it will increase in one go?

2022-02-15 Thread Maarten van Ingen
Hi,

We have had pg_num set to 4096 for quite some time (months), but only now did we increase the pgp_num. So if I understand correctly, the splitting should have been done months ago already. Increasing the pgp_num should only make sure the newly created PGs are actually moved into place. I read

[ceph-users] Re: Does CEPH limit the pgp_num which it will increase in one go?

2022-02-15 Thread Maarten van Ingen
Hi,

I understand this warning, but not why it was not there before. We set pg_num to 4096 months (maybe even a year...) ago but forgot the pgp_num. I think with current releases this should not have happened, but we have it (probably because we were still running Mimic at the time). So that's the

[ceph-users] Re: Does CEPH limit the pgp_num which it will increase in one go?

2022-02-15 Thread Maarten van Ingen
Hi,

I did a small test to see what would happen if I set the amount of "allowed" misplaced objects, and this indeed changes the number of PGs it will do simultaneously. While probably not the balancer itself, it at least shares this setting:

ceph config set mgr target_max_misplaced_ratio .01
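
The setting can be inspected and put back the same way, e.g.:

    # ceph config get mgr target_max_misplaced_ratio
    # ceph config set mgr target_max_misplaced_ratio 0.05

The default is 0.05 (5% of objects misplaced at a time), so the .01 above makes the mgr work in smaller batches.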

[ceph-users] Re: Does CEPH limit the pgp_num which it will increase in one go?

2022-02-15 Thread Dan van der Ster
Hi again,

target_max_misplaced_ratio is a configuration of the mgr balancer. What's happening here is you are simultaneously splitting and balancing :-)

Cheers, Dan

> On 02/15/2022 11:47 AM Maarten van Ingen wrote:
> Hi,
> I did a small test to see what would happen if I set the amou

[ceph-users] Re: Something akin to FSIMAGE in ceph

2022-02-15 Thread Robert Gallop
Thanks William…. I’m going to mess with it and see how it does. I hadn’t thought about utilizing mlocate for this case, but it wouldn’t be the worst thing if it can keep up. The major issue we are having with most solutions is just time. It takes so long for most utilities to run through even my modes
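
In case it helps, a minimal mlocate setup scoped to a CephFS mount could look like this (the mount point and database path are only examples):

    # build a dedicated database for the cephfs mount
    updatedb -U /mnt/cephfs -o /var/lib/mlocate/cephfs.db
    # query it
    locate -d /var/lib/mlocate/cephfs.db 'pattern*'

Whether updatedb can keep up is exactly the open question, since it still has to walk the whole tree on every run.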

[ceph-users] question about radosgw-admin bucket check

2022-02-15 Thread Scheurer François
Dear Ceph Experts,

The documentation about this rgw command is a bit unclear:

radosgw-admin bucket check --bucket --fix --check-objects

Is this command still maintained and safe to use? (We are still on Nautilus.) Does it work with sharded buckets, and also in multi-site? I heard it will clear inval
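
For what it's worth, the usual invocation looks roughly like this (the bucket name is a placeholder); as far as I understand, running it without --fix only reports what it would change:

    radosgw-admin bucket check --bucket=mybucket --check-objects
    radosgw-admin bucket check --bucket=mybucket --check-objects --fix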

[ceph-users] Announcing go-ceph v0.14.0

2022-02-15 Thread John Mulligan
I'm happy to announce another release of the go-ceph API library. This is a regular release following our every-two-months release cadence. https://github.com/ceph/go-ceph/releases/tag/v0.14.0 Changes include additions to the rados package and a new mgr admin package. More details are available
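
As a small taste of the API (this uses the long-standing rados package rather than the new mgr admin package, and is only a minimal sketch):

    package main

    import (
        "fmt"

        "github.com/ceph/go-ceph/rados"
    )

    func main() {
        // Connect using the default ceph.conf and keyring.
        conn, err := rados.NewConn()
        if err != nil {
            panic(err)
        }
        if err := conn.ReadDefaultConfigFile(); err != nil {
            panic(err)
        }
        if err := conn.Connect(); err != nil {
            panic(err)
        }
        defer conn.Shutdown()

        // List the pools visible to this client.
        pools, err := conn.ListPools()
        if err != nil {
            panic(err)
        }
        fmt.Println(pools)
    }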

[ceph-users] Re: MDS crash when unlink file

2022-02-15 Thread Venky Shankar
On Mon, Feb 14, 2022 at 5:33 PM Arnaud MARTEL wrote:
> Hi Venky,
> Thanks a lot for your answer. I needed to reduce the number of running MDS before setting debug_mds to 20 but, now, I was able to reproduce the crash and generate the full logfile.
> You can download it with the following li
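
For anyone following along, the debug setup described here is roughly (filesystem name "cephfs" is a placeholder):

    ceph fs set cephfs max_mds 1
    ceph config set mds debug_mds 20
    # reproduce the unlink, then collect the log from the active MDS host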