[ceph-users] Re: The pg_num from 1024 reduce to 32 spend much time, is there way to shorten the time?

2023-06-08 Thread Louis Koo
The pgp_num drops quickly, but the pg_num still decreases slowly.
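One way to watch both values converge (a sketch; the pool name is a placeholder, not from this thread):

  ceph osd pool get <pool> pg_num
  ceph osd pool get <pool> pgp_num

or `ceph osd pool ls detail`, which prints both for every pool.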

[ceph-users] Re: The pg_num from 1024 reduce to 32 spend much time, is there way to shorten the time?

2023-06-08 Thread Louis Koo
Thanks, I will take a look.

[ceph-users] Re: The pg_num from 1024 reduce to 32 spend much time, is there way to shorten the time?

2023-06-08 Thread Louis Koo
Thanks. Another question: how do I find out whether this option is set on the mon or the mgr?

[ceph-users] Re: The pg_num from 1024 reduce to 32 spend much time, is there way to shorten the time?

2023-06-08 Thread Eugen Block
Sure: https://docs.ceph.com/en/latest/rados/operations/balancer/#throttling

Quoting Louis Koo:
> OK, I will try it. Could you point me to the archive or the doc?

[ceph-users] Re: The pg_num from 1024 reduce to 32 spend much time, is there way to shorten the time?

2023-06-08 Thread Konstantin Shalygin
Hi,

> On 7 Jun 2023, at 10:02, Louis Koo wrote:
>
> I set it from 0.05 to 1 with "ceph config set mon
> target_max_misplaced_ratio 1.0", but it still has no effect.

That's because this is a mgr setting, not a mon setting; try `ceph config set mgr target_max_misplaced_ratio 1`.

Cheers,
k
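A minimal sketch of setting the option and then verifying where it lives (standard `ceph config` subcommands; the exact lines below are illustrative, not quoted from the thread):

  ceph config set mgr target_max_misplaced_ratio 1.0
  # confirm the value the mgr sees
  ceph config get mgr target_max_misplaced_ratio
  # list every section (mon/mgr/osd/...) the option is set under
  ceph config dump | grep target_max_misplaced_ratio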

[ceph-users] Re: The pg_num from 1024 reduce to 32 spend much time, is there way to shorten the time?

2023-06-08 Thread Louis Koo
I set it from 0.05 to 1 with "ceph config set mon target_max_misplaced_ratio 1.0", but it still has no effect.

[ceph-users] Re: The pg_num from 1024 reduce to 32 spend much time, is there way to shorten the time?

2023-06-08 Thread Louis Koo
OK, I will try it. Could you point me to the archive or the doc?

[ceph-users] Re: The pg_num from 1024 reduce to 32 spend much time, is there way to shorten the time?

2023-06-08 Thread Louis Koo
ceph df detail:

[root@k8s-1 ~]# ceph df detail
--- RAW STORAGE ---
CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
hdd    600 GiB  600 GiB  157 MiB  157 MiB        0.03
TOTAL  600 GiB  600 GiB  157 MiB  157 MiB        0.03

--- POOLS ---
POOL  ID  PGS

[ceph-users] Re: The pg_num from 1024 reduce to 32 spend much time, is there way to shorten the time?

2023-06-06 Thread Wesley Dillingham
Can you send along the output of "ceph df detail" and "ceph osd pool ls detail"?

Respectfully,

*Wes Dillingham*
w...@wesdillingham.com
LinkedIn

On Tue, Jun 6, 2023 at 1:03 PM Eugen Block wrote:
> I suspect the target_max_misplaced_ratio

[ceph-users] Re: The pg_num from 1024 reduce to 32 spend much time, is there way to shorten the time?

2023-06-06 Thread Eugen Block
I suspect the target_max_misplaced_ratio (default 0.05). You could try setting it to 1 and see if it helps. This has been discussed multiple times on this list; check the archives for more details.

Quoting Louis Koo:
> Thanks for your responses. I want to know why it takes so much time to
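To inspect the current value before changing anything (a sketch; as far as I know, `ceph config get` reports the built-in default when nothing has been set explicitly):

  ceph config get mgr target_max_misplaced_ratio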

[ceph-users] Re: The pg_num from 1024 reduce to 32 spend much time, is there way to shorten the time?

2023-06-06 Thread Louis Koo
Thanks for your responses. I want to know why it takes so much time to reduce the pg_num.

[ceph-users] Re: The pg_num from 1024 reduce to 32 spend much time, is there way to shorten the time?

2023-06-05 Thread Janne Johansson
If you can stop the rgws, you can create a new pool with 32 PGs, copy this one into it with rados cppool, then rename the pools so the new one has the original name (and application), and start the rgws again.

On Mon, 5 Jun 2023 at 16:43, Louis Koo wrote:
>
> ceph version is 16.2.13;
>
> The pg_num is
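In rough commands, assuming the rgws are already stopped and with <pool> as a placeholder name (not from this thread):

  ceph osd pool create <pool>.new 32 32          # new pool with the target pg_num
  rados cppool <pool> <pool>.new                 # copy every object across
  ceph osd pool rename <pool> <pool>.old         # keep the original as a fallback
  ceph osd pool rename <pool>.new <pool>         # give the copy the original name
  ceph osd pool application enable <pool> rgw    # restore the application tag
  # restart the rgws; delete <pool>.old once everything checks out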