Hi Frank,
I don't think it's the autoscaler interfering here but the default 5%
target_max_misplaced_ratio, which limits how many PGs the mgr will
split or merge at a time. I haven't tested the impact of increasing
it to a much higher value, so be careful.
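For reference, something like this should show and raise it (0.10 is
only an illustrative value, not a recommendation):

  # current value (default 0.05, i.e. up to 5% of objects misplaced)
  ceph config get mgr target_max_misplaced_ratio
  # allow up to 10% misplaced objects during the merge
  ceph config set mgr target_max_misplaced_ratio 0.10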
Regards,
Eugen
Quoting Frank Schilder <fr...@dtu.dk>:
Hi all,
I need to reduce the number of PGs in a pool from 2048 to 512 and
would really like to do that in a single step. I executed the set
pg_num 512 command, but the PGs are not all merged. Instead I get
this intermediate state:
pool 13 'con-fs2-meta2' replicated size 4 min_size 2 crush_rule 3
object_hash rjenkins pg_num 2048 pgp_num 1946 pg_num_target 512
pgp_num_target 512 autoscale_mode off last_change 916710 lfor
0/0/618995 flags hashpspool,nodelete,selfmanaged_snaps max_bytes
107374182400 stripe_width 0 compression_mode none application cephfs
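For reference, the state above can be re-checked with something like:

  ceph osd pool ls detail | grep con-fs2-meta2
  ceph osd pool get con-fs2-meta2 pg_num
  ceph osd pool get con-fs2-meta2 pgp_num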
This is really annoying, because it will not only lead to repeated,
redundant data movements, but I also need to rebalance this pool
onto fewer OSDs, which cannot hold the 1946 PGs it will intermittently
be merged down through. How can I override the autoscaler interfering
with admin operations in such tight corners?
As you can see, we disabled the autoscaler on all pools and also
globally. Still, it interferes with admin commands in an unsolicited
way. I would like the PG merge to happen on the fly as the data moves
to the new OSDs.
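For completeness, this is roughly how the autoscaler was switched off
here, per pool and as the cluster default (exact option names may
differ between releases):

  ceph osd pool set con-fs2-meta2 pg_autoscale_mode off
  ceph config set global osd_pool_default_pg_autoscale_mode off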
Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io