One of the things I've learned is that many small changes to the cluster are better than one large change. Adding 20% more OSDs? Don't add them all at once; trickle them in over time. Increasing pg_num & pgp_num from 128 to 1024? Go in steps, not one leap.
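As a sketch of the stepwise approach (not something from the original post), the pg_num/pgp_num increase can be scripted as a dry run that doubles toward the target; the pool name is a placeholder:

```shell
#!/bin/sh
# Hypothetical sketch: walk pg_num/pgp_num from 128 to 1024 by doubling,
# printing each command instead of executing it. POOL is a placeholder.
POOL=rbd
pg=128
target=1024
while [ "$pg" -lt "$target" ]; do
    pg=$((pg * 2))
    echo "ceph osd pool set $POOL pg_num $pg"
    echo "ceph osd pool set $POOL pgp_num $pg"
    # In a real run, wait here for the cluster to return to HEALTH_OK
    # (e.g. poll 'ceph health') before taking the next step.
done
```

In practice you would replace the echoes with the actual commands and gate each step on the cluster settling.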
I try to avoid operations that will touch more than 20% of the disks simultaneously. When I had journals on HDD, I tried to stay under 10% of the disks. Is there a way to apply `ceph osd crush tunables optimal` in smaller steps?
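A related knob for limiting how much of the cluster churns at once (my addition, not from the original post) is throttling recovery and backfill concurrency per OSD, e.g. in ceph.conf:

```ini
[osd]
; Limit concurrent backfills and recovery ops per OSD so a large
; remapping event touches fewer disks at full speed at any moment.
osd_max_backfills = 1
osd_recovery_max_active = 1
```

This doesn't reduce the total data movement a tunables change causes, only the rate at which it happens.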
_______________________________________________ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com