Re: [ceph-users] Sanity check on unexpected data movement

2019-04-29 Thread Graham Allan
Now that I dig into this, I can see in the exported crush map that the choose_args weight_set for this bucket id is zero for the 9th member (which I assume corresponds to the evacuated node-98).

    rack even01 {
        id -10          # do not change unnecessarily
        id -14 class ssd
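For anyone retracing this check, a minimal sketch of exporting and decompiling the crush map to inspect the choose_args weight_set (file names are placeholders):

    # Export the binary crush map from the cluster
    ceph osd getcrushmap -o crushmap.bin

    # Decompile it to plain text for inspection
    crushtool -d crushmap.bin -o crushmap.txt

    # Find the choose_args section and its weight_set entries
    grep -A 20 'choose_args' crushmap.txt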

[ceph-users] Sanity check on unexpected data movement

2019-04-29 Thread Graham Allan
I think I need a second set of eyes to understand some unexpected data movement when adding new OSDs to a cluster (Luminous 12.2.11). Our cluster ran low on space sooner than expected, so as a stopgap I recommissioned a couple of older storage nodes while we get new hardware purchases under way.
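For context, a minimal sketch of the standard commands for watching the data movement that follows adding OSDs (nothing assumed beyond a Luminous cluster with the usual ceph CLI):

    # Overall cluster state, including how many PGs are remapped or backfilling
    ceph -s

    # Per-OSD crush weights and utilization, to confirm where data is moving
    ceph osd df tree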