I recently moved to Quincy and cephadm.
I noticed that when I moved some drives from one machine to another, they
at some point got marked as weight 0 in the crushmap.
The first time that was fine; I just fixed the weights and figured it was
something I had done wrong when moving the drives.
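
For reference, roughly this is how the weights can be checked and restored
(osd.12 and the 1.81898 value are only placeholders, not my actual IDs or sizes):
# show the crush tree with the WEIGHT column per OSD
ceph osd df tree
# set an affected OSD's crush weight back to its size in TiB
ceph osd crush reweight osd.12 1.81898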

Now it has happened a second time: a significant number of OSDs on the
newest host in the cluster got marked weight 0 in the crushmap after my
first ceph orch upgrade.
# From 17.2.1:
ceph orch upgrade start --image quay.io/ceph/ceph:v17.2.3
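
If anyone wants to check for the same thing, roughly these commands show the
upgrade progress and the resulting weights (nothing here is specific to my cluster):
# watch the cephadm upgrade progress
ceph orch upgrade status
# compare crush weights before/after; weight-0 OSDs stand out in the WEIGHT column
ceph osd tree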


Why would Ceph / cephadm change the crushmap weights of HDD OSDs?