Hello,

I've been working on upgrading the hardware of a semi-production Ceph cluster, 
following the OSD removal instructions at 
http://ceph.com/docs/master/rados/operations/add-or-rm-osds/#removing-osds-manual.
 Basically, I've added the new hosts to the cluster and am now removing the 
old ones from it. 
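
For each OSD being retired, the sequence I'm following is essentially the one 
from that page (<id> below is just a placeholder for the actual OSD number):

    ceph osd out <id>                # mark the OSD out, then wait for the data movement to finish
    stop ceph-osd id=<id>            # on the OSD host (upstart job on Ubuntu)
    ceph osd crush remove osd.<id>   # remove the OSD from the crush map
    ceph auth del osd.<id>           # delete its authentication key
    ceph osd rm <id>                 # finally remove the OSD from the cluster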

What I found curious is that after the data movement triggered by "ceph osd out 
<id>" finishes and I stop the OSD process and remove it from the crush map, 
another round of rebalancing starts - sometimes this one takes longer than the 
first. Also, removing an empty "host" bucket from the crush map triggered yet 
another rebalance. 
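
For the record, the bucket removal is nothing more than this (<old-host> being 
a placeholder for the now-empty host bucket):

    ceph osd crush remove <old-host>   # remove the emptied host bucket from the crush map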

I noticed that the overall weight of the host bucket does not change in the 
crush map when one of its OSDs is marked "out", so I suppose this is more or 
less expected behavior; it is still quite time-consuming, though. Is there 
something that can be done to avoid the double rebalance?
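
For example, would zeroing the crush weight up front, before marking the OSD 
out, collapse everything into a single data movement? Something along these 
lines (again, <id> is a placeholder):

    ceph osd crush reweight osd.<id> 0   # drop the OSD's crush weight first and let the rebalance finish
    ceph osd out <id>                    # presumably a no-op for data placement at this point?

I'm only guessing here, so feel free to point out if that is off base.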

I'm running Ceph 0.72.2 on Ubuntu 12.04 on the OSD hosts. 

Thanks,
Dinu