David Turner wrote:
: A couple of things.  You didn't `ceph osd crush remove osd.21` after doing
: the other bits.  Also, you will want to remove the bucket (i.e. the host)
: from the CRUSH map, as it will now be empty.  Right now you have a host in
: the CRUSH map with a weight, but no OSDs to put that data on.  It has a
: weight because of the 2 OSDs that are still in it, which were removed from
: the cluster but not from the CRUSH map.  It's confusing to your cluster.

        OK, this helped. I have removed osd.20 and osd.21 from the CRUSH
map, as well as the bucket for the faulty host. The PGs got unstuck, and
after some time, my system now reports HEALTH_OK.
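
        For the archive, the sequence was roughly the following. This is
a sketch: the host bucket name below ("faultyhost") is a placeholder for
whatever `ceph osd tree` shows for the dead node.

```shell
# Remove the two dead OSDs from the CRUSH map, then the now-empty
# host bucket (a bucket can only be removed once it is empty).
ceph osd crush remove osd.20
ceph osd crush remove osd.21
ceph osd crush remove faultyhost

# Verify: the host and its OSDs should no longer appear here.
ceph osd tree
```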

        Thanks for the hint!

-Yenya

-- 
| Jan "Yenya" Kasprzak <kas at {fi.muni.cz - work | yenya.net - private}> |
| http://www.fi.muni.cz/~kas/                         GPG: 4096R/A45477D5 |
> That's why this kind of vulnerability is a concern: deploying stuff is  <
> often about collecting an obscene number of .jar files and pushing them <
> up to the application server.                          --pboddie at LWN <
