Hi,

I'm extremely new to ceph and have a small 4-node/20-osd cluster.

I just upgraded from kraken to luminous without much ado, except that now
when I run ceph status I get a HEALTH_WARN because "2 osds exist in the
crush map but not in the osdmap".

Googling the error message only took me to the source file on GitHub.

I tried exporting and decompiling the crushmap (commands below) -- there
were two osd devices named differently. The normal name would be something like

device 0 osd.0
device 1 osd.1

but two were named:

device 5 device5
device 11 device11
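
For reference, here's roughly how I exported and decompiled the map
(the filenames are just placeholders I'm using here):

    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt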

I had edited the crushmap in the past, so it's possible this was introduced
by me.

I tried changing those to match the rest, recompiling and setting the
crushmap, but ceph status still complains.
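
In case the exact steps matter, the edit/recompile/set round trip looked
roughly like this (same placeholder filenames as above):

    # edited crushmap.txt: "device 5 device5" -> "device 5 osd.5",
    #                      "device 11 device11" -> "device 11 osd.11"
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new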

Any assistance would be greatly appreciated.

Thanks,
Dan
