I don't know if it's relevant here, but I saw similar behavior while
implementing a Luminous->Nautilus automated upgrade test. When I used a
single-node cluster with 4 OSDs, the Nautilus cluster would not function
properly after the reboot. IIRC some OSDs were reported by "ceph -s" as
up, even though they weren't actually running.
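
For what it's worth, a quick way to spot that mismatch is to compare the
cluster map against systemd on the node. This is only a sketch, assuming
the stock ceph-osd@<id> systemd units and OSD ids 0-3 from my 4-OSD test
node; adjust for your layout:

    # Compare what the mons claim with what systemd actually runs here.
    for id in 0 1 2 3; do
        echo "--- osd.$id ---"
        ceph osd tree | grep -w "osd\.$id"   # up/down per the cluster map
        systemctl is-active "ceph-osd@$id"   # active/inactive per systemd
    done
    ceph versions   # sanity check that daemons really restarted on Nautilus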

I "fixed" the issue by adding a second node to the cluster. With two nodes (8
OSDs), the upgrade works fine.

I will reproduce the issue again and open a bug report.