I also had these kinds of symptoms with Nautilus.
Replacing a failed disk (starting from a healthy cluster) generates degraded objects.
We also have a Proxmox cluster that accesses VM images stored in our Ceph cluster 
via RBD.
Every time I performed an operation on the Ceph cluster, such as adding or 
removing a pool, most of our Proxmox VMs lost contact with their system disk in 
Ceph and crashed (or remounted their root filesystem read-only). At first I 
thought it was a network problem, but I am now convinced it is Ceph becoming 
unresponsive during background operations.
Right now, Proxmox cannot even access the Ceph storage via RBD (it fails with a 
timeout).
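
To rule out Proxmox itself, a quick way to test is to talk to librados/librbd 
directly from a client node. Below is a minimal sketch using the python3-rados / 
python3-rbd bindings; the conf path, the admin keyring, and the pool name "rbd" 
are assumptions you would need to adjust to your own setup. If this also hangs or 
times out, the problem is in the cluster rather than in Proxmox.

    #!/usr/bin/env python3
    # Minimal RBD connectivity check (assumptions: /etc/ceph/ceph.conf is
    # readable, a client keyring is available, and the pool is named "rbd").
    import rados
    import rbd

    POOL = "rbd"  # assumption: replace with the pool Proxmox actually uses

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    # Bound mon/osd operations so the test fails fast instead of hanging
    # the way the VMs do.
    cluster.conf_set("client_mount_timeout", "10")
    cluster.conf_set("rados_mon_op_timeout", "10")
    cluster.conf_set("rados_osd_op_timeout", "10")

    cluster.connect()
    ioctx = cluster.open_ioctx(POOL)
    try:
        # Listing images exercises both the monitors and the OSDs.
        names = rbd.RBD().list(ioctx)
        print(f"{len(names)} rbd image(s) visible in pool '{POOL}'")
    finally:
        ioctx.close()
        cluster.shutdown()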