We see something very similar on our Ceph cluster, starting as of today.
We use a 16-node, 102-OSD Ceph installation as the basis for an Icehouse
OpenStack cluster (we applied the RBD patches for live migration, etc.).
On this cluster we have a big ownCloud installation (Sync & Share) that stores

We've had an NFS gateway serving up RBD images successfully for over a year.
Ubuntu 12.04 and Ceph 0.73, IIRC.
In the past couple of weeks we have developed a problem where the NFS clients
hang while accessing exported RBD containers.
We see errors on the server about nfsd hanging for 120 seconds.

Hi,
I just added a new monitor (MON). "$ ceph status" shows the monitor in the
quorum, but the new monitor does not appear in /etc/ceph/ceph.conf. I am
wondering what role /etc/ceph/ceph.conf plays. Do I need to manually
edit the file on each node and add the monitors?
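For illustration, a ceph.conf listing the monitors might look like the sketch below (the monitor names and addresses are hypothetical placeholders, not from this cluster). As I understand it, running daemons already learned about the new monitor through the monmap, which is authoritative; ceph.conf's mon_host list mainly matters when a daemon or client starts up and needs an initial monitor to contact, so it is worth updating on each node.

```ini
; Minimal /etc/ceph/ceph.conf sketch, assuming three monitors
; (mon-a, mon-b, mon-c) at made-up example addresses.
[global]
    fsid = <your cluster fsid>
    mon_initial_members = mon-a, mon-b, mon-c
    mon_host = 192.168.0.1,192.168.0.2,192.168.0.3
```

Adding the new monitor's address to mon_host on every node keeps bootstrapping resilient if one of the original monitors is down when something restarts.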
In addition, there are