Good day,

Firstly I'd like to acknowledge that I consider myself a Ceph noob.

OS: Ubuntu 16.04.3 LTS
Ceph version: 12.2.1

I'm running a small six-node POC cluster with three MDS daemons (one on
each of node1, node2 and node3).
I've also configured three Ceph file systems: fsys1, fsys2 and fsys3.

I'd like to remove two of the file systems (fsys2 and fsys3) and at least
one, if not both, of the MDS daemons.
I was able to fail the MDS on node3 with the command "sudo ceph mds fail
node3", followed by "sudo ceph mds rmfailed 0 --yes-i-really-mean-it".
Then I removed the file system with "sudo ceph fs rm fsys3
--yes-i-really-mean-it".
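
For clarity, the full sequence I ran against fsys3 was (the same commands
quoted above, just collected in one place):

    # Fail the MDS currently serving the file system I want to remove
    sudo ceph mds fail node3
    # Remove failed rank 0 from the MDS map
    sudo ceph mds rmfailed 0 --yes-i-really-mean-it
    # Remove the file system itself
    sudo ceph fs rm fsys3 --yes-i-really-mean-it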

Running command "sudo ceph fs status" confirms that fsys3 is now failed and
that the MDS daemon on node3 has become a standby MDS.
I've tried combinations of "ceph mds fail", "ceph mds deactivate", "ceph mds
rm" and "ceph mds rmfailed", but I can't seem to remove the standby daemon.
After rebooting node3 and running "sudo ceph fs status", fsys3 is no longer
listed as a file system, but node3 still shows up as a standby MDS.
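
My untested guess (and part of the reason I'm asking here) is that the
standby entry will only disappear once the daemon itself is stopped and
removed on node3, along these lines; the data directory path assumes the
default cluster name "ceph" and that the MDS id matches the hostname:

    # Stop and disable the MDS service on node3
    sudo systemctl stop ceph-mds@node3
    sudo systemctl disable ceph-mds@node3
    # Remove its cephx key so it can no longer rejoin as a standby
    sudo ceph auth del mds.node3
    # Remove the MDS data directory (assumed default path)
    sudo rm -rf /var/lib/ceph/mds/ceph-node3

I haven't run this yet, so corrections are welcome.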

I've searched for details on this topic, but what I have found hasn't
helped me.
Could anybody assist with the correct steps for removing MDS daemons and
Ceph file systems from nodes?
It would also be useful to know how to completely remove all Ceph file
systems and MDS daemons should I have no further use for them in a
cluster.
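
For reference, my rough understanding of a complete teardown of one file
system is something like the following; the pool names are just examples,
and I believe mon_allow_pool_delete must be enabled before the pool
deletions will work:

    # Fail the active MDS for the file system, then remove the file system
    sudo ceph mds fail node1
    sudo ceph fs rm fsys1 --yes-i-really-mean-it
    # Optionally delete the now-unused data and metadata pools
    # (pool names below are examples, not necessarily my real ones)
    sudo ceph osd pool delete fsys1_data fsys1_data --yes-i-really-really-mean-it
    sudo ceph osd pool delete fsys1_metadata fsys1_metadata --yes-i-really-really-mean-it

If that's roughly right, the missing piece for me is how to remove the MDS
daemons themselves afterwards.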

Kind regards
Geoffrey Rhodes
