Hi!

> no problem. Maybe you played around and had this node in the placement
> section previously? Or did it have the mon label? I'm not sure, but
> the important thing is that you can clean it up.

Yes! I did play with another cluster before and forgot to completely
clear that node! And the fsid "46e2b13c-dab7-11eb-810b-a5ea707f1ea1"
is from that cluster. But then there is an error in Ceph, because the
mon the existing cluster complained about (with fsid
"1ef45b26-dbac-11eb-a357-61
Hi!

> Was there a MON running previously on that host? Do you see the daemon
> when running 'cephadm ls'? If so, remove it with 'cephadm rm-daemon
> --name mon.s-26-9-17'

Hmm. 'cephadm ls' running directly on the node does show that there is
a mon. I don't quite understand where it came from, and I don't
understand why 'ceph orch ps' didn't show this service.

Thank you very much for your help.
Was there a MON running previously on that host? Do you see the daemon
when running 'cephadm ls'? If so, remove it with 'cephadm rm-daemon
--name mon.s-26-9-17'
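
For anyone hitting the same situation: the idea behind that advice is that 'cephadm ls' reports every daemon deployed on the host, each tagged with the fsid of the cluster it belongs to, so a daemon whose fsid differs from the live cluster's is a leftover. A minimal sketch of that check (not cephadm's actual code; the JSON shape is reduced to the relevant fields, and the second fsid here is a hypothetical placeholder since the real one is truncated in this thread):

```python
import json

# Simplified stand-in for `cephadm ls` output on the node: every daemon
# on the host, including any left over from a previous cluster.
# (Field set reduced for illustration; the real output has more keys.)
CEPHADM_LS = json.loads("""
[
  {"name": "mon.s-26-9-17", "fsid": "46e2b13c-dab7-11eb-810b-a5ea707f1ea1"},
  {"name": "osd.3",         "fsid": "1ef45b26-dbac-11eb-a357-000000000000"}
]
""")

# fsid of the cluster we actually want to keep -- a hypothetical
# placeholder, since the real fsid is truncated in the thread.
CURRENT_FSID = "1ef45b26-dbac-11eb-a357-000000000000"

def stray_daemons(daemons, current_fsid):
    """Return daemons whose fsid belongs to some other (old) cluster."""
    return [d for d in daemons if d["fsid"] != current_fsid]

for d in stray_daemons(CEPHADM_LS, CURRENT_FSID):
    # Candidates for removal on the host, e.g. with
    # `cephadm rm-daemon --name <name>` as suggested above.
    print(f"stray daemon: {d['name']} (fsid {d['fsid']})")
```

With the sample data above, only mon.s-26-9-17 is reported as stray, which matches what the thread observed: 'ceph orch ps' only knows about daemons of the current cluster, while 'cephadm ls' sees everything on disk.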
Quoting Fyodor Ustinov:
> Hi!
>
> After upgrading to version 16.2.6, my cluster is in this state:
>
> root@s-26-9-19-mon-m1:~# ceph -s