On Fri, Feb 21, 2020 at 03:29:08PM +0100, Marco Gaiarin wrote:
> Mandi! Alwin Antreich
>   In chel di` si favelave...
>
> > Yes, that looks strange. But as said before, it is deprecated to use
> > IDs. Best destroy and re-create the MON one-by-one. The default command
> > will create them with the hostname as ID. Then this phenomenon should
> > disappear as well.
>
> Done, via the web interface, with a little glitch.
>
> I stopped and destroyed the monitor, but that does not stop (and destroy)
> the manager, so creating a new MON via the web interface led to:
>
> Created symlink /etc/systemd/system/ceph-mon.target.wants/ceph-mon@hulk.service -> /lib/systemd/system/ceph-mon@.service.
> INFO:ceph-create-keys:ceph-mon is not in quorum: u'synchronizing'
> INFO:ceph-create-keys:ceph-mon is not in quorum: u'synchronizing'
> INFO:ceph-create-keys:ceph-mon is not in quorum: u'probing'
> INFO:ceph-create-keys:ceph-mon is not in quorum: u'electing'
> INFO:ceph-create-keys:ceph-mon is not in quorum: u'electing'
> INFO:ceph-create-keys:ceph-mon is not in quorum: u'electing'
> INFO:ceph-create-keys:ceph-mon is not in quorum: u'electing'
> INFO:ceph-create-keys:ceph-mon is not in quorum: u'electing'
> INFO:ceph-create-keys:Key exists already: /etc/ceph/ceph.client.admin.keyring
> INFO:ceph-create-keys:Key exists already: /var/lib/ceph/bootstrap-osd/ceph.keyring
> INFO:ceph-create-keys:Key exists already: /var/lib/ceph/bootstrap-rgw/ceph.keyring
> INFO:ceph-create-keys:Key exists already: /var/lib/ceph/bootstrap-mds/ceph.keyring
> INFO:ceph-create-keys:Talking to monitor...
> TASK ERROR: ceph manager directory '/var/lib/ceph/mgr/ceph-hulk' already exists
>
> probably because the task also tries to start a MGR, which had just been
> created.
>
> Anyway, nothing changed. On a rebooted node:
>
> root@capitanmarvel:~# ps aux | grep ceph[-]mon
> ceph  2725  0.5  0.2  522224  98428 ?  Ssl  feb18  21:14  /usr/bin/ceph-mon -i capitanmarvel --pid-file /var/run/ceph/mon.capitanmarvel.pid -c /etc/ceph/ceph.conf --cluster ceph --setuser ceph --setgroup ceph
>
> On a node where I did a 'systemctl restart ceph-mon@<ID>.service':
>
> root@hulk:~# ps aux | grep ceph[-]mon
> ceph  4166380  0.8  0.1  466648  55676 ?  Ssl  15:19  0:03  /usr/bin/ceph-mon -f --cluster ceph --id hulk --setuser ceph --setgroup ceph

I don't see this in the systemd unit files for Ceph, and my test systems do
not have the pid file either. Maybe this is a leftover from a previous
upgrade?
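For completeness, the destroy-and-recreate cycle suggested above can also be
done from the CLI on a PVE 6 node, roughly like this (a sketch, untested here;
<old-id> stands for whatever non-hostname ID the MON currently has). Note that
destroying the MON does not touch the MGR, which is why the leftover
/var/lib/ceph/mgr/ceph-hulk directory tripped the create task in the log above:

    # One node at a time; wait for quorum to recover between nodes.
    pveceph mon destroy <old-id>    # remove the monitor with the old ID
    pveceph mgr destroy <old-id>    # the manager is a separate service; remove it too
    pveceph mon create              # re-create the MON with the hostname as its ID
    pveceph mgr create              # re-create the manager
    ceph -s                         # verify the cluster is healthy before the next node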
systemctl cat ceph-mon@<id>.service

With the above command you can check how each Ceph service or target should
be started.

--
Cheers,
Alwin

_______________________________________________
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user