Hi,

If there is nothing special about the defined “initial monitors” on a cluster, we’ll try to remove mon01 from the cluster.

I mention the “initial monitor” because in our Ceph deployment there is only one monitor listed as “initial”:

[root@mon01 ceph]# cat /etc/ceph/ceph.conf
[global]
fsid = 11111111-2222-3333-4444-555555555555
mon_initial_members = mon01
mon_host = 10.10.200.20
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
osd_pool_default_size = 2
public_network = 10.10.200.0/24

So, I could change ceph.conf on every storage-related host, but this does not work for the monitors.
mon05 is stuck in a “probing” state trying to contact only mon01 (which is down), and changing “mon_initial_members” in ceph.conf to point at mon02 as initial does not help ☹
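For what it’s worth, the change on the other hosts is just listing the surviving monitors instead of mon01 (a sketch based on the addresses above; adjust to your setup):

```ini
[global]
fsid = 11111111-2222-3333-4444-555555555555
# mon01 (10.10.200.20) removed; list the surviving monitors instead
mon_initial_members = mon02, mon03, mon04, mon05
mon_host = 10.10.200.21,10.10.200.22,10.10.200.23,10.10.200.24
```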

2019-06-12 03:39:47.033242 7f04e630f700  0 mon.mon05@4(probing).data_health(0) 
update_stats avail 98% total 223 GB, used 4255 MB, avail 219 GB
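One thing that may help a monitor stuck in “probing” (assuming the admin socket command is available in your Ceph release; it is described in the Ceph monitor troubleshooting docs) is to hint it at a peer that is in quorum:

```shell
# On mon05: point the probing monitor at a mon that is in quorum
# (mon02 here; address taken from the monmap shown below).
ceph daemon mon.mon05 add_bootstrap_peer_hint 10.10.200.21:6789
```

I have not verified this fixes this exact case, but it is the documented way to nudge a probing monitor toward live peers.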

And asking the admin socket:

[root@mon05 ~]# ceph daemon mon.mon05 mon_status
{ "name": "mon05",
  "rank": 4,
  "state": "probing",
  "election_epoch": 0,
  "quorum": [],
  "outside_quorum": [
        "mon05"],
  "extra_probe_peers": [],
  "sync_provider": [],
  "monmap": { "epoch": 21,
      "fsid": "11111111-2222-3333-4444-555555555555",
      "modified": "2019-06-07 16:59:26.729467",
      "created": "0.000000",
      "mons": [
            { "rank": 0,
              "name": "mon01",
              "addr": "10.10.200.20:6789\/0"},
            { "rank": 1,
              "name": "mon02",
              "addr": "10.10.200.21:6789\/0"},
            { "rank": 2,
              "name": "mon03",
              "addr": "10.10.200.22:6789\/0"},
            { "rank": 3,
              "name": "mon04",
              "addr": "10.10.200.23:6789\/0"},
            { "rank": 4,
              "name": "mon05",
              "addr": "10.10.200.24:6789\/0"}]}}

It never contacts mon02, mon03 or mon04, which are healthy and in quorum:

[root@mon02 ceph-mon02]# ceph daemon mon.mon02 mon_status
{ "name": "mon02",
  "rank": 1,
  "state": "leader",
  "election_epoch": 476,
  "quorum": [
        1,
        2,
        3],
  "outside_quorum": [],
  "extra_probe_peers": [],
  "sync_provider": [],
  "monmap": { "epoch": 21,
      "fsid": "11111111-2222-3333-4444-555555555555",
      "modified": "2019-06-07 16:59:26.729467",
      "created": "0.000000",
      "mons": [
            { "rank": 0,
              "name": "mon01",
              "addr": "10.10.200.20:6789\/0"},
            { "rank": 1,
              "name": "mon02",
              "addr": "10.10.200.21:6789\/0"},
            { "rank": 2,
              "name": "mon03",
              "addr": "10.10.200.22:6789\/0"},
            { "rank": 3,
              "name": "mon04",
              "addr": "10.10.200.23:6789\/0"},
            { "rank": 4,
              "name": "mon05",
              "addr": "10.10.200.24:6789\/0"}]}}

Of course, there are no communication-related problems.

So this is why I am afraid of touching the monitors…

Regards


From: Paul Emmerich <paul.emmer...@croit.io>
Sent: Wednesday, June 12, 2019 15:12
To: Lluis Arasanz i Nonell - Adam <lluis.aras...@adam.es>
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] [Ceph-community] Monitors not in quorum (1 of 3 live)



On Wed, Jun 12, 2019 at 11:45 AM Lluis Arasanz i Nonell - Adam 
<lluis.aras...@adam.es<mailto:lluis.aras...@adam.es>> wrote:
- Be careful adding or removing monitors in an unhealthy monitor cluster: if 
they lose quorum you will be in trouble.

safe procedure: remove the dead monitor before adding a new one


Now, we have some work to do:
- Remove mon01 with "ceph mon destroy mon01": we want to remove it from the monmap, 
but it is the "initial monitor" so we do not know whether it is safe to do.

yes that's safe to do, there's nothing special about the first mon. Command is 
"ceph mon remove <name>", though
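In case it helps others, the removal sequence would then look roughly like this (a sketch; the service name depends on your init system and Ceph release):

```shell
# On mon01: stop the dead monitor's daemon, if anything is still running
systemctl stop ceph-mon@mon01    # or: service ceph stop mon.mon01

# From any host with admin keyring: remove it from the monmap
ceph mon remove mon01

# Then drop mon01 from mon_initial_members / mon_host in ceph.conf on
# all hosts, and archive (rather than delete) its data directory:
mv /var/lib/ceph/mon/ceph-mon01 /var/lib/ceph/mon/ceph-mon01.removed
```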

- Clean and "format" the monitor data for mon01 (as we did on mon02 and mon03), but 
we have the same question: is it safe to do when it is the "initial mon"?

all (fully synched and in quorum) mons have the exact same data

- Modify the monmap, deleting mon01, and inject it on mon05, but... what happens 
when we delete the "initial mon" from the monmap? Is it safe?

"ceph mon remove" will modify the mon map for you; manually modifying the mon 
map is only required if the cluster is down
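For completeness, the manual route (only needed when there is no quorum) would look roughly like this, per the Ceph docs on removing monitors from an unhealthy cluster:

```shell
# Extract the monmap from a stopped monitor that holds a current copy
ceph-mon -i mon02 --extract-monmap /tmp/monmap

# Inspect it and remove the dead monitor
monmaptool --print /tmp/monmap
monmaptool --rm mon01 /tmp/monmap

# Inject the edited map into the stuck monitor (its daemon stopped)
ceph-mon -i mon05 --inject-monmap /tmp/monmap
```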




--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io<http://www.croit.io>
Tel: +49 89 1896585 90



Regards
 _______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com<mailto:ceph-users@lists.ceph.com>
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com