From: Eugen Block
Sent: 03 May 2021 20:53:51
To: ceph-users@ceph.io
Subject: [ceph-users] Re: [ Ceph MDS MON Config Variables ] Failover Delay issue
I wouldn't recommend a colocated MDS in a production environment.
Quoting Lokendra Rathour:
From: Patrick Donnelly
Sent: 03 May 2021 17:19:37
To: Lokendra Rathour
Cc: Ceph Development; dev; ceph-users
Subject: [ceph-users] Re: [ Ceph MDS MON Config Variables ] Failover Delay issue
Yes Patrick,
In the process of killing the MDS we are also *killing the Monitor along with
the OSD, Mgr, and RGW*. We are powering off/rebooting the complete node (with
the MDS, Mon, RGW, OSD, and Mgr daemons).
Cluster: 2 nodes with MDS|Mon|RGW|OSD each, and a third node with 1 Mon.
Note: when I am only stopping the MDS
OK,
We will try with Nautilus as well.
But we are really configuring too many variables to achieve 10 seconds of
failover time.
Is it possible for you to share your setup details?
We are using a 2-node Ceph cluster in health OK (with the replication factor
and related variables configured).
The hardware is HP.
On Mon, May 3, 2021 at 6:36 AM Lokendra Rathour
wrote:
>
> Hi Team,
> I was setting up the ceph cluster with
>
>- Node Details: 3 Mon, 2 MDS, 2 Mgr, 2 RGW
>- Deployment Type: Active Standby
>- Testing Mode: Failover of MDS Node
>- Setup: Octopus (15.2.7)
>- OS: CentOS 8.3
>
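The variables being tuned here are, most likely, the MDS beacon settings that govern how quickly the monitors declare an MDS failed. A hedged sketch of adjusting them (the option names are from the standard Ceph config reference; the values are illustrative only, not a recommendation):

```shell
# Sketch only: the monitors mark an MDS as laggy/failed once no beacon has
# arrived within mds_beacon_grace (default 15s; beacons are sent every 4s).
# Lowering these speeds up failover detection but risks spurious failovers
# under load.
ceph config set mds mds_beacon_interval 2
ceph config set global mds_beacon_grace 10

# Check the values actually in effect
ceph config get mds mds_beacon_interval
ceph config get mds mds_beacon_grace
```

Note that tightening the grace period trades detection speed against stability, which may explain why many variables have to move together to reach a 10-second failover.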
no, we barely differ from the default configs and haven't
Hello Eugen,
Thank you for the response.
Yes, we tried standby-replay but could not see much difference in the
handover time; it was coming out at 35 to 40 seconds in either case.
Did you also change these variables (as mentioned above) along with the
hot standby?
Also, there's a difference between 'standby-replay' (hot standby) and
just 'standby'. We have been using CephFS for a couple of years now with
standby-replay, and the failover takes a couple of seconds at most,
depending on the current load. Have you tried enabling the
standby-replay config and tested the
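For reference, standby-replay is a per-filesystem flag since Nautilus, so it applies to the Octopus cluster in this thread as well; a minimal sketch, assuming the filesystem is named `cephfs`:

```shell
# Let a standby daemon continuously replay the active MDS journal
# ("hot standby"); its warm cache typically shortens failover.
ceph fs set cephfs allow_standby_replay true

# One daemon should now appear in the standby-replay state.
ceph fs status cephfs
```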
Hello,
Perhaps you should have more than one MDS active.
mds: cephfs:3 {0=cephfs-d=up:active,1=cephfs-e=up:active,2=cephfs-a=up:active} 1 up:standby-replay
I have 3 active MDS and one standby.
I'm using Rook in Kubernetes for this setup.
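Raising the number of active MDS ranks as shown in that status line is done via max_mds; a sketch assuming the filesystem name `cephfs` and enough MDS daemons deployed:

```shell
# Run three active MDS ranks; surplus daemons remain standbys
# (or standby-replay followers, if that flag is enabled).
ceph fs set cephfs max_mds 3
ceph fs status cephfs
```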
On Monday, 03 May 2021 at 19:06 +0530, Lokendra