Following up on this and other comments: there are two different time delays here.
(1) is the time from killing an MDS until a standby is promoted to the active
rank, and (2) is the time the new active rank takes to restore all client
sessions. In my experience, (1) takes close to 0 seconds, while (2) can take
20-30 seconds depending on how busy the clients are; the MDS goes through
several states before reaching active. We usually have about 1600 client
connections to our FS. With fewer clients, MDS fail-over is practically
instantaneous. We are running the latest Mimic.
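
A rough way to separate the two delays, assuming a file system named "cephfs"
(substitute your own name) and admin access:

    # Watch the rank walk through the MDS states during failover.
    # Delay (1) ends when a standby appears for the failed rank in
    # up:replay; delay (2) spans up:replay -> up:reconnect ->
    # up:rejoin -> up:active.
    watch -n1 'ceph fs status cephfs'

    # Alternatively, dump the FSMap and check the per-rank state:
    ceph fs dump | grep state

The MDS log shows the same transitions with timestamps if you want exact numbers.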

From what you write, you seem to have a 40-second window for (1), which points
to a problem different from MON config values. This is supported by your
description including a MON election (this should never happen on a plain MDS
failover). Do you have services co-located? Which of the times (1) or (2) are
you referring to? How many FS clients do you have?
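
If you want to rule out the MON-side settings anyway, these are the knobs that
bound delay (1). The bracketed defaults are from memory, so treat them as
assumptions and check your own cluster:

    ceph config get mds mds_beacon_grace     # seconds without a beacon
                                             # before an MDS is replaced [15]
    ceph config get mds mds_beacon_interval  # seconds between beacons [4]
    ceph config get mon mon_lease            # MON lease, relevant only if
                                             # an election really happens [5]

With the defaults, the monitors should declare a dead MDS failed well before
40 seconds.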

Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Patrick Donnelly <pdonn...@redhat.com>
Sent: 03 May 2021 17:19:37
To: Lokendra Rathour
Cc: Ceph Development; dev; ceph-users
Subject: [ceph-users] Re: [ Ceph MDS MON Config Variables ] Failover Delay issue

On Mon, May 3, 2021 at 6:36 AM Lokendra Rathour
<lokendrarath...@gmail.com> wrote:
>
> Hi Team,
> I was setting up the ceph cluster with
>
>    - Node details: 3 MON, 2 MDS, 2 MGR, 2 RGW
>    - Deployment type: active/standby
>    - Testing mode: failover of the MDS node
>    - Setup: Octopus (15.2.7)
>    - OS: CentOS 8.3
>    - Hardware: HP
>    - RAM: 128 GB on each node
>    - OSDs: 2 (1 TB each)
>    - Operation: normal I/O with one mkdir per second
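
A minimal sketch of that test load, assuming a CephFS mount at /mnt/cephfs
(the mount point is hypothetical):

    # One mkdir per second; mkdir blocks while the FS is unavailable,
    # so a gap in the directory timestamps marks the failover window.
    i=0
    while true; do mkdir /mnt/cephfs/failover-test.$((i++)); sleep 1; done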
>
> *Test case: power off the active MDS node so that failover happens*
>
> *Observation:*
> We have observed that whenever the active MDS node is down, it takes around
> *40 seconds* to activate the standby MDS node.
> On further checking the logs of the newly active MDS node, we traced the
> delay to the following:
>
>    1. A 10-second delay, after which the MON calls for a new monitor election
>       [log] 0 log_channel(cluster) log [INF] : mon.cephnode1 calling
>       monitor election

In the process of killing the active MDS, are you also killing a monitor?
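
A quick way to check, assuming admin access on any cluster node:

    # List which daemons run on which host; if the MDS host also shows
    # up under "mon", powering it off takes a monitor down with it.
    ceph node ls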

--
Patrick Donnelly, Ph.D.
He / Him / His
Principal Software Engineer
Red Hat Sunnyvale, CA
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io