I would take the analogy of a RAID scenario. Basically, a standby is considered like a spare drive. If that spare drive goes down, it is good to know about the event, but it in no way indicates a degraded system; everything keeps running at top speed.
If you had multiple active MDS daemons and one went down, then I would say that is a degraded system, but we are still waiting for that feature.

On Tue, Oct 18, 2016 at 10:18 AM Goncalo Borges <goncalo.bor...@sydney.edu.au> wrote:
> Hi John.
>
> That would be good.
>
> In our case we are just picking that up simply through nagios and some
> fancy scripts parsing the dump of the MDS maps.
>
> Cheers
> Goncalo
> ________________________________________
> From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of John
> Spray [jsp...@redhat.com]
> Sent: 18 October 2016 22:46
> To: ceph-users
> Subject: [ceph-users] Feedback wanted: health warning when standby MDS dies?
>
> Hi all,
>
> Someone asked me today how to get a list of down MDS daemons, and I
> explained that currently the MDS simply forgets about any standby that
> stops sending beacons. That got me thinking about the case where a
> standby dies while the active MDS remains up -- the cluster has gone
> into a non-highly-available state, but we are not giving the admin any
> indication.
>
> I've suggested a solution here:
> http://tracker.ceph.com/issues/17604
>
> This is probably going to be a bit of a subjective thing in terms of
> whether people find it useful or find it to be annoying noise, so I'd
> be interested in feedback from people currently running cephfs.
>
> Cheers,
> John
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
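For anyone wanting to do the same monitoring in the meantime, here is a rough sketch of a Nagios-style check in the spirit of the "fancy scripts parsing the dump of the MDS maps" Goncalo mentions. It is a hypothetical example, not a tested plugin: it assumes the JSON map dump (e.g. from `ceph fs dump --format json` on newer releases) has a top-level "standbys" list, and the function names and threshold are my own invention; field names differ on older versions, so adjust for your release:

```python
# Hypothetical check for missing standby MDS daemons (assumption: the
# JSON map dump carries a top-level "standbys" list, as in newer Ceph
# releases; older `ceph mds dump` output is shaped differently).
import json


def count_standbys(dump_json):
    """Count standby MDS daemons in a JSON map dump string."""
    data = json.loads(dump_json)
    return len(data.get("standbys", []))


def check(dump_json, min_standbys=1):
    """Return (exit_code, message) using Nagios conventions:
    0 = OK, 1 = WARNING."""
    n = count_standbys(dump_json)
    if n < min_standbys:
        return 1, "WARNING: only %d standby MDS daemon(s) available" % n
    return 0, "OK: %d standby MDS daemon(s) available" % n
```

Wired into Nagios, the dump would come from something like `subprocess.check_output(["ceph", "fs", "dump", "--format", "json"])`, with the message printed and the exit code passed to `sys.exit()`.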
_______________________________________________ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com