Thanks, Dhairya, for the response.

It's Ceph 17.2.5.
I don't have the exact ceph -s output anymore since this is a past issue, but
the health warnings were roughly as below, and all PGs were active+clean AFAIR:
 
MDS slow requests
MDS behind on trimming

I don't know the root cause of the MDS crash, but I suspect it's related to the
active mon failure. Before the crash, the two nodes hosting the two active mons
were restarted one by one for a maintenance activity, which caused those mon
pods to restart.
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
