Hello,
Same issue with another cluster.
Here is the coredump tag 41659448-bc1b-4f8a-b563-d1599e84c0ab
Thanks,
Carl
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
On Mon, Jul 20, 2020 at 5:38 AM wrote:
>
> Hi,
>
> I made a fresh install of Ceph Octopus 15.2.3 recently.
> And after a few days, the two standby MDS daemons suddenly crashed with a
> segmentation fault error.
> I tried to restart them, but they do not start.
> [...]
Can you please increase MDS debugging:
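For reference, raising the MDS debug levels might look like the following sketch; the daemon name `mds.a` and the chosen verbosity levels are placeholders, not taken from this thread:

```shell
# Raise MDS debug verbosity cluster-wide (applies to all MDS daemons).
ceph config set mds debug_mds 20
ceph config set mds debug_ms 1

# Or target a single running daemon; "mds.a" is a placeholder name.
ceph tell mds.a config set debug_mds 20

# List recent crash reports recorded by the crash module.
ceph crash ls
```

The higher `debug_mds` level makes the segfault backtrace in the MDS log far more useful when filing a tracker issue.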
On 20/07/2020 10:48 pm, carlimeun...@gmail.com wrote:
After trying to restart the active MDS, it also failed. Now the cluster state
is:
Try deleting and recreating one of the MDS.
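Assuming a cephadm-managed Octopus cluster, deleting and recreating an MDS might be sketched as follows; the daemon and filesystem names are illustrative placeholders:

```shell
# List the current MDS daemons managed by the orchestrator.
ceph orch ps --daemon-type mds

# Remove one crashed MDS daemon; the name here is a placeholder.
ceph orch daemon rm mds.cephfs.host1.abcdef --force

# Re-apply the MDS service so the orchestrator redeploys the daemon
# ("cephfs" is a placeholder filesystem name).
ceph orch apply mds cephfs --placement=3
```

On a non-cephadm deployment the equivalent would be removing the daemon's systemd unit and keyring and recreating them by hand.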
--
Lindsay
After trying to restart the active MDS, it also failed. Now the cluster state
is:
# ceph status
  cluster:
    id:     dd024fe1-4996-4fed-ba57-03090e53724d
    health: HEALTH_WARN
            1 filesystem is degraded
            insufficient standby MDS daemons available
            29 daemons have recently crashed

  services: