upd: we fixed some issues, but the MDS is still read-only.
sh-4.4$ ceph -s
  cluster:
    id:     9213604e-b0b6-49d5-bcb3-f55ab3d79119
    health: HEALTH_WARN
            1 MDSs are read only
            7 daemons have recently crashed

  services:
    mon: 5 daemons, quorum bd,bj,bm,bn,bo (age 18h)
m
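For the "7 daemons have recently crashed" warning, something along these lines should show the crash reports (the crash ID is a placeholder):

ceph health detail
ceph crash ls
# inspect an individual report
ceph crash info <crash-id>
# archive the reports once reviewed so the warning clears
ceph crash archive-all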
Ah, thank you so much Eugen! This makes sense!
I will report back on what we changed and whether it worked or not :)
I wish you a nice weekend!
Hi Boris,
On Sat, Feb 11, 2023 at 7:07 AM Boris Behrens wrote:
>
> Hi,
> we use rgw as our backup storage, and it basically holds only compressed
> rbd snapshots.
> I would love to move these out of the replicated pool into an EC pool.
>
> I've read that I can set a default placement target for a user
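A rough sketch of how the EC data pool for such a placement target could be created (the profile name, k/m values and pool name are just examples):

# erasure-code profile and data pool for the new placement target
ceph osd erasure-code-profile set rgw-ec k=4 m=2 crush-failure-domain=host
ceph osd pool create default.rgw.ec.data erasure rgw-ec
ceph osd pool application enable default.rgw.ec.data rgw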
Hi,
do you have log output from the read-only MDS, ideally in debug mode?
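A rough sketch for raising the MDS log level (the mds name is a placeholder; 20 is very verbose, so lower it again once the logs are captured):

# persistently raise MDS debug logging
ceph config set mds debug_mds 20
ceph config set mds debug_ms 1
# or only on the affected daemon, at runtime
ceph tell mds.<name> config set debug_mds 20
# check which rank went read-only
ceph fs status
ceph health detail
# revert afterwards
ceph config rm mds debug_mds
ceph config rm mds debug_ms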
Quoting kreept.s...@gmail.com:
Hello everyone, and sorry. Maybe someone has already faced this problem.
A day ago we restored our OpenShift cluster; however, at the moment the
PVCs cannot connect to their pods. We loo
Hi,
we use rgw as our backup storage, and it basically holds only compressed
rbd snapshots.
I would love to move these out of the replicated pool into an EC pool.
I've read that I can set a default placement target for a user (
https://docs.ceph.com/en/octopus/radosgw/placement/). What happens to
th
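For reference, a hedged sketch of how such a placement target could be wired up and made a user's default, along the lines of the placement docs linked above (placement ID, pool names and uid are examples; editing the user metadata is one common way to change an existing user's default_placement):

# add the placement target to the zonegroup and point it at the EC data pool
radosgw-admin zonegroup placement add --rgw-zonegroup default --placement-id ec-placement
radosgw-admin zone placement add --rgw-zone default --placement-id ec-placement \
    --data-pool default.rgw.ec.data \
    --index-pool default.rgw.buckets.index \
    --data-extra-pool default.rgw.buckets.non-ec
radosgw-admin period update --commit   # multisite; otherwise restart the RGWs

# make it the default for one user by editing default_placement in the user metadata
radosgw-admin metadata get user:backupuser > user.json
# set "default_placement": "ec-placement" in user.json, then:
radosgw-admin metadata put user:backupuser < user.json
# note: placement is chosen at bucket creation, so this only affects new buckets;
# existing buckets and objects stay in their current pools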