Hi all,

I would really appreciate any help on this, as googling has led me to a dead end.

We have two data centers, each with four servers running Ceph on Kubernetes in a multisite configuration. Both clusters are working fine (syncing correctly), but the master cluster recently changed status to HEALTH_WARN with the warning "32 large omap objects" in the log pool.
This seems to be coming from the sync error list. To count the omap keys per object in the log pool, I ran:

for i in `rados -p wilxite.rgw.log ls`; do
  echo -n "$i: "
  rados -p wilxite.rgw.log listomapkeys "$i" | wc -l
done