Hi,
Can anyone shed light on this please?
Our cluster crashed, and I have now managed to get everything back up
and running; the OSDs have nearly rebalanced, but I am seeing issues with RGW.
2024-02-05T01:29:56.272+ 7f7237e75f40 20 rados->read ofs=0 len=0
2024-02-05T01:29:56.276+
Hi,
I have a small cluster with some faulty disks in it, and I want to clone
the data from those faulty disks onto new ones.
The cluster is currently down and I am unable to run things like
ceph-bluestore-tool fsck, but ceph-bluestore-tool bluefs-export does
appear to be working.
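Since bluefs-export still works, one common approach (a sketch of my own, not something from the original post) is to first clone the failing device block-for-block with GNU ddrescue, and only then run the export against the OSD's data directory. The device paths, OSD id, map file, and output directory below are all placeholders; the OSD must be stopped before either step.

```shell
# First pass: copy everything readable quickly, skipping bad areas
# (-n = no scraping yet). /dev/sdX is the faulty disk, /dev/sdY the
# new one, rescue.map records progress so runs can be resumed.
ddrescue -f -n /dev/sdX /dev/sdY rescue.map

# Second pass: go back and retry the bad sectors up to 3 times.
ddrescue -f -r3 /dev/sdX /dev/sdY rescue.map

# Separately, salvage the BlueFS contents (RocksDB, etc.) from the
# OSD data directory; "ceph-0" and the output dir are placeholders.
ceph-bluestore-tool bluefs-export \
    --path /var/lib/ceph/osd/ceph-0 \
    --out-dir /mnt/bluefs-export
```

Working from the ddrescue clone rather than the original disk means repeated read attempts do not stress the failing hardware further.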
Any help would be
Hi,
Due to idiotic behaviour on my part, I made a mistake while replacing some
disks in our data centre, and our cluster ended up completely powered off!
I have been using Ceph for many years (since Firefly) but only recently
upgraded to Reef and moved to the cephadm/Podman setup. I am trying to
figure