Hello,
It could be related to the "erasure-code-profile", which is defined with a
different k+m on the master and the secondary.
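A quick way to compare the profiles on both clusters (a sketch; the profile
name below is a placeholder, use whatever the first command returns):

    # list the erasure-code profiles defined on this cluster
    ceph osd erasure-code-profile ls

    # show k, m and the crush failure domain for one profile
    ceph osd erasure-code-profile get my-ec-profile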
As for the size of the buckets, I guess it is probably due to compression
being enabled on the secondary radosgw ("rgw compression: yes").
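To check whether compression is really active on the secondary (a sketch;
"secondary" is a placeholder zone name, and the exact field name varies
between releases, hence the loose grep for "compression"):

    # a non-empty compression field on a placement target means
    # objects are compressed on write
    radosgw-admin zone get --rgw-zone=secondary | grep -B2 compression

    # pool-level compression statistics (USED COMPR / UNDER COMPR columns)
    ceph df detail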
Regards
From: Scheurer François
Dear Ceph contributors
While our (new) rgw secondary zone is doing the initial data sync from our
master zone,
we noticed that the reported capacity usage was getting higher than on the
primary zone:
Master Zone:
ceph version 14.2.5
zone parameters:
"log_meta":
On 12/31/20 4:16 AM, Glen Baars wrote:
Hello Ceph Users,
Since upgrading from Nautilus to Octopus (the cluster started on Luminous),
I have been trying to debug why the RocksDB/WAL is maxing out the SSD drives
(QD > 32, 12,000 read IOPS, 200 write IOPS).
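One way to see whether RocksDB compactions are behind that read load (a
sketch; osd.0 stands in for any affected OSD, queried on its host via the
admin socket):

    # RocksDB counters: compaction bytes read/written, etc.
    ceph daemon osd.0 perf dump rocksdb

    # BlueFS counters: how much traffic actually hits the DB/WAL device
    ceph daemon osd.0 perf dump bluefs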
From what Nautilus release did you upgrade?
Hello,
hope you had a nice Xmas, and I wish all of you a good and happy new year
in advance...
Yesterday my Ceph Nautilus 14.2.15 cluster had a disk with unreadable
sectors; after several tries the OSD was marked down, and the rebalancing
started and has since finished successfully. ceph osd stat shows