Hi,

I have a Ceph Nautilus (14.2.9) cluster with 10 nodes. Each node has
19x16TB disks attached.

I created the radosgw pools. The secondaryzone.rgw.buckets.data pool is
configured as EC 8+2 (jerasure).
ceph df showed 2.1PiB MAX AVAIL for it.

Then I configured radosgw as a secondary zone, and 100TiB of S3 data was
replicated in.

But oddly enough, ceph df now shows 1.8PiB MAX AVAIL for the same pool,
even though only 100TiB of data has been written (ceph df confirms this
as well). That is a 300TiB drop for 100TiB of data; I cannot figure out
where the other 200TiB of capacity has gone.
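For what it's worth, here is my back-of-the-envelope check (a sketch,
assuming the usual jerasure k=8, m=2 layout) of what 100TiB of user data
should cost in raw space:

```python
# Sanity check of the raw-space overhead for an EC 8+2 pool.
# Assumption: jerasure with k=8 data chunks and m=2 coding chunks,
# giving a raw multiplier of (k+m)/k = 1.25.

def ec_raw_usage(user_data_tib, k=8, m=2):
    """Raw disk space consumed for a given amount of user data."""
    return user_data_tib * (k + m) / k

written = 100  # TiB of S3 data replicated into the pool
print(ec_raw_usage(written))  # 125.0 TiB raw, not 300 TiB
```

Even counted in raw terms, 100TiB of data should only consume about
125TiB with the 8+2 overhead, and as far as I understand MAX AVAIL is
reported per pool already adjusted for that ratio, so neither number
seems to explain a 300TiB drop.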

Would someone please tell me what I am missing?

Thanks.
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
