Hello,
I ran into something strange on a Pacific (16.2.6) cluster.
I added 8 new empty spinning disks to this running cluster, which is
configured with:
# ceph orch ls osd --export
service_type: osd
service_id: ar_osd_hdd_spec
service_name: osd.ar_osd_hdd_spec
placement:
host_pattern: '*'
spec:
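(The spec body is truncated above. For context only, a catch-all HDD drivegroup spec of this shape typically continues along these lines; this is a hypothetical sketch, not the poster's actual spec:)

```yaml
# Hypothetical continuation for illustration -- the original spec is cut off.
service_type: osd
service_id: ar_osd_hdd_spec
placement:
  host_pattern: '*'
spec:
  # Match only rotational (spinning) devices as data devices
  data_devices:
    rotational: 1
  filter_logic: AND
  objectstore: bluestore
```

With `host_pattern: '*'` and a `rotational: 1` filter, cephadm would automatically pick up newly inserted empty HDDs on any host, which matches the behaviour described above.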
Hello,
> Recently, I noticed a lot of Broken pipe errors in the logs from all the
> RGW nodes.
> We set up the cluster using ceph-ansible. The current cluster version is
> Octopus (15.2.13). After checking the configuration on the RGW
> nodes, I see that there is a config
fix is about listing objects inside
buckets, not about the bucket list itself (but I started reading the Ceph
code quite recently, so I’m not sure I understand it well yet ;-)).
Do you have an ETA for the next Pacific release?
Thank you.
—
Guillaume
> On 7/20/21 11:28 AM, [AR] Guillaume CephML w
Hi all,
Context:
We are moving a customer's users/buckets/objects from another object
storage service to Ceph.
This customer has 2 users: “test” and “prod”; the “test” user has
53069 buckets, and the “prod” user has 285291 buckets.
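(With bucket counts this large, a quick way to verify how many buckets a user owns is to count the entries in the JSON array that `radosgw-admin bucket list` prints; a sketch, assuming jq is available and the user IDs are literally “test” and “prod”:)

```shell
# radosgw-admin prints the user's buckets as a JSON array of names;
# jq 'length' counts the array elements.
radosgw-admin bucket list --uid=prod | jq 'length'

# The same jq pipeline on sample data, for illustration:
printf '["b1","b2","b3"]' | jq 'length'
```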
Ceph is at 16.2.5 (installed as 16.2.4 and