[ceph-users] In which cases can the "mon_osd_full_ratio" and the "mon_osd_backfillfull_ratio" be exceeded?

2023-09-25 Thread Raphael Laguerre
Hello, In which cases can the "mon_osd_full_ratio" and the "mon_osd_backfillfull_ratio" be exceeded? More specifically, if a subset of OSDs fails and there isn't any more space left on the remaining OSDs to migrate the PGs of the failed OSDs without exceeding either the
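These ratios can be inspected and adjusted at runtime; a minimal sketch (mon_osd_full_ratio only seeds the initial OSDMap value, so the live values come from the OSDMap itself):

# Show the live full/backfillfull/nearfull ratios stored in the OSDMap
ceph osd dump | grep -E 'full_ratio|backfillfull_ratio|nearfull_ratio'

# Raise them temporarily, e.g. while recovering from failed OSDs
ceph osd set-full-ratio 0.97
ceph osd set-backfillfull-ratio 0.92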

[ceph-users] Re: Join us for the User + Dev Relaunch, happening this Thursday!

2023-09-25 Thread FastInfo Class
Thanks

[ceph-users] Re: S3website range requests - possible issue

2023-09-25 Thread Ondřej Kukla
Hello Casey, Thanks a lot for that. I forgot to mention in my previous message that I was able to trigger the prefetch with the header bytes=1-10. You can see the read 1~10 in the OSD logs I've sent here - https://pastebin.com/nGQw4ugd Which is weird, as it seems that it is not the same
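For reference, a range request of that shape can be reproduced with curl; a sketch, where the hostname is a placeholder for an S3website endpoint:

# Fetch only bytes 1-10 of an object through the S3website endpoint
# (bucket.s3-website.example.com is a hypothetical hostname)
curl -v -H "Range: bytes=1-10" http://bucket.s3-website.example.com/object.bin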

[ceph-users] Balancer blocked as autoscaler not acting on scaling change

2023-09-25 Thread bc10
Hi Folks, We are currently running with one nearfull OSD and 15 nearfull pools. The most full OSD is about 86% full but the average is 58% full. However, the balancer is skipping a pool on which the autoscaler is trying to complete a pg_num reduction from 131,072 to 32,768
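The usual commands for checking this state, as a sketch (output will differ per cluster):

# Per-OSD utilisation, to compare the fullest OSD against the average
ceph osd df

# Which pools the autoscaler is still resizing, and their target pg_num
ceph osd pool autoscale-status

# Whether the balancer is enabled and what plan, if any, it is executing
ceph balancer status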

[ceph-users] September Ceph Science Virtual User Group

2023-09-25 Thread Kevin Hrpcek
Hey all, We will be having a Ceph science/research/big cluster call on Wednesday September 27th. If anyone wants to discuss something specific they can add it to the pad linked below. If you have questions or comments you can contact me. This is an informal open call of community members

[ceph-users] Re: rgw: strong consistency for (bucket) policy settings?

2023-09-25 Thread Casey Bodley
On Sat, Sep 23, 2023 at 5:05 AM Matthias Ferdinand wrote:
>
> On Fri, Sep 22, 2023 at 06:09:57PM -0400, Casey Bodley wrote:
> > each radosgw does maintain its own cache for certain metadata like
> > users and buckets. when one radosgw writes to a metadata object, it
> > broadcasts a notification
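If stale reads from that cache are suspected, it can be disabled per gateway for testing; a sketch, assuming centralized config, and note this trades performance for consistency:

# Make every metadata read (users, buckets, policies) hit RADOS directly
ceph config set client.rgw rgw_cache_enabled false
# The radosgw daemons must be restarted for this to take effect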

[ceph-users] Re: CEPH zero iops after upgrade to Reef and manual read balancer

2023-09-25 Thread Mosharaf Hossain
Greetings Josh, I executed the command today, and it effectively resolved the issue. Within moments, my pools became active, and read/write IOPS started to rise. Furthermore, the hypervisor and VMs can now communicate seamlessly with the Ceph cluster. *Command run:* ceph osd rm-pg-upmap-primary
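For context, the primary mappings created by the Reef read balancer can be listed and removed per PG; a sketch, where the PG id 1.0 is a placeholder:

# List the pg_upmap_primary entries the read balancer created
ceph osd dump | grep pg_upmap_primary

# Remove the primary mapping for one PG (repeat per affected PG)
ceph osd rm-pg-upmap-primary 1.0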

[ceph-users] How to properly remove the cluster_network

2023-09-25 Thread Jan Marek
Hello, I would like to remove the cluster_network, because I'm using 10Gbps adapters for it, while for the public_network I have two 25Gbps adapters in a LAG group... I have a cluster managed by the orchestrator.

# ceph config dump
...
global advanced cluster_network 172.30.0.0/16
global advanced public_network
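A sketch of the removal via the central config (OSDs keep their old cluster addresses until restarted, so restart them afterwards, one failure domain at a time):

# Drop the setting from the global section of the monitors' config database
ceph config rm global cluster_network

# Confirm OSDs no longer see a value
ceph config get osd cluster_network

# Restart OSD daemons so they rebind; the daemon name osd.0 is a placeholder
ceph orch daemon restart osd.0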

[ceph-users] outdated mds slow requests

2023-09-25 Thread Ben
Hi, It is running 17.2.5. There are slow request warnings in the cluster log. Running ceph tell mds.5 dump_ops_in_flight gives the following. These ops look outdated, and the clients were k8s pods. There are warnings of this kind on other MDS daemons as well. How could they be cleared from the warnings safely? Many thanks.
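The usual inspection path, as a sketch (the client id 1234 is a placeholder, and evicting a live client will blocklist it, so verify the pod is really gone first):

# Show the stuck operations on the affected MDS
ceph tell mds.5 dump_ops_in_flight

# Map those ops to client sessions; stale k8s pod mounts show up here
ceph tell mds.5 client ls

# Evict the dead session (1234 is a placeholder id)
ceph tell mds.5 client evict id=1234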