Hello,
In which cases can the "mon_osd_full_ratio" and the
"mon_osd_backfillfull_ratio" thresholds be exceeded?
More specifically: in case a subset of OSDs fails, what happens if there isn't
enough space left on the remaining OSDs to migrate the PGs of the failed OSDs
without exceeding either of these ratios?
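For reference, the current values of these ratios can be read from the osdmap,
and raised temporarily if needed (the value below is only illustrative):

# show the currently active full/backfillfull/nearfull ratios
ceph osd dump | grep ratio
# temporarily raise the backfillfull ratio, e.g. to 0.92
ceph osd set-backfillfull-ratio 0.92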
Thanks
Hello Casey,
Thanks a lot for that.
I forgot to mention in my previous message that I was able to trigger
the prefetch with the header bytes=1-10.
You can see the read 1~10 in the OSD logs I’ve sent here -
https://pastebin.com/nGQw4ugd
Which is weird, as it seems that it is not the same
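For completeness, this is roughly how I issue the ranged read (the endpoint,
bucket and object names below are placeholders, and the request assumes
anonymous read access or a pre-signed URL):

# ranged GET that should show up as a read 1~10 on the OSD side
curl -s -D - -o /dev/null -H "Range: bytes=1-10" \
  http://rgw.example.com/mybucket/myobject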
Hi Folks,
We are currently running with one nearfull OSD and 15 nearfull pools. The
fullest OSD is about 86% full, while the average is 58%. However, the balancer
is skipping a pool on which the autoscaler is trying to complete a pg_num
reduction from 131,072 to 32,768
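For reference, the commands I'm watching this with:

# per-pool pg_num targets and autoscaler progress
ceph osd pool autoscale-status
# balancer mode, activity and last optimize result
ceph balancer status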
Hey all,
We will be having a Ceph science/research/big cluster call on Wednesday,
September 27th. If anyone wants to discuss something specific, they can
add it to the pad linked below. If you have questions or comments you
can contact me.
This is an informal open call of community members
On Sat, Sep 23, 2023 at 5:05 AM Matthias Ferdinand wrote:
>
> On Fri, Sep 22, 2023 at 06:09:57PM -0400, Casey Bodley wrote:
> > each radosgw does maintain its own cache for certain metadata like
> > users and buckets. when one radosgw writes to a metadata object, it
> > broadcasts a notification
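(For anyone following along: those watch/notify registrations can be seen
directly. Assuming the default RGW pool names, every running radosgw should
show up as a watcher on the notify objects in the control pool:)

# list the radosgw instances watching one of the cache-notify objects
rados -p default.rgw.control listwatchers notify.0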
Greetings Josh,
I executed the command today, and it effectively resolved the issue. Within
moments, my pools became active, and read/write IOPS started to rise.
Furthermore, the hypervisor and VMs can now communicate seamlessly with the
Ceph cluster.
*Command run:*
ceph osd rm-pg-upmap-primary
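For anyone hitting the same issue, the general shape of what I ran (the pgid
below is a placeholder; I looped over the affected PGs):

# pg-upmap-primary mappings are visible in the osdmap dump
ceph osd dump | grep pg_upmap_primar
# remove the primary mapping for one PG (1.0 is a placeholder pgid)
ceph osd rm-pg-upmap-primary 1.0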
Hello,
I would like to remove cluster_network, because I'm using it on
10Gbps adapters, while for public_network I have two 25Gbps
adapters in a LAG group...
I have a cluster managed by the orchestrator.
# ceph config dump
...
global advanced cluster_network 172.30.0.0/16
global advanced public_network
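What I'm considering, for the record (assuming cluster_network is only set at
the global level, as in the dump above), is simply dropping the option and then
restarting the OSDs so they stop using the old cluster network:

# remove the global cluster_network setting
ceph config rm global cluster_network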
Hi,
It is running 17.2.5, and there are slow request warnings in the cluster log.
Running
ceph tell mds.5 dump_ops_in_flight
gives the following.
These look outdated, and the clients were k8s pods. There are warnings of this
kind on other MDSes as well. How could these warnings be cleaned up safely?
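In case it matters, this is what I've found so far but haven't dared to run
yet (the session id below is a placeholder):

# list the sessions on that mds to identify the stale clients
ceph tell mds.5 client ls
# evict a single stale session by id (4305 is a placeholder)
ceph tell mds.5 client evict id=4305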
Many thanks.