[ceph-users] Re: Octopus client for Nautilus OSD/MON

2022-06-02 Thread Jiatong Shen
Thank you very much! On Thu, Jun 2, 2022 at 11:23 PM Konstantin Shalygin wrote: > The "next" release is always compatible with "previous one" clusters > > > k > Sent from my iPhone > > > On 2 Jun 2022, at 16:28, Jiatong Shen wrote: > > > > Hello, > > > > where can I find the librbd compatibility

[ceph-users] Re: Slow delete speed through the s3 API

2022-06-02 Thread Wesley Dillingham
Is it just your deletes that are slow, or writes and reads as well? On Thu, Jun 2, 2022, 4:09 PM J-P Methot wrote: > I'm following up on this as we upgraded to Pacific 16.2.9 and deletes > are still incredibly slow. The pool rgw is using is a fairly small > erasure coding pool set at 8 + 3. Is
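A common culprit for slow S3 deletes is the RGW garbage collector, since object deletion is deferred to GC. A minimal sketch of how to check the backlog and the GC knobs, assuming the RGW options are managed via the centralized config under the client.rgw section; the value at the end is illustrative, not a recommendation:

    # How much deferred-delete work is queued up
    radosgw-admin gc list --include-all | head

    # Current GC tuning
    ceph config get client.rgw rgw_gc_obj_min_wait
    ceph config get client.rgw rgw_gc_processor_period
    ceph config get client.rgw rgw_gc_max_concurrent_io

    # Example: let GC issue more concurrent IO (illustrative value)
    ceph config set client.rgw rgw_gc_max_concurrent_io 20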

[ceph-users] Re: Help needed picking the right amount of PGs for (Cephfs) metadata pool

2022-06-02 Thread Ramana Venkatesh Raja
On Thu, Jun 2, 2022 at 11:40 AM Stefan Kooman wrote: > > Hi, > > We have a CephFS filesystem holding 70 TiB of data in ~ 300 M files and > ~ 900 M subdirectories. We currently have 180 OSDs in this cluster. > > POOL ID PGS STORED (DATA) (OMAP) OBJECTS USED > (DATA)
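As a starting point, a hedged sketch of the usual PG sizing check: the rule of thumb is roughly 100 PG replicas per OSD summed over all pools, with each pool's pg_num rounded to a power of two, and the metadata pool sized generously in PGs relative to its stored bytes because its load is OMAP-heavy. The pool name and number below are illustrative:

    # What the autoscaler would pick (Nautilus and later)
    ceph osd pool autoscale-status

    # Rule of thumb: total PG replicas ~= OSDs * 100
    # e.g. 180 OSDs * 100 = 18000 PG replicas to spread across all pools
    # Example: grow the metadata pool to 256 PGs (illustrative pool name)
    ceph osd pool set cephfs_metadata pg_num 256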

[ceph-users] Re: Octopus client for Nautilus OSD/MON

2022-06-02 Thread Konstantin Shalygin
The "next" release is always compatible with "previous one" clusters k Sent from my iPhone > On 2 Jun 2022, at 16:28, Jiatong Shen wrote: > > Hello, > > where can I find the librbd compatibility matrix? For example, is the Octopus > client compatible with a Nautilus server? Thank you. > > -- > >
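There is no separate librbd compatibility matrix in the docs as far as I know; a practical check on a live setup is to compare the cluster's daemon versions with the feature releases that connected clients advertise. A minimal sketch:

    # Versions of the running mons/mgrs/osds
    ceph versions

    # Release level (e.g. luminous, nautilus, octopus) of connected clients
    ceph features

    # Version of the locally installed client tooling / librbd
    rbd --version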

[ceph-users] Unable to deploy new manager in octopus

2022-06-02 Thread Patrick Vranckx
Hi, On my test cluster, I migrated from Nautilus to Octopus and then converted most of the daemons to cephadm. I ran into a lot of problems with podman 1.6.4 on CentOS 7 behind an HTTPS proxy, because my servers are on a private network. Now, I'm unable to deploy new managers and the cluster is in
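Once podman can actually reach the registry through the proxy, redeploying managers is usually just a matter of re-applying the mgr service spec. A minimal sketch, with placeholder hostnames:

    # See what cephadm last logged/complained about
    ceph log last cephadm

    # Ask the orchestrator to place two mgr daemons on specific hosts
    ceph orch apply mgr --placement="host1 host2"

    # Check the result
    ceph orch ps | grep mgr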

[ceph-users] OSD_FULL raised when osd was not full (octopus 15.2.16)

2022-06-02 Thread Stefan Kooman
Hi, Yesterday we hit OSD_FULL / POOL_FULL conditions for two brief moments. As all OSDs are present in all pools, all IO was stalled, which impacted a few MDS clients (they got evicted). Although the impact was limited, I *really* would like to understand how that could happen, as it should not
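For the post-mortem, the usual checks are the per-OSD utilization and the full ratios in force at the time; a single OSD crossing full_ratio is enough to raise OSD_FULL even when the pools look far from full overall. A minimal sketch of the inspection commands:

    # Per-OSD utilization and the most-full OSDs
    ceph osd df tree

    # Thresholds for nearfull / backfillfull / full
    ceph osd dump | grep full_ratio

    # Per-pool usage including raw overhead
    ceph df detail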

[ceph-users] Octopus client for Nautilus OSD/MON

2022-06-02 Thread Jiatong Shen
Hello, where can I find the librbd compatibility matrix? For example, is the Octopus client compatible with a Nautilus server? Thank you. -- Best Regards, Jiatong Shen

[ceph-users] Re: Moving rbd-images across pools?

2022-06-02 Thread Jan-Philipp Litza
Hey Angelo, what you're asking for is "Live Migration". https://docs.ceph.com/en/latest/rbd/rbd-live-migration/ says: The live-migration copy process can safely run in the background while the new target image is in use. There is currently a requirement to temporarily stop using the source
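For context, the workflow on that page boils down to three rbd subcommands; the pool and image names below are placeholders:

    # Link a new target image to the source; clients must reopen via the target
    rbd migration prepare sourcepool/image1 targetpool/image1

    # Copy the data in the background while the target image is in use
    rbd migration execute targetpool/image1

    # Finalize and remove the source once the copy is complete
    rbd migration commit targetpool/image1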

[ceph-users] Multi-active MDS cache pressure

2022-06-02 Thread Eugen Block
Hi, I'm currently debugging a recurring issue with multi-active MDS. The cluster is still on Nautilus and can't be upgraded at this time. There have been many discussions about "cache pressure", and I was able to find the right settings a couple of times, but before I change too much in
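The settings that usually come up in the "cache pressure" threads are the MDS cache size and the cap-recall throttles, plus looking at which sessions hold the most caps. A minimal sketch, with illustrative values rather than recommendations and a placeholder MDS name:

    # Which clients are reported as failing to respond to cache pressure
    ceph health detail

    # MDS cache size in bytes (example value only)
    ceph config set mds mds_cache_memory_limit 8589934592

    # Cap recall throttle (option available in Nautilus and later)
    ceph config set mds mds_recall_max_caps 30000

    # Sessions and their cap counts on a given MDS
    ceph tell mds.<name> session ls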

[ceph-users] Re: MDS stuck in replay

2022-06-02 Thread Magnus HAGDORN
At this stage we are not so worried about recovery, since we have moved to our new Pacific cluster. The problem arose during one of the nightly syncs from the old cluster to the new cluster. However, we are quite keen to use this as a learning opportunity to see what we can do to bring this filesystem
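For anyone hitting a similar "stuck in replay" state, the usual first steps are to confirm the rank's state and do a read-only inspection of the journal it is replaying; anything beyond that belongs with the disaster-recovery docs. A minimal sketch, with the filesystem and daemon names as placeholders:

    # Confirm the rank is sitting in up:replay
    ceph fs status
    ceph health detail

    # Read-only check of rank 0's journal for damage
    cephfs-journal-tool --rank=<fsname>:0 journal inspect

    # Current state as reported by the daemon itself (run on the MDS host)
    ceph daemon mds.<name> status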