[ceph-users] Re: Write issues on CephFS mounted with root_squash

2024-05-16 Thread Kotresh Hiremath Ravishankar
On Fri, May 17, 2024 at 11:52 AM Nicola Mori wrote: > Thank you Kotresh! My cluster is currently on Reef 18.2.2, which should > be the current version and which is affected. Will the fix be included > in the next Reef release? > Yes, it's already merged to the reef branch, and should be available…

[ceph-users] Re: MDS behind on trimming every 4-5 weeks causing issue for ceph filesystem

2024-05-16 Thread Kotresh Hiremath Ravishankar
Hi, ~6K log segments to be trimmed, that's huge. 1. Are there any custom configs configured on this setup? 2. Is subtree pinning enabled? 3. Are there any warnings w.r.t. rados slowness? 4. Please share the mds perf dump to check for latencies and other stuff. $ ceph tell mds.<name> perf dump Thanks…
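For reference, a sketch of the diagnostics requested above (the daemon name mds.myfs-a is a placeholder taken from the thread below; substitute an active MDS):

  # Dump all perf counters, including journal and request latencies.
  ceph tell mds.myfs-a perf dump
  # Narrow the dump to the journal/log counters only:
  ceph tell mds.myfs-a perf dump mds_log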

[ceph-users] Re: Write issues on CephFS mounted with root_squash

2024-05-16 Thread Nicola Mori
Thank you Kotresh! My cluster is currently on Reef 18.2.2, which should be the current version and which is affected. Will the fix be included in the next Reef release? Cheers, Nicola

[ceph-users] Re: Write issues on CephFS mounted with root_squash

2024-05-16 Thread Kotresh Hiremath Ravishankar
Hi Nicola, Yes, this issue is already fixed in main [1] and the Quincy backport [2] is still pending to be merged. Hopefully it will be available in the next Quincy release. [1] https://github.com/ceph/ceph/pull/48027 [2] https://github.com/ceph/ceph/pull/54469 Thanks and Regards, Kotresh H R On We…

[ceph-users] MDS behind on trimming every 4-5 weeks causing issue for ceph filesystem

2024-05-16 Thread Akash Warkhade
Hi, We are using rook-ceph with operator 1.10.8 and Ceph 17.2.5. We are using a Ceph filesystem with 4 MDS, i.e. 2 active & 2 standby. Every 3-4 weeks the filesystem has an issue, i.e. in ceph status we can see the warnings below: 2 MDS report slow requests; 2 MDS behind on trimming; mds.myfs-a(…
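A hedged sketch of first checks for this kind of warning (standard Ceph commands; myfs is a placeholder filesystem name):

  # Overall MDS state: ranks, standby daemons, and per-rank activity.
  ceph fs status myfs
  # The segment count is compared against this trimming limit (default 128):
  ceph config get mds mds_log_max_segments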

[ceph-users] Re: cephfs-data-scan orphan objects while mds active?

2024-05-16 Thread Gregory Farnum
It's unfortunately more complicated than that. I don't think that forward scrub tag gets persisted to the raw objects; it's just a notation for you. And even if it were, it would only be on the first object in every file; larger files would have many more objects forward scrub doesn't touch. This…
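For context, a tagged forward scrub is started like this (a sketch; the rank mds.cephfs:0, the path, and the tag string are placeholders):

  # Recursively scrub from the root and label the pass with a tag.
  ceph tell mds.cephfs:0 scrub start / recursive my-scrub-tag
  # Check progress:
  ceph tell mds.cephfs:0 scrub status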

[ceph-users] Re: Please discuss about Slow Peering

2024-05-16 Thread Anthony D'Atri
If using jumbo frames, also ensure that they're consistently enabled on all OS instances and network devices. > On May 16, 2024, at 09:30, Frank Schilder wrote: > > This is a long shot: if you are using octopus, you might be hit by this > pglog-dup problem: > https://docs.clyso.com/blog/osds-with-unlimited-ram-growth/ …
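A quick end-to-end MTU check along those lines (a sketch; host and interface names are placeholders):

  # Send an unfragmentable 9000-byte frame: 8972 = 9000 - 20 (IP) - 8 (ICMP).
  ping -M do -s 8972 -c 3 osd-host-2
  # Confirm the configured MTU on each node's interface:
  ip link show eth0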

[ceph-users] Re: cephadm basic questions: image config, OS reimages

2024-05-16 Thread Adam King
At least for the current up-to-date reef branch (not sure what reef version you're on), when --image is not provided to the shell, it should try to infer the image in this order: 1. from the CEPHADM_IMAGE env variable; 2. if you pass --name with a daemon name to the shell command, it will t…
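A sketch of the first lookup path (the registry path and tag are placeholders):

  # cephadm consults this variable before its other inference sources.
  export CEPHADM_IMAGE=docker-registry.example.org/ceph/ceph:v18.2.2
  cephadm shell -- ceph -s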

[ceph-users] Re: cephadm basic questions: image config, OS reimages

2024-05-16 Thread Robert Sander
On 5/16/24 17:50, Robert Sander wrote: cephadm osd activate HOST would re-activate the OSDs. Small but important typo: it's ceph cephadm osd activate HOST. Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin https://www.heinlein-support.de Tel: 030 / 405051-…
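So the corrected invocation after an OS reimage is (HOST is a placeholder for the re-imaged hostname):

  # Re-activate the existing OSDs on a freshly re-imaged host.
  ceph cephadm osd activate HOST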

[ceph-users] Re: cephadm basic questions: image config, OS reimages

2024-05-16 Thread Robert Sander
Hi, On 5/16/24 17:44, Matthew Vernon wrote: cephadm --image docker-registry.wikimedia.org/ceph shell ...but is there a good way to arrange for cephadm to use the already-downloaded image without having to remember to specify --image each time? You could create a shell alias: alias cephsh…
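A sketch of what that alias might look like (the alias name comes from the truncated reply; the command body is an assumption built from the quoted question):

  # Always use the image from the local registry when entering a shell.
  alias cephsh='cephadm --image docker-registry.wikimedia.org/ceph shell'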

[ceph-users] cephadm basic questions: image config, OS reimages

2024-05-16 Thread Matthew Vernon
Hi, I've some experience with Ceph, but haven't used cephadm much before, and am trying to configure a pair of reef clusters with cephadm. A couple of newbie questions, if I may: * cephadm shell image: I'm in an isolated environment, so pulling from a local repository. I bootstrapped OK with…
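One hedged way to make such a cluster default to a local registry (assuming the standard container_image option; the registry path is taken from the reply elsewhere in this digest):

  # Point the cluster-wide default container image at the local registry.
  ceph config set global container_image docker-registry.wikimedia.org/ceph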

[ceph-users] Re: Please discuss about Slow Peering

2024-05-16 Thread Frank Schilder
This is a long shot: if you are using octopus, you might be hit by this pglog-dup problem: https://docs.clyso.com/blog/osds-with-unlimited-ram-growth/. They don't mention slow peering explicitly in the blog, but it's also a consequence, because the up+acting OSDs need to go through the PG_log during…
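For reference, offline remediation of oversized PG logs involves ceph-objectstore-tool; a hedged sketch of the kind of command used (the OSD data path and PGID are placeholders, and the OSD must be stopped first):

  # With the OSD stopped, trim the accumulated pg_log entries for one PG.
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 \
      --pgid 2.7 --op trim-pg-log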

[ceph-users] Re: Reef: RGW Multisite object fetch limits

2024-05-16 Thread Janne Johansson
On Thu, May 16, 2024 at 07:47 Jayanth Reddy wrote: > > Hello Community, > In addition, we've 3+ Gbps links and the average object size is 200 > kilobytes. So the utilization is about 300 Mbps to ~1.8 Gbps and not more > than that. > We seem to saturate the link when the secondary zone fetches big…
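When digging into multisite fetch behavior like this, the sync state can be inspected from either zone; a minimal sketch (the bucket name is a placeholder):

  # Replication lag and shards still behind, run on the secondary zone.
  radosgw-admin sync status
  # Per-bucket sync detail:
  radosgw-admin bucket sync status --bucket=mybucket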