[ceph-users] Re: pg_num != pgp_num - and unable to change.

2023-07-06 Thread Anthony D'Atri
Indeed. For clarity, this process is not the same as the pg_autoscaler. It's real easy to conflate the two, along with the balancer module, so I like to call that out to reduce confusion. > On Jul 6, 2023, at 18:01, Dan van der Ster wrote: > > Since nautilus, pgp_num (and pg_num) will be
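The three mechanisms Anthony distinguishes (pg_autoscaler, balancer module, and the gradual pgp_num ramp) can each be inspected separately. A quick sketch with standard Ceph CLI commands; `<pool-name>` is a placeholder:

```shell
# pg_autoscaler: per-pool target PG counts and autoscale mode
ceph osd pool autoscale-status

# balancer module: moves PGs between OSDs; it never changes pg_num/pgp_num
ceph balancer status

# the gradual PG split/merge ramp shows up as pg_num vs pgp_num per pool
ceph osd pool get <pool-name> pg_num
ceph osd pool get <pool-name> pgp_num
```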

[ceph-users] Re: CephFS snapshots: impact of moving data

2023-07-06 Thread Gregory Farnum
Moving files around within the namespace never changes the way the file data is represented within RADOS. It’s just twiddling metadata bits. :) -Greg On Thu, Jul 6, 2023 at 3:26 PM Dan van der Ster wrote: > Hi Mathias, > > Provided that both subdirs are within the same snap context (subdirs

[ceph-users] Re: CephFS snapshots: impact of moving data

2023-07-06 Thread Dan van der Ster
Hi Mathias, Provided that both subdirs are within the same snap context (subdirs below where the .snap is created), I would assume that in the mv case, the space usage is not doubled: the snapshots point at the same inode and it is just linked at different places in the filesystem. However, if
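A minimal illustration of Dan's point about the snap context (hypothetical paths; assumes snapshots are enabled on the filesystem):

```shell
# create a snapshot above both subdirs, so they share one snap context
mkdir /mnt/cephfs/project/.snap/before-move

# moving a file between subdirs under that snapshot point only relinks
# the inode; the file data in RADOS is not copied, so space is not doubled
mv /mnt/cephfs/project/subdir-a/data.bin /mnt/cephfs/project/subdir-b/

# moving it out of the snap context entirely is the case where the old
# data blocks must be preserved on behalf of the snapshot
```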

[ceph-users] Re: Ceph Quarterly (CQ) - Issue #1

2023-07-06 Thread Dan van der Ster
Thanks Zac! I only see the txt attachment here. Where can we get the PDF A4 and letter renderings? Cheers, Dan __ Clyso GmbH | Ceph Support and Consulting | https://www.clyso.com On Mon, Jul 3, 2023 at 10:29 AM Zac Dover wrote: > The

[ceph-users] Re: Cannot get backfill speed up

2023-07-06 Thread Dan van der Ster
Hi Jesper, Indeed many users reported slow backfilling and recovery with the mclock scheduler. This is supposed to be fixed in the latest quincy but clearly something is still slowing things down. Some clusters have better luck reverting to osd_op_queue = wpq. (I'm hoping by proposing this
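Reverting to wpq, as Dan suggests, is a config change plus an OSD restart, since osd_op_queue is read at daemon startup. A sketch with standard Ceph CLI commands:

```shell
# switch from mclock back to the weighted priority queue scheduler
ceph config set osd osd_op_queue wpq

# restart the OSDs for it to take effect, e.g. per host on a
# package-based deployment:
systemctl restart ceph-osd.target
```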

[ceph-users] Re: pg_num != pgp_num - and unable to change.

2023-07-06 Thread Dan van der Ster
Hi Jesper, > In earlier versions of ceph (without autoscaler) I have only experienced > that setting pg_num and pgp_num took immediate effect? That's correct -- in recent Ceph (since nautilus) you cannot manipulate pgp_num directly anymore. There is a backdoor setting (set pgp_num_actual ...)
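The "backdoor" Dan mentions looks like this (pool name and target value are placeholders; use with care, since it bypasses the mons' gradual ramp):

```shell
# since Nautilus, setting pgp_num only records a target; the mons step
# pgp_num_actual toward it gradually. To force the actual value directly:
ceph osd pool set <pool-name> pgp_num_actual 256

# watch the current and target values converge
ceph osd dump | grep <pool-name>
```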

[ceph-users] Re: MON sync time depends on outage duration

2023-07-06 Thread Dan van der Ster
Hi Eugen! Yes that sounds familiar from the luminous and mimic days. Check this old thread: https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/F3W2HXMYNF52E7LPIQEJFUTAD3I7QE25/ (that thread is truncated but I can tell you that it worked for Frank). Also the even older referenced
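If this is the same issue as in the linked thread, the fix there involved shrinking the payload used during mon sync. A sketch, assuming the same root cause applies here (the value is an example, not a recommendation):

```shell
# reduce the chunk size used when a mon re-syncs its store; oversized
# payloads have caused very slow syncs after longer outages
ceph config set mon mon_sync_max_payload_size 4096
```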

[ceph-users] Re: Rook on bare-metal?

2023-07-06 Thread Travis Nielsen
Here are the answers to some of the questions. Happy to follow up with more discussion in the Rook Slack, Discussions, or Issues. Thanks! Travis On Thu, Jul 6, 2023 at 4:43 AM Anthony D'Atri

[ceph-users] MON sync time depends on outage duration

2023-07-06 Thread Eugen Block
Hi *, I'm investigating an interesting issue on two customer clusters (used for mirroring) I've not solved yet, but today we finally made some progress. Maybe someone has an idea where to look next, I'd appreciate any hints or comments. These are two (latest) Octopus clusters, main usage

[ceph-users] Re: RBD with PWL cache shows poor performance compared to cache device

2023-07-06 Thread Mark Nelson
On 7/6/23 06:02, Matthew Booth wrote: On Wed, 5 Jul 2023 at 15:18, Mark Nelson wrote: I'm sort of amazed that it gave you symbols without the debuginfo packages installed. I'll need to figure out a way to prevent that. Having said that, your new traces look more accurate to me. The thing

[ceph-users] Re: RBD with PWL cache shows poor performance compared to cache device

2023-07-06 Thread Matthew Booth
On Wed, 5 Jul 2023 at 15:18, Mark Nelson wrote: > I'm sort of amazed that it gave you symbols without the debuginfo > packages installed. I'll need to figure out a way to prevent that. > Having said that, your new traces look more accurate to me. The thing > that sticks out to me is the

[ceph-users] Re: Rook on bare-metal?

2023-07-06 Thread Anthony D'Atri
I’m also using Rook on BM. I had never used K8s before, so that was the learning curve, e.g. translating the example YAML files into the Helm charts we needed, and the label / taint / toleration dance to fit the square peg of pinning services to round hole nodes. We’re using Kubespray ; I

[ceph-users] Re: RGW accessing real source IP address of a client (e.g. in S3 bucket policies)

2023-07-06 Thread Christian Rohmann
Hey Casey, all, On 16/06/2023 17:00, Casey Bodley wrote: But when applying a bucket policy with aws:SourceIp it seems to only work if I set the internal IP of the HAProxy instance, not the public IP of the client. So the actual remote address is NOT used in my case. Did I miss any config
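For reference, the usual way to make RGW evaluate aws:SourceIp against the real client address behind HAProxy is to have the proxy append a forwarding header and tell RGW to trust it. A sketch; the header choice is an assumption about the HAProxy setup:

```shell
# in haproxy.cfg, the backend must append the client IP:
#   option forwardfor

# then point RGW at that header instead of the socket peer address
ceph config set client.rgw rgw_remote_addr_param http_x_forwarded_for
```

Without this, RGW only ever sees HAProxy's own IP, which matches the behavior described above.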

[ceph-users] Re: Rook on bare-metal?

2023-07-06 Thread Joachim Kraftmayer - ceph ambassador
Hello we have been following rook since 2018 and have had our experiences both on bare-metal and in the hyperscalers. In the same way, we have been following cephadm from the beginning. Meanwhile, we have been using both in production for years and the decision which orchestrator to use

[ceph-users] Re: ceph quota qustion

2023-07-06 Thread Konstantin Shalygin
Hi, This is incomplete multiparts I guess, you should remove it first. Don't know how S3 Browser works with these entities k Sent from my iPhone > On 6 Jul 2023, at 07:57, sejun21@samsung.com wrote: > > Hi, I contact you for some question about quota. > > Situation is following below. >
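Konstantin's suggestion, finding and aborting the incomplete multipart uploads, can be done with the plain S3 API against RGW (bucket, key, and upload id are placeholders):

```shell
# list unfinished multipart uploads still counting against the bucket
aws s3api list-multipart-uploads --bucket my-bucket

# abort one, freeing the space it holds
aws s3api abort-multipart-upload --bucket my-bucket \
    --key big-object --upload-id <UploadId-from-the-listing>
```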