[ceph-users] Re: Workload Separation in Ceph RGW Cluster - Recommended or Not?

2023-06-06 Thread Ramin Najjarbashi
Thank you for your response and for raising an important question regarding the potential bottlenecks within the RGW or the overall Ceph cluster. I appreciate your insight and would like to provide more information about the issues I have been experiencing. In my deployment, RGW instances 17-20

[ceph-users] Re: The pg_num from 1024 reduce to 32 spend much time, is there way to shorten the time?

2023-06-06 Thread Wesley Dillingham
Can you send along the responses from "ceph df detail" and "ceph osd pool ls detail"? Respectfully, *Wes Dillingham* w...@wesdillingham.com LinkedIn On Tue, Jun 6, 2023 at 1:03 PM Eugen Block wrote: > I suspect the target_max_misplaced_ratio
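
For reference, a minimal sketch of the two diagnostics requested above (output will of course vary per cluster):

    # Per-pool usage, quotas and object counts
    ceph df detail
    # Full pool settings, including pg_num/pgp_num and autoscale mode
    ceph osd pool ls detail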

[ceph-users] Workload Separation in Ceph RGW Cluster - Recommended or Not?

2023-06-06 Thread Ramin Najjarbashi
Hi I would like to seek your insights and recommendations regarding the practice of workload separation in a Ceph RGW (RADOS Gateway) cluster. I have been facing challenges with large queues in my deployment and would appreciate your expertise in determining whether workload separation is a
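
The thread does not prescribe a layout, but one common form of workload separation is running dedicated RGW instances per workload via a cephadm service spec. The sketch below is only an illustration of that idea, with the service id, host label, and port all hypothetical:

    # rgw-client.yaml -- RGW daemons serving only client traffic,
    # placed on hosts labelled "rgw-client"
    service_type: rgw
    service_id: client-traffic
    placement:
      label: rgw-client
    spec:
      rgw_frontend_port: 8080

Applied with "ceph orch apply -i rgw-client.yaml"; a second spec with a different label could then keep other traffic on separate hosts.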

[ceph-users] Re: The pg_num from 1024 reduce to 32 spend much time, is there way to shorten the time?

2023-06-06 Thread Eugen Block
I suspect the target_max_misplaced_ratio (default 0.05). You could try setting it to 1 and see if it helps. This has been discussed multiple times on this list; check out the archives for more details. Quoting Louis Koo: Thanks for your responses, I want to know why it spends so much time to
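
A minimal sketch of the tunable mentioned above; setting it to 1 effectively removes the throttle, letting the mgr schedule all PG merges at once:

    # Default is 0.05, i.e. at most 5% of objects misplaced at a time
    ceph config set mgr target_max_misplaced_ratio 1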

[ceph-users] Re: Quincy release -Swift integration with Keystone

2023-06-06 Thread Eugen Block
Hi, it's not really useful to create multiple threads for the same question. I wrote up some examples [1] which worked for me to integrate keystone and radosgw. > From the debug logs below, it appears that radosgw is still trying to authenticate with Swift instead of Keystone. Any pointers
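
Eugen's linked examples [1] are not reproduced here, but as a rough sketch, a working Keystone integration usually touches the following radosgw options (URLs and credentials are placeholders):

    ceph config set client.rgw rgw_keystone_url https://keystone.example.com:5000
    ceph config set client.rgw rgw_keystone_api_version 3
    ceph config set client.rgw rgw_keystone_admin_user rgw
    ceph config set client.rgw rgw_keystone_admin_password secret
    ceph config set client.rgw rgw_keystone_admin_domain Default
    ceph config set client.rgw rgw_keystone_admin_project service
    ceph config set client.rgw rgw_s3_auth_use_keystone true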

[ceph-users] Re: The pg_num from 1024 reduce to 32 spend much time, is there way to shorten the time?

2023-06-06 Thread Louis Koo
Thanks for your responses. I want to know why it spends so much time reducing the pg num.

[ceph-users] RADOSGW not authenticating with Keystone. Quincy release

2023-06-06 Thread fsbiz
Hi folks, My ceph cluster with Quincy and Rocky9 is up and running, but I'm having issues with RADOSGW authenticating with Keystone. Was wondering if I've missed anything in the configuration. From the debug logs below, it appears that radosgw is still trying to authenticate with Swift instead
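
For anyone reproducing this, the debug output referenced above can be captured by raising the RGW log level; a sketch, assuming a cephadm-managed daemon:

    # Verbose radosgw logging (revert to a low value afterwards)
    ceph config set client.rgw debug_rgw 20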

[ceph-users] RADOSGW integration with Keystone not working in Quincy release ??

2023-06-06 Thread fs...@yahoo.com
I have a ceph cluster installed using cephadm. The cluster is up and running, but I'm unable to get Keystone integration working with RADOSGW. Is this a known issue? Thanks, Fred.

[ceph-users] Re: Encryption per user Howto

2023-06-06 Thread Frank Schilder
Yes, that would be interesting. I understood that it mainly helps with buffered writes, but Ceph uses direct IO for writes, and that's where bypassing the queues helps. Are there detailed instructions somewhere on how to set up a host to disable the queues? I don't have time to figure this out
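
Assuming the queues in question are the dm-crypt kernel workqueues, cryptsetup (>= 2.3.4, LUKS2) can bypass them per device; a minimal sketch, with the mapping name hypothetical:

    # Re-open an active LUKS mapping with both workqueues bypassed;
    # --persistent stores the flags in the LUKS2 header so they survive reboots
    cryptsetup refresh --perf-no_read_workqueue --perf-no_write_workqueue \
        --persistent ceph-osd-0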

[ceph-users] Re: Encryption per user Howto

2023-06-06 Thread Stefan Kooman
On 6/6/23 14:26, Frank Schilder wrote: Hi Stefan, there are still users with large HDD installations and I think this will not change anytime soon. What is the impact of encryption with the new settings for HDD? Is it as bad as their continued omission from any statement suggests? We only

[ceph-users] Re: Encryption per user Howto

2023-06-06 Thread Frank Schilder
Hi Stefan, there are still users with large HDD installations and I think this will not change anytime soon. What is the impact of encryption with the new settings for HDD? Is it as bad as their continued omission from any statement suggests? Thanks and best regards, = Frank

[ceph-users] Question about xattr and subvolumes

2023-06-06 Thread Dario Graña
Hi, I'm installing a new instance (my first) of Ceph. Our cluster runs AlmaLinux9 + Quincy. Now I'm dealing with CephFS and quotas. I read the documentation about setting up quotas with virtual attributes (xattr) and about creating volumes and subvolumes with a predefined size. I cannot distinguish which is
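
The two mechanisms side by side, as a sketch (paths, names and sizes are examples): the xattr applies to any CephFS directory, while a subvolume created with a size sets that same quota for you.

    # Quota via virtual xattr on a plain directory (10 GiB)
    setfattr -n ceph.quota.max_bytes -v 10737418240 /mnt/cephfs/some/dir
    getfattr -n ceph.quota.max_bytes /mnt/cephfs/some/dir

    # Subvolume created with a size, which internally sets the same quota xattr
    ceph fs subvolume create myfs mysubvol --size 10737418240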

[ceph-users] Re: CephFS metadata pool grows by two orders of magnitude while trimming (?) snapshots

2023-06-06 Thread Janek Bevendorff
I guess the mailing list didn't preserve the embedded image. Here's an Imgur link: https://imgur.com/a/WSmAOaG I checked the logs as far back as we have them. The issue started appearing only after my last Ceph upgrade on 2 May, which introduced the new corruption assertion. On 06/06/2023

[ceph-users] Re: CephFS metadata pool grows by two orders of magnitude while trimming (?) snapshots

2023-06-06 Thread Janek Bevendorff
I checked our Prometheus logs, and the number of log events from individual MONs does indeed start to increase dramatically, at random, all of a sudden. I attached a picture of the curves. The first incident you see there was when our metadata store filled up entirely. The second, smaller one