[ceph-users] Re: Reef: RGW Multisite object fetch limits

2024-05-31 Thread Jayanth Reddy
May 16, 2024 at 2:40 PM Janne Johansson wrote: > On Thu, 16 May 2024 at 07:47, Jayanth Reddy < > jayanthreddy5...@gmail.com> wrote: > > > > Hello Community, > > In addition, we've 3+ Gbps links and the average object size is 200 > > kilobytes. So the utilization
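A rough back-of-the-envelope check, assuming the ~1k to 1.5k objects/s rate quoted later in this digest and the 200 KB average object size above:
1,000 objects/s x 200 KB = ~200 MB/s = ~1.6 Gbit/s
1,500 objects/s x 200 KB = ~300 MB/s = ~2.4 Gbit/s
so the observed object rate corresponds to roughly 1.6 to 2.4 Gbit/s of payload on the 3+ Gbps links.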

[ceph-users] Re: Reef: RGW Multisite object fetch limits

2024-05-15 Thread Jayanth Reddy
seem to be 1k to 1.5k per second. Regards, Jayanth On Thu, May 16, 2024 at 11:05 AM Jayanth Reddy wrote: > Hello Community, > We've two zones with Reef (v18.2.1) and are trying to sync over 2 billion RGW > objects to the secondary zone. We've added a fresh secondary zone and

[ceph-users] Reef: RGW Multisite object fetch limits

2024-05-15 Thread Jayanth Reddy
Hello Community, We've two zones with Reef (v18.2.1) and are trying to sync over 2 billion RGW objects to the secondary zone. We've added a fresh secondary zone, and each zone has 2 dedicated RGW daemons (behind an LB) used only for multisite, whereas the others don't run sync threads. The strange thing is t
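A minimal sketch, with placeholder daemon names, of how the split between dedicated multisite RGWs and client-facing RGWs that don't run sync threads is commonly expressed via central config:
# client-facing gateways: do not participate in multisite sync (placeholder name)
ceph config set client.rgw.clientfacing rgw_run_sync_thread false
# the two dedicated sync gateways per zone keep the default (true)
ceph config get client.rgw.sync rgw_run_sync_thread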

[ceph-users] Re: Snapshot automation/scheduling for rbd?

2024-02-07 Thread Jayanth Reddy
_ From: Jeremy Hansen Sent: Monday, February 5, 2024 10:29:06 pm To: ceph-users@ceph.io ; Jayanth Reddy Subject: Re: [ceph-users] Re: Snapshot automation/scheduling for rbd? Thanks. I think the only issue with doing snapshots via Cloudstack is potentially having to pau

[ceph-users] Re: RBD Image Returning 'Unknown Filesystem LVM2_member' On Mount - Help Please

2024-02-04 Thread Jayanth Reddy
Hi, Anything with "pvs" and "vgs" on the client machine where there is /dev/rbd0? Thanks From: duluxoz Sent: Sunday, February 4, 2024 1:59:04 PM To: yipik...@gmail.com ; matt...@peregrineit.net Cc: ceph-users@ceph.io Subject: [ceph-users] Re: RBD Image Returnin
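If the RBD image really holds an LVM physical volume (which the LVM2_member signature suggests), a hedged sketch of what those checks and the follow-up could look like on the client (VG/LV names are placeholders):
# is the mapped RBD an LVM physical volume, and which VG does it belong to?
pvs /dev/rbd0
vgs
lvs
# if so, activate the VG and mount the logical volume rather than /dev/rbd0 itself
vgchange -ay myvg
mount /dev/myvg/mylv /mnt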

[ceph-users] Re: Snapshot automation/scheduling for rbd?

2024-02-03 Thread Jayanth Reddy
Hi, For CloudStack with RBD, you should be able to control the snapshot placement using the global setting "snapshot.backup.to.secondary". Setting this to false causes snapshots to be placed directly on Ceph instead of on secondary storage. See if you can perform recurring snapshots. I know that there
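One hedged way to verify that recurring snapshots really end up on Ceph rather than on secondary storage is to list them on the RBD side directly (pool and image names are placeholders):
# snapshots kept on Ceph show up here for the given volume
rbd snap ls cloudstack/volume-uuid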

[ceph-users] Re: Quite important: How do I restart a small cluster using cephadm at 18.2.1

2024-01-27 Thread Jayanth Reddy
Hello Carl, What do you mean by powered off? Is the OS booted up and online? Was the disk activity on the OS disk or on the disks the OSDs are deployed to? If your OSes are online, all of the daemons should come online automatically. Sometimes when my OSDs are not coming online and assuming rest
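A hedged sketch of the checks that usually follow once such a host is booted again (the FSID, OSD id and host name are placeholders):
# what does cephadm think the daemons are doing?
ceph orch ps
# inspect the systemd unit of one OSD on its host
systemctl status ceph-<fsid>@osd.3.service
# ask cephadm to re-activate existing OSDs on a host that didn't bring them up
ceph cephadm osd activate myhost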

[ceph-users] Re: RGW - user created bucket with name of already created bucket

2024-01-14 Thread Jayanth Reddy
-buckets Regards, Jayanth From: Ondřej Kukla Sent: Friday, January 12, 2024 4:19:32 PM To: Jayanth Reddy Cc: ceph-users@ceph.io Subject: Re: [ceph-users] RGW - user created bucket with name of already created bucket Thanks Jayanth, I’ve tried this but

[ceph-users] Re: Reef v18.2.1: ceph osd pool autoscale-status gives empty output

2023-12-25 Thread Jayanth Reddy
2.0 35909G 0.0008 1.0 128 off False
cloudstack 771.0G 2.0 35909G 0.0429 1.0 1024 on True
Thanks, Jayanth On Mon, Dec 25, 2023 at 9:

[ceph-users] Reef v18.2.1: ceph osd pool autoscale-status gives empty output

2023-12-25 Thread Jayanth Reddy
Hello Users, I deployed a new cluster with v18.2.1 but noticed that pg_num and pgp_num always remained 1 for the pools with autoscale turned on. Below is the env and the relevant information
ceph> version
ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)
ceph> status c
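For anyone comparing notes, a minimal sketch of the checks (pool name is a placeholder) that narrow down whether the autoscaler is being consulted at all:
# should print one row per pool; empty output is the anomaly described here
ceph osd pool autoscale-status
# confirm the per-pool autoscale mode and the resulting pg_num
ceph osd pool get cloudstack pg_autoscale_mode
ceph osd pool ls detail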

[ceph-users] Re: RGW - user created bucket with name of already created bucket

2023-12-24 Thread Jayanth Reddy
Hi Ondřej, I've not tried it myself, but see if you can use the *# radosgw-admin bucket unlink* [1] command to achieve it. It is strange that the user was somehow able to create the bucket with the same name. We've also got v17.2.6 and have not encountered this so far. Maybe the RGW devs can answer th
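A hedged sketch of the unlink/relink sequence being suggested (bucket and uid values are placeholders):
# detach the bucket from the user who unexpectedly ended up owning it
radosgw-admin bucket unlink --bucket=mybucket --uid=unexpected-user
# re-attach it to the intended owner and verify
radosgw-admin bucket link --bucket=mybucket --uid=intended-owner
radosgw-admin bucket stats --bucket=mybucket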

[ceph-users] Re: Libvirt and Ceph: libvirtd tries to open random RBD images

2023-12-04 Thread Jayanth Reddy
> increase the debug level to see what exactly it tries to do? > > Regards, > Eugen > > Quoting Jayanth Reddy: > > > Hello Users, > > We're using libvirt with KVM and the orchestrator is CloudStack. I raised > > the issue already at CloudStack at

[ceph-users] Libvirt and Ceph: libvirtd tries to open random RBD images

2023-12-01 Thread Jayanth Reddy
g the same. We manually did "virsh pool-refresh" which CloudStack itself takes care of at regular intervals and the warning messages still appear. Please help me find the cause and let me know if further information is needed. Thanks, Jayanth Reddy
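For reference, a minimal sketch of the manual refresh described above (pool name is a placeholder):
# list the storage pools libvirtd knows about
virsh pool-list --all
# re-scan the RBD-backed pool, which CloudStack normally does on its own schedule
virsh pool-refresh mypool
# show which volumes libvirtd currently sees in that pool
virsh vol-list mypool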

[ceph-users] Re: owner locked out of bucket via bucket policy

2023-11-08 Thread Jayanth Reddy
Hello Casey, Thank you so much, the steps you provided worked. I'll follow up on the tracker to provide further information. Regards, Jayanth On Wed, Nov 8, 2023 at 8:41 PM Jayanth Reddy wrote: > Hello Casey, > > Thank you so much for the response. I'm applying these righ

[ceph-users] Re: owner locked out of bucket via bucket policy

2023-11-08 Thread Jayanth Reddy
cket metadata and xattrs, so > you'd either need to restart them or clear their metadata caches > > $ ceph daemon client.rgw.xyz cache zap > > On Wed, Nov 8, 2023 at 9:06 AM Jayanth Reddy > wrote: > > > > Hello Wesley, > > Thank you for the response. I trie
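A hedged sketch of the two options mentioned above (daemon and service names are placeholders):
# clear the metadata cache via the admin socket of each running RGW
ceph daemon client.rgw.xyz cache zap
# or restart the RGW service so the daemons drop their caches on startup
ceph orch restart rgw.myrgw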

[ceph-users] Re: owner locked out of bucket via bucket policy

2023-11-08 Thread Jayanth Reddy
bucket-policy" > > Respectfully, > > *Wes Dillingham* > w...@wesdillingham.com > LinkedIn <http://www.linkedin.com/in/wesleydillingham> > > > On Wed, Nov 8, 2023 at 8:30 AM Jayanth Reddy > wrote: > >> Hello Casey, >> >> We're totally s

[ceph-users] Re: owner locked out of bucket via bucket policy

2023-11-08 Thread Jayanth Reddy
next minute rgw daemon upgraded from > v16.2.12 to v17.2.7. Looks like there is some issue with parsing. > > I'm thinking of downgrading back to v17.2.6 or earlier, please let me know > if this is a good option for now. > > Thanks, > Jayanth > --

[ceph-users] Re: owner locked out of bucket via bucket policy

2023-11-07 Thread Jayanth Reddy
v17.2.7. Looks like there is some issue with parsing. I'm thinking of downgrading back to v17.2.6 or earlier; please let me know if this is a good option for now. Thanks, Jayanth From: Jayanth Reddy Sent: Tuesday, November 7, 2023 11:59:38 PM To: Casey Bodle

[ceph-users] Re: owner locked out of bucket via bucket policy

2023-11-07 Thread Jayanth Reddy
Hello Casey, Thank you for the quick response. I see `rgw_policy_reject_invalid_principals` is not present in v17.2.7. Please let me know. Regards Jayanth On Tue, Nov 7, 2023 at 11:50 PM Casey Bodley wrote: > On Tue, Nov 7, 2023 at 12:41 PM Jayanth Reddy > wrote: > > > >
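A generic way (not specific advice from this thread) to check whether the running release knows a given option at all:
# prints the option's description only if the release supports it
ceph config help rgw_policy_reject_invalid_principals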

[ceph-users] Re: owner locked out of bucket via bucket policy

2023-11-07 Thread Jayanth Reddy
Hello Wesley and Casey, We've ended up with the same issue and here it appears that even the user with "--admin" isn't able to do anything. We're now unable to figure out if it is due to bucket policies, ACLs or IAM of some sort. I'm seeing these IAM errors in the logs
```
Nov 7 00:02:00 ceph-0

[ceph-users] RGW: Quincy 17.2.7 and rgw_crypt_default_encryption_key

2023-11-04 Thread Jayanth Reddy
Hello Users, It is great to see the note about RGW "S3 multipart uploads using Server-Side Encryption now replicate correctly in multi-site" in the Quincy v17.2.7 release. But I see that users who are using [1] still have a dependency on the item tracked at [2]. I tested with Reef 18.2.0 as well and the
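For readers who haven't used it, a minimal sketch of how rgw_crypt_default_encryption_key is typically set; the key is a placeholder, and the docs describe this default-key mode as suitable for testing only:
# must be a base64-encoded 256-bit key; testing only, not a production secret store
ceph config set client.rgw rgw_crypt_default_encryption_key "<base64-256-bit-key>"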

[ceph-users] RGW multisite - requesting help for fixing error_code: 125

2023-09-17 Thread Jayanth Reddy
Hello Users, We're running 2 Ceph clusters with v17.2.6 and noticing the error message in # radosgw-admin sync error list
*"message": "failed to sync bucket instance: (125) Operation canceled"*
We've the output as below,
[
  {
    "shard_id": 0,
    "entries": [
      {
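For completeness, a hedged sketch of the commands that usually accompany this kind of investigation (bucket name is a placeholder):
# list accumulated multisite sync errors per shard
radosgw-admin sync error list
# overall and per-bucket sync state
radosgw-admin sync status
radosgw-admin bucket sync status --bucket=mybucket
# once the cause is addressed, stale entries can be cleared
radosgw-admin sync error trim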

[ceph-users] Re: Starting v17.2.5 RGW SSE with default key (likely others) no longer works

2023-06-20 Thread Jayanth Reddy
Thanks, Casey for the response. I'll track the fix there. Thanks, Jayanth Reddy

[ceph-users] Re: EC 8+3 Pool PGs stuck in remapped+incomplete

2023-06-19 Thread Jayanth Reddy
Hello Weiwen, Thank you for the response. I've attached the output for all PGs in state incomplete and remapped+incomplete. Thank you! Thanks, Jayanth Reddy On Sat, Jun 17, 2023 at 11:00 PM 胡 玮文 wrote: > Hi Jayanth, > > Can you post the complete output of “ceph pg query”?
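A minimal sketch of how that output is usually gathered (the PG id is a placeholder):
# list every PG currently in the problematic states
ceph pg ls incomplete
ceph pg ls remapped
# dump full peering information for one of them
ceph pg 13.7f query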

[ceph-users] Re: EC 8+3 Pool PGs stuck in remapped+incomplete

2023-06-19 Thread Jayanth Reddy
Hello Weiwen, Thank you for the response. I've attached the output for all PGs in state incomplete and remapped+incomplete. Thank you! Thanks, Jayanth Reddy On Sun, Jun 18, 2023 at 4:09 PM Jayanth Reddy wrote: > Hello Weiwen, > > Thank you for the response. I've attached

[ceph-users] Starting v17.2.5 RGW SSE with default key (likely others) no longer works

2023-06-17 Thread Jayanth Reddy
ly Thanks, Jayanth Reddy

[ceph-users] Re: EC 8+3 Pool PGs stuck in remapped+incomplete

2023-06-17 Thread Jayanth Reddy
ng if these have to do something with the recovery. Thanks, Jayanth Reddy On Sat, Jun 17, 2023 at 12:31 PM Jayanth Reddy wrote: > Thanks, Nino. > > Would give these initial suggestions a try and let you know at the > earliest. > > Regards, > Jayanth Reddy >

[ceph-users] Re: EC 8+3 Pool PGs stuck in remapped+incomplete

2023-06-17 Thread Jayanth Reddy
ng if these have to do something with the recovery. Thanks, Jayanth Reddy On Sat, Jun 17, 2023 at 4:17 PM Anthony D'Atri wrote: > Your cluster’s configuration is preventing CRUSH from calculating full > placements > > set max_pg_per_osd = 1000, either in central config (or ceph
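A hedged sketch of what that suggestion could look like in central config; the option name in the quoted advice is abbreviated, and the matching central-config option is most likely mon_max_pg_per_osd, so verify first:
# confirm the exact option name and default on the running release
ceph config help mon_max_pg_per_osd
# raise the per-OSD PG cap so peering/placement is no longer blocked by it
ceph config set global mon_max_pg_per_osd 1000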

[ceph-users] Removing the encryption: (essentially decrypt) encrypted RGW objects

2023-06-17 Thread Jayanth Reddy
mand option at https://github.com/ceph/ceph/pull/51842 but it appears it takes some time to be merged. Cheers, Jayanth Reddy

[ceph-users] Re: EC 8+3 Pool PGs stuck in remapped+incomplete

2023-06-17 Thread Jayanth Reddy
Thanks, Nino. Would give these initial suggestions a try and let you know at the earliest. Regards, Jayanth Reddy From: Nino Kotur Sent: Saturday, June 17, 2023 12:16:09 PM To: Jayanth Reddy Cc: ceph-users@ceph.io Subject: Re: [ceph-users] EC 8+3 Pool PGs

[ceph-users] EC 8+3 Pool PGs stuck in remapped+incomplete

2023-06-16 Thread Jayanth Reddy
hese are failing as well, very frequently. Currently we're surviving by writing a script that restarts RGW daemons whenever the LB responds with HTTP status code 504. Any help is highly appreciated! Regards, Jayanth Reddy
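For what it's worth, a minimal sketch of such a watchdog; the endpoint URL and RGW service name are assumptions, not taken from this thread:
#!/bin/bash
# probe the RGW endpoint behind the LB and restart the RGW service on a 504
code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 https://rgw.example.com/)
if [ "$code" = "504" ]; then
    ceph orch restart rgw.myrgw
fi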