On Thu, May 16, 2024 at 2:40 PM Janne Johansson wrote:
> On Thu, 16 May 2024 at 07:47, Jayanth Reddy <jayanthreddy5...@gmail.com> wrote:
> >
> > Hello Community,
> > In addition, we have 3+ Gbps links and the average object size is 200
> > kilobytes. So the utilization
The sync rate seems to be 1k to 1.5k objects per second.
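(A rough back-of-the-envelope check, assuming the 200-kilobyte average holds:
1,000 to 1,500 objects/s x 200 KB is roughly 200-300 MB/s, i.e. about 1.6-2.4
Gbit/s of object payload before replication and metadata overhead.)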
Regards,
Jayanth
On Thu, May 16, 2024 at 11:05 AM Jayanth Reddy
wrote:
> Hello Community,
> We've two zones with Reef (v18.2.1) and trying to sync over 2 billion RGW
> objects to the secondary zone. We've added a fresh secondary zone and
Hello Community,
We have two zones with Reef (v18.2.1) and are trying to sync over 2 billion
RGW objects to the secondary zone. We've added a fresh secondary zone, and
each zone has 2 dedicated RGW daemons (behind an LB) used only for multisite,
whereas the other daemons don't run sync threads. The strange thing is t
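(For reference, the split between client-facing and sync-only daemons is
usually driven by rgw_run_sync_thread; a minimal sketch, assuming hypothetical
cephadm service names "rgw.public" and "rgw.sync"; adjust the config section to
however your daemons are actually named:)

$ ceph config set client.rgw.public rgw_run_sync_thread false   # client-facing daemons do no sync work
$ ceph config set client.rgw.sync rgw_run_sync_thread true      # dedicated daemons handle multisite sync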
From: Jeremy Hansen
Sent: Monday, February 5, 2024 10:29:06 pm
To: ceph-users@ceph.io ; Jayanth Reddy
Subject: Re: [ceph-users] Re: Snapshot automation/scheduling for rbd?
Thanks. I think the only issue with doing snapshots via CloudStack is
potentially having to pause
Hi,
Do "pvs" and "vgs" show anything on the client machine where /dev/rbd0 is mapped?
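(A quick sketch of what I'd run there; /dev/rbd0 is assumed to be the mapped image:)

$ lsblk /dev/rbd0        # is the mapped image partitioned or holding a filesystem?
$ pvs -a | grep rbd      # does LVM see a physical volume on the RBD device?
$ vgs                    # volume groups visible on this client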
Thanks
From: duluxoz
Sent: Sunday, February 4, 2024 1:59:04 PM
To: yipik...@gmail.com ; matt...@peregrineit.net
Cc: ceph-users@ceph.io
Subject: [ceph-users] Re: RBD Image Returnin
Hi,
For CloudStack with RBD, you should be able to control the snapshot placement
using the global setting "snapshot.backup.to.secondary". Setting this to false
keeps snapshots directly on Ceph instead of copying them to secondary storage.
See if you can perform recurring snapshots. I know that there
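(A hedged sketch using the CloudMonkey CLI; verify the exact setting name
against your CloudStack version, and note that some globals only take effect
after a management-server restart:)

$ cmk update configuration name=snapshot.backup.to.secondary value=false
$ cmk list configurations name=snapshot.backup.to.secondary    # confirm the new value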
Hello Carl,
What do you mean by powered off? Is the OS booted up and online? Was the disk
activity on the OS disk or on the disks the OSDs are deployed to?
If your OSes are online, all of the daemons should come back online automatically.
Sometimes when my OSDs are not coming online and assuming rest
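(A rough sketch of the checks I usually start with; the FSID and OSD id below
are placeholders for a cephadm deployment:)

$ ceph orch ps --daemon-type osd          # which OSD daemons the orchestrator sees, and their state
$ systemctl status ceph-<fsid>@osd.12     # the systemd unit for one OSD on its host
$ ceph orch daemon restart osd.12         # restart a single OSD daemon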
-buckets
Regards,
Jayanth
From: Ondřej Kukla
Sent: Friday, January 12, 2024 4:19:32 PM
To: Jayanth Reddy
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] RGW - user created bucket with name of already
created bucket
Thanks Jayanth,
I’ve tried this but
...                    2.035909G  0.0008  1.0   128  off  False
cloudstack  771.0G     2.035909G  0.0429  1.0  1024  on   True
Thanks,
Jayanth
On Mon, Dec 25, 2023 at 9:
Hello Users,
I deployed a new cluster with v18.2.1 but noticed that pg_num and pgp_num
always remain at 1 for the pools with autoscale turned on. Below are the
environment and the relevant information:
ceph> version
ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)
ceph> status
c
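(A few hedged checks for the autoscaler staying at pg_num 1; the pool name is
a placeholder:)

$ ceph osd pool autoscale-status               # per-pool PG_NUM / NEW PG_NUM and autoscaler warnings
$ ceph config get mon mon_target_pg_per_osd    # the PG-per-OSD target the autoscaler works toward
$ ceph osd pool set <pool> bulk true           # hint that a pool will be large so PGs are split up front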
Hi Ondřej,
I've not tried it myself, but see if you can use the "radosgw-admin bucket
unlink" command [1] to achieve it. It is strange that the user was somehow
able to create a bucket with the same name. We've also got v17.2.6 and
have not encountered this so far. Maybe devs from RGW can answer th
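(A sketch of the unlink/relink flow; bucket and user names are placeholders:)

$ radosgw-admin bucket unlink --bucket=mybucket --uid=olduser   # detach the bucket from that user
$ radosgw-admin bucket link --bucket=mybucket --uid=newuser     # attach it to another user if needed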
> increase the debug level to see what exactly it tries to do?
>
> Regards,
> Eugen
>
> Quoting Jayanth Reddy:
>
> > Hello Users,
> > We're using libvirt with KVM and the orchestrator is CloudStack. I raised
> > the issue already with CloudStack at
g the same. We manually ran "virsh pool-refresh", which CloudStack
itself takes care of at regular intervals, and the warning messages still
appear. Please help me find the cause, and let me know if further
information is needed.
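(For reference, the manual refresh is essentially the following; the pool name
is whatever libvirt pool CloudStack created for the RBD primary storage:)

$ virsh pool-list --all          # storage pools libvirt knows about
$ virsh pool-refresh <pool>      # re-scan the volumes in one pool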
Thanks,
Jayanth Reddy
Hello Casey,
Thank you so much, the steps you provided worked. I'll follow up on the
tracker to provide further information.
Regards,
Jayanth
On Wed, Nov 8, 2023 at 8:41 PM Jayanth Reddy
wrote:
> Hello Casey,
>
> Thank you so much for the response. I'm applying these righ
> bucket metadata and xattrs, so
> you'd either need to restart them or clear their metadata caches:
>
> $ ceph daemon client.rgw.xyz cache zap
>
> On Wed, Nov 8, 2023 at 9:06 AM Jayanth Reddy
> wrote:
> >
> > Hello Wesley,
> > Thank you for the response. I trie
bucket-policy"
>
> Respectfully,
>
> *Wes Dillingham*
> w...@wesdillingham.com
> LinkedIn <http://www.linkedin.com/in/wesleydillingham>
>
>
> On Wed, Nov 8, 2023 at 8:30 AM Jayanth Reddy
> wrote:
>
>> Hello Casey,
>>
>> We're totally s
next minute rgw daemon upgraded from
> v16.2.12 to v17.2.7. Looks like there is some issue with parsing.
>
> I'm thinking of downgrading back to v17.2.6 or earlier; please let me know
> if this is a good option for now.
>
> Thanks,
> Jayanth
> --
v17.2.7. Looks like there is some issue with parsing.
I'm thinking of downgrading back to v17.2.6 or earlier; please let me know if
this is a good option for now.
Thanks,
Jayanth
From: Jayanth Reddy
Sent: Tuesday, November 7, 2023 11:59:38 PM
To: Casey Bodley
Hello Casey,
Thank you for the quick response. I see
`rgw_policy_reject_invalid_principals` is not present in v17.2.7. Please
let me know.
Regards
Jayanth
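(For what it's worth, a quick way to check whether a given release knows an
option, as a sketch:)

$ ceph config help rgw_policy_reject_invalid_principals    # errors out if the option is unknown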
On Tue, Nov 7, 2023 at 11:50 PM Casey Bodley wrote:
> On Tue, Nov 7, 2023 at 12:41 PM Jayanth Reddy
> wrote:
> >
> >
Hello Wesley and Casey,
We've ended up with the same issue, and here it appears that even a user
with "--admin" isn't able to do anything. We're now unable to figure out
whether it is due to bucket policies, ACLs, or IAM of some sort. I'm seeing
these IAM errors in the logs:
```
Nov 7 00:02:00 ceph-0
Hello Users,
It is great to see the note that RGW "S3 multipart uploads using
Server-Side Encryption now replicate correctly in multi-site" in the Quincy
v17.2.7 release notes. But I see that users who are using [1] still have a
dependency on the item tracked at [2].
I tested with Reef 18.2.0 as well and the
Hello Users,
We're running 2 Ceph clusters with v17.2.6 and noticing the following error
message in "# radosgw-admin sync error list":
"message": "failed to sync bucket instance: (125) Operation canceled"
The output is as below:
[
    {
        "shard_id": 0,
        "entries": [
            {
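(A couple of hedged follow-ups I'd look at for "(125) Operation canceled"
entries; the bucket name is a placeholder and behaviour varies by release:)

$ radosgw-admin sync status                            # overall multisite sync state for the zone
$ radosgw-admin bucket sync status --bucket=<bucket>   # per-bucket shard status for the failing bucket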
Thanks, Casey, for the response. I'll track the fix there.
Thanks,
Jayanth Reddy
Hello Weiwen,
Thank you for the response. I've attached the output for all PGs in state
incomplete and remapped+incomplete. Thank you!
Thanks,
Jayanth Reddy
On Sat, Jun 17, 2023 at 11:00 PM 胡 玮文 wrote:
> Hi Jayanth,
>
> Can you post the complete output of “ceph pg query”?
Hello Weiwen,
Thank you for the response. I've attached the output for all PGs in state
incomplete and remapped+incomplete. Thank you!
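(For reference, a sketch of how that output can be gathered; the PG id is an
example:)

$ ceph pg ls incomplete                  # list PGs currently in the "incomplete" state
$ ceph pg 7.1a query > pg-7.1a.json      # full peering/query detail for one PG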
Thanks,
Jayanth Reddy
On Sun, Jun 18, 2023 at 4:09 PM Jayanth Reddy
wrote:
> Hello Weiwen,
>
> Thank you for the response. I've attached
ly
Thanks,
Jayanth Reddy
ng if these have something to
do with the recovery.
Thanks,
Jayanth Reddy
On Sat, Jun 17, 2023 at 12:31 PM Jayanth Reddy
wrote:
> Thanks, Nino.
>
> I'll give these initial suggestions a try and let you know as soon as
> possible.
>
> Regards,
> Jayanth Reddy
>
ng if these have something to
do with the recovery.
Thanks,
Jayanth Reddy
On Sat, Jun 17, 2023 at 4:17 PM Anthony D'Atri
wrote:
> Your cluster’s configuration is preventing CRUSH from calculating full
> placements
>
> set max_pg_per_osd = 1000, either in central config (or ceph
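(Presumably referring to something like the following; mon_max_pg_per_osd is
my guess at the exact option name:)

$ ceph config set global mon_max_pg_per_osd 1000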
command option at
https://github.com/ceph/ceph/pull/51842, but it appears it will take some time
to be merged.
Cheers,
Jayanth Reddy
Thanks, Nino.
I'll give these initial suggestions a try and let you know as soon as possible.
Regards,
Jayanth Reddy
From: Nino Kotur
Sent: Saturday, June 17, 2023 12:16:09 PM
To: Jayanth Reddy
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] EC 8+3 Pool PGs
These are failing as well, very
frequently. Currently we're surviving by writing a script that restarts RGW
daemons whenever the LB responds with HTTP status code 504. Any help is
highly appreciated!
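(A rough sketch of such a watchdog; the endpoint URL and RGW service name are
placeholders, not the actual script from this thread:)

#!/bin/sh
# Probe the load balancer and restart the RGW service on a 504.
CODE=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 https://rgw.example.com/)
if [ "$CODE" = "504" ]; then
    ceph orch restart rgw.default
fi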
Regards,
Jayanth Reddy