Hi,
In my ceph log file, the timestamp only shows millisecond precision, like below.
How can I make it show microseconds or nanoseconds? Is there a config option, or
do I need to modify the source code, and if so, where?
2023-06-10T09:38:50.549+0800 7f639d009040 15 client.1500594.objecter
_session_op_assign 8 1...
My ceph version is
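One knob worth checking, assuming a release recent enough to have it, is
log_coarse_timestamps: with the coarse clock the logger prints milliseconds,
and switching to the fine-grained clock should yield higher-precision
timestamps (if it does not, the format lives in the logging code under
src/log/). A minimal sketch:

  # check the current value, then switch to the fine-grained clock
  ceph config get global log_coarse_timestamps
  ceph config set global log_coarse_timestamps false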
Hi Austin,
Do you have rgw debug logs that can help debug this?
Can you provide more information as to which user is trying to assume the
role, and which tenants the user and role belong to?
Can you please open a tracker issue with all this information?
Thanks,
Pritha
On Wed, Jun 14, 2023 at 6:14
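For reference, a hedged sketch of raising RGW debug logging with the usual
debug options (adjust the config section to match your deployment):

  ceph config set client.rgw debug_rgw 20
  ceph config set client.rgw debug_ms 1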
Hi Cory,
> 1. PUT requests during reshard of versioned bucket fail with 404 and leave
> behind dark data
>
>Tracker: https://tracker.ceph.com/issues/61359
Could you tell me whether this problem can be bypassed by
suspending (disabling) versioning on the buckets?
I have some versioning buckets in my Ce
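For anyone trying that workaround, a minimal sketch of suspending versioning
with the AWS CLI (bucket name and endpoint are placeholders):

  aws --endpoint-url http://rgw.example.com:8080 s3api put-bucket-versioning \
      --bucket mybucket --versioning-configuration Status=Suspended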
Do you have an ingress service for HAProxy/keepalived? If so, that’s the
service that you will need to have orch redeploy/restart. If not, maybe try
`ceph orch redeploy pech` ?
Thank you,
Josh Beaman
From: Kai Stian Olstad
Date: Wednesday, June 14, 2023 at 7:58 AM
To: ceph-users@ceph.io
Sub
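As a hedged sketch of that suggestion (service names below are examples;
`ceph orch ls` shows the real ones):

  ceph orch ls ingress                 # is there an ingress service?
  ceph orch redeploy ingress.rgw.pech  # if so, redeploy/restart it
  ceph orch restart rgw.pech           # otherwise restart the rgw service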
Hi,
further note to self and for posterity … ;)
This turned out to be a no-go as well, because you can’t silently switch the
pools to a different storage class: the objects will be found, but the index
still refers to the old storage class and lifecycle migrations won’t work.
I’ve brainstormed
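One way to see that mismatch, with bucket and object names as placeholders, is
to inspect the object's manifest, which still records the old storage class:

  radosgw-admin object stat --bucket=mybucket --object=myobject
  # look at the storage class in the manifest/placement fields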
Hi folks,
For multi-site environment, it seems that the common practice is to have one
rgw zone backed by one storage cluster. However, after gaining some experience
with such a setup, it appears to me that it is possible to have multiple zones
backed by the same storage cluster. The first evidence
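As a minimal sketch of the idea (zone names, endpoints, and pool layout are
illustrative; each zone needs its own pools and its own RGW instances):

  radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=zone-b \
      --endpoints=http://rgw-b.example.com:8080
  radosgw-admin period update --commit
  # then start rgw daemons for zone-b with rgw_zone=zone-b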
Hello!
In my company we are using an EC 8+4 pool backed by HDDs.
Since the default rgw_max_chunk_size and rgw_object_stripe_size parameters are
set to 4MiB, each HDD's chunk size is 512KiB (4MiB striped across the 8 data
shards), which is not ideal.
I was wondering if anyone has tried playing with these parameters, as increasing
them to 3
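For context, the arithmetic and a hedged sketch of changing the values (the
stripe-size option is spelled rgw_obj_stripe_size in the releases I have seen;
the 8 MiB target below is just an example, and the RGWs need a restart):

  # 4 MiB stripe / 8 data chunks (EC 8+4) = 512 KiB per HDD write
  ceph config set client.rgw rgw_max_chunk_size 8388608
  ceph config set client.rgw rgw_obj_stripe_size 8388608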
I'll try increasing it in my small cluster; let's see if there is any
improvement there, thank you.
If there is enough memory, is there any reason not to increase it?
Istvan Szabo
Staff Infrastructure Engineer
---
Agoda Services Co., Ltd.
e: istvan.sz...@agoda.com
--
On Wed, Jun 14, 2023 at 01:44:40PM +, Szabo, Istvan (Agoda) wrote:
> I have a dedicated loadbalancer pair separated across 2 baremetal servers, and
> behind the haproxy balancers I have 3 mon/mgr/rgw nodes.
> Each RGW node has 2 RGWs on it, so 6 in the cluster altogether (now I just added
> one more per node, so
Hi,
I have a dedicated loadbalancer pair separated across 2 baremetal servers, and
behind the haproxy balancers I have 3 mon/mgr/rgw nodes.
Each RGW node has 2 RGWs on it, so 6 in the cluster altogether (now I just added
one more per node, so currently 9).
Today I see pretty high GET latency in the cluster (3
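A hedged way to look at per-daemon GET latency is the RGW perf counters (the
daemon name is a placeholder, and with cephadm this runs inside the container):

  ceph daemon client.rgw.<name> perf dump | grep -A 3 get_initial_lat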
When I enabled RGW in cephadm I used this spec file rgw.yml:

service_type: rgw
service_id: pech
placement:
  label: cog
spec:
  ssl: true
  rgw_frontend_ssl_certificate: |
    -----BEGIN CERTIFICATE-----
    -----END CERTIFICATE-----
    -----BEGIN CERTIFICATE-----
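For reference, such a spec is applied with the standard cephadm workflow:

  ceph orch apply -i rgw.yml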
Hi Pritha,
I have added the bucket to the resource, but I am still running into the same
Forbidden response.
Thanks,
Austin
-----Original Message-----
From: Pritha Srivastava
Sent: June 14, 2023 4:59 AM
To: Austin Axworthy
Cc: ceph-users@ceph.io
Subject: [ceph-users] Re: RGW STS Token Forbi
Hi everyone,
as discussed on this list before, we had an issue upgrading the metadata
servers while performing an upgrade from 17.2.5 to 17.2.6. (See also
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/U3VEPCXYDYO2YSGF76CJLU25YOPEB3XU/#EVEP2MEMEI5HAXLYAXMHMWM6ZLJ2KUR6
.) We ha
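For anyone hitting the same thing, the generic documented MDS upgrade dance is
below as a sketch (fs name is a placeholder; whether it avoids this particular
bug is exactly what the thread discusses):

  ceph fs set myfs max_mds 1
  ceph fs set myfs allow_standby_replay false
  # upgrade and restart the MDS daemons, then restore both settings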
Hi Everyone,
I am new to ceph and looking at how to upgrade a Debian 10 cluster to
Debian 11. Each node is running the stock Debian (Luminous) packages and
the upgrade to Bullseye (Debian 11) will pull in Nautilus.
According to the documentation here
https://docs.ceph.com/en/latest/releases/nauti
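The usual safety flag around such a node-by-node upgrade, as a sketch (set it
before the first node, clear it once the whole cluster is back to HEALTH_OK):

  ceph osd set noout
  # upgrade packages and reboot each node in turn
  ceph osd unset noout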
Hi Austin,
Can you try adding the bucket ARN to the Resource section of the policy,
like the following:
"Resource": [
"arn:aws:s3:::bucket1",
"arn:aws:s3:::bucket1/*",
"arn:aw