On Thu, May 23, 2024 at 11:50 AM Szabo, Istvan (Agoda)
wrote:
>
> Hi,
>
> Wonder what is the best practice to scale RGW, increase the thread numbers or
> spin up more gateways?
>
>
> * Let's say I have 21000 connections on my haproxy
> * I have 3 physical gateway servers so let's say
On Fri, Apr 12, 2024 at 2:38 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/65393#note-1
> Release Notes - TBD
> LRC upgrade - TBD
>
> Seeking approvals/reviews for:
>
> smoke - infra issues, still trying, Laura PTL
>
> rados - Radek,
unfortunately, this cloud sync module only exports data from ceph to a
remote s3 endpoint, not the other way around:
"This module syncs zone data to a remote cloud service. The sync is
unidirectional; data is not synced back from the remote zone."
i believe that rclone supports copying from one
On Wed, Apr 3, 2024 at 3:09 PM Lorenz Bausch wrote:
>
> Hi Casey,
>
> thank you so much for the analysis! We tested the upgrade intensively, but
> the buckets in our test environment were probably too small to get
> dynamically resharded.
>
> > after upgrading to the Quincy release, rgw would
> >
object names when trying to list those buckets. 404
NoSuchKey is the response i would expect in that case
On Wed, Apr 3, 2024 at 12:20 PM Casey Bodley wrote:
>
> On Wed, Apr 3, 2024 at 11:58 AM Lorenz Bausch wrote:
> >
> > Hi everybody,
> >
> > we upgraded our contain
On Wed, Apr 3, 2024 at 11:58 AM Lorenz Bausch wrote:
>
> Hi everybody,
>
> we upgraded our containerized Red Hat Pacific cluster to the latest
> Quincy release (Community Edition).
i'm afraid this is not an upgrade path that we try to test or support.
Red Hat makes its own decisions about what
Ubuntu 22.04 packages are now available for the 17.2.7 Quincy release.
The upcoming Squid release will not support Ubuntu 20.04 (Focal
Fossa). Ubuntu users planning to upgrade from Quincy to Squid will
first need to perform a distro upgrade to 22.04.
Getting Ceph
* Git at
anything we can do to narrow down the policy issue here? any of the
Principal, Action, Resource, or Condition matches could be failing
here. you might try replacing each with a wildcard, one at a time,
until you see the policy take effect
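for example, you could start from something like the policy below and
substitute "*" into each field one at a time (bucket/user names and the
endpoint are placeholders, adjust to your setup):

  $ cat > policy.json <<'EOF'
  {
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"AWS": ["arn:aws:iam:::user/testuser"]},
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::testbucket/*"
    }]
  }
  EOF
  $ aws --endpoint-url http://rgw.example.com s3api put-bucket-policy \
        --bucket testbucket --policy file://policy.json
  # then retry with "Principal": "*", then "Action": "s3:*", then
  # "Resource": "*", until the request is allowed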
On Wed, Dec 13, 2023 at 5:04 AM Marc Singer wrote:
>
> Hi
hey Christian, i'm guessing this relates to
https://tracker.ceph.com/issues/63373 which tracks a deadlock in s3
DeleteObjects requests when multisite is enabled.
rgw_multi_obj_del_max_aio can be set to 1 as a workaround until the
reef backport lands
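a sketch of applying that through the central config store (the option
name is the one above; a restart of the rgw daemons may be needed
depending on release):

  $ ceph config set client.rgw rgw_multi_obj_del_max_aio 1
  $ ceph config get client.rgw rgw_multi_obj_del_max_aio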
On Wed, Mar 6, 2024 at 2:41 PM Christian Kugler
thanks Giada, i see that you created
https://tracker.ceph.com/issues/64547 for this
unfortunately, this topic metadata doesn't really have a permission
model at all. topics are shared across the entire tenant, and all
users have access to read/overwrite those topics
a lot of work was done for
Estimate on release timeline for 17.2.8?
- after pacific 16.2.15 and reef 18.2.2 hotfix
(https://tracker.ceph.com/issues/64339,
https://tracker.ceph.com/issues/64406)
Estimate on release timeline for 19.2.0?
- target April, depending on testing and RCs
- Testing plan for Squid beyond dev freeze
run here, approved
>
> ceph-volume - Guillaume, fixed by
> https://github.com/ceph/ceph/pull/55658 retesting
>
> On Thu, Feb 8, 2024 at 8:43 AM Casey Bodley wrote:
> >
> > thanks, i've created https://tracker.ceph.com/issues/64360 to track
> > these backports t
i've cc'ed Matt who's working on the s3 object integrity feature
https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html,
where rgw compares the generated checksum with the client's on ingest,
then stores it with the object so clients can read it back for later
thanks, i've created https://tracker.ceph.com/issues/64360 to track
these backports to pacific/quincy/reef
On Thu, Feb 8, 2024 at 7:50 AM Stefan Kooman wrote:
>
> Hi,
>
> Is this PR: https://github.com/ceph/ceph/pull/54918 included as well?
>
> You definitely want to build the Ubuntu / debian
On Fri, Feb 2, 2024 at 11:21 AM Chris Palmer wrote:
>
> Hi Matthew
>
> AFAIK the upgrade from quincy/deb11 to reef/deb12 is not possible:
>
> * The packaging problem you can work around, and a fix is pending
> * You have to upgrade both the OS and Ceph in one step
> * The MGR will not run
On Mon, Jan 29, 2024 at 4:39 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/64151#note-1
>
> Seeking approvals/reviews for:
>
> rados - Radek, Laura, Travis, Ernesto, Adam King
> rgw - Casey
rgw approved, thanks
> fs - Venky
> rbd -
On Wed, Jan 31, 2024 at 3:43 AM garcetto wrote:
>
> good morning,
> i was struggling trying to understand why i cannot find this setting on
> my reef version, is it because it is only in the latest dev ceph version and not
> before?
that's right, this new feature will be part of the squid release. we
my understanding is that default placement is stored at the bucket
level, so changes to the user's default placement only take effect for
newly-created buckets
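you can confirm which placement rule a given bucket was created with,
for example (bucket name is a placeholder):

  $ radosgw-admin bucket stats --bucket=mybucket | grep placement_rule

and the user's default for future buckets can be changed with something
like 'radosgw-admin user modify --uid=myuser --placement-id=...' (flag
names may vary by release, check your docs)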
On Sun, Nov 12, 2023 at 9:48 PM Huy Nguyen wrote:
>
> Hi community,
> I'm using Ceph version 16.2.13. I tried to set
7450325/
> >>
> >> Seems to be related to nfs-ganesha. I've reached out to Frank Filz
> >> (#cephfs on ceph slack) to have a look. WIll update as soon as
> >> possible.
> >>
> >> > orch - Adam King
> >> > rbd - Ilya approved
> >
e cluster to v17.2.7 two days ago and it seems obvious that
>>>> the IAM error logs started appearing the minute the rgw daemon was upgraded from
>>>> v16.2.12 to v17.2.7. Looks like there is some issue with parsing.
>>>>
>>>> I'm thinking to downgrade back
> Thank you, this has worked to remove the policy.
>>
>> Respectfully,
>>
>> *Wes Dillingham*
>> w...@wesdillingham.com
>> LinkedIn <http://www.linkedin.com/in/wesleydillingham>
>>
>>
>> On Wed, Oct 25, 2023 at 5:10 PM Casey Bodley wrot
On Mon, Nov 6, 2023 at 4:31 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/63443#note-1
>
> Seeking approvals/reviews for:
>
> smoke - Laura, Radek, Prashant, Venky (POOL_APP_NOT_ENABLE failures)
> rados - Neha, Radek, Travis,
quincy 17.2.7: released!
* major 'dashboard v3' changes causing issues?
https://github.com/ceph/ceph/pull/54250 did not merge for 17.2.7
* planning a retrospective to discuss what kind of changes should go
in minor releases when members of the dashboard team are present
reef 18.2.1:
* most PRs
another option is to enable the rgw ops log, which includes the bucket
name for each request
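a rough sketch of turning the ops log on and reading it from a unix
socket (option names and the socket-reading approach should be checked
against your release's docs; the path is a placeholder):

  $ ceph config set client.rgw rgw_enable_ops_log true
  $ ceph config set client.rgw rgw_ops_log_socket_path /var/run/ceph/rgw-ops.sock
  # after restarting rgw, read the json records from the socket, e.g.
  $ nc -U /var/run/ceph/rgw-ops.sock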
the http access log line that's visible at log level 1 follows a known
apache format that users can scrape, so i've resisted adding extra
s3-specific stuff like bucket/object names there. there was some
ngham.com
> LinkedIn
>
>
> On Wed, Oct 25, 2023 at 4:41 PM Casey Bodley wrote:
>>
>> if you have an administrative user (created with --admin), you should
>> be able to use its credentials with awscli to delete or overwrite this
>> bucket policy
>>
>> O
if you have an administrative user (created with --admin), you should
be able to use its credentials with awscli to delete or overwrite this
bucket policy
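for example (endpoint and bucket name are placeholders; the admin
user's keys go into your awscli profile):

  # create an administrative user if you don't already have one
  $ radosgw-admin user create --uid=admin --display-name="Admin User" --admin
  # then, with that user's access/secret key configured in awscli:
  $ aws --endpoint-url http://rgw.example.com s3api delete-bucket-policy \
        --bucket lockedbucket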
On Wed, Oct 25, 2023 at 4:11 PM Wesley Dillingham
wrote:
>
> I have a bucket which got injected with bucket policy which locks the
> bucket
:
>
> Thanks Casey for your explanation,
>
> Yes it succeeded eventually. Sometimes after about 100 retries. It's odd that
> it stays in a race condition for that long.
>
> Best Regards,
> Mahnoosh
>
> On Tue, Oct 24, 2023 at 5:17 PM Casey Bodley wrote:
>>
>> er
errno 125 is ECANCELED, which is the code we use when we detect a
racing write. so it sounds like something else is modifying that user
at the same time. does it eventually succeed if you retry?
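if you want to script around it, a simple retry loop would look
something like this (the user-modify command here is just a placeholder
for whatever admin op you're issuing):

  $ for i in $(seq 1 10); do
      radosgw-admin user modify --uid=myuser --max-buckets=2000 && break
      sleep 1
    done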
On Tue, Oct 24, 2023 at 9:21 AM mahnoosh shahidi
wrote:
>
> Hi all,
>
> I couldn't understand what
On Mon, Oct 16, 2023 at 2:52 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/63219#note-2
> Release Notes - TBD
>
> Issue https://tracker.ceph.com/issues/63192 appears to be failing several
> runs.
> Should it be fixed for this
> the magic that activates that interface eludes me and whether to do it
> directly on the RGW container host (and how) or on my master host is
> totally unclear to me. It doesn't help that this is an item that has
> multiple values, not just on/off or that by default the docs seem to
> i
On Mon, Oct 16, 2023 at 2:52 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/63219#note-2
> Release Notes - TBD
>
> Issue https://tracker.ceph.com/issues/63192 appears to be failing several
> runs.
> Should it be fixed for this
hey Tim,
your changes to rgw_admin_entry probably aren't taking effect on the
running radosgws. you'd need to restart them in order to set up the
new route
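with cephadm that would look roughly like this (the rgw service name is
a placeholder):

  $ ceph config set client.rgw rgw_admin_entry admin
  $ ceph orch restart rgw.myrgw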
there also seems to be some confusion about the need for a bucket
named 'default'. radosgw just routes requests with paths starting with
we're tracking this in https://tracker.ceph.com/issues/61882. my
understanding is that we're just waiting for the next quincy point
release builds to resolve this
On Tue, Oct 10, 2023 at 11:07 AM Graham Derryberry
wrote:
>
> I have just started adding a ceph client on a rocky 9 system to our
hi Arvydas,
it looks like this change corresponds to
https://tracker.ceph.com/issues/48322 and
https://github.com/ceph/ceph/pull/38234. the intent was to enforce the
same limitation as AWS S3 and force clients to use multipart copy
instead. this limit is controlled by the config option
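for reference, a multipart copy with awscli looks roughly like this
(bucket/key names, part ranges, and parts.json are placeholders):

  $ aws s3api create-multipart-upload --bucket dstbucket --key bigobj
  # note the UploadId in the response, then copy the source in ranges:
  $ aws s3api upload-part-copy --bucket dstbucket --key bigobj \
        --upload-id <UploadId> --part-number 1 \
        --copy-source srcbucket/bigobj --copy-source-range bytes=0-536870911
  # ...repeat for the remaining ranges, then:
  $ aws s3api complete-multipart-upload --bucket dstbucket --key bigobj \
        --upload-id <UploadId> --multipart-upload file://parts.json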
On Mon, Oct 9, 2023 at 9:16 AM Gilles Mocellin
wrote:
>
> Hello Cephers,
>
> I was using Ceph with OpenStack, and users could add, remove credentials
> with `openstack ec2 credentials` commands.
> But, we are moving our Object Storage service to a new cluster, and
> didn't want to tie it with
thanks Tobias, i see that https://github.com/ceph/ceph/pull/53414 had
a ton of test failures that don't look related. i'm working with Yuri
to reschedule them
On Thu, Oct 5, 2023 at 2:05 AM Tobias Urdin wrote:
>
> Hello Yuri,
>
> On the RGW side I would very much like to get this [1] patch in
On Tue, Oct 3, 2023 at 9:06 AM Thomas Bennett wrote:
>
> Hi Jonas,
>
> Thanks :) that solved my issue.
>
> It would seem to me that this is heading towards something that the clients
> s3 should paginate, but I couldn't find any documentation on how to
> paginate bucket listings.
the s3
On Sat, Sep 23, 2023 at 5:05 AM Matthias Ferdinand wrote:
>
> On Fri, Sep 22, 2023 at 06:09:57PM -0400, Casey Bodley wrote:
> > each radosgw does maintain its own cache for certain metadata like
> > users and buckets. when one radosgw writes to a metadata object, it
> > b
each radosgw does maintain its own cache for certain metadata like
users and buckets. when one radosgw writes to a metadata object, it
broadcasts a notification (using rados watch/notify) to other radosgws
to update/invalidate their caches. the initiating radosgw waits for
all watch/notify
can see the read 1~10 in the osd logs I’ve sent here -
> https://pastebin.com/nGQw4ugd
>
> Which is weird, as it seems it is not the same as what you were able to replicate.
>
> Ondrej
>
> On 22. 9. 2023, at 21:52, Casey Bodley wrote:
>
> hey Ondrej,
>
> thanks
hey Ondrej,
thanks for creating the tracker issue
https://tracker.ceph.com/issues/62938. i added a comment there, and
opened a fix in https://github.com/ceph/ceph/pull/53602 for the only
issue i was able to identify
On Wed, Sep 20, 2023 at 9:20 PM Ondřej Kukla wrote:
>
> I was checking the
s in particular, i highly recommend trying out
the reef release. in addition to multisite resharding support, we made
a lot of improvements to multisite stability/reliability that we won't
be able to backport to pacific/quincy
>
> -Chris
>
>
> On Wednesday, September 20, 2023 at 07:3
these keys starting with "<80>0_" appear to be replication log entries
for multisite. can you confirm that this is a multisite setup? is the
'bucket sync status' mostly caught up on each zone? in a healthy
multisite configuration, these log entries would eventually get
trimmed automatically
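you can check that per bucket and per zone with something like (bucket
name is a placeholder):

  $ radosgw-admin sync status
  $ radosgw-admin bucket sync status --bucket=mybucket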
On
thanks Shashi, this regression is tracked in
https://tracker.ceph.com/issues/62771. we're testing a fix
On Sat, Sep 16, 2023 at 7:32 PM Shashi Dahal wrote:
>
> Hi All,
>
> We have 3 openstack clusters, each with their own ceph. The openstack
> versions are identical( using openstack-ansible)
On Wed, Aug 23, 2023 at 10:41 AM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/62527#note-1
> Release Notes - TBD
>
> Seeking approvals for:
>
> smoke - Venky
> rados - Radek, Laura
> rook - Sébastien Han
> cephadm - Adam K
>
you could potentially create a cls_crypt object class that exposes
functions like crypt_read() and crypt_write() to do this. but your
application would have to use cls_crypt for all reads/writes instead
of the normal librados read/write operations. would that work for you?
On Wed, Aug 23, 2023 at
On Thu, Aug 17, 2023 at 12:14 PM wrote:
>
> Hello,
>
> Yes, I can see that there are metrics to check the size of the compressed
> data stored in a pool with ceph df detail (relevant columns are USED COMPR
> and UNDER COMPR)
>
> Also the size of compressed data can be checked on osd level using
thanks Louis,
that looks like the same backtrace as
https://tracker.ceph.com/issues/61763. that issue has been on 'Need
More Info' because all of the rgw logging was disabled there. are you
able to share some more log output to help us figure this out?
under "--- begin dump of recent events
On Mon, Jul 31, 2023 at 11:38 AM Yuri Weinstein wrote:
>
> Thx Casey
>
> If you agree I will merge https://github.com/ceph/ceph/pull/52710
> ?
yes please
>
> On Mon, Jul 31, 2023 at 8:34 AM Casey Bodley wrote:
> >
> > On Sun, Jul 30, 2023 at 11:46 AM Yuri Wein
On Sun, Jul 30, 2023 at 11:46 AM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/62231#note-1
>
> Seeking approvals/reviews for:
>
> smoke - Laura, Radek
> rados - Neha, Radek, Travis, Ernesto, Adam King
> rgw - Casey
the pacific upgrade
Welcome to Aviv Caro as new Ceph NVMe-oF lead
Reef status:
* reef 18.1.3 built, gibba cluster upgraded, plan to publish this week
* https://pad.ceph.com/p/reef_final_blockers all resolved except for
bookworm builds https://tracker.ceph.com/issues/61845
* only blockers will merge to reef so the
d=2196790
>
> I was interested to see almost all of these are already in progress .
> That final one (logutils) should go to EPEL's stable repo in a week
> (faster with karma).
>
> - Ken
>
>
>
>
> On Wed, Apr 26, 2023 at 11:00 AM Casey Bodley wrote:
> >
On Mon, Jul 10, 2023 at 10:40 AM wrote:
>
> Hi,
>
> yes, this is incomplete multiparts problem.
>
> Then how does an admin delete the incomplete multipart objects?
> I mean:
> 1. can an admin find incomplete jobs and incomplete multipart objects?
> 2. If so, can an admin delete all
while a bucket is resharding, rgw will retry several times internally
to apply the write before returning an error to the client. while most
buckets can be resharded within seconds, very large buckets may hit
these timeouts. any other cause of slow osd ops could also have that
effect. it can be
>
> Regards,Yixin
>
> On Friday, June 30, 2023 at 11:29:16 a.m. EDT, Casey Bodley
> wrote:
>
> you're correct that the distinction is between metadata and data;
> metadata like users and buckets will replicate to all zonegroups,
> while object data only r
On Mon, Jul 3, 2023 at 6:52 AM mahnoosh shahidi wrote:
>
> I think this part of the doc shows that LocationConstraint can override the
> placement and I can change the placement target with this field.
>
> When creating a bucket with the S3 protocol, a placement target can be
> > provided as part
> Actually, I reported a documentation bug for something very similar.
>
> On Fri, Jun 30, 2023 at 11:30 PM Casey Bodley wrote:
> >
> > you're correct that the distinction is between metadata and data;
> > metadata like users and buckets will replicate to all zonegroups,
>
you're correct that the distinction is between metadata and data;
metadata like users and buckets will replicate to all zonegroups,
while object data only replicates within a single zonegroup. any given
bucket is 'owned' by the zonegroup that creates it (or overridden by
the LocationConstraint on
hi Jayanth,
i don't know that we have a supported way to do this. the
s3-compatible method would be to copy the object onto itself without
requesting server-side encryption. however, this wouldn't prevent
default encryption if rgw_crypt_default_encryption_key was still
enabled. furthermore, rgw
hi Boris,
we've been investigating reports of excessive polling from metadata
sync. i just opened https://tracker.ceph.com/issues/61743 to track
this. restarting the secondary zone radosgws should help as a
temporary workaround
On Tue, Jun 20, 2023 at 5:57 AM Boris Behrens wrote:
>
> Hi,
>
On Sat, Jun 17, 2023 at 8:37 AM Vahideh Alinouri
wrote:
>
> Dear Ceph Users,
>
> I am writing to request the backport of changes related to the
> AsioFrontend class and specifically regarding the header_limit value.
>
> In the Pacific release of Ceph, the header_limit value in the
> AsioFrontend
On Sat, Jun 17, 2023 at 1:11 PM Jayanth Reddy
wrote:
>
> Hello Folks,
>
> I've been experimenting with RGW encryption and found this out.
> Focusing on Quincy and Reef dev, for the SSE (any methods) to work, transit
> has to be end to end encrypted, however if there is a proxy, then [1] can
> be
On Fri, Jun 16, 2023 at 2:55 AM Christian Rohmann
wrote:
>
> On 15/06/2023 15:46, Casey Bodley wrote:
>
> * In case of HTTP via headers like "X-Forwarded-For". This is
> apparently supported only for logging the source in the "rgw ops log" ([1])?
> Or
On Thu, Jun 15, 2023 at 7:23 AM Christian Rohmann
wrote:
>
> Hello Ceph-Users,
>
> context or motivation of my question is S3 bucket policies and other
> cases using the source IP address as condition.
>
> I was wondering if and how RadosGW is able to access the source IP
> address of clients if
radosgw's object striping does not repeat, so there is no concept of
'stripe width'. rgw_obj_stripe_size just controls the maximum size of
each rados object, so the 'stripe count' is essentially just the total
s3 object size divided by rgw_obj_stripe_size
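for example, assuming the default rgw_obj_stripe_size of 4 MiB, a 100
MiB s3 object ends up as roughly 100 / 4 = 25 rados objects (a head
object plus tail stripes)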
On Tue, Jun 13, 2023 at 10:22 AM Teja A
wrote:
>
> Casey
>
> I will rerun rgw and we will see.
> Stay tuned.
>
> On Wed, May 31, 2023 at 10:27 AM Casey Bodley wrote:
> >
> > On Tue, May 30, 2023 at 12:54 PM Yuri Weinstein wrote:
> > >
> > > Details of this release are summarized here:
>
On Tue, May 30, 2023 at 12:54 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/61515#note-1
> Release Notes - TBD
>
> Seeking approvals/reviews for:
>
> rados - Neha, Radek, Travis, Ernesto, Adam King (we still have to
> merge
thanks for the report. this regression was already fixed in
https://tracker.ceph.com/issues/58932 and will be in the next quincy
point release
On Wed, May 31, 2023 at 10:46 AM wrote:
>
> I was running on 17.2.5 since October, and just upgraded to 17.2.6, and now
> the "mtime" property on all my
e/overwrite the original copy
>
> Best regards
> Tobias
>
> On 30 May 2023, at 14:48, Casey Bodley wrote:
>
> On Tue, May 30, 2023 at 8:22 AM Tobias Urdin
> <tobias.ur...@binero.com> wrote:
>
> Hello Casey,
>
> Thanks for the information
n difference is
where they get the key
>
> [1]
> https://docs.ceph.com/en/quincy/radosgw/encryption/#automatic-encryption-for-testing-only
>
> > On 26 May 2023, at 22:45, Casey Bodley wrote:
> >
> > Our downstream QE team recently observed an md5 mismatch of repl
Our downstream QE team recently observed an md5 mismatch of replicated
objects when testing rgw's server-side encryption in multisite. This
corruption is specific to s3 multipart uploads, and only affects the
replicated copy - the original object remains intact. The bug likely
affects Ceph
rgw supports the 3 flavors of S3 Server-Side Encryption, along with
the PutBucketEncryption api for per-bucket default encryption. you can
find the docs in https://docs.ceph.com/en/quincy/radosgw/encryption/
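for per-bucket defaults, the awscli call looks like this (endpoint and
bucket name are placeholders):

  $ aws --endpoint-url http://rgw.example.com s3api put-bucket-encryption \
        --bucket mybucket --server-side-encryption-configuration \
        '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'
  $ aws --endpoint-url http://rgw.example.com s3api get-bucket-encryption \
        --bucket mybucket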
On Mon, May 22, 2023 at 10:49 AM huxia...@horebdata.cn
wrote:
>
> Dear Alexander,
>
>
l one (logutils) should go to EPEL's stable repo in a week
> (faster with karma).
>
> - Ken
>
>
>
>
> On Wed, Apr 26, 2023 at 11:00 AM Casey Bodley wrote:
> >
> > are there any volunteers willing to help make these python packages
> > available upst
On Wed, May 17, 2023 at 11:13 PM Ramin Najjarbashi
wrote:
>
> Hi
>
> I'm currently using Ceph version 16.2.7 and facing an issue with bucket
> creation in a multi-zone configuration. My setup includes two zone groups:
>
> ZG1 (Master) and ZG2, with one zone in each zone group (zone-1 in ZG1 and
>
i'm afraid that feature will be new in the reef release. multisite
resharding isn't supported on quincy
On Wed, May 17, 2023 at 11:56 AM Alexander Mamonov wrote:
>
> https://docs.ceph.com/en/latest/radosgw/multisite/#feature-resharding
> When I try this I get:
> root@ceph-m-02:~# radosgw-admin
sync doesn't distinguish between multipart and regular object uploads.
once a multipart upload completes, sync will replicate it as a single
object using an s3 GetObject request
replicating the parts individually would have some benefits. for
example, when sync retries are necessary, we might
+ 7f20b12b8700 0 WARNING: curl operation timed
> out, network average transfer speed less than 1024 Bytes per second during
> 300 seconds.
> 2023-05-09T15:46:21.069+ 7f2085ff3700 0 rgw async rados processor:
> store->fetch_remote_obj() returned r=-5
> 2023-05-09T15:46:2
On Sun, May 7, 2023 at 5:25 PM Yuri Weinstein wrote:
>
> All PRs were cherry-picked and the new RC1 build is:
>
> https://shaman.ceph.com/builds/ceph/pacific-release/8f93a58b82b94b6c9ac48277cc15bd48d4c0a902/
>
> Rados, fs and rgw were rerun and results are summarized here:
>
On Thu, Apr 27, 2023 at 5:21 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/59542#note-1
> Release Notes - TBD
>
> Seeking approvals for:
>
> smoke - Radek, Laura
> rados - Radek, Laura
> rook - Sébastien Han
> cephadm - Adam K
>
On Thu, Apr 27, 2023 at 11:36 AM Tarrago, Eli (RIS-BCT)
wrote:
>
> After working on this issue for a bit.
> The active plan is to fail over master, to the “west” dc. Perform a realm
> pull from the west so that it forces the failover to occur. Then have the
> “east” DC, then pull the realm data
ges in EPEL. There's this BZ (https://bugzilla.redhat.com/2166620)
>> > requesting that specific package, but that's only one out of the dozen of
>> > missing packages (plus transitive dependencies)...
>> >
>> > Kind Regards,
>> > Ernesto
>> &
# ceph windows tests
PR check will be made required once regressions are fixed
windows build currently depends on gcc11 which limits use of c++20
features. investigating newer gcc or clang toolchain
# 16.2.13 release
final testing in progress
# prometheus metric regressions
On Sun, Apr 16, 2023 at 11:47 PM Richard Bade wrote:
>
> Hi Everyone,
> I've been having trouble finding an answer to this question. Basically
> I'm wanting to know if stuff in the .log pool is actively used for
> anything or if it's just logs that can be deleted.
> In particular I was wondering
On Wed, Apr 19, 2023 at 7:55 PM Christopher Durham wrote:
>
> Hi,
>
> I am using 17.2.6 on rocky linux for both the master and the slave site
> I noticed that:
> radosgw-admin sync status
> often shows that the metadata sync is behind a minute or two on the slave.
> This didn't make sense, as
On Wed, Apr 19, 2023 at 5:13 AM Gaël THEROND wrote:
>
> Hi everyone, quick question regarding radosgw zone data-pool.
>
> I’m currently planning to migrate an old data-pool that was created with
> inappropriate failure-domain to a newly created pool with appropriate
> failure-domain.
>
> If I’m
On Tue, Apr 11, 2023 at 3:53 PM Casey Bodley wrote:
>
> On Tue, Apr 11, 2023 at 3:19 PM Christopher Durham wrote:
> >
> >
> > Hi,
> > I see that this PR: https://github.com/ceph/ceph/pull/48030
> > made it into ceph 17.2.6, as per the change log at:
> >
On Tue, Apr 11, 2023 at 3:19 PM Christopher Durham wrote:
>
>
> Hi,
> I see that this PR: https://github.com/ceph/ceph/pull/48030
> made it into ceph 17.2.6, as per the change log at:
> https://docs.ceph.com/en/latest/releases/quincy/ That's great.
> But my scenario is as follows:
> I have two
there's a rgw_period_root_pool option for the period objects too. but
it shouldn't be necessary to override any of these
On Sun, Apr 9, 2023 at 11:26 PM wrote:
>
> Up :)
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an
know who's the maintainer of those
> > packages in EPEL. There's this BZ (https://bugzilla.redhat.com/2166620)
> > requesting that specific package, but that's only one out of the dozen of
> > missing packages (plus transitive dependencies)...
> >
> > Kind Regards
On Fri, Mar 24, 2023 at 3:46 PM Yuri Weinstein wrote:
>
> Details of this release are updated here:
>
> https://tracker.ceph.com/issues/59070#note-1
> Release Notes - TBD
>
> The slowness we experienced seemed to be self-cured.
> Neha, Radek, and Laura please provide any findings if you have
On Wed, Mar 22, 2023 at 9:27 AM Casey Bodley wrote:
>
> On Tue, Mar 21, 2023 at 4:06 PM Yuri Weinstein wrote:
> >
> > Details of this release are summarized here:
> >
> > https://tracker.ceph.com/issues/59070#note-1
> > Release Notes - TBD
> >
> >
hi Ernesto and lists,
> [1] https://github.com/ceph/ceph/pull/47501
are we planning to backport this to quincy so we can support centos 9
there? enabling that upgrade path on centos 9 was one of the
conditions for dropping centos 8 support in reef, which i'm still keen
to do
if not, can we find
On Tue, Mar 21, 2023 at 4:06 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/59070#note-1
> Release Notes - TBD
>
> The reruns were in the queue for 4 days because of some slowness issues.
> The core team (Neha, Radek, Laura, and
On Tue, Feb 28, 2023 at 8:19 AM Lars Dunemark wrote:
>
> Hi,
>
> I notice that CompleteMultipartUploadResult does return an empty ETag
> field when completing a multipart upload in v17.2.3.
>
> I haven't had the possibility to verify from which version this changed
> and can't find in the
On Sun, Feb 26, 2023 at 8:20 AM Ilya Dryomov wrote:
>
> On Sun, Feb 26, 2023 at 2:15 PM Patrick Schlangen
> wrote:
> >
> > Hi Ilya,
> >
> > > Am 26.02.2023 um 14:05 schrieb Ilya Dryomov :
> > >
> > > Isn't OpenSSL 1.0 long out of support? I'm not sure if extending
> > > librados API to support
On Mon, Feb 13, 2023 at 8:41 AM Boris Behrens wrote:
>
> I've tried it the other way around and let cat give out all escaped chars
> and then did the grep:
>
> # cat -A omapkeys_list | grep -aFn '/'
> 9844:/$
> 9845:/^@v913^@$
> 88010:M-^@1000_/^@$
> 128981:M-^@1001_/$
>
> Did anyone ever saw
On Mon, Feb 13, 2023 at 4:31 AM Boris Behrens wrote:
>
> Hi Casey,
>
>> changes to the user's default placement target/storage class don't
>> apply to existing buckets, only newly-created ones. a bucket's default
>> placement target/storage class can't be changed after creation
>
>
> so I can
hi Boris,
On Sat, Feb 11, 2023 at 7:07 AM Boris Behrens wrote:
>
> Hi,
> we use rgw as our backup storage, and it basically holds only compressed
> rbd snapshots.
> I would love to move these out of the replicated into a ec pool.
>
> I've read that I can set a default placement target for a user
distro testing for reef
* https://github.com/ceph/ceph/pull/49443 adds centos9 and ubuntu22 to
supported distros
* centos9 blocked by teuthology bug https://tracker.ceph.com/issues/58491
- lsb_release command no longer exists, use /etc/os-release instead
- ceph stopped depending on lsb_release
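  for example, the distro info that lsb_release used to report can be
  read directly from that file:

  $ . /etc/os-release && echo "$ID $VERSION_ID"
  # e.g. prints "centos 9" on CentOS Stream 9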
On Fri, Jan 20, 2023 at 11:39 AM Yuri Weinstein wrote:
>
> The overall progress on this release is looking much better and if we
> can approve it we can plan to publish it early next week.
>
> Still seeking approvals
>
> rados - Neha, Laura
> rook - Sébastien Han
> cephadm - Adam
> dashboard -