[ceph-users] Re: Best practice regarding rgw scaling

2024-05-23 Thread Casey Bodley
On Thu, May 23, 2024 at 11:50 AM Szabo, Istvan (Agoda) wrote: > > Hi, > > Wonder what is the best practice to scale RGW, increase the thread numbers or > spin up more gateways? > > > * > Let's say I have 21000 connections on my haproxy > * > I have 3 physical gateway servers so let's say

[ceph-users] Re: reef 18.2.3 QE validation status

2024-04-12 Thread Casey Bodley
On Fri, Apr 12, 2024 at 2:38 PM Yuri Weinstein wrote: > > Details of this release are summarized here: > > https://tracker.ceph.com/issues/65393#note-1 > Release Notes - TBD > LRC upgrade - TBD > > Seeking approvals/reviews for: > > smoke - infra issues, still trying, Laura PTL > > rados - Radek,

[ceph-users] Re: Migrating from S3 to Ceph RGW (Cloud Sync Module)

2024-04-11 Thread Casey Bodley
unfortunately, this cloud sync module only exports data from ceph to a remote s3 endpoint, not the other way around: "This module syncs zone data to a remote cloud service. The sync is unidirectional; data is not synced back from the remote zone." i believe that rclone supports copying from one

[ceph-users] Re: Upgraded to Quincy 17.2.7: some S3 buckets inaccessible

2024-04-03 Thread Casey Bodley
On Wed, Apr 3, 2024 at 3:09 PM Lorenz Bausch wrote: > > Hi Casey, > > thank you so much for the analysis! We tested the upgrade intensively, but > the buckets in our test environment were probably too small to get > dynamically resharded. > > > after upgrading to the Quincy release, rgw would > >

[ceph-users] Re: Upgraded to Quincy 17.2.7: some S3 buckets inaccessible

2024-04-03 Thread Casey Bodley
object names when trying to list those buckets. 404 NoSuchKey is the response i would expect in that case On Wed, Apr 3, 2024 at 12:20 PM Casey Bodley wrote: > > On Wed, Apr 3, 2024 at 11:58 AM Lorenz Bausch wrote: > > > > Hi everybody, > > > > we upgraded our contain

[ceph-users] Re: Upgraded to Quincy 17.2.7: some S3 buckets inaccessible

2024-04-03 Thread Casey Bodley
On Wed, Apr 3, 2024 at 11:58 AM Lorenz Bausch wrote: > > Hi everybody, > > we upgraded our containerized Red Hat Pacific cluster to the latest > Quincy release (Community Edition). i'm afraid this is not an upgrade path that we try to test or support. Red Hat makes its own decisions about what

[ceph-users] v17.2.7 Quincy now supports Ubuntu 22.04 (Jammy Jellyfish)

2024-03-29 Thread Casey Bodley
Ubuntu 22.04 packages are now available for the 17.2.7 Quincy release. The upcoming Squid release will not support Ubuntu 20.04 (Focal Fossa). Ubuntu users planning to upgrade from Quincy to Squid will first need to perform a distro upgrade to 22.04. Getting Ceph * Git at

[ceph-users] Re: Disable signature url in ceph rgw

2024-03-07 Thread Casey Bodley
anything we can do to narrow down the policy issue here? any of the Principal, Action, Resource, or Condition matches could be failing here. you might try replacing each with a wildcard, one at a time, until you see the policy take effect On Wed, Dec 13, 2023 at 5:04 AM Marc Singer wrote: > > Hi

[ceph-users] Re: Hanging request in S3

2024-03-06 Thread Casey Bodley
hey Christian, i'm guessing this relates to https://tracker.ceph.com/issues/63373 which tracks a deadlock in s3 DeleteObjects requests when multisite is enabled. rgw_multi_obj_del_max_aio can be set to 1 as a workaround until the reef backport lands On Wed, Mar 6, 2024 at 2:41 PM Christian Kugler
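
A minimal sketch of applying that workaround through the config database (the client.rgw target is an assumption; scope it to your rgw instances as needed):

    # limit DeleteObjects concurrency to 1 until the fix is available
    ceph config set client.rgw rgw_multi_obj_del_max_aio 1
    # revert once the backport of https://tracker.ceph.com/issues/63373 lands
    ceph config rm client.rgw rgw_multi_obj_del_max_aio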

[ceph-users] Re: list topic shows endpoint url and username e password

2024-02-23 Thread Casey Bodley
thanks Giada, i see that you created https://tracker.ceph.com/issues/64547 for this unfortunately, this topic metadata doesn't really have a permission model at all. topics are shared across the entire tenant, and all users have access to read/overwrite those topics a lot of work was done for

[ceph-users] Ceph Leadership Team Meeting: 2024-2-21 Minutes

2024-02-21 Thread Casey Bodley
Estimate on release timeline for 17.2.8? - after pacific 16.2.15 and reef 18.2.2 hotfix (https://tracker.ceph.com/issues/64339, https://tracker.ceph.com/issues/64406) Estimate on release timeline for 19.2.0? - target April, depending on testing and RCs - Testing plan for Squid beyond dev freeze

[ceph-users] Re: pacific 16.2.15 QE validation status

2024-02-21 Thread Casey Bodley
run here, approved > > ceph-volume - Guillaume, fixed by > https://github.com/ceph/ceph/pull/55658 retesting > > On Thu, Feb 8, 2024 at 8:43 AM Casey Bodley wrote: > > > > thanks, i've created https://tracker.ceph.com/issues/64360 to track > > these backports t

[ceph-users] Re: How to solve data fixity

2024-02-09 Thread Casey Bodley
i've cc'ed Matt who's working on the s3 object integrity feature https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html, where rgw compares the generated checksum with the client's on ingest, then stores it with the object so clients can read it back for later
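
For context, the AWS feature being referenced exposes the checksum through the regular S3 API; a hedged awscli sketch (bucket/key are placeholders, and rgw support is the work in progress mentioned above):

    # upload with a client-chosen checksum algorithm
    aws s3api put-object --bucket example-bucket --key data.bin \
        --body ./data.bin --checksum-algorithm SHA256
    # read the stored checksum back for a later fixity check
    aws s3api get-object-attributes --bucket example-bucket --key data.bin \
        --object-attributes Checksum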

[ceph-users] Re: pacific 16.2.15 QE validation status

2024-02-08 Thread Casey Bodley
thanks, i've created https://tracker.ceph.com/issues/64360 to track these backports to pacific/quincy/reef On Thu, Feb 8, 2024 at 7:50 AM Stefan Kooman wrote: > > Hi, > > Is this PR: https://github.com/ceph/ceph/pull/54918 included as well? > > You definitely want to build the Ubuntu / debian

[ceph-users] Re: Debian 12 (bookworm) / Reef 18.2.1 problems

2024-02-02 Thread Casey Bodley
On Fri, Feb 2, 2024 at 11:21 AM Chris Palmer wrote: > > Hi Matthew > > AFAIK the upgrade from quincy/deb11 to reef/deb12 is not possible: > > * The packaging problem you can work around, and a fix is pending > * You have to upgrade both the OS and Ceph in one step > * The MGR will not run

[ceph-users] Re: pacific 16.2.15 QE validation status

2024-01-31 Thread Casey Bodley
On Mon, Jan 29, 2024 at 4:39 PM Yuri Weinstein wrote: > > Details of this release are summarized here: > > https://tracker.ceph.com/issues/64151#note-1 > > Seeking approvals/reviews for: > > rados - Radek, Laura, Travis, Ernesto, Adam King > rgw - Casey rgw approved, thanks > fs - Venky > rbd -

[ceph-users] Re: Help on rgw metrics (was rgw_user_counters_cache)

2024-01-31 Thread Casey Bodley
On Wed, Jan 31, 2024 at 3:43 AM garcetto wrote: > > good morning, > i was struggling trying to understand why i cannot find this setting on > my reef version, is it because is only on latest dev ceph version and not > before? that's right, this new feature will be part of the squid release. we

[ceph-users] Re: RGW: user modify default_storage_class does not work

2023-11-13 Thread Casey Bodley
my understanding is that default placement is stored at the bucket level, so changes to the user's default placement only take effect for newly-created buckets On Sun, Nov 12, 2023 at 9:48 PM Huy Nguyen wrote: > > Hi community, > I'm using Ceph version 16.2.13. I tried to set
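
As an illustration, the placement a bucket was created with can be inspected directly (bucket name is a placeholder; this is a sketch, not a migration path):

    # placement_rule reflects the default that applied at creation time
    radosgw-admin bucket stats --bucket=example-bucket | grep placement_rule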

[ceph-users] Re: reef 18.2.1 QE Validation status

2023-11-09 Thread Casey Bodley
7450325/ > >> > >> Seems to be related to nfs-ganesha. I've reached out to Frank Filz > >> (#cephfs on ceph slack) to have a look. WIll update as soon as > >> possible. > >> > >> > orch - Adam King > >> > rbd - Ilya approved > >

[ceph-users] Re: owner locked out of bucket via bucket policy

2023-11-08 Thread Casey Bodley
e cluster to v17.2.7 two days ago and it seems obvious that >>>> the IAM error logs are generated the next minute rgw daemon upgraded from >>>> v16.2.12 to v17.2.7. Looks like there is some issue with parsing. >>>> >>>> I'm thinking to downgrade back

[ceph-users] Re: owner locked out of bucket via bucket policy

2023-11-07 Thread Casey Bodley
> Thank you, this has worked to remove the policy. >> >> Respectfully, >> >> *Wes Dillingham* >> w...@wesdillingham.com >> LinkedIn <http://www.linkedin.com/in/wesleydillingham> >> >> >> On Wed, Oct 25, 2023 at 5:10 PM Casey Bodley wrot

[ceph-users] Re: reef 18.2.1 QE Validation status

2023-11-07 Thread Casey Bodley
On Mon, Nov 6, 2023 at 4:31 PM Yuri Weinstein wrote: > > Details of this release are summarized here: > > https://tracker.ceph.com/issues/63443#note-1 > > Seeking approvals/reviews for: > > smoke - Laura, Radek, Prashant, Venky (POOL_APP_NOT_ENABLE failures) > rados - Neha, Radek, Travis,

[ceph-users] Ceph Leadership Team Meeting: 2023-11-1 Minutes

2023-11-01 Thread Casey Bodley
quincy 17.2.7: released! * major 'dashboard v3' changes causing issues? https://github.com/ceph/ceph/pull/54250 did not merge for 17.2.7 * planning a retrospective to discuss what kind of changes should go in minor releases when members of the dashboard team are present reef 18.2.1: * most PRs

[ceph-users] Re: RGW access logs with bucket name

2023-10-30 Thread Casey Bodley
another option is to enable the rgw ops log, which includes the bucket name for each request the http access log line that's visible at log level 1 follows a known apache format that users can scrape, so i've resisted adding extra s3-specific stuff like bucket/object names there. there was some
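
A minimal sketch of turning the ops log on (these option names come from the rgw docs; verify the defaults for your release):

    # record each request, including bucket and object names, in the log pool
    ceph config set client.rgw rgw_enable_ops_log true
    ceph config set client.rgw rgw_ops_log_rados true
    # or stream entries to a unix socket instead of rados
    # ceph config set client.rgw rgw_ops_log_socket_path /var/run/ceph/rgw-ops.sock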

[ceph-users] Re: owner locked out of bucket via bucket policy

2023-10-25 Thread Casey Bodley
ngham.com > LinkedIn > > > On Wed, Oct 25, 2023 at 4:41 PM Casey Bodley wrote: >> >> if you have an administrative user (created with --admin), you should >> be able to use its credentials with awscli to delete or overwrite this >> bucket policy >> >> O

[ceph-users] Re: owner locked out of bucket via bucket policy

2023-10-25 Thread Casey Bodley
if you have an administrative user (created with --admin), you should be able to use its credentials with awscli to delete or overwrite this bucket policy On Wed, Oct 25, 2023 at 4:11 PM Wesley Dillingham wrote: > > I have a bucket which got injected with bucket policy which locks the > bucket
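
A sketch of that recovery path (endpoint, uid, and bucket name are placeholders):

    # create an administrative user if one doesn't already exist
    radosgw-admin user create --uid=admin --display-name="admin user" --admin
    # configure awscli with that user's keys, then drop the offending policy
    aws --endpoint-url http://rgw.example.com:8080 \
        s3api delete-bucket-policy --bucket locked-bucket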

[ceph-users] Re: Modify user op status=-125

2023-10-24 Thread Casey Bodley
: > > Thanks Casey for your explanation, > > Yes it succeeded eventually. Sometimes after about 100 retries. It's odd that > it stays in racing condition for that much time. > > Best Regards, > Mahnoosh > > On Tue, Oct 24, 2023 at 5:17 PM Casey Bodley wrote: >> >> er

[ceph-users] Re: Modify user op status=-125

2023-10-24 Thread Casey Bodley
errno 125 is ECANCELED, which is the code we use when we detect a racing write. so it sounds like something else is modifying that user at the same time. does it eventually succeed if you retry? On Tue, Oct 24, 2023 at 9:21 AM mahnoosh shahidi wrote: > > Hi all, > > I couldn't understand what
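
Since ECANCELED only signals a lost race, a plain retry loop is usually enough; a hedged sketch (the uid and the modified field are placeholders):

    for i in $(seq 1 10); do
        radosgw-admin user modify --uid=example-user --max-buckets=2000 && break
        sleep 1
    done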

[ceph-users] Re: quincy v17.2.7 QE Validation status

2023-10-18 Thread Casey Bodley
On Mon, Oct 16, 2023 at 2:52 PM Yuri Weinstein wrote: > > Details of this release are summarized here: > > https://tracker.ceph.com/issues/63219#note-2 > Release Notes - TBD > > Issue https://tracker.ceph.com/issues/63192 appears to be failing several > runs. > Should it be fixed for this

[ceph-users] Re: Dashboard and Object Gateway

2023-10-17 Thread Casey Bodley
; the magic that activates that interface eludes me and whether to do it > directly on the RGW container hos (and how) or on my master host is > totally unclear to me. It doesn't help that this is an item that has > multiple values, not just on/off or that by default the docs seem to > i

[ceph-users] Re: quincy v17.2.7 QE Validation status

2023-10-17 Thread Casey Bodley
On Mon, Oct 16, 2023 at 2:52 PM Yuri Weinstein wrote: > > Details of this release are summarized here: > > https://tracker.ceph.com/issues/63219#note-2 > Release Notes - TBD > > Issue https://tracker.ceph.com/issues/63192 appears to be failing several > runs. > Should it be fixed for this

[ceph-users] Re: Dashboard and Object Gateway

2023-10-17 Thread Casey Bodley
hey Tim, your changes to rgw_admin_entry probably aren't taking effect on the running radosgws. you'd need to restart them in order to set up the new route there also seems to be some confusion about the need for a bucket named 'default'. radosgw just routes requests with paths starting with
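
A minimal sketch of the change described above ('admin' is the conventional entry point; the orchestrator service name is an assumption):

    ceph config set client.rgw rgw_admin_entry admin
    # the new route only appears once the radosgws restart
    ceph orch restart rgw.default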

[ceph-users] Re: Nothing provides libthrift-0.14.0.so()(64bit)

2023-10-10 Thread Casey Bodley
we're tracking this in https://tracker.ceph.com/issues/61882. my understanding is that we're just waiting for the next quincy point release builds to resolve this On Tue, Oct 10, 2023 at 11:07 AM Graham Derryberry wrote: > > I have just started adding a ceph client on a rocky 9 system to our

[ceph-users] Re: Copying big objects (>5GB) doesn't work after upgrade to Quincy on S3

2023-10-10 Thread Casey Bodley
hi Arvydas, it looks like this change corresponds to https://tracker.ceph.com/issues/48322 and https://github.com/ceph/ceph/pull/38234. the intent was to enforce the same limitation as AWS S3 and force clients to use multipart copy instead. this limit is controlled by the config option
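
For anyone hitting the new limit, the S3-side answer is a multipart copy; a condensed awscli sketch (bucket/key names and the upload id are placeholders, and each copied range must be at most 5 GiB):

    aws s3api create-multipart-upload --bucket dst-bucket --key big-object
    aws s3api upload-part-copy --bucket dst-bucket --key big-object \
        --upload-id <UploadId> --part-number 1 \
        --copy-source src-bucket/big-object --copy-source-range bytes=0-5368709119
    # ...repeat for the remaining ranges, then complete the upload
    aws s3api complete-multipart-upload --bucket dst-bucket --key big-object \
        --upload-id <UploadId> --multipart-upload file://parts.json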

[ceph-users] Re: [RGW] Is there a way for a user to change is secret key or create other keys ?

2023-10-09 Thread Casey Bodley
On Mon, Oct 9, 2023 at 9:16 AM Gilles Mocellin wrote: > > Hello Cephers, > > I was using Ceph with OpenStack, and users could add, remove credentials > with `openstack ec2 credentials` commands. > But, we are moving our Object Storage service to a new cluster, and > didn't want to tie it with

[ceph-users] Re: Next quincy point release 17.2.7

2023-10-05 Thread Casey Bodley
thanks Tobias, i see that https://github.com/ceph/ceph/pull/53414 had a ton of test failures that don't look related. i'm working with Yuri to reschedule them On Thu, Oct 5, 2023 at 2:05 AM Tobias Urdin wrote: > > Hello Yuri, > > On the RGW side I would very much like to get this [1] patch in

[ceph-users] Re: S3 user with more than 1000 buckets

2023-10-03 Thread Casey Bodley
On Tue, Oct 3, 2023 at 9:06 AM Thomas Bennett wrote: > > Hi Jonas, > > Thanks :) that solved my issue. > > It would seem to me that this is heading towards something that the clients > s3 should paginate, but I couldn't find any documentation on how to > paginate bucket listings. the s3

[ceph-users] Re: rgw: strong consistency for (bucket) policy settings?

2023-09-25 Thread Casey Bodley
On Sat, Sep 23, 2023 at 5:05 AM Matthias Ferdinand wrote: > > On Fri, Sep 22, 2023 at 06:09:57PM -0400, Casey Bodley wrote: > > each radosgw does maintain its own cache for certain metadata like > > users and buckets. when one radosgw writes to a metadata object, it > > b

[ceph-users] Re: rgw: strong consistency for (bucket) policy settings?

2023-09-22 Thread Casey Bodley
each radosgw does maintain its own cache for certain metadata like users and buckets. when one radosgw writes to a metadata object, it broadcasts a notification (using rados watch/notify) to other radosgws to update/invalidate their caches. the initiating radosgw waits for all watch/notify

[ceph-users] Re: S3website range requests - possible issue

2023-09-22 Thread Casey Bodley
can see the read 1~10 in the osd logs I’ve sent here - > https://pastebin.com/nGQw4ugd > > Which is weird as it seems that it is not the same you were able to replicate. > > Ondrej > > On 22. 9. 2023, at 21:52, Casey Bodley wrote: > > hey Ondrej, > > thanks

[ceph-users] Re: S3website range requests - possible issue

2023-09-22 Thread Casey Bodley
hey Ondrej, thanks for creating the tracker issue https://tracker.ceph.com/issues/62938. i added a comment there, and opened a fix in https://github.com/ceph/ceph/pull/53602 for the only issue i was able to identify On Wed, Sep 20, 2023 at 9:20 PM Ondřej Kukla wrote: > > I was checking the

[ceph-users] Re: millions of hex 80 0_0000 omap keys in single index shard for single bucket

2023-09-21 Thread Casey Bodley
s in particular, i highly recommend trying out the reef release. in addition to multisite resharding support, we made a lot of improvements to multisite stability/reliability that we won't be able to backport to pacific/quincy > > -Chris > > > On Wednesday, September 20, 2023 at 07:3

[ceph-users] Re: millions of hex 80 0_0000 omap keys in single index shard for single bucket

2023-09-20 Thread Casey Bodley
these keys starting with "<80>0_" appear to be replication log entries for multisite. can you confirm that this is a multisite setup? is the 'bucket sync status' mostly caught up on each zone? in a healthy multisite configuration, these log entries would eventually get trimmed automatically On
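
For reference, the sync state can be checked per zone and per bucket (bucket name is a placeholder):

    radosgw-admin sync status
    radosgw-admin bucket sync status --bucket=example-bucket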

[ceph-users] Re: openstack rgw swift -- reef vs quincy

2023-09-18 Thread Casey Bodley
thanks Shashi, this regression is tracked in https://tracker.ceph.com/issues/62771. we're testing a fix On Sat, Sep 16, 2023 at 7:32 PM Shashi Dahal wrote: > > Hi All, > > We have 3 openstack clusters, each with their own ceph. The openstack > versions are identical( using openstack-ansible)

[ceph-users] Re: 16.2.14 pacific QE validation status

2023-08-24 Thread Casey Bodley
On Wed, Aug 23, 2023 at 10:41 AM Yuri Weinstein wrote: > > Details of this release are summarized here: > > https://tracker.ceph.com/issues/62527#note-1 > Release Notes - TBD > > Seeking approvals for: > > smoke - Venky > rados - Radek, Laura > rook - Sébastien Han > cephadm - Adam K >

[ceph-users] Re: Rados object transformation

2023-08-23 Thread Casey Bodley
you could potentially create a cls_crypt object class that exposes functions like crypt_read() and crypt_write() to do this. but your application would have to use cls_crypt for all reads/writes instead of the normal librados read/write operations. would that work for you? On Wed, Aug 23, 2023 at

[ceph-users] Re: Check allocated RGW bucket/object size after enabling Bluestore compression

2023-08-17 Thread Casey Bodley
On Thu, Aug 17, 2023 at 12:14 PM wrote: > > Hello, > > Yes, I can see that there are metrics to check the size of the compressed > data stored in a pool with ceph df detail (relevant columns are USED COMPR > and UNDER COMPR) > > Also the size of compressed data can be checked on osd level using

[ceph-users] Re: [ceph v16.2.10] radosgw crash

2023-08-16 Thread Casey Bodley
thanks Louis, that looks like the same backtrace as https://tracker.ceph.com/issues/61763. that issue has been on 'Need More Info' because all of the rgw logging was disabled there. are you able to share some more log output to help us figure this out? under "--- begin dump of recent events

[ceph-users] Re: ref v18.2.0 QE Validation status

2023-07-31 Thread Casey Bodley
On Mon, Jul 31, 2023 at 11:38 AM Yuri Weinstein wrote: > > Thx Casey > > If you agree I will merge https://github.com/ceph/ceph/pull/52710 > ? yes please > > On Mon, Jul 31, 2023 at 8:34 AM Casey Bodley wrote: > > > > On Sun, Jul 30, 2023 at 11:46 AM Yuri Wein

[ceph-users] Re: ref v18.2.0 QE Validation status

2023-07-31 Thread Casey Bodley
On Sun, Jul 30, 2023 at 11:46 AM Yuri Weinstein wrote: > > Details of this release are summarized here: > > https://tracker.ceph.com/issues/62231#note-1 > > Seeking approvals/reviews for: > > smoke - Laura, Radek > rados - Neha, Radek, Travis, Ernesto, Adam King > rgw - Casey the pacific upgrade

[ceph-users] Ceph Leadership Team Meeting, 2023-07-26 Minutes

2023-07-26 Thread Casey Bodley
Welcome to Aviv Caro as new Ceph NVMe-oF lead Reef status: * reef 18.1.3 built, gibba cluster upgraded, plan to publish this week * https://pad.ceph.com/p/reef_final_blockers all resolved except for bookworm builds https://tracker.ceph.com/issues/61845 * only blockers will merge to reef so the

[ceph-users] Re: Ceph Mgr/Dashboard Python depedencies: a new approach

2023-07-14 Thread Casey Bodley
d=2196790 > > I was interested to see almost all of these are already in progress . > That final one (logutils) should go to EPEL's stable repo in a week > (faster with karma). > > - Ken > > > > > On Wed, Apr 26, 2023 at 11:00 AM Casey Bodley wrote: > > &

[ceph-users] Re: ceph quota qustion

2023-07-10 Thread Casey Bodley
On Mon, Jul 10, 2023 at 10:40 AM wrote: > > Hi, > > yes, this is incomplete multiparts problem. > > Then, how do admin delete the incomplete multipart object? > I mean > 1. can admin find incomplete job and incomplete multipart object? > 2. If first question is possible, then can admin delete all

[ceph-users] Re: RGW dynamic resharding blocks write ops

2023-07-07 Thread Casey Bodley
while a bucket is resharding, rgw will retry several times internally to apply the write before returning an error to the client. while most buckets can be resharded within seconds, very large buckets may hit these timeouts. any other cause of slow osd ops could also have that effect. it can be

[ceph-users] Re: [multisite] The purpose of zonegroup

2023-07-05 Thread Casey Bodley
> > Regards,Yixin > > On Friday, June 30, 2023 at 11:29:16 a.m. EDT, Casey Bodley > wrote: > > you're correct that the distinction is between metadata and data; > metadata like users and buckets will replicate to all zonegroups, > while object data only r

[ceph-users] Re: Get bucket placement target

2023-07-03 Thread Casey Bodley
On Mon, Jul 3, 2023 at 6:52 AM mahnoosh shahidi wrote: > > I think this part of the doc shows that LocationConstraint can override the > placement and I can change the placement target with this field. > > When creating a bucket with the S3 protocol, a placement target can be > > provided as part

[ceph-users] Re: [multisite] The purpose of zonegroup

2023-06-30 Thread Casey Bodley
> Actually, I reported a documentation bug for something very similar. > > On Fri, Jun 30, 2023 at 11:30 PM Casey Bodley wrote: > > > > you're correct that the distinction is between metadata and data; > > metadata like users and buckets will replicate to all zonegroups, >

[ceph-users] Re: [multisite] The purpose of zonegroup

2023-06-30 Thread Casey Bodley
you're correct that the distinction is between metadata and data; metadata like users and buckets will replicate to all zonegroups, while object data only replicates within a single zonegroup. any given bucket is 'owned' by the zonegroup that creates it (or overridden by the LocationConstraint on
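
As an illustration of that override, a bucket can be pinned at creation time via the LocationConstraint; a hedged sketch where the zonegroup api_name and placement target are placeholders:

    aws s3api create-bucket --bucket example-bucket \
        --create-bucket-configuration LocationConstraint=zg2-api-name:custom-placement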

[ceph-users] Re: Removing the encryption: (essentially decrypt) encrypted RGW objects

2023-06-22 Thread Casey Bodley
hi Jayanth, i don't know that we have a supported way to do this. the s3-compatible method would be to copy the object onto itself without requesting server-side encryption. however, this wouldn't prevent default encryption if rgw_crypt_default_encryption_key was still enabled. furthermore, rgw
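
A sketch of that s3-compatible approach (bucket/key are placeholders; as noted, it won't help while rgw_crypt_default_encryption_key is still enabled):

    # rewrite the object in place without requesting server-side encryption
    aws s3api copy-object --bucket example-bucket --key example-key \
        --copy-source example-bucket/example-key --metadata-directive REPLACE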

[ceph-users] Re: radosgw new zonegroup hammers master with metadata sync

2023-06-20 Thread Casey Bodley
hi Boris, we've been investigating reports of excessive polling from metadata sync. i just opened https://tracker.ceph.com/issues/61743 to track this. restarting the secondary zone radosgws should help as a temporary workaround On Tue, Jun 20, 2023 at 5:57 AM Boris Behrens wrote: > > Hi, >

[ceph-users] Re: header_limit in AsioFrontend class

2023-06-19 Thread Casey Bodley
On Sat, Jun 17, 2023 at 8:37 AM Vahideh Alinouri wrote: > > Dear Ceph Users, > > I am writing to request the backporting changes related to the > AsioFrontend class and specifically regarding the header_limit value. > > In the Pacific release of Ceph, the header_limit value in the > AsioFrontend

[ceph-users] Re: Starting v17.2.5 RGW SSE with default key (likely others) no longer works

2023-06-19 Thread Casey Bodley
On Sat, Jun 17, 2023 at 1:11 PM Jayanth Reddy wrote: > > Hello Folks, > > I've been experimenting with RGW encryption and found this out. > Focusing on Quincy and Reef dev, for the SSE (any methods) to work, transit > has to be end to end encrypted, however if there is a proxy, then [1] can > be

[ceph-users] Re: RGW accessing real source IP address of a client (e.g. in S3 bucket policies)

2023-06-16 Thread Casey Bodley
On Fri, Jun 16, 2023 at 2:55 AM Christian Rohmann wrote: > > On 15/06/2023 15:46, Casey Bodley wrote: > > * In case of HTTP via headers like "X-Forwarded-For". This is > apparently supported only for logging the source in the "rgw ops log" ([1])? > Or

[ceph-users] Re: RGW accessing real source IP address of a client (e.g. in S3 bucket policies)

2023-06-15 Thread Casey Bodley
On Thu, Jun 15, 2023 at 7:23 AM Christian Rohmann wrote: > > Hello Ceph-Users, > > context or motivation of my question is S3 bucket policies and other > cases using the source IP address as condition. > > I was wondering if and how RadosGW is able to access the source IP > address of clients if

[ceph-users] Re: RGW striping configuration.

2023-06-13 Thread Casey Bodley
radosgw's object striping does not repeat, so there is no concept of 'stripe width'. rgw_obj_stripe_size just controls the maximum size of each rados object, so the 'stripe count' is essentially just the total s3 object size divided by rgw_obj_stripe_size On Tue, Jun 13, 2023 at 10:22 AM Teja A
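
A rough worked example, assuming the default rgw_obj_stripe_size of 4 MiB and ignoring the head object's chunk for simplicity:

    # a 100 MiB S3 object maps to about ceil(100 MiB / 4 MiB) = 25 rados objects
    # an object smaller than the stripe size stays in a single rados object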

[ceph-users] Re: reef v18.1.0 QE Validation status

2023-06-01 Thread Casey Bodley
wrote: > > Casey > > I will rerun rgw and we will see. > Stay tuned. > > On Wed, May 31, 2023 at 10:27 AM Casey Bodley wrote: > > > > On Tue, May 30, 2023 at 12:54 PM Yuri Weinstein wrote: > > > > > > Details of this release are summarized here: >

[ceph-users] Re: reef v18.1.0 QE Validation status

2023-05-31 Thread Casey Bodley
On Tue, May 30, 2023 at 12:54 PM Yuri Weinstein wrote: > > Details of this release are summarized here: > > https://tracker.ceph.com/issues/61515#note-1 > Release Notes - TBD > > Seeking approvals/reviews for: > > rados - Neha, Radek, Travis, Ernesto, Adam King (we still have to > merge

[ceph-users] Re: all buckets mtime = "0.000000" after upgrade to 17.2.6

2023-05-31 Thread Casey Bodley
thanks for the report. this regression was already fixed in https://tracker.ceph.com/issues/58932 and will be in the next quincy point release On Wed, May 31, 2023 at 10:46 AM wrote: > > I was running on 17.2.5 since October, and just upgraded to 17.2.6, and now > the "mtime" property on all my

[ceph-users] Re: Important: RGW multisite bug may silently corrupt encrypted objects on replication

2023-05-31 Thread Casey Bodley
e/overwrite the original copy > > Best regards > Tobias > > On 30 May 2023, at 14:48, Casey Bodley wrote: > > On Tue, May 30, 2023 at 8:22 AM Tobias Urdin > mailto:tobias.ur...@binero.com>> wrote: > > Hello Casey, > > Thanks for the information

[ceph-users] Re: Important: RGW multisite bug may silently corrupt encrypted objects on replication

2023-05-30 Thread Casey Bodley
n difference is where they get the key > > [1] > https://docs.ceph.com/en/quincy/radosgw/encryption/#automatic-encryption-for-testing-only > > > On 26 May 2023, at 22:45, Casey Bodley wrote: > > > > Our downstream QE team recently observed an md5 mismatch of repl

[ceph-users] Important: RGW multisite bug may silently corrupt encrypted objects on replication

2023-05-26 Thread Casey Bodley
Our downstream QE team recently observed an md5 mismatch of replicated objects when testing rgw's server-side encryption in multisite. This corruption is specific to s3 multipart uploads, and only affects the replicated copy - the original object remains intact. The bug likely affects Ceph

[ceph-users] Re: Encryption per user Howto

2023-05-22 Thread Casey Bodley
rgw supports the 3 flavors of S3 Server-Side Encryption, along with the PutBucketEncryption api for per-bucket default encryption. you can find the docs in https://docs.ceph.com/en/quincy/radosgw/encryption/ On Mon, May 22, 2023 at 10:49 AM huxia...@horebdata.cn wrote: > > Dear Alexander, > >
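
For example, a per-bucket SSE-S3 default can be set with awscli (bucket name is a placeholder):

    aws s3api put-bucket-encryption --bucket example-bucket \
        --server-side-encryption-configuration \
        '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'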

[ceph-users] Re: Ceph Mgr/Dashboard Python depedencies: a new approach

2023-05-18 Thread Casey Bodley
l one (logutils) should go to EPEL's stable repo in a week > (faster with karma). > > - Ken > > > > > On Wed, Apr 26, 2023 at 11:00 AM Casey Bodley wrote: > > > > are there any volunteers willing to help make these python packages > > available upst

[ceph-users] Re: Creating a bucket with bucket constructor in Ceph v16.2.7

2023-05-18 Thread Casey Bodley
On Wed, May 17, 2023 at 11:13 PM Ramin Najjarbashi wrote: > > Hi > > I'm currently using Ceph version 16.2.7 and facing an issue with bucket > creation in a multi-zone configuration. My setup includes two zone groups: > > ZG1 (Master) and ZG2, with one zone in each zone group (zone-1 in ZG1 and >

[ceph-users] Re: how to enable multisite resharding feature?

2023-05-17 Thread Casey Bodley
i'm afraid that feature will be new in the reef release. multisite resharding isn't supported on quincy On Wed, May 17, 2023 at 11:56 AM Alexander Mamonov wrote: > > https://docs.ceph.com/en/latest/radosgw/multisite/#feature-resharding > When I try this I get: > root@ceph-m-02:~# radosgw-admin
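
On reef, the zonegroup feature is enabled roughly as follows (zonegroup name is a placeholder; see the linked docs for the authoritative steps):

    radosgw-admin zonegroup modify --rgw-zonegroup=example-zg --enable-feature=resharding
    radosgw-admin period update --commit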

[ceph-users] Re: multisite sync and multipart uploads

2023-05-11 Thread Casey Bodley
sync doesn't distinguish between multipart and regular object uploads. once a multipart upload completes, sync will replicate it as a single object using an s3 GetObject request replicating the parts individually would have some benefits. for example, when sync retries are necessary, we might

[ceph-users] Re: Radosgw multisite replication issues

2023-05-11 Thread Casey Bodley
+ 7f20b12b8700 0 WARNING: curl operation timed > out, network average transfer speed less than 1024 Bytes per second during > 300 seconds. > 2023-05-09T15:46:21.069+ 7f2085ff3700 0 rgw async rados processor: > store->fetch_remote_obj() returned r=-5 > 2023-05-09T15:46:2

[ceph-users] Re: 16.2.13 pacific QE validation status

2023-05-08 Thread Casey Bodley
On Sun, May 7, 2023 at 5:25 PM Yuri Weinstein wrote: > > All PRs were cherry-picked and the new RC1 build is: > > https://shaman.ceph.com/builds/ceph/pacific-release/8f93a58b82b94b6c9ac48277cc15bd48d4c0a902/ > > Rados, fs and rgw were rerun and results are summarized here: >

[ceph-users] Re: 16.2.13 pacific QE validation status

2023-05-02 Thread Casey Bodley
On Thu, Apr 27, 2023 at 5:21 PM Yuri Weinstein wrote: > > Details of this release are summarized here: > > https://tracker.ceph.com/issues/59542#note-1 > Release Notes - TBD > > Seeking approvals for: > > smoke - Radek, Laura > rados - Radek, Laura > rook - Sébastien Han > cephadm - Adam K >

[ceph-users] Re: Radosgw multisite replication issues

2023-04-27 Thread Casey Bodley
On Thu, Apr 27, 2023 at 11:36 AM Tarrago, Eli (RIS-BCT) wrote: > > After working on this issue for a bit. > The active plan is to fail over master, to the “west” dc. Perform a realm > pull from the west so that it forces the failover to occur. Then have the > “east” DC, then pull the realm data

[ceph-users] Re: Ceph Mgr/Dashboard Python depedencies: a new approach

2023-04-26 Thread Casey Bodley
ges in EPEL. There's this BZ (https://bugzilla.redhat.com/2166620) >> > requesting that specific package, but that's only one out of the dozen of >> > missing packages (plus transitive dependencies)... >> > >> > Kind Regards, >> > Ernesto >> &

[ceph-users] Ceph Leadership Team meeting minutes - 2023 April 26

2023-04-26 Thread Casey Bodley
# ceph windows tests PR check will be made required once regressions are fixed windows build currently depends on gcc11 which limits use of c++20 features. investigating newer gcc or clang toolchain # 16.2.13 release final testing in progress # prometheus metric regressions

[ceph-users] Re: Can I delete rgw log entries?

2023-04-20 Thread Casey Bodley
On Sun, Apr 16, 2023 at 11:47 PM Richard Bade wrote: > > Hi Everyone, > I've been having trouble finding an answer to this question. Basically > I'm wanting to know if stuff in the .log pool is actively used for > anything or if it's just logs that can be deleted. > In particular I was wondering

[ceph-users] Re: quincy user metadata constantly changing versions on multisite slave with radosgw roles

2023-04-20 Thread Casey Bodley
On Wed, Apr 19, 2023 at 7:55 PM Christopher Durham wrote: > > Hi, > > I am using 17.2.6 on rocky linux for both the master and the slave site > I noticed that: > radosgw-admin sync status > often shows that the metadata sync is behind a minute or two on the slave. > This didn't make sense, as

[ceph-users] Re: Rados gateway data-pool replacement.

2023-04-19 Thread Casey Bodley
On Wed, Apr 19, 2023 at 5:13 AM Gaël THEROND wrote: > > Hi everyone, quick question regarding radosgw zone data-pool. > > I’m currently planning to migrate an old data-pool that was created with > inappropriate failure-domain to a newly created pool with appropriate > failure-domain. > > If I’m

[ceph-users] Re: ceph 17.2.6 and iam roles (pr#48030)

2023-04-11 Thread Casey Bodley
On Tue, Apr 11, 2023 at 3:53 PM Casey Bodley wrote: > > On Tue, Apr 11, 2023 at 3:19 PM Christopher Durham wrote: > > > > > > Hi, > > I see that this PR: https://github.com/ceph/ceph/pull/48030 > > made it into ceph 17.2.6, as per the change log at: > >

[ceph-users] Re: ceph 17.2.6 and iam roles (pr#48030)

2023-04-11 Thread Casey Bodley
On Tue, Apr 11, 2023 at 3:19 PM Christopher Durham wrote: > > > Hi, > I see that this PR: https://github.com/ceph/ceph/pull/48030 > made it into ceph 17.2.6, as per the change log at: > https://docs.ceph.com/en/latest/releases/quincy/ That's great. > But my scenario is as follows: > I have two

[ceph-users] Re: RGW don't use .rgw.root multisite configuration

2023-04-11 Thread Casey Bodley
there's a rgw_period_root_pool option for the period objects too. but it shouldn't be necessary to override any of these On Sun, Apr 9, 2023 at 11:26 PM wrote: > > Up :) > ___ > ceph-users mailing list -- ceph-users@ceph.io > To unsubscribe send an

[ceph-users] Re: Ceph Mgr/Dashboard Python depedencies: a new approach

2023-03-27 Thread Casey Bodley
know who's the maintainer of those > > packages in EPEL. There's this BZ (https://bugzilla.redhat.com/2166620) > > requesting that specific package, but that's only one out of the dozen of > > missing packages (plus transitive dependencies)... > > > > Kind Regards

[ceph-users] Re: quincy v17.2.6 QE Validation status

2023-03-27 Thread Casey Bodley
On Fri, Mar 24, 2023 at 3:46 PM Yuri Weinstein wrote: > > Details of this release are updated here: > > https://tracker.ceph.com/issues/59070#note-1 > Release Notes - TBD > > The slowness we experienced seemed to be self-cured. > Neha, Radek, and Laura please provide any findings if you have

[ceph-users] Re: quincy v17.2.6 QE Validation status

2023-03-23 Thread Casey Bodley
On Wed, Mar 22, 2023 at 9:27 AM Casey Bodley wrote: > > On Tue, Mar 21, 2023 at 4:06 PM Yuri Weinstein wrote: > > > > Details of this release are summarized here: > > > > https://tracker.ceph.com/issues/59070#note-1 > > Release Notes - TBD > > > >

[ceph-users] Re: Ceph Mgr/Dashboard Python depedencies: a new approach

2023-03-23 Thread Casey Bodley
hi Ernesto and lists, > [1] https://github.com/ceph/ceph/pull/47501 are we planning to backport this to quincy so we can support centos 9 there? enabling that upgrade path on centos 9 was one of the conditions for dropping centos 8 support in reef, which i'm still keen to do if not, can we find

[ceph-users] Re: quincy v17.2.6 QE Validation status

2023-03-22 Thread Casey Bodley
On Tue, Mar 21, 2023 at 4:06 PM Yuri Weinstein wrote: > > Details of this release are summarized here: > > https://tracker.ceph.com/issues/59070#note-1 > Release Notes - TBD > > The reruns were in the queue for 4 days because of some slowness issues. > The core team (Neha, Radek, Laura, and

[ceph-users] Re: CompleteMultipartUploadResult has empty ETag response

2023-02-28 Thread Casey Bodley
On Tue, Feb 28, 2023 at 8:19 AM Lars Dunemark wrote: > > Hi, > > I notice that CompleteMultipartUploadResult does return an empty ETag > field when completing an multipart upload in v17.2.3. > > I haven't had the possibility to verify from which version this changed > and can't find in the

[ceph-users] Re: OpenSSL in librados

2023-02-26 Thread Casey Bodley
On Sun, Feb 26, 2023 at 8:20 AM Ilya Dryomov wrote: > > On Sun, Feb 26, 2023 at 2:15 PM Patrick Schlangen > wrote: > > > > Hi Ilya, > > > > > Am 26.02.2023 um 14:05 schrieb Ilya Dryomov : > > > > > > Isn't OpenSSL 1.0 long out of support? I'm not sure if extending > > > librados API to support

[ceph-users] Re: [RGW - octopus] too many omapkeys on versioned bucket

2023-02-13 Thread Casey Bodley
On Mon, Feb 13, 2023 at 8:41 AM Boris Behrens wrote: > > I've tried it the other way around and let cat give out all escaped chars > and the did the grep: > > # cat -A omapkeys_list | grep -aFn '/' > 9844:/$ > 9845:/^@v913^@$ > 88010:M-^@1000_/^@$ > 128981:M-^@1001_/$ > > Did anyone ever saw

[ceph-users] Re: Migrate a bucket from replicated pool to ec pool

2023-02-13 Thread Casey Bodley
On Mon, Feb 13, 2023 at 4:31 AM Boris Behrens wrote: > > Hi Casey, > >> changes to the user's default placement target/storage class don't >> apply to existing buckets, only newly-created ones. a bucket's default >> placement target/storage class can't be changed after creation > > > so I can

[ceph-users] Re: Migrate a bucket from replicated pool to ec pool

2023-02-11 Thread Casey Bodley
hi Boris, On Sat, Feb 11, 2023 at 7:07 AM Boris Behrens wrote: > > Hi, > we use rgw as our backup storage, and it basically holds only compressed > rbd snapshots. > I would love to move these out of the replicated into a ec pool. > > I've read that I can set a default placement target for a user

[ceph-users] CLT meeting summary 2023-02-01

2023-02-01 Thread Casey Bodley
distro testing for reef * https://github.com/ceph/ceph/pull/49443 adds centos9 and ubuntu22 to supported distros * centos9 blocked by teuthology bug https://tracker.ceph.com/issues/58491 - lsb_release command no longer exists, use /etc/os-release instead - ceph stopped depending on lsb_release

[ceph-users] Re: 16.2.11 pacific QE validation status

2023-01-20 Thread Casey Bodley
On Fri, Jan 20, 2023 at 11:39 AM Yuri Weinstein wrote: > > The overall progress on this release is looking much better and if we > can approve it we can plan to publish it early next week. > > Still seeking approvals > > rados - Neha, Laura > rook - Sébastien Han > cephadm - Adam > dashboard -
