[ceph-users] Re: 16.2.11 branch

2022-12-15 Thread Christian Rohmann
xes in them. Thanks a bunch! Christian

[ceph-users] Re: 16.2.11 branch

2022-12-15 Thread Christian Rohmann
On 15/12/2022 10:31, Christian Rohmann wrote: May I kindly ask for an update on how things are progressing? Mostly I am interested in the (persisting) implications for testing new point releases (e.g. 16.2.11) with more and more bugfixes in them. I guess I just have not looked on the right

[ceph-users] Re: DB sizing for lots of large files

2020-11-26 Thread Christian Wuerdig
Sorry, I replied to the wrong email thread before, so reposting this: I think it's time to start pointing out that the 3/30/300 logic no longer really holds true post-Octopus: https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/CKRCB3HUR7UDRLHQGC7XXZPWCWNJSBNT/ Although I suppose i

[ceph-users] Re: Advice on SSD choices for WAL/DB?

2020-11-26 Thread Christian Wuerdig
I think it's time to start pointing out that the 3/30/300 logic no longer really holds true post-Octopus: https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/CKRCB3HUR7UDRLHQGC7XXZPWCWNJSBNT/ On Thu, 2 Jul 2020 at 00:09, Burkhard Linke < burkhard.li...@computational.bio.uni-giesse

[ceph-users] Re: OSD slow ops warning not clearing after OSD down

2023-01-16 Thread Christian Rohmann
total failure of an OSD ? Would be nice to fix this though to not "block" the warning status with something that's not actually a warning. Regards Christian

[ceph-users] Re: Status of Quincy 17.2.5 ?

2023-01-25 Thread Christian Rohmann
Hey everyone, On 20/10/2022 10:12, Christian Rohmann wrote: 1) May I bring up again my remarks about the timing: On 19/10/2022 11:46, Christian Rohmann wrote: I believe the upload of a new release to the repo prior to the announcement happens quite regularly - it might just be due to the

[ceph-users] Renaming a ceph node

2023-02-13 Thread Rice, Christian
Can anyone please point me at a doc that explains the most efficient procedure to rename a ceph node WITHOUT causing a massive misplaced objects churn? When my node came up with a new name, it properly joined the cluster and owned the OSDs, but the original node with no devices remained. I expe

[ceph-users] Re: [EXTERNAL] Re: Renaming a ceph node

2023-02-15 Thread Rice, Christian
name and starting it with the new name. > You only have to keep the ID of the node in the crushmap! > > Regards > Manuel > > > On Mon, 13 Feb 2023 22:22:35 + > "Rice, Christian" wrote: > >> Can anyone please point me at a doc that explains the most
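
A sketch of the CRUSH-level step this refers to, assuming the host bucket is simply renamed in place (hostnames below are placeholders):

    # Rename the host bucket so its CRUSH bucket ID (and thus the placement) is kept
    ceph osd crush rename-bucket old-hostname new-hostname
    # Verify nothing starts to move
    ceph -s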

[ceph-users] Trying to throttle global backfill

2023-03-08 Thread Rice, Christian
I have a large number of misplaced objects, and I have all osd settings at “1” already: sudo ceph tell osd.\* injectargs '--osd_max_backfills=1 --osd_recovery_max_active=1 --osd_recovery_op_priority=1' How can I slow it down even more? The cluster is too large, it’s impacting other network t
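
One knob commonly suggested for this situation, beyond the three already set above, is the recovery sleep (a hedged sketch, not taken from this thread; values are examples only):

    # Insert a pause between recovery/backfill ops per OSD (defaults: 0.1 for HDD, 0 for SSD)
    sudo ceph tell osd.\* injectargs '--osd_recovery_sleep_hdd=0.5 --osd_recovery_sleep_ssd=0.1'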

[ceph-users] Re: Trying to throttle global backfill

2023-03-09 Thread Rice, Christian
ciative of the community response. I learned a lot in the process, had an outage-inducing scenario rectified very quickly, and got back to work. Thanks so much! Happy to answer any followup questions and return the favor when I can. From: Rice, Christian Date: Wednesday, March 8, 2023 at 3:57 P

[ceph-users] External Auth (AssumeRoleWithWebIdentity) , STS by default, generic policies and isolation by ownership

2023-03-15 Thread Christian Rohmann
ow users to create their own roles and policies to use them by default? All the examples talk about the requirement for admin caps and individual setting of '--caps="user-policy=*'. If there was a default role + policy (question #1) that could be applied to externally authenti
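
For reference, the per-user caps those examples refer to are granted roughly like this (user name is a placeholder):

    # Allow a single user to create roles and attach user policies
    radosgw-admin caps add --uid="testuser" --caps="roles=*;user-policy=*"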

[ceph-users] Re: Eccessive occupation of small OSDs

2023-04-02 Thread Christian Wuerdig
With failure domain host your max usable cluster capacity is essentially constrained by the total capacity of the smallest host which is 8TB if I read the output correctly. You need to balance your hosts better by swapping drives. On Fri, 31 Mar 2023 at 03:34, Nicola Mori wrote: > Dear Ceph user
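
A quick way to see the per-host imbalance described above (a sketch; interpretation is left to the reader):

    # Show capacity and utilisation aggregated along the CRUSH tree, per host
    ceph osd df tree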

[ceph-users] Re: pg_autoscaler using uncompressed bytes as pool current total_bytes triggering false POOL_TARGET_SIZE_BYTES_OVERCOMMITTED warnings?

2023-04-21 Thread Christian Rohmann
enlighten me. Thank you and with kind regards Christian On 02/02/2022 20:10, Christian Rohmann wrote: Hey ceph-users, I am debugging a mgr pg_autoscaler WARN which states a target_size_bytes on a pool would overcommit the available storage. There is only one pool with value for
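
A sketch of how to review the values involved in such a warning (pool name is a placeholder):

    # What the autoscaler expects each pool to consume
    ceph osd pool autoscale-status
    # Clear or adjust the target size hint on the affected pool
    ceph osd pool set mypool target_size_bytes 0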

[ceph-users] Re: Encryption per user Howto

2023-05-22 Thread Christian Wuerdig
Hm, this thread is confusing in the context of what S3 client-side encryption means - the user is responsible for encrypting the data with their own keys before submitting it. As far as I'm aware, client-side encryption doesn't require any specific server support - it's a function of the client SDK used whi

[ceph-users] RGW: Migrating a long-lived cluster to multi-site, fixing an EC pool mistake

2023-06-09 Thread Christian Theune
I guess that would be a good comparison for what timing to expect when running an update on the metadata. I’ll also be in touch with colleagues from Heinlein and 42on but I’m open to other suggestions. Hugs, Christian [1] We currently have 215TiB data in 230M objects. Using the “official

[ceph-users] Re: RGW: Migrating a long-lived cluster to multi-site, fixing an EC pool mistake

2023-06-13 Thread Christian Theune
still 2.4 hours … Cheers, Christian > On 9. Jun 2023, at 11:16, Christian Theune wrote: > > Hi, > > we are running a cluster that has been alive for a long time and we tread > carefully regarding updates. We are still a bit lagging and our cluster (that > started around

[ceph-users] Re: RGW: Migrating a long-lived cluster to multi-site, fixing an EC pool mistake

2023-06-14 Thread Christian Theune
few very large buckets (200T+) that will take a while to copy. We can pre-sync them of course, so the downtime will only be during the second copy. Christian > On 13. Jun 2023, at 14:52, Christian Theune wrote: > > Following up to myself and for posterity: > > I’m going to t

[ceph-users] RGW accessing real source IP address of a client (e.g. in S3 bucket policies)

2023-06-15 Thread Christian Rohmann
ately seems not even supported by the BEAST library which RGW uses. I opened feature requests ... ** https://tracker.ceph.com/issues/59422 ** https://github.com/chriskohlhoff/asio/issues/1091 ** https://github.com/boostorg/beast/issues/2484 but there is no outcome yet. Rega

[ceph-users] Re: RGW accessing real source IP address of a client (e.g. in S3 bucket policies)

2023-06-15 Thread Christian Rohmann
, not the public IP of the client. So the actual remote address is NOT used in my case. Did I miss any config setting anywhere? Regards and thanks for your help Christian
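
The option usually involved when RGW sits behind a proxy is rgw_remote_addr_param; a minimal sketch (this may or may not be the missing piece in the case above):

    # Make RGW take the client IP from the X-Forwarded-For header set by the proxy
    ceph config set client.rgw rgw_remote_addr_param HTTP_X_FORWARDED_FOR
    # then restart / redeploy the RGW daemons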

[ceph-users] Re: RGW: Migrating a long-lived cluster to multi-site, fixing an EC pool mistake

2023-06-16 Thread Christian Theune
id i get something wrong? > > > > > Kind regards, > Nino > > > On Wed, Jun 14, 2023 at 5:44 PM Christian Theune wrote: > Hi, > > further note to self and for posterity … ;) > > This turned out to be a no-go as well, because you can’t silently switch the

[ceph-users] Re: RGW: Migrating a long-lived cluster to multi-site, fixing an EC pool mistake

2023-06-21 Thread Christian Theune
zonegroups referring to the same pools and this should only run through proper abstractions … o_O Cheers, Christian > On 14. Jun 2023, at 17:42, Christian Theune wrote: > > Hi, > > further note to self and for posterity … ;) > > This turned out to be a no-go as well, becau

[ceph-users] ceph quincy repo update to debian bookworm...?

2023-06-22 Thread Christian Peters
://download.ceph.com/debian-quincy/ bullseye main to deb https://download.ceph.com/debian-quincy/ bookworm main in the near future!? Regards, Christian

[ceph-users] Bluestore compression - Which algo to choose? Zstd really still that bad?

2023-06-26 Thread Christian Rohmann
with the decision on the compression algo? Regards Christian [1] https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/#confval-bluestore_compression_algorithm [2] https://github.com/ceph/ceph/pull/33790 [3] https://github.com
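
For context, a sketch of where the algorithm is chosen, globally or per pool (pool name and the choice of zstd are placeholders, not recommendations):

    # Cluster-wide default (cf. [1])
    ceph config set osd bluestore_compression_algorithm zstd
    ceph config set osd bluestore_compression_mode aggressive
    # Per-pool override ("mypool" is a placeholder)
    ceph osd pool set mypool compression_algorithm zstd
    ceph osd pool set mypool compression_mode aggressive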

[ceph-users] Re: Radogw ignoring HTTP_X_FORWARDED_FOR header

2023-06-26 Thread Christian Rohmann
ot;bytes_sent":0,"bytes_received":64413,"object_size":64413,"total_time":155,"user_agent":"aws-sdk-go/1.27.0 (go1.16.15; linux; amd64) S3Manager","referrer":"","trans_id":"REDACTED","authentication_typ

[ceph-users] RGW multisite logs (data, md, bilog) not being trimmed automatically?

2023-06-26 Thread Christian Rohmann
g of the log trimming activity that I should expect? Or that might indicate why trimming does not happen? Regards Christian [1] https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/WZCFOAMLWV3XCGJ3TVLHGMJFVYNZNKLD/
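
A sketch of commands that help judge whether these logs keep growing (manual trimming via the corresponding "trim" subcommands is possible, but automatic trimming is what is normally expected):

    # Overall replication state between zones
    radosgw-admin sync status
    # Per-log backlog
    radosgw-admin datalog status
    radosgw-admin mdlog status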

[ceph-users] Re: Bluestore compression - Which algo to choose? Zstd really still that bad?

2023-06-27 Thread Christian Rohmann
am simply looking for data others might have collected on their similar use-cases. Also I am still wondering if there really is nobody that worked/played more with zstd since that has become so popular in recent months... Regards Christian

[ceph-users] Re: RGW multisite logs (data, md, bilog) not being trimmed automatically?

2023-06-29 Thread Christian Rohmann
, why is that required and why does there seem to be no periodic trimming happening? Regards Christian

[ceph-users] Re: RGW accessing real source IP address of a client (e.g. in S3 bucket policies)

2023-07-06 Thread Christian Rohmann
the client. In reality it was simply the private, RFC1918, IP of the test machine that came in as source. Sorry for the noise and thanks for your help. Christian P.S. With IPv6, this would not have happened.

[ceph-users] Re: Adding datacenter level to CRUSH tree causes rebalancing

2023-07-16 Thread Christian Wuerdig
Based on my understanding of CRUSH it basically works down the hierarchy and then randomly (but deterministically for a given CRUSH map) picks buckets (based on the specific selection rule) on that level for the object and then it does this recursively until it ends up at the leaf nodes. Given that
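
For reference, the kind of change being discussed looks roughly like this (bucket and host names are placeholders); moving hosts under a new datacenter bucket changes the CRUSH input and therefore triggers the rebalancing:

    ceph osd crush add-bucket dc1 datacenter
    ceph osd crush move dc1 root=default
    ceph osd crush move host1 datacenter=dc1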

[ceph-users] Not all Bucket Shards being used

2023-07-18 Thread Christian Kugler
t I could reshard to something like 97. Or I could directly "downshard" to 97. Also, the second zone has a similar problem, but as the error message lets me know, this would be a bad idea. Will it just take more time until the sharding is transferred to the second zone? Best, Christia
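
A sketch of the resharding commands in question (bucket name and shard count are placeholders):

    # Current shard count and objects per shard
    radosgw-admin bucket stats --bucket=mybucket
    radosgw-admin bucket limit check
    # Manual reshard, e.g. to 97 shards
    radosgw-admin bucket reshard --bucket=mybucket --num-shards=97
    radosgw-admin reshard status --bucket=mybucket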

[ceph-users] Re: Not all Bucket Shards being used

2023-07-25 Thread Christian Kugler
et_info": "false" } } > 4. After you resharded previously, did you get command-line output along the > lines of: > 2023-07-24T13:33:50.867-0400 7f10359f2a80 1 execute INFO: reshard of bucket > “" completed successfully I think so, at least for the second reshard. But I wouldn't bet my life on it. I fear I might have missed an error on the first one since I have done a radosgw-admin bucket reshard so often and never seen it fail. Christian ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.io

[ceph-users] Re: Not all Bucket Shards being used

2023-08-02 Thread Christian Kugler
> Thank you for the information, Christian. When you reshard the bucket id is > updated (with most recent versions of ceph, a generation number is > incremented). The first bucket id matches the bucket marker, but after the > first reshard they diverge. This makes a lot of sense

[ceph-users] Re: ceph-volume lvm new-db fails

2023-08-10 Thread Christian Rohmann
: https://tracker.ceph.com/issues/55260 It's already fixed in master, but the backports are all still pending ... Regards Christian

[ceph-users] Re: ceph-volume lvm new-db fails

2023-08-11 Thread Christian Rohmann
On 10/08/2023 13:30, Christian Rohmann wrote: It's already fixed in master, but the backports are all still pending ... There are PRs for the backports now: * https://tracker.ceph.com/issues/62060 * https://tracker.ceph.com/issues/62061 * https://tracker.ceph.com/issues/62062 Re
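
For context, the invocation the tracker issues above relate to looks roughly like this (OSD id, fsid and the target VG/LV are placeholders):

    # Attach a separate DB volume to an existing OSD
    ceph-volume lvm new-db --osd-id 3 --osd-fsid <osd-fsid> --target vg_nvme/db-osd3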

[ceph-users] Can ceph-volume manage the LVs optionally used for DB / WAL at all?

2023-08-11 Thread Christian Rohmann
ting a few LVs is hard... it's just that ceph-volume does apply some structure to the naming of LVM VGs and LVs on the OSD device and also adds metadata. That would then be up to the user, right? Regards Christian

[ceph-users] When to use the auth profiles simple-rados-client and profile simple-rados-client-with-blocklist?

2023-08-22 Thread Christian Rohmann
their name like the rbd and the corresponding "rbd-read-only" profile? Regards Christian
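
A sketch of how such a mon cap profile is typically used when creating a client key (client and pool names are placeholders):

    ceph auth get-or-create client.myapp \
        mon 'profile simple-rados-client' \
        osd 'allow rw pool=mypool'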

[ceph-users] Re: Can ceph-volume manage the LVs optionally used for DB / WAL at all?

2023-08-24 Thread Christian Rohmann
f spinning rust OSDs with DB or WAL on fast storage. Regards Christian

[ceph-users] Re: Can ceph-volume manage the LVs optionally used for DB / WAL at all?

2023-08-26 Thread Christian Rohmann
e paragraph above, this is what I am currently doing (lvcreate + ceph-volume lvm create). My question therefore is whether ceph-volume (!) could somehow create this LV for the DB automagically if I'd just give it a device (or existing VG)? Thank you very much for your patience in clarif
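
The manual two-step workflow mentioned above looks roughly like this (device, VG and size are placeholders):

    # Carve the DB LV by hand ...
    lvcreate -L 60G -n db-osd0 vg_nvme
    # ... then hand it to ceph-volume together with the data device
    ceph-volume lvm create --data /dev/sdb --block.db vg_nvme/db-osd0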

[ceph-users] Continuous spurious repairs without cause?

2023-09-05 Thread Christian Theune
any relevant issue either. Any ideas? Kind regards, Christian Theune -- Christian Theune · c...@flyingcircus.io · +49 345 219401 0 Flying Circus Internet Operations GmbH · https://flyingcircus.io

[ceph-users] Re: Continuous spurious repairs without cause?

2023-09-06 Thread Christian Theune
dated all daemons to the same minor version those > errors were gone. > > Regards, > Eugen > > Quoting Christian Theune: > >> Hi, >> >> this is a bit older cluster (Nautilus, bluestore only). >> >> We’ve noticed that the cluster is almost conti

[ceph-users] Re: Continuous spurious repairs without cause?

2023-09-06 Thread Christian Theune
Hi, interesting, that’s something we can definitely try! Thanks! Christian > On 5. Sep 2023, at 16:37, Manuel Lausch wrote: > > Hi, > > in older versions of ceph with the auto-repair feature the PG state of > scrubbing PGs always had the repair state as well. > With la
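
The auto-repair behaviour referred to is controlled by a single OSD option; a sketch of checking and, if desired, disabling it:

    ceph config get osd osd_scrub_auto_repair
    ceph config set osd osd_scrub_auto_repair false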

[ceph-users] What is causing *.rgw.log pool to fill up / not be expired (Re: RGW multisite logs (data, md, bilog) not being trimmed automatically?)

2023-09-14 Thread Christian Rohmann
I am unfortunately still observing this issue of the RADOS pool "*.rgw.log" filling up with more and more objects: On 26.06.23 18:18, Christian Rohmann wrote: On the primary cluster I am observing an ever growing (objects and bytes) "sitea.rgw.log" pool, not so on the r

[ceph-users] Upgrade/migrate host operating system for ceph nodes (CentOS/Rocky)

2022-11-03 Thread Prof. Dr. Christian Dietrich
Hi all, we're running a ceph cluster with v15.2.17 and cephadm on various CentOS hosts. Since CentOS 8.x is EOL, we'd like to upgrade/migrate/reinstall the OS, possibly migrating to Rocky or CentOS stream:

host | CentOS   | Podman
-----|----------|-------
osd* | 7.9.2009 | 1.6.4   x5
osd* |
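
A rough sketch of the per-host cycle often used for such an in-place OS reinstall under cephadm (hostname is a placeholder; the exact steps depend on the release and on whether the host is removed from the orchestrator first):

    # Keep data in place while the host is down
    ceph osd set noout
    # ... reinstall the OS, then reinstall podman/cephadm and restore /etc/ceph
    #     plus the cephadm SSH key on the host ...
    # Re-add the host to the orchestrator if it was removed
    ceph orch host add osd1.example.com
    ceph osd unset noout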
