[ceph-users] Re: is the rbd mirror journal replayed on primary after a crash?

2023-10-06 Thread Scheurer François

[ceph-users] is the rbd mirror journal replayed on primary after a crash?

2023-10-03 Thread Scheurer François
Hello. Short question regarding journal-based rbd mirroring.
IO path with journaling, without cache:
a. Create an event to describe the update
b. Asynchronously append the event to the journal object
c. Asynchronously update the image once the event is safe
d. Complete the IO to the client once the update is safe
[cf.
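For orientation, a minimal sketch of enabling and inspecting journal-based mirroring with the rbd CLI on a recent release; the pool and image names are placeholders and the exact subcommands should be checked against your version:

  # enable the journaling image feature (prerequisite for journal-based mirroring)
  rbd feature enable mypool/myimage journaling
  # enable mirroring for the image in journal mode
  rbd mirror image enable mypool/myimage journal
  # inspect mirroring state and the journal itself
  rbd mirror image status mypool/myimage
  rbd journal info --pool mypool --image myimage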

[ceph-users] question about radosgw-admin bucket check

2022-02-15 Thread Scheurer François
Dear Ceph Experts, the documentation about this rgw command is a bit unclear: radosgw-admin bucket check --bucket --fix --check-objects. Is this command still maintained and safe to use? (We are still on Nautilus.) Does it work with sharded buckets, and also in multi-site? I heard it will clear
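For reference, the invocation discussed looks like the following; the bucket name is a placeholder, and running it without --fix first to only report problems is a common precaution (behaviour may differ between releases):

  # report inconsistencies only
  radosgw-admin bucket check --bucket=mybucket --check-objects
  # attempt to repair index entries and bucket stats
  radosgw-admin bucket check --bucket=mybucket --check-objects --fix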

[ceph-users] Re: osd true blocksize vs bluestore_min_alloc_size

2022-02-10 Thread Scheurer François
Quoting Igor Fedotov: Hi Francois, you should set debug_bluestore = 10 instead, and then grep for
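A sketch of the suggested debugging step, run on the host of the OSD in question (osd.0, the log path and the grep pattern are placeholders; the exact string to grep for is not shown in the truncated quote):

  # raise BlueStore logging via the admin socket
  ceph daemon osd.0 config set debug_bluestore 10
  # reproduce some writes, then search the OSD log for allocation-related lines
  grep -i alloc /var/log/ceph/ceph-osd.0.log | tail
  # lower the log level again afterwards
  ceph daemon osd.0 config set debug_bluestore 1/5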

[ceph-users] Re: osd true blocksize vs bluestore_min_alloc_size

2022-02-10 Thread Scheurer François
Quoting Dan van der Ster: Hi, when an OSD
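As a quick related check (osd.0 is a placeholder), the configured minimum allocation sizes can be read from a running OSD; note that the value actually in effect is the one persisted when the OSD was created:

  ceph daemon osd.0 config get bluestore_min_alloc_size
  ceph daemon osd.0 config get bluestore_min_alloc_size_hdd
  ceph daemon osd.0 config get bluestore_min_alloc_size_ssd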

[ceph-users] Re: Moving all s3 objects from an ec pool to a replicated pool using storage classes.

2022-02-02 Thread Scheurer François
Hi Frederic, for your point 3, the default_storage_class from the user info is apparently ignored: setting it on Nautilus 14.2.15 had no impact and objects were still stored with STANDARD. Another issue is that some clients like s3cmd explicitly request STANDARD by default. And even
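As an illustration of forcing the class per request instead of relying on the user default, a sketch (bucket, file and class names are placeholders; client versions differ in how strictly they validate the class string):

  # s3cmd: send an explicit x-amz-storage-class header on upload
  s3cmd put ./file.bin s3://mybucket/file.bin --storage-class=NVME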

[ceph-users] Write Order during Concurrent S3 PUT on RGW

2021-09-29 Thread Scheurer François
Dear All, RGW provides atomic PUT in order to guarantee write consistency, cf: https://ceph.io/en/news/blog/2011/atomicity-of-restful-radosgw-operations/ But my understanding is that there is no guarantee regarding the PUT order sequence. So basically, if doing a storage class migration: aws
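One way to verify which class an object actually landed in after such a migration is to inspect its manifest on the RGW side; a sketch with placeholder names:

  # the placement/storage class of the object's tail is shown in the manifest output
  radosgw-admin object stat --bucket=mybucket --object=mykey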

[ceph-users] rgw user metadata default_storage_class not honored

2021-09-29 Thread Scheurer François
Dear All, the rgw user metadata "default_storage_class" is not working as expected on Nautilus 14.2.15. See the doc: https://docs.ceph.com/en/nautilus/radosgw/placement/#user-placement. An S3 API PUT with the header x-amz-storage-class:NVME works as expected. But without this header RGW
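For completeness, a sketch of how the user-level default is typically set by editing the user record directly (the uid is a placeholder; whether RGW then honours the field is exactly what this thread questions):

  radosgw-admin metadata get user:testuser > user.json
  # set "default_storage_class" in user.json, e.g. to "NVME", then re-import:
  radosgw-admin metadata put user:testuser < user.json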

[ceph-users] Re: rgw bug adding null characters in multipart object names and in Etags

2021-05-13 Thread Scheurer François
Hi All, listomapkeys actually deals correctly with the null chars and outputs them. rmomapkey does not, but rados has a new option
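The option is not named in the truncated snippet; one plausible candidate (an assumption to verify against your rados version) is passing the omap key via a file, which preserves embedded null bytes that would be mangled on the command line:

  # .dir.MARKER.0 stands for the real bucket index object name
  # write the exact binary key into badkey.bin, then remove it by file reference
  rados -p zh-1.rgw.buckets.index rmomapkey .dir.MARKER.0 --omap-key-file badkey.bin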

[ceph-users] Re: rgw bug adding null characters in multipart object names and in Etags

2021-05-13 Thread Scheurer François

[ceph-users] Re: rgw bug adding null characters in multipart object names and in Etags

2021-05-13 Thread Scheurer François

[ceph-users] rgw bug adding null characters in multipart object names and in Etags

2021-05-08 Thread Scheurer François
Dear All, we are trying to remove old multipart uploads but run into trouble with some of them having null characters: rados -p zh-1.rgw.buckets.index rmomapkey .dir.cb1594b3-a782-49d0-a19f-68cd48870a63.81880353.1.0
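To make the embedded null characters visible when listing the index keys, a sketch using the same pool and index object as above:

  # non-printable bytes show up as ^@ and similar escapes in cat -v output
  rados -p zh-1.rgw.buckets.index listomapkeys .dir.cb1594b3-a782-49d0-a19f-68cd48870a63.81880353.1.0 | cat -v | grep -F '^@'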

[ceph-users] Re: How to reset and configure replication on multiple RGW servers from scratch?

2021-03-23 Thread Scheurer François
Dear All, we have the same question here, if anyone can help... Thank you! We did not find any documentation about the steps to reset and restart the sync, especially the implications of 'bilog trim', 'mdlog trim' and 'datalog trim'. Our secondary zone is read-only. Both master and secondary
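Before any reset, the current replication state is usually inspected first; a minimal sketch (the bucket name is a placeholder):

  # overall metadata/data sync state as seen from the local zone
  radosgw-admin sync status
  # per-bucket view, useful to spot buckets that lag behind
  radosgw-admin bucket sync status --bucket=mybucket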

[ceph-users] Re: Multisite RGW - Large omap objects related with bilogs

2021-03-23 Thread Scheurer François
Dear All, we have the same question here, if anyone can help... Thank you! Cheers, Francois (quoting the original post by P. O., Friday, August 9, 2019)
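For illustration, a sketch of looking at and trimming the per-bucket index logs that typically sit behind such large omap objects (the bucket name is a placeholder; on multisite, trimming logs the peer zone has not yet consumed can break sync, so verify before running the trim):

  radosgw-admin bilog list --bucket=mybucket | head -n 20
  radosgw-admin bilog trim --bucket=mybucket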

[ceph-users] Re: performance impact by pool deletion?

2021-01-11 Thread Scheurer François
[1] https://tracker.ceph.com/issues/45765 [2] https://tracker.ceph.com/issues/47044 Quoting Scheurer François: > Hi everybody, has anybody experienced significant performance degradation during a pool deletion? We are asking because we are going to delete a 370

[ceph-users] Re: performance impact by pool deletion?

2021-01-11 Thread Scheurer François
Quoting Frank Schilder: Hi all, I deleted a CephFS data pool (EC 8+2) of size 240 TB with about 150 M objects and it had no observable

[ceph-users] Re: radosgw sync using more capacity on secondary than on master zone

2021-01-06 Thread Scheurer François
Hi Wissem, thank you for your reply. As the erasure-code profile is a pool setting, it is on a lower layer and rgw should be unaware and independent of it, but it could play a role regarding this known issue with space allocation and EC pools. Anyway, this does seem to be the cause here; after

[ceph-users] performance impact by pool deletion?

2021-01-06 Thread Scheurer François
Hi everybody, has anybody experienced significant performance degradation during a pool deletion? We are asking because we are going to delete a 370 TiB pool with 120 M objects and have never done this in the past. The pool uses erasure coding 8+2 on NVMe SSDs with RocksDB/WAL on
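For reference, a sketch of the deletion itself together with the knob that throttles background PG removal on flash OSDs (option names and defaults vary between releases, so treat this as an assumption to verify):

  ceph config set mon mon_allow_pool_delete true
  ceph osd pool delete mypool mypool --yes-i-really-really-mean-it
  # slow down background object removal to limit the impact on client IO
  ceph config set osd osd_delete_sleep_ssd 1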

[ceph-users] radosgw sync using more capacity on secondary than on master zone

2020-12-31 Thread Scheurer François
Dear Ceph contributors, while our (new) rgw secondary zone is doing the initial data sync from our master zone, we noticed that the reported capacity usage was getting higher than on the primary zone: Master Zone: ceph version 14.2.5, zone parameters: "log_meta":
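A sketch of narrowing such a discrepancy down, comparing what RGW accounts per bucket with what the pools actually consume on each cluster (the bucket name is a placeholder):

  # logical per-bucket usage as accounted by RGW; should converge across zones once synced
  radosgw-admin bucket stats --bucket=mybucket
  # STORED vs USED per pool, which exposes allocation overhead (e.g. with EC and small objects)
  ceph df detail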

[ceph-users] Re: Fw: Incompatibilities (implicit_tenants & barbican) with Openstack after migrating from Ceph Luminous to Nautilus.

2020-04-14 Thread Scheurer François
Is RGW really partly implemented in the OSD code? Or is it just that some RGW features depend on OSD features? Thank you for your insights! Cheers, Francois (quoting Casey Bodley, Thursday, March 5, 2020)

[ceph-users] Re: Fw: Incompatibilities (implicit_tenants & barbican) with Openstack after migrating from Ceph Luminous to Nautilus.

2020-04-04 Thread Scheurer François
available featureset? Is RGW really partly implemented in the OSD code? Or is it just that some RGW features depend on OSD features? Thank you for your insights! Cheers, Francois (quoting Casey Bodley, Thursday, March 5, 2020)

[ceph-users] Re: different RGW Versions on same ceph cluster

2020-04-03 Thread Scheurer François
Hi Paul, many thanks for your answer! Very helpful! Best regards, Francois (quoting Paul Emmerich, Friday, April 3, 2020)

[ceph-users] different RGW Versions on same ceph cluster

2020-04-03 Thread Scheurer François
Dear All, one ceph cluster is running with all daemons (mon, mgr, osd, rgw) at version 12.2.12. Let's say we configure an additional radosgw instance with version 14.2.8, configured with the same ceph cluster name, realm, zonegroup and zone as the existing instances. Is it
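As a quick sanity check when mixing releases, the versions actually running can be listed cluster-wide; a sketch:

  # summary of running daemon versions per type (mon, mgr, osd, rgw, ...)
  ceph versions
  # on the gateway host itself, the locally installed radosgw version
  radosgw --version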

[ceph-users] Re: Forcibly move PGs from full to empty OSD

2020-03-04 Thread Scheurer François
Hi Thomas
To get the usage: ceph osd df | sort -nk8
# VAR is the ratio to the average utilization
# WEIGHT is the CRUSH weight; typically the disk capacity in TiB
# REWEIGHT is a temporary (until OSD restart or ceph osd set noout) WEIGHT correction for manual rebalance
You can use for temporary
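Continuing the hint above, a sketch of the two usual ways to shift data off a full OSD (the OSD ids, the weight and the PG id are placeholders):

  # lower the override weight of the overfull OSD so PGs move elsewhere
  ceph osd reweight 12 0.90
  # or remap individual PGs explicitly with upmap (needs luminous or newer clients)
  ceph osd set-require-min-compat-client luminous
  ceph osd pg-upmap-items 7.1a 12 45    # move PG 7.1a's copy from osd.12 to osd.45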

[ceph-users] Fw: Incompatibilities (implicit_tenants & barbican) with Openstack after migrating from Ceph Luminous to Nautilus.

2020-03-03 Thread Scheurer François
(resending to the new mailing list) Dear Casey, Dear All, we tested the migration from Luminous to Nautilus and noticed two regressions breaking the RGW integration in Openstack: 1) the following config parameter is not working on Nautilus but is valid on Luminous and on master:
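When checking whether a given option is still recognised by a release, something like the following helps; rgw_keystone_implicit_tenants is only a guess at the parameter in question, based on the thread subject:

  # ask the cluster for the option's description, type and default
  ceph config help rgw_keystone_implicit_tenants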