cheers
Francois Scheurer
--
EveryWare AG
François Scheurer
Senior Systems Engineer
Zurlindenstrasse 52a
CH-8003 Zürich
tel: +41 44 466 60 00
fax: +41 44 466 60 10
mail: francois.scheu...@everyware.ch
web: http://www.everyware.ch
From: Scheurer François
Hello
Short question regarding journal-based rbd mirroring.
▪IO path with journaling w/o cache:
a. Create an event to describe the update
b. Asynchronously append event to journal object
c. Asynchronously update image once event is safe
d. Complete IO to client once update is safe
[cf.
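To make the ordering concrete, here is a toy asyncio model of the four steps above (function and variable names are illustrative, not librbd internals):

```python
import asyncio

log = []  # records the order in which each stage completes

async def append_to_journal(event):
    # step b: asynchronously append the event to the journal object
    await asyncio.sleep(0)  # stand-in for the journal write round trip
    log.append("journal_safe")

async def update_image(event):
    # step c: update the image only once the journal entry is safe
    await asyncio.sleep(0)  # stand-in for the image write round trip
    log.append("image_safe")

async def client_write(data):
    event = {"op": "write", "data": data}  # step a: describe the update
    await append_to_journal(event)         # step b
    await update_image(event)              # step c
    log.append("complete")                 # step d: ack the IO to the client

asyncio.run(client_write(b"payload"))
print(log)  # ['journal_safe', 'image_safe', 'complete']
```

The point of the model is only the ordering: the client ack never happens before the journal append and the image update are both safe.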
Dear Ceph Experts,
The documentation for this rgw command is a bit unclear:
radosgw-admin bucket check --bucket=<bucket-name> --fix --check-objects
Is this command still maintained and safe to use? (we are still on nautilus)
Does it work with sharded buckets, and also in multi-site setups?
I heard it will clear
From: Igor Fedotov
Sent: Thursday, February 10, 2022 6:06 PM
To: Scheurer François; Dan van der Ster
Cc: Ceph Users
Subject: Re: [ceph-users] Re: osd true blocksize vs bluestore_min_alloc_size
Hi Francois,
you should set debug_bluestore = 10 instead.
And then grep for
10
From: Dan van der Ster
Sent: Thursday, February 10, 2022 4:33 PM
To: Scheurer François
Cc: Ceph Users
Subject: Re: [ceph-users] osd true blocksize vs bluestore_min_alloc_size
Hi,
When an osd
Hi Frederic
For your point 3, the default_storage_class from the user info is apparently
ignored.
Setting it on Nautilus 14.2.15 had no impact and objects were still stored with
STANDARD.
Another issue is that some clients like s3cmd explicitly use STANDARD by
default.
And even
Dear All,
RGW provides atomic PUT in order to guarantee write consistency.
cf: https://ceph.io/en/news/blog/2011/atomicity-of-restful-radosgw-operations/
But my understanding is that there are no guarantees regarding the PUT
ordering.
So basically, if doing a storage class migration:
aws
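In other words (a toy sketch, not RGW code): each PUT is all-or-nothing, but when two PUTs to the same key race, either one may end up as the final version.

```python
import threading

store = {}  # toy object store: each PUT replaces the whole value atomically

def put(key, value):
    # a real client would PUT over HTTP; here a dict assignment stands in,
    # which (like an RGW PUT) is atomic but carries no ordering guarantee
    store[key] = value

threads = [
    threading.Thread(target=put, args=("obj", "STANDARD")),
    threading.Thread(target=put, args=("obj", "NVME")),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

# readers always see one complete value, but which PUT "won" is
# timing-dependent
assert store["obj"] in ("STANDARD", "NVME")
```

This is exactly the concern for a storage class migration: a re-PUT racing with a fresh client upload can leave the object in either class.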
Dear All
The rgw user metadata "default_storage_class" is not working as expected on
Nautilus 14.2.15.
See the doc: https://docs.ceph.com/en/nautilus/radosgw/placement/#user-placement
S3 API PUT with the header x-amz-storage-class:NVME is working as expected.
But without this header RGW
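For comparison, this is what the explicit variant looks like at the HTTP level. A hypothetical helper that builds the PUT headers (the header name is the documented S3 one; the helper itself is illustrative):

```python
def build_put_headers(storage_class=None):
    """Build HTTP headers for an S3 PUT; the storage class is optional."""
    headers = {"Content-Type": "application/octet-stream"}
    if storage_class:
        # an explicit class always wins; the user's default_storage_class
        # should only matter when this header is absent
        headers["x-amz-storage-class"] = storage_class
    return headers

assert build_put_headers("NVME")["x-amz-storage-class"] == "NVME"
assert "x-amz-storage-class" not in build_put_headers()
```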
From: Scheurer François
Sent: Thursday, May 13, 2021 2:36 PM
To: ceph-users@ceph.io
Subject: Re: rgw bug adding null characters in multipart object names and in
Etags
Hi All
listomapkeys actually deals correctly with the null chars and outputs them.
rmomapkey is not, but rados has a new option
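A sketch of the file-based workaround (the --omap-key-file flag name is an assumption here; check `rados --help` on your version): write the exact key bytes, NUL characters included, to a file so the shell never has to carry them.

```python
# Illustrative key with an embedded NUL byte, as produced by broken
# multipart uploads; this is NOT a key from a real bucket index.
key = b"_multipart_myobject.2~abcdef\x00.meta"

with open("omap.key", "wb") as f:
    f.write(key)  # exact bytes, no shell quoting involved

# then (flag name assumed):
#   rados -p zh-1.rgw.buckets.index rmomapkey <index-object> \
#       --omap-key-file omap.key

with open("omap.key", "rb") as f:
    assert b"\x00" in f.read()  # the NUL byte survived intact
```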
From: Scheurer François
Sent: Thursday, May 13, 2021 12:09:12 PM
To: ceph-users@ceph.io
Subject: [ceph-users
From: Scheurer François
Sent: Saturday, May 8, 2021 12:09:14 PM
To: ceph-users@ceph.io
Subject: [ceph-users] rgw bug adding null characters in multipart object names
and in Etags
Dear All
We are trying to remove old
Dear All
We are trying to remove old multipart uploads but get in trouble with some of
them having null characters:
rados -p zh-1.rgw.buckets.index rmomapkey
.dir.cb1594b3-a782-49d0-a19f-68cd48870a63.81880353.1.0
Dear All
We have the same question here, if anyone can help ... Thank you!
We did not find any documentation about the steps to reset & restart the sync.
Especially the implications of 'bilog trim', 'mdlog trim' and 'datalog trim'.
Our secondary zone is read-only. Both master and secondary
Dear All
We have the same question here, if anyone can help ... Thank you!
Cheers
Francois
From: ceph-users on behalf of P. O.
Sent: Friday, August 9, 2019 11:05 AM
To: ceph-us...@lists.ceph.com
Subject: [ceph-users] Multisite RGW - Large omap objects
issues/45765
[2] https://tracker.ceph.com/issues/47044
Quoting Scheurer François:
> Hi everybody
>
> Does anybody have experience with significant performance degradation during
> a pool deletion?
>
> We are asking because we are going to delete a 370
From: Frank Schilder
Sent: Saturday, January 9, 2021 12:10 PM
To: Glen Baars; Scheurer François; ceph-users@ceph.io
Subject: Re: performance impact by pool deletion?
Hi all,
I deleted a ceph fs data pool (EC 8+2) of size 240TB with about 150M objects
and it had no observable
Hi Wissem
Thank you for your reply.
As the erasure-code-profile is a pool setting, it is on a lower layer and rgw
should be unaware and independent of it, but it could play a role in this
known issue with space allocation and EC pools.
Anyway, this does not seem to be the cause here; after
Hi everybody
Does anybody have experience with significant performance degradation during
a pool deletion?
We are asking because we are going to delete a 370 TiB pool with 120 M objects
and have never done this before.
The pool is using erasure coding 8+2 on NVMe SSDs with rocksdb/wal on
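For scale (simple arithmetic, not measured numbers): with EC 8+2 every object is stored as 10 chunks for 8 chunks of data, so the deletion has to reclaim 10/8 of the logical size in raw capacity, and every object fans out into k+m shard deletions on the OSDs.

```python
logical_tib = 370
k, m = 8, 2  # erasure coding: 8 data chunks + 2 coding chunks

raw_tib = logical_tib * (k + m) / k  # raw capacity actually occupied
print(raw_tib)  # 462.5 TiB to reclaim

objects = 120_000_000
shards_to_delete = objects * (k + m)  # each object is split into k+m shards
print(shards_to_delete)  # 1200000000 shard deletions for the OSDs
```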
Dear Ceph contributors
While our (new) rgw secondary zone is doing the initial data sync from our
master zone,
we noticed that the reported capacity usage was getting higher than on the
primary zone:
Master Zone:
ceph version 14.2.5
zone parameters:
"log_meta":
OSD code? Or is it just that some RGW
features depend on OSD features?
Thank you for your insights!
Cheers
Francois
From: Casey Bodley
Sent: Thursday, March 5, 2020 3:57 PM
To: Scheurer François; ceph-users@ceph.io
Cc: Engelmann Florian; Rafa
Hi Paul
Many thanks for your answer! Very helpful!
Best Regards
Francois
From: Paul Emmerich
Sent: Friday, April 3, 2020 5:19 PM
To: Scheurer François
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] different RGW Versions on same ceph cluster
Dear All
One ceph cluster is running with all daemons (mon, mgr, osd, rgw) having the
version 12.2.12.
Let's say we configure an additional radosgw instance with version 14.2.8,
configured with the same ceph cluster name, realm, zonegroup and zone as the
existing instances.
Is it
Hi Thomas
To get the usage:
ceph osd df | sort -nk8
# VAR is the ratio to the average utilization
# WEIGHT is the CRUSH weight; typically the disk capacity in TiB
# REWEIGHT is a temporary WEIGHT correction for manual rebalancing
# (until osd restart or ceph osd set noout)
You can use for temporary
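The VAR column is simply each OSD's utilization divided by the cluster-wide average; made-up numbers to illustrate:

```python
utils = {"osd.0": 61.2, "osd.1": 74.8, "osd.2": 68.0}  # %USE per OSD (made up)

avg = sum(utils.values()) / len(utils)  # cluster average utilization
var = {osd: round(u / avg, 2) for osd, u in utils.items()}
print(var)  # {'osd.0': 0.9, 'osd.1': 1.1, 'osd.2': 1.0}

# OSDs with VAR well above 1.0 are the candidates for a manual reweight
```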
(resending to the new mailing list)
Dear Casey, Dear All,
We tested the migration from Luminous to Nautilus and noticed two regressions
breaking the RGW integration in Openstack:
1) the following config parameter is not working on Nautilus but is valid on
Luminous and on Master: