Hi, the system is still backfilling and still has the same PG in a degraded state.
I see that the percentage of degraded objects has stalled:
it has not dropped below 0.010% for days.
Is the backfilling connected to the degraded objects?
Must the system finish backfilling before it can recover the degraded objects?
[WRN] PG_
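For what it's worth, a minimal way to check whether the degraded objects sit in the PGs that are still backfilling (standard ceph CLI, nothing cluster-specific assumed):

    ceph -s                 # overall degraded/misplaced object counts
    ceph pg ls degraded     # PGs currently degraded, with their full state
    ceph pg ls backfilling  # PGs currently backfilling

If the same PG IDs show up in both lists, the degraded percentage should only drain as backfill completes.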
I have basically given up relying on bucket sync to work properly in
Quincy. I have been running a cron job to manually sync files between
datacentres to catch the files that don't get replicated. It's pretty
inefficient, but at least all the files get to the backup datacentre.
Would love to
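As an illustration only (not the poster's actual job), a cron entry doing such a one-way sync, assuming rclone with two S3 remotes already configured and named dc1 and dc2:

    # hourly one-way sync of one bucket to the backup datacentre
    0 * * * * rclone sync dc1:my-bucket dc2:my-bucket --checksum --log-file=/var/log/rclone-sync.log

rclone only copies missing or changed objects, which keeps the inefficiency tolerable.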
I'm not sure whether all of this currently works on upstream Quincy
(apologies if all the relevant backports have been done). You might retest
against Reef or your ceph/main branch.
Matt
On Mon, Apr 24, 2023 at 2:52 PM Yixin Jin wrote:
> Actually, "bucket sync run" somehow made it worse since now the
Hi Yuri,
We were hoping that the following patch would make it in for 16.2.13 if
possible:
https://github.com/ceph/ceph/pull/51200
Thanks,
Cory Snyder
From: Yuri Weinstein
Sent: Monday, April 24, 2023 11:39 AM
To: dev ; ceph-users ; clt
Subject: pacific 16.2.13 point release
We want to
Re-posting my reply to make sure it goes through the mailing list.
----- Forwarded Message -----
From: Yixin Jin
To: Soumya Koduri
Sent: Monday, April 24, 2023 at 02:37:46 p.m. EDT
Subject: Re: [ceph-users] Re: Bucket sync policy
An update:
After creating and enabling the bucket sync policy, I
Actually, "bucket sync run" somehow made it worse since now the destination
zone shows "bucket is caught up with source" from "bucket sync status" even
though it clearly missed an object.
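For reference, the subcommands being discussed (the bucket name is a placeholder):

    # check per-bucket sync state, run from the destination zone
    radosgw-admin bucket sync status --bucket=test-bucket
    # manually trigger a sync pass for that bucket
    radosgw-admin bucket sync run --bucket=test-bucket

The complaint above is that after the manual run, the status reports "bucket is caught up with source" even though an object is still missing.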
On Monday, April 24, 2023 at 02:37:46 p.m. EDT, Yixin Jin wrote:
> An update:
> After creating and
On 4/24/23 21:52, Yixin Jin wrote:
> Hello ceph gurus,
> We are trying the bucket-specific sync policy feature with the Quincy release
> and we encounter something strange. Our test setup is very simple. I use
> mstart.sh to spin up 3 clusters, configure them with a single realm, a single
> zonegroup and 3 zon
Hello ceph gurus,
We are trying the bucket-specific sync policy feature with the Quincy release
and we encountered something strange. Our test setup is very simple. I use
mstart.sh to spin up 3 clusters, configure them with a single realm, a single
zonegroup and 3 zones – z0, z1, z2, with z0 being the master zone.
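A sketch of the kind of per-bucket sync-policy setup being described (group, flow, pipe and bucket names are placeholders; the subcommands follow the stock radosgw-admin sync-policy interface):

    # zonegroup-level group that allows sync between the zones
    radosgw-admin sync group create --group-id=group1 --status=allowed
    radosgw-admin sync group flow create --group-id=group1 --flow-id=flow1 \
        --flow-type=symmetrical --zones=z0,z1,z2
    radosgw-admin sync group pipe create --group-id=group1 --pipe-id=pipe1 \
        --source-zones='*' --dest-zones='*'
    radosgw-admin period update --commit

    # bucket-level group that actually enables sync for one bucket
    radosgw-admin sync group create --bucket=test-bucket --group-id=bgroup1 --status=enabled
    radosgw-admin sync group pipe create --bucket=test-bucket --group-id=bgroup1 \
        --pipe-id=bpipe1 --source-zones='*' --dest-zones='*'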
We want to do the next urgent point release for pacific 16.2.13 ASAP.
The tip of the current pacific branch will be used as a base for this
release and we will build it later today.
Dev leads - if you have any outstanding PRs that must be included, please
merge them now.
Thx
YuriW
Hi Wesley,
I can only answer your second question and give an opinion on the last one!
- Yes, the OSD activation problem (in cephadm clusters only) was
introduced by an unfortunate change (an indentation problem in Python code)
in 16.2.11. The issue doesn't exist in 16.2.10 and is one of the fixed
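A quick, generic way to check which versions your daemons are actually running before deciding whether you are exposed (standard commands, nothing cluster-specific):

    ceph versions                   # version breakdown per daemon type
    ceph orch ps --daemon-type osd  # per-OSD versions in a cephadm cluster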
Hi list,
we created a cluster for using CephFS with a Kubernetes cluster. For a
few weeks now the cluster has kept filling up at an alarming rate
(100 GB per day).
This is happening while the most relevant PG is deep scrubbing, and the
scrub has been interrupted a few times.
We use about 150G (du using the mounted file
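A hedged sketch of how to compare what CephFS accounts for against what the pools actually consume (the mount point is a placeholder):

    ceph df detail                             # raw and per-pool usage as Ceph sees it
    getfattr -n ceph.dir.rbytes /mnt/cephfs    # recursive byte count kept by CephFS
    du -sh /mnt/cephfs                         # userspace view of the same tree

A large gap between pool usage and the rbytes/du numbers would point at snapshots, replication/EC overhead, or objects not yet purged.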
A few questions:
- Will the 16.2.12 packages be "corrected" and reuploaded to the ceph.com
mirror? Or will 16.2.13 become what 16.2.12 was supposed to be?
- Was the osd activation regression introduced in 16.2.11 (or does 16.2.10
have it as well)?
- Were the hotfixes in 16.2.12 just related to pe
Hi,
I'm still interested in getting feedback from those using the LRC
plugin about the right way to configure it... Last week I upgraded from
Pacific to Quincy (17.2.6) with cephadm, which does the upgrade host
by host, checking if an OSD is OK to stop before actually upgrading it.
I had
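For readers unfamiliar with it, a minimal LRC profile looks like this (the values are illustrative only, not a recommendation):

    ceph osd erasure-code-profile set lrc_profile \
        plugin=lrc k=4 m=2 l=3 \
        crush-failure-domain=host
    ceph osd pool create lrc_pool 64 64 erasure lrc_profile

The l parameter adds locality groups so single-OSD recoveries read fewer shards; how to lay those groups out across hosts and racks is exactly the configuration question raised above.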
Hi André,
at Cephalocon 2023 last week in Amsterdam there were two
presentations by Adam and Mark that might help you.
Joachim
Clyso GmbH - Ceph Foundation Member
On 21.04.23 at 10:53, André Gemünd wrote:
> Dear Ceph-users,
> in the meantime I found thi
Hi Casey,
I've actually tested that while you were answering me :-)
So, all in all, we can't stop the radosgw for now, and the cache tier option
can't work since we use EC-based pools (at least on Nautilus).
Given those constraints we're currently thinking of the following
procedure:
1°/- Create the new EC
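Presumably that first step looks something like the following (profile values and pool names are placeholders, not the poster's actual plan):

    ceph osd erasure-code-profile set ec_profile k=4 m=2 crush-failure-domain=host
    ceph osd pool create new.rgw.buckets.data 128 128 erasure ec_profile
    ceph osd pool application enable new.rgw.buckets.data rgw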
Dear List,
we upgraded to 16.2.12 on April 17th; since then we've seen some
unexplained downed OSD services in our cluster (264 OSDs). Is there any
risk of data loss? If so, would it be possible to downgrade, or is a fix
expected soon? If so, when? ;-)
FYI, we are running a cluster without cep
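A minimal way to gather evidence on why the OSD daemons went down (the crash ID and OSD number are placeholders):

    ceph crash ls                                    # recent daemon crashes known to the cluster
    ceph crash info <crash-id>                       # backtrace and metadata for one crash
    journalctl -u ceph-osd@12 --since "2023-04-17"   # host-side logs for one OSD

That should at least distinguish a crashing OSD process from one being stopped externally.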