[ceph-users] Previously synced bucket resharded after sync removed

2023-11-20 Thread Szabo, Istvan (Agoda)
Hi, I had a multisite bucket which I removed from sync completely and then resharded on the master zone, which was successful. On the 2nd site I can't list anything inside that bucket anymore (which was expected and is okay); the issue is how I can delete the data somehow? It was 50TB.
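One possible direction (not verified for this exact reshard-after-sync-removal case; the bucket name below is a placeholder) is to purge the bucket and its objects directly on the second site's gateway:

   # run against the secondary zone
   radosgw-admin bucket rm --bucket=<bucket-name> --purge-objects

Whether that works depends on whether the secondary still has usable bucket metadata after the master-side reshard; if the index no longer matches, leftover RADOS objects may have to be located separately (e.g. with the rgw-orphan-list tool) before removal.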

[ceph-users] Re: CFP closing soon: Everything Open 2024 (Gladstone, Queensland, Australia, April 16-18)

2023-11-20 Thread Tim Serong
Update: the CFP has been extended 'til November 30 (see http://lists.linux.org.au/pipermail/eo-announce/2023-November/11.html) On 11/17/23 14:09, Tim Serong wrote: Everything Open (auspiced by Linux Australia) is happening again in 2024.  The CFP closes at the end of this weekend (November

[ceph-users] Re: Why is min_size of erasure pools set to k+1

2023-11-20 Thread Wesley Dillingham
" if min_size is k and you lose an OSD during recovery after a failure of m OSDs, data will become unavailable" In that situation data wouldnt become unavailable it would be lost. Having a min_size of k+1 provides a buffer between data being active+writeable and where data is lost. That

[ceph-users] Why is min_size of erasure pools set to k+1

2023-11-20 Thread Vladimir Brik
Could someone help me understand why it's a bad idea to set min_size of erasure-coded pools to k? From what I've read, the argument for k+1 is that if min_size is k and you lose an OSD during recovery after a failure of m OSDs, data will become unavailable. But how does setting min_size to k+1

[ceph-users] Re: Bug fixes in 17.2.7

2023-11-20 Thread Konstantin Shalygin
Hi, > On Nov 20, 2023, at 19:24, Tobias Kulschewski wrote: > > do you have a rough estimate of when this will happen? Not this year, I think. For now the priority is 18.2.1 and the last release of Pacific. But you can request a shaman build and clone the repo for your local usage. k

[ceph-users] Re: Bug fixes in 17.2.7

2023-11-20 Thread Konstantin Shalygin
Hi Tobias, This has not been merged to Quincy yet [1] k [1] https://tracker.ceph.com/issues/59730 Sent from my iPhone > On Nov 20, 2023, at 17:50, Tobias Kulschewski wrote: > > Just wanted to ask, if the bug with the multipart upload [1] has been fixed > in 17.2.7?

[ceph-users] After hardware failure tried to recover ceph and followed instructions for recovery using OSDS

2023-11-20 Thread Manolis Daramas
Hello everyone, We had a recent power failure on a server which hosts a 3-node ceph cluster (with Ubuntu 20.04 and Ceph version 17.2.7) and we think that we may have lost some of our data, if not all of it. We have followed the instructions on

[ceph-users] Bug fixes in 17.2.7

2023-11-20 Thread Tobias Kulschewski
Hi guys, thank you for releasing 17.2.7! Just wanted to ask if the bug with the multipart upload [1] has been fixed in 17.2.7? When are you planning on fixing this bug? Best, Tobias [1] https://tracker.ceph.com/issues/58879

[ceph-users] Re: bluestore osd nearfull but no pgs on it

2023-11-20 Thread Wesley Dillingham
The large number of osdmaps is what I was suspecting. "ceph tell osd.158 status" (or any other OSD) would show us how many osdmaps the OSDs are currently holding on to. Respectfully, *Wes Dillingham* w...@wesdillingham.com LinkedIn On Mon,
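For reference, a sketch of what to look for in that output (field names as in recent Ceph releases; the values shown are only illustrative):

   ceph tell osd.158 status
   # in the JSON reply, compare:
   #   "oldest_map": 100000,
   #   "newest_map": 180000
   # a very large gap means the OSD is still holding on to many old osdmaps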

[ceph-users] 304 response is not RFC9110 compliant

2023-11-20 Thread Ondřej Kukla
Hello, I’ve noticed that the 304 response from the s3 and s3website APIs is not RFC 9110 compliant. This is an issue especially for caching the content when you have a Cache-Control header set on the object. There was an old issue and PR from 2020 fixing this, but it was completely ignored. I’ve
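For context, a sketch of the kind of 304 that RFC 9110 expects when the object carries caching metadata (header values here are illustrative only):

   HTTP/1.1 304 Not Modified
   Date: Mon, 20 Nov 2023 12:00:00 GMT
   ETag: "d41d8cd98f00b204e9800998ecf8427e"
   Cache-Control: max-age=3600

The point is that the validator and caching headers a 200 for the same object would carry (ETag, Cache-Control, etc.) should also be sent on the 304; leaving them out is what breaks downstream caches.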

[ceph-users] Re: bluestore osd nearfull but no pgs on it

2023-11-20 Thread Debian
Hi, yes, all of my small OSDs are affected. I found the issue: my cluster is healthy and my rebalance finished - I only have to wait for my old osdmaps to get cleaned up, like in the thread "Disks are filling up even if there is not a single placement group on them". thx! On 20.11.23 11:36,

[ceph-users] Re: bluestore osd nearfull but no pgs on it

2023-11-20 Thread Debian
Hi, ohh that is exactly my problem: my cluster is healthy and no rebalance is active. I only have to wait for the old osdmaps to get cleaned up... thx! On 20.11.23 10:42, Michal Strnad wrote: Hi. Try to look at the thread "Disks are filling up even if there is not a single placement group on

[ceph-users] Re: bluestore osd nearfull but no pgs on it

2023-11-20 Thread Eugen Block
You provide only a few details at a time; it would help to get a full picture if you provided the output Wesley asked for (ceph df detail, ceph tell osd.158 status, ceph osd df tree). Is osd.149 now the problematic one, or did you just add output from a different OSD? It's not really clear

[ceph-users] Re: bluestore osd nearfull but no pgs on it

2023-11-20 Thread Michal Strnad
Hi. Try to look at the thread "Disks are filling up even if there is not a single placement group on them" in this mailing list. Maybe you encountered the same problem as me. Michal On 11/20/23 08:56, Debian wrote: Hi, the block.db size is default and not custom configured: current:

[ceph-users] Re: How to use hardware

2023-11-20 Thread Frank Schilder
Hi Simon, we are using something similar for ceph-fs. For a backup system your setup can work, depending on how you back up. While HDD pools have poor IOP/s performance, they are very good for streaming workloads. If you are using something like Borg backup that writes huge files sequentially,