[ceph-users] Re: cephfs - max snapshot limit?

2023-05-01 Thread Venky Shankar
Hi Arnaud, On Fri, Apr 28, 2023 at 2:16 PM MARTEL Arnaud wrote: > > Hi Venky, > > > Also, at one point the kclient wasn't able to handle more than 400 > > snapshots (per file system), but we have come a long way from that and that > > is not a constraint right now. > Does it mean that there is
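For reference, the per-directory snapshot count is capped by the MDS option mds_max_snaps_per_dir (default 100). A minimal sketch of checking and raising it, assuming the value 150 is only an example and not a recommendation:

    # inspect the current per-directory snapshot cap, then raise it
    ceph config get mds mds_max_snaps_per_dir
    ceph config set mds mds_max_snaps_per_dir 150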

[ceph-users] Re: 16.2.13 pacific QE validation status

2023-05-01 Thread Brad Hubbard
On Fri, Apr 28, 2023 at 7:21 AM Yuri Weinstein wrote: > > Details of this release are summarized here: > > https://tracker.ceph.com/issues/59542#note-1 > Release Notes - TBD > > Seeking approvals for: > > smoke - Radek, Laura > rados - Radek, Laura > rook - Sébastien Han > cephadm - Adam K >

[ceph-users] Re: 16.2.13 pacific QE validation status

2023-05-01 Thread Laura Flores
*Smoke* and *pacific-p2p* are approved; still working through some failures in *upgrade/octopus-x*, which I'll have ready soon. As for rados, I have summarized the suite and passed the results to Neha. In my eyes it looks good, but I want to leave final approval to Radek and/or Neha. smoke

[ceph-users] Re: [multisite] Resetting an empty bucket

2023-05-01 Thread Matt Benjamin
Hi Yixin, This sounds interesting. I kind of suspect that this feature requires some more conceptual design support. Like, at a high level, how a bucket's "zone residency" might be defined and specified, and what policies might govern changing it, not to mention how you direct things

[ceph-users] [multisite] Resetting an empty bucket

2023-05-01 Thread Yixin Jin
Hi folks, Armed with the bucket-specific sync policy feature, I found that we can move a bucket's objects between zones. It is migration via sync followed by object removal at the source. This allows us to better utilize available capacity across clusters/zones. However, to achieve
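For context, a minimal sketch of the bucket-scoped sync policy pieces such a migration builds on. The bucket, group, pipe, and zone names here are placeholders, and the allowed data flows themselves still come from the zonegroup-level policy:

    # bucket-scoped sync group plus a pipe that syncs this bucket from zone-a to zone-b
    radosgw-admin sync group create --bucket=mybucket --group-id=bucket-migrate --status=enabled
    radosgw-admin sync group pipe create --bucket=mybucket --group-id=bucket-migrate \
        --pipe-id=pipe1 --source-zones=zone-a --dest-zones=zone-b
    # check how far the bucket has synced before removing objects at the source
    radosgw-admin bucket sync status --bucket=mybucket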

[ceph-users] Re: Ceph recovery

2023-05-01 Thread wodel youchi
Thank you for the clarification. On Mon, May 1, 2023, 20:11 Wesley Dillingham wrote: > Assuming size=3 and min_size=2, it will run degraded (read/write capable) > until a third host becomes available, at which point it will backfill the > third copy on the third host. It will be unable to create

[ceph-users] Re: Nearly 1 exabyte of Ceph storage

2023-05-01 Thread Yaarit Hatuka
We are very excited to announce that we have reached the 1 exabyte milestone of community Ceph clusters via telemetry! Thank you to everyone who has opted in! Read more here: https://ceph.io/en/news/blog/2023/telemetry-celebrate-1-exabyte/
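For anyone not yet opted in, a minimal sketch of enabling the telemetry module these numbers come from:

    ceph telemetry status                    # see whether telemetry is already on
    ceph telemetry show                      # review exactly what would be reported
    ceph telemetry on --license sharing-1-0  # opt in and accept the data-sharing license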

[ceph-users] Re: PVE CEPH OSD heartbeat slow

2023-05-01 Thread Peter
Hi Fabian, Thank you for your prompt response. It's crucial to understand how things work, and I appreciate your assistance. After replacing the switch for our Ceph environment, we experienced three days of normalcy before the issue recurred this morning. I noticed that the TCP in/out became

[ceph-users] Re: Ceph recovery

2023-05-01 Thread Wesley Dillingham
Assuming size=3 and min_size=2, it will run degraded (read/write capable) until a third host becomes available, at which point it will backfill the third copy on the third host. It will be unable to create the third copy of the data if no third host exists. If an additional host is lost, the data will
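A minimal sketch of the settings and checks behind this answer, assuming "mypool" is a placeholder pool name:

    ceph osd pool get mypool size       # e.g. 3
    ceph osd pool get mypool min_size   # e.g. 2
    ceph -s                             # degraded PGs stay active while >= min_size copies are up
    ceph pg stat                        # watch backfill progress once the third host returns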

[ceph-users] Re: RGW Lua - cancel request

2023-05-01 Thread Yuval Lifshitz
Vladimir and Ondřej, I created a Quincy backport PR: https://github.com/ceph/ceph/pull/51300 Hopefully it will land in the next Quincy release. Yuval On Mon, May 1, 2023 at 7:37 PM Vladimir Sigunov wrote: > Hi Yuval, > > Playing with Lua, I faced a similar issue. > It would be perfect if you could

[ceph-users] Re: 16.2.13 pacific QE validation status

2023-05-01 Thread Adam King
approved for the rados/cephadm stuff On Thu, Apr 27, 2023 at 5:21 PM Yuri Weinstein wrote: > Details of this release are summarized here: > > https://tracker.ceph.com/issues/59542#note-1 > Release Notes - TBD > > Seeking approvals for: > > smoke - Radek, Laura > rados - Radek, Laura > rook -

[ceph-users] Re: RGW Lua - cancel request

2023-05-01 Thread Vladimir Sigunov
Hi Yuval, Playing with Lua, I faced a similar issue. It would be perfect if you could backport this fix to Quincy. Thank you! Vladimir. -Original Message- From: Yuval Lifshitz To: Ondřej Kukla Cc: ceph-users@ceph.io Subject: [ceph-users] Re: RGW Lua - cancel request Date: Sun, 30 Apr 2023

[ceph-users] Re: Deep-scrub much slower than HDD speed

2023-05-01 Thread Niklas Hambüchen
That one talks about resilvering, which is not the same as either ZFS scrubs or Ceph scrubs. The commit I linked is titled "Sequential scrub and resilvers", so ZFS scrubs are included.

[ceph-users] Ceph recovery

2023-05-01 Thread wodel youchi
Hi, When creating a Ceph cluster, a failure domain is defined, and by default it uses host as the minimal domain; that domain can be changed to chassis, rack, etc. My question is: suppose I have three OSD nodes, my replication is 3, and my failure domain is host, which means that each
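A minimal sketch of how the failure domain shows up in CRUSH rules; the rule and pool names are placeholders:

    ceph osd crush rule create-replicated rep-host default host   # one copy per host (the default behavior)
    ceph osd crush rule create-replicated rep-rack default rack   # one copy per rack
    ceph osd pool set mypool crush_rule rep-rack                  # switch a pool to the rack-level rule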

[ceph-users] RBD mirroring, asking for clarification

2023-05-01 Thread wodel youchi
Hi, When using RBD mirroring, does the mirroring concern only the images, not the whole pool? So we don't need a dedicated pool on the destination site to be mirrored; the only requirement is that the mirrored pools have the same name. In other words, we create two pools with the same
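For context, a minimal sketch of image-mode mirroring, where only explicitly enabled images are replicated. Pool and image names are placeholders, the pool must be enabled on both clusters, and the peers still need to be bootstrapped separately:

    rbd mirror pool enable mypool image               # image mode: nothing mirrors until enabled per image
    rbd mirror image enable mypool/myimage snapshot   # mirror just this image, snapshot-based
    rbd mirror pool status mypool                     # overall mirroring health for the pool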

[ceph-users] Block RGW request using Lua

2023-05-01 Thread ondrej
Hello everyone, I've started playing with Lua scripting and would like to ask if anyone knows of a way to drop or close a user request in the preRequest context. I would like to block creating buckets with dots in the name, but the use case could be blocking certain operations, etc. I was
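A minimal sketch of the kind of preRequest script being discussed. The field names follow the RGW Lua API docs, the op name "create_bucket" is an assumption, and whether the request is actually aborted (rather than just having its response rewritten) depends on the fix discussed in this thread:

    -- block_dotted_buckets.lua (hypothetical file name): flag bucket creation
    -- requests whose bucket name contains a literal dot
    if Request.RGWOp == "create_bucket" and string.find(Request.Bucket.Name, "%.") then
        Request.Response.HTTPStatusCode = 400
        Request.Response.Message = "bucket names containing dots are not allowed"
    end

    # install the script into the preRequest context
    radosgw-admin script put --infile=block_dotted_buckets.lua --context=preRequest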

[ceph-users] Re: Deep-scrub much slower than HDD speed

2023-05-01 Thread Niklas Hambüchen
Hi all, Scrubs only read data that actually exists in Ceph, not every sector of the drive, written or not. Thanks, this does explain it. I just discovered that ZFS had this problem in the past: *
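For reference, a minimal read-only sketch of the OSD throttles that bound scrub throughput, which is part of why deep scrubs run well below raw HDD sequential speed:

    ceph config get osd osd_scrub_sleep        # delay injected between scrub chunks
    ceph config get osd osd_max_scrubs         # concurrent scrubs allowed per OSD
    ceph config get osd osd_deep_scrub_stride  # read size per deep-scrub chunk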

[ceph-users] Re: Deep-scrub much slower than HDD speed

2023-05-01 Thread Niklas Hambüchen
Hi Marc, thanks for your numbers; this seems to confirm the suspicions. Oh I get it. Interesting. I think if you expand the cluster in the future with more disks, you will spread the load and have more IOPS, and this will disappear. This one I'm not sure about: if I expand the cluster 2x, I'll

[ceph-users] Re: client isn't responding to mclientcaps(revoke), pending pAsLsXsFsc issued pAsLsXsFsc

2023-05-01 Thread Loic Tortay
On 01/05/2023 11:35, Frank Schilder wrote: Hi all, I think we might be hitting a known problem (https://tracker.ceph.com/issues/57244). I don't want to fail the mds yet, because we have trouble with older kclients that miss the mds restart and hold on to cache entries referring to the

[ceph-users] client isn't responding to mclientcaps(revoke), pending pAsLsXsFsc issued pAsLsXsFsc

2023-05-01 Thread Frank Schilder
Hi all, I think we might be hitting a known problem (https://tracker.ceph.com/issues/57244). I don't want to fail the mds yet, because we have trouble with older kclients that miss the mds restart and hold on to cache entries referring to the killed instance, leading to hanging jobs on our
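A minimal sketch of inspecting the situation before deciding between evicting the client and failing the MDS; the MDS rank shown is a placeholder:

    ceph health detail              # names the client session behind the mclientcaps warning
    ceph tell mds.0 session ls      # map that client id to a hostname/mount for follow-up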

[ceph-users] Re: How can I use not-replicated pool (replication 1 or raid-0)

2023-05-01 Thread Frank Schilder
I think you misunderstood Janne's reply. The main statement is at the end: Ceph is not designed for an "I don't care about data" use case. If you need speed for temporary data where you can sustain data loss, go for something simpler. For example, we use BeeGFS with great success for a burst
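For completeness, a minimal sketch of the guardrails Ceph puts in front of a size=1 pool, which underlines the point that this is an explicit "data loss is acceptable" configuration; the pool name is a placeholder:

    # both steps are required before a replica count of 1 is accepted
    ceph config set global mon_allow_pool_size_one true
    ceph osd pool set scratchpool size 1 --yes-i-really-mean-it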