[ceph-users] Re: Cephfs MDS tunning for deep-learning data-flow

2023-12-15 Thread mhnx
I found something useful and I think I need to dig into this and use it 100%: https://docs.ceph.com/en/reef/cephfs/multimds/#dynamic-subtree-partitioning-with-balancer-on-specific-ranks DYNAMIC SUBTREE PARTITIONING WITH BALANCER ON SPECIFIC RANKS The CephFS file system provides the bal_rank_mask option
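
For reference, a minimal sketch of how that option is set per the linked documentation; the filesystem name "cephfs" and the mask value are placeholders, not from this message:

  # Allow two active MDS daemons, then restrict the balancer
  # to ranks 0 and 1 (mask 0x3 = binary 11).
  ceph fs set cephfs max_mds 2
  ceph fs set cephfs bal_rank_mask 0x3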

[ceph-users] Cephfs MDS tunning for deep-learning data-flow

2023-12-15 Thread mhnx
Hello everyone! How are you doing? I wasn't around for two years, but I'm back and working on a new development. I deployed two Ceph clusters: 1- user_data: 5x node [8x 4TB SATA SSD, 2x 25Gbit network], 2- data-gen: 3x node [8x 4TB SATA SSD, 2x 25Gbit network]. Note: the hardware is not my choice and I
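
On the MDS tuning side of this thread, a hedged sketch of knobs commonly adjusted for metadata-heavy workloads; the values below are illustrative assumptions, not recommendations from this message:

  # Add a second active MDS and raise the MDS cache limit
  # (16 GiB here is an illustrative value, not a tested one).
  ceph fs set cephfs max_mds 2
  ceph config set mds mds_cache_memory_limit 17179869184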

[ceph-users] Re: rbd trash: snapshot id is protected from removal [solved]

2023-12-15 Thread Eugen Block
Ah, of course, thanks for pointing that out; I somehow didn't think of the remaining clones. Thanks a lot! Quoting Ilya Dryomov: On Fri, Dec 15, 2023 at 12:52 PM Eugen Block wrote: Hi, I've been searching and trying things but to no avail yet. This is not critical because it's a test
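
In other words, a protected snapshot cannot be removed while clones still reference it. A hedged sketch of the usual sequence; the pool, image, and snapshot names are placeholders:

  rbd children pool/image@snap        # list remaining clones of the snapshot
  rbd flatten pool/clone              # detach each clone from its parent
  rbd snap unprotect pool/image@snap  # unprotection succeeds once no clones remain
  rbd snap rm pool/image@snap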

[ceph-users] Re: rbd trash: snapshot id is protected from removal

2023-12-15 Thread Ilya Dryomov
On Fri, Dec 15, 2023 at 12:52 PM Eugen Block wrote: > Hi, I've been searching and trying things but to no avail yet. This is not critical because it's a test cluster only, but I'd still like to have a solution in case this somehow makes it into our production clusters. It's an

[ceph-users] rbd trash: snapshot id is protected from removal

2023-12-15 Thread Eugen Block
Hi, I've been searching and trying things but to no avail yet. This is not critical because it's a test cluster only, but I'd still like to have a solution in case this somehow makes it into our production clusters. It's an OpenStack Victoria cloud with a Ceph backend. If one tries to
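
For inspecting trashed images and their snapshots when chasing this kind of error, a hedged sketch; the pool and image names are placeholders:

  rbd trash ls --long pool       # long listing of trashed images with their ids
  rbd snap ls --all pool/image   # shows snapshots in all namespaces, including trash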

[ceph-users] Re: How to configure something like osd_deep_scrub_min_interval?

2023-12-15 Thread Frank Schilder
Hi all, another quick update: please use this link to download the script: https://github.com/frans42/ceph-goodies/blob/main/scripts/pool-scrub-report The one I sent originally does not follow the latest version. Best regards, Frank Schilder, AIT Risø Campus, Bygning 109, rum S14
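
A hedged usage sketch, assuming the script is fetched from the repository's raw URL and takes a pool name as its argument; check the script header for its actual invocation:

  curl -LO https://raw.githubusercontent.com/frans42/ceph-goodies/main/scripts/pool-scrub-report
  chmod +x pool-scrub-report
  ./pool-scrub-report <pool>   # argument assumed, not confirmed by this message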

[ceph-users] Corrupted and inconsistent reads from CephFS on EC pool

2023-12-15 Thread aschmitz
Hi everyone, I'm seeing different results when reading files from CephFS backed by an erasure-coded pool, depending on which OSDs are running, including some incorrect reads even with all OSDs running. I'm running Ceph 17.2.6. # More detail In particular, I have a relatively large backup of some
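
A general, hedged approach to confirming such inconsistencies at the RADOS level (standard commands, not taken from this email; the PG id 2.1f is a placeholder):

  ceph pg deep-scrub 2.1f                                # deep-scrub a suspect PG
  rados list-inconsistent-obj 2.1f --format=json-pretty  # show per-shard errors found by the scrub
  ceph pg repair 2.1f                                    # repair only after reviewing the errors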