[ceph-users] Re: snaptrim number of objects

2023-08-21 Thread Angelo Höngens
On the other hand, mclock shouldn't break down the cluster in this way. At least not with "high_client_ops", which I used. Maybe someone should have a look at this. Manuel — quoting: On Fri, 4 Aug 2023 17:40:42 -0400, Angelo Höngens wrote:

[ceph-users] snaptrim number of objects

2023-08-04 Thread Angelo Höngens
Hey guys, I'm trying to figure out what's happening to my backup cluster that often grinds to a halt when cephfs automatically removes snapshots. Almost all OSD's go to 100% CPU, ceph complains about slow ops, and CephFS stops doing client i/o. I'm graphing the cumulative value of the snaptrimq_l
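One way to graph the cumulative snaptrim queue is to sum the per-PG `snap_trimq_len` values from `ceph pg dump -f json`. A minimal sketch, assuming the JSON layout of recent Ceph releases (per-PG stats under `pg_map.pg_stats`, with a fallback for older top-level `pg_stats`):

```python
import json

def total_snaptrimq_len(pg_dump_json: str) -> int:
    """Sum snap_trimq_len over all PGs from `ceph pg dump -f json` output."""
    data = json.loads(pg_dump_json)
    # Recent releases nest per-PG stats under pg_map.pg_stats;
    # fall back to a top-level pg_stats key for older formats.
    stats = data.get("pg_map", {}).get("pg_stats") or data.get("pg_stats", [])
    return sum(pg.get("snap_trimq_len", 0) for pg in stats)

if __name__ == "__main__":
    # On a live cluster you would feed in the real dump, e.g.:
    #   out = subprocess.check_output(["ceph", "pg", "dump", "-f", "json"])
    # Here, a small synthetic sample stands in for it:
    sample = json.dumps({"pg_map": {"pg_stats": [
        {"pgid": "2.a", "snap_trimq_len": 1200},
        {"pgid": "2.b", "snap_trimq_len": 800},
    ]}})
    print(total_snaptrimq_len(sample))  # 2000
```

Fed to a metrics collector on an interval, this gives the cumulative queue length the message describes graphing.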

[ceph-users] immutable bit

2023-07-07 Thread Angelo Höngens
Hey guys and girls, I noticed CephFS on my kinda default 17.2.6 CephFS volume, it does not support setting the immutable bit. (Want to start using it with the Veeam hardened repo that uses the immutable bit). I do see a lot of very, very old posts with technical details on how to implement it, bu
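For context, the immutable bit in question is the standard Linux `chattr +i` flag that the Veeam hardened repository relies on. A sketch of what the repository expects (plain Linux tools, nothing Ceph-specific; on the 17.2.6 CephFS mount described here, the `chattr` step is what fails):

```shell
# Standard immutable-bit handling on a local filesystem (e.g. ext4/XFS);
# on a CephFS mount without support this returns "Operation not supported".
touch backup.vbk
chattr +i backup.vbk   # set the immutable flag (requires CAP_LINUX_IMMUTABLE)
lsattr backup.vbk      # shows 'i' among the attribute flags
chattr -i backup.vbk   # must be cleared again before the file can be deleted
```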

[ceph-users] changing crush map on the fly?

2023-06-22 Thread Angelo Höngens
Hey, Just to confirm my understanding: If I set up a 3-osd cluster really fast with an EC42 pool, and I set the crush map to osd failover domain, the data will be distributed among the osd's, and of course there won't be protection against host failure. And yes, I know that's a bad idea, but I nee
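The setup described — a 4+2 EC pool with the failure domain dropped to the OSD level — is usually done through the erasure-code profile rather than by hand-editing the crush map. A sketch with hypothetical profile and pool names:

```shell
# crush-failure-domain=osd spreads the 6 chunks across OSDs rather than
# hosts, so on 3 hosts the pool works -- but one host failure can take
# out multiple chunks, as the message notes.
ceph osd erasure-code-profile set ec42-osd k=4 m=2 crush-failure-domain=osd
ceph osd pool create ecpool 64 64 erasure ec42-osd

# Later, once enough hosts exist, switch to host-level protection by
# creating a host-domain profile and rule and repointing the pool:
ceph osd erasure-code-profile set ec42-host k=4 m=2 crush-failure-domain=host
ceph osd crush rule create-erasure ec42-host-rule ec42-host
ceph osd pool set ecpool crush_rule ec42-host-rule
```

Note that only the crush rule can be swapped on the fly; the k/m values of an existing EC pool cannot be changed.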

[ceph-users] degraded objects increasing

2023-06-15 Thread Angelo Höngens
Hey guys, I'm trying to understand what is happening in my cluster, I see the number of degraded objects increasing, while all OSD's are still up and running. Can someone explain what's happening? I would expect the number of misplaced objects to increase when ceph's balancing algorithm decides b

[ceph-users] how to use ctdb_mutex_ceph_rados_helper

2023-05-31 Thread Angelo Höngens
Hey, I have a test setup with a 3-node samba cluster. This cluster consists of 3 vm's storing its locks on a replicated gluster volume. I want to switch to 2 physical smb-gateways for performance reasons (not enough money for 3), and since the 2-node cluster can't get quorum, I hope to switch to
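For reference, `ctdb_mutex_ceph_rados_helper` is wired in through CTDB's cluster-lock setting, replacing the lock file on the gluster volume with a RADOS object. A sketch assuming default packaging paths; the pool, cephx user, and object names are examples, and the key is called `recovery lock` or `cluster lock` depending on the Samba version:

```ini
# /etc/ctdb/ctdb.conf -- sketch; names are placeholders.
# The "!" prefix tells CTDB to run a mutex helper instead of a file lock,
# so quorum for the lock comes from the Ceph cluster, not from the
# (2-node) samba cluster itself.
[cluster]
    recovery lock = !/usr/libexec/ctdb/ctdb_mutex_ceph_rados_helper ceph client.ctdb ctdb_locks ctdb_reclock
```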

[ceph-users] backing up CephFS

2023-04-30 Thread Angelo Höngens
How do you guys backup CephFS? (if at all?) I'm building 2 ceph clusters, a primary one and a backup one, and I'm looking into CephFS as the primary store for research files. CephFS mirroring seems a very fast and efficient way to copy data to the backup location, and it has the benefit of the fil
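The CephFS mirroring mentioned here is snapshot-based and is enabled with roughly the following steps (filesystem, peer, and path names are hypothetical; a `cephfs-mirror` daemon must be running on the primary side):

```shell
# On both clusters: enable the mirroring mgr module.
ceph mgr module enable mirroring

# On the backup cluster: create a bootstrap token for the peer.
ceph fs snapshot mirror peer_bootstrap create backupfs client.mirror_remote primary-site

# On the primary cluster: enable mirroring and import the token.
ceph fs snapshot mirror enable researchfs
ceph fs snapshot mirror peer_bootstrap import researchfs <token>

# Register the directories whose snapshots should be mirrored.
ceph fs snapshot mirror add researchfs /volumes/research
```

Only snapshots of the registered directories are synchronized, which is what gives the efficient incremental behaviour the message refers to.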

[ceph-users] architecture help (iscsi, rbd, backups?)

2023-04-27 Thread Angelo Höngens
Hey guys and girls, I'm working on a project to build storage for one of our departments, and I want to ask you guys and girls for input on the high-level overview part. It's a long one, I hope you read along and comment. SUMMARY I made a plan last year to build a 'storage solution' including ce

[ceph-users] Best practices in regards to OSD’s?

2022-05-17 Thread Angelo Höngens
I’m a Ceph newbie in the planning phase for a first Ceph cluster. (7 osd nodes, each with separate boot disks, one nvme and 12x16TB spinner, intend to run rbd only with 4:2 ec, storing a lot of data, but low iops requirements). I really want encryption-at-rest. I guess I’ll be going with Pacific.
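With a cephadm-managed cluster of the Pacific era, encryption-at-rest for OSDs is typically a single flag in the OSD service spec (LUKS/dmcrypt under the hood). A sketch matching the layout described — names and patterns are hypothetical:

```yaml
# osd-spec.yaml -- apply with: ceph orch apply -i osd-spec.yaml
service_type: osd
service_id: hdd_encrypted
placement:
  host_pattern: 'osd-node-*'
spec:
  data_devices:
    rotational: 1        # the 12x 16 TB spinners
  db_devices:
    rotational: 0        # the NVMe, carved up for WAL/DB
  encrypted: true        # dmcrypt at OSD creation time
```

Encryption must be chosen when the OSD is created; existing unencrypted OSDs have to be redeployed to gain it.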

[ceph-users] Recommendations on books

2022-04-26 Thread Angelo Höngens
Hey guys and girls, Can you recommend some books to get started with ceph? I know the docs are probably a good source, but books, in my experience, do a better job of glueing it all together and painting the big picture. And I can take a book to places where reading docs on a laptop is inconvenien