Re: [Gluster-users] Gluster -> Ceph

2023-12-17 Thread Diego Zuccato
On 17/12/2023 14:52, Joe Julian wrote: From what I've been told (by experts), it's really hard to make that happen. Even more so if proper redundancy of MON and MDS daemons is implemented on quality HW. LSI isn't exactly crap hardware. But when a flaw causes it to drop drives under heavy load, the

Re: [Gluster-users] Gluster -> Ceph

2023-12-17 Thread Joe Julian
On December 17, 2023 5:40:52 AM PST, Diego Zuccato wrote: >On 14/12/2023 16:08, Joe Julian wrote: > >> With ceph, if the placement database is corrupted, all your data is lost >> (happened to my employer, once, losing 5PB of customer data). > >From what I've been told (by experts) it's

Re: [Gluster-users] Gluster -> Ceph

2023-12-17 Thread Diego Zuccato
On 14/12/2023 16:08, Joe Julian wrote: With ceph, if the placement database is corrupted, all your data is lost (happened to my employer, once, losing 5PB of customer data). From what I've been told (by experts), it's really hard to make that happen. Even more so if proper redundancy of MON and
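As a minimal sketch of what "proper redundancy of MON and MDS daemons" looks like in practice, the commands below check monitor quorum and MDS standby state on a running Ceph cluster. They assume an admin node with a working `ceph` CLI and keyring; no specific cluster layout is implied by the thread.

```shell
# Show the monitor map: a healthy HA setup has 3 or 5 MONs in quorum.
ceph mon stat

# Detailed quorum view (who is leader, who is in/out of quorum).
ceph quorum_status --format json-pretty

# For CephFS: confirm at least one standby MDS exists to take over on failure.
ceph fs status

# Overall health summary; MON/MDS redundancy problems surface here as warnings.
ceph health detail
```

These are read-only status commands, so they are safe to run while evaluating whether a cluster's daemon redundancy matches what the experts above recommend.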

Re: [Gluster-users] Gluster -> Ceph

2023-12-14 Thread Marcus Pedersén
Thanks for your feedback! Please, do not get me wrong, I really like gluster and it has served us well for many, many years. But given previous posts about the gluster project's health, this worries me and I want to be able to have a good alternative prepared in case. Gluster is great and aligns

Re: [Gluster-users] Gluster -> Ceph

2023-12-14 Thread Joe Julian
A big RAID array isn't great as a brick. If the array does fail, the larger brick means much longer heal times. The main question I ask when evaluating storage solutions is, "what happens when it fails?" With ceph, if the placement database is corrupted, all your data is lost (happened to my employer,

Re: [Gluster-users] Gluster -> Ceph

2023-12-14 Thread Alvin Starr
On 2023-12-14 07:48, Marcus Pedersén wrote: Hi all, I am looking into ceph and cephfs and in my head I am comparing with gluster. The way I have been running gluster over the years is either replicated or replicated-distributed clusters. Here are my observations, but I am far from an expert
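For readers less familiar with the two layouts Marcus describes, here is a sketch of how a replicated and a distributed-replicated Gluster volume are created. The hostnames and brick paths are hypothetical; only the `gluster volume create` syntax itself comes from the Gluster CLI.

```shell
# Pure replica-3 volume: one copy of every file on each of the three servers.
gluster volume create gv0 replica 3 \
  srv1:/data/brick1/gv0 srv2:/data/brick1/gv0 srv3:/data/brick1/gv0
gluster volume start gv0

# Distributed-replicated: six bricks with replica 3 form two replica sets,
# and files are distributed across the two sets.
gluster volume create gv1 replica 3 \
  srv1:/data/brick2/gv1 srv2:/data/brick2/gv1 srv3:/data/brick2/gv1 \
  srv4:/data/brick2/gv1 srv5:/data/brick2/gv1 srv6:/data/brick2/gv1
gluster volume start gv1
```

Brick order matters in the second form: consecutive bricks (here srv1-srv3, then srv4-srv6) become one replica set, which is why each set should span different servers.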

Re: [Gluster-users] Gluster -> Ceph

2023-12-14 Thread Dmitry Melekhov
On 14.12.2023 16:48, Marcus Pedersén wrote: The problem is that I cannot get my head around how to think when disaster strikes. So one fileserver burns up; there is still the other fileserver, and from my understanding the ceph system will start to replicate the files on the same fileserver no,
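Dmitry's "no" refers to Ceph's CRUSH failure domains: with the failure domain set to `host`, replicas of a placement group are always placed on different hosts, so a surviving server never ends up holding both copies. A sketch of how that is configured follows; the rule and pool names are hypothetical.

```shell
# Create a replicated CRUSH rule whose failure domain is "host":
# each replica must land on a different host under the "default" root.
ceph osd crush rule create-replicated by-host default host

# Create a 2-way replicated pool that uses this rule.
ceph osd pool create mypool 64 64 replicated by-host
ceph osd pool set mypool size 2
```

With this rule, losing a whole fileserver leaves the pool degraded until the host is replaced (or a third host exists to re-replicate onto); Ceph will not "heal" by putting the second copy on the same remaining host.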