[Gluster-users] gluster heal performance (was: Fwd: New GlusterFS deployment, doubts on 1 brick per host vs 1 brick per drive.)

2020-09-10 Thread Martin Bähr
Excerpts from Gionatan Danti's message of 2020-09-11 00:35:52 +0200: > The main point was the potentially long heal time. Could you (or anyone else) please elaborate on what heal times are to be expected? We have a 3-node replica cluster running version 3.12.9 (we are building a new cluster
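For context, heal progress on a replica volume can be watched with the standard gluster CLI; the volume name below is a placeholder:

    # number of entries still pending heal on each brick
    gluster volume heal myvol statistics heal-count
    # list the individual files/directories awaiting heal
    gluster volume heal myvol info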

[Gluster-users] glusterd restarts every 2 minutes

2020-09-10 Thread Computerisms Corporation
Hi List, In my 2-server gluster setup, one server is consistently restarting the glusterd process. On the first second of every other minute, I get a shutdown in my glusterd log: W [glusterfsd.c:1596:cleanup_and_exit] (-->/lib/x86_64-linux-gnu/libpthread.so.0(+0x7fa3) [0x7f7410fa5fa3]
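As a first diagnostic step (assuming a systemd-based install; unit name and paths may differ), one would typically check whether something external is stopping the daemon on a schedule:

    # was the stop requested by systemd, a package script, or something else?
    journalctl -u glusterd --since "1 hour ago"
    # a two-minute cadence suggests a cron job or monitoring agent
    grep -ri gluster /etc/cron* /var/spool/cron 2>/dev/null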

Re: [Gluster-users] performance

2020-09-10 Thread Computerisms Corporation
Hi Danti, the notes are not very verbose, but it looks like the following lines were removed from their virtualization config: [...] They also enabled hyperthreading, so they now have 12 "cores" instead of 6. Guessing that had a lot to do with it... On 2020-09-04 8:20 a.m., Gionatan Danti
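Purely as a sanity check (not from the original mail, where the config lines did not survive): the effect of exposing hyperthreading to a guest can be confirmed from inside it:

    # with 1 socket x 6 cores x 2 threads the guest should report 12 logical CPUs
    lscpu | egrep '^(CPU\(s\)|Thread|Core|Socket)'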

Re: [Gluster-users] Fwd: New GlusterFS deployment, doubts on 1 brick per host vs 1 brick per drive.

2020-09-10 Thread Gionatan Danti
On 2020-09-10 23:13, Miguel Mascarenhas Filipe wrote: > can you explain better how a single disk failing would bring a whole node out of service? Oh, I did a bad cut/paste. A single disk failure will not put the entire node out of service. The main point was the potentially long heal time
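To make the failure scenario concrete (volume and brick paths are placeholders): with one brick per drive, a dead disk means swapping in a new brick, after which self-heal must copy that brick's entire contents from its replica peer, which is where the long heal time comes from:

    # replace the failed per-drive brick with a fresh one and trigger heal
    gluster volume replace-brick gv0 srv1:/bricks/b3 srv1:/bricks/b3-new commit force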

Re: [Gluster-users] Fwd: New GlusterFS deployment, doubts on 1 brick per host vs 1 brick per drive.

2020-09-10 Thread Miguel Mascarenhas Filipe
On Thu, 10 Sep 2020 at 21:53, Gionatan Danti wrote: > On 2020-09-09 15:30, Miguel Mascarenhas Filipe wrote: > > I'm setting up GlusterFS on 2 hw w/ the same configuration, 8 HDDs. This > > deployment will grow later on. > > Hi, I really suggest avoiding a replica 2 cluster unless it is for >

Re: [Gluster-users] Fwd: New GlusterFS deployment, doubts on 1 brick per host vs 1 brick per drive.

2020-09-10 Thread Gionatan Danti
On 2020-09-09 15:30, Miguel Mascarenhas Filipe wrote: > I'm setting up GlusterFS on 2 hw w/ the same configuration, 8 HDDs. This deployment will grow later on. Hi, I really suggest avoiding a replica 2 cluster unless it is for testing only. Be sure to add an arbiter at least (using a replica 2
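For reference (volume name, hosts, and brick paths are placeholders), the arbiter variant being recommended is created with the replica 3 arbiter 1 syntax; the third brick stores only metadata and exists to break split-brain ties:

    gluster volume create gv0 replica 3 arbiter 1 \
        srv1:/bricks/b0 srv2:/bricks/b0 arb1:/bricks/arb0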

Re: [Gluster-users] New GlusterFS deployment, doubts on 1 brick per host vs 1 brick per drive.

2020-09-10 Thread Miguel Mascarenhas Filipe
Hi, thanks both for the replies. On Thu, 10 Sep 2020 at 16:08, Darrell Budic wrote: > I run ZFS on my servers (with additional RAM for that reason) in my > replica-3 production cluster. I chose ZFS striping of HDDs for size, with > easier compression and ZFS-controlled caching using SSDs for my

Re: [Gluster-users] New GlusterFS deployment, doubts on 1 brick per host vs 1 brick per drive.

2020-09-10 Thread Darrell Budic
I run ZFS on my servers (with additional RAM for that reason) in my replica-3 production cluster. I chose ZFS striping of HDDs for size, with easier compression and ZFS-controlled caching using SSDs for my workload (mainly VMs). It performs as expected, but I don’t have the resources to
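A minimal sketch of that kind of pool (pool and device names are hypothetical): HDDs striped for capacity, since Gluster's replica 3 provides the redundancy, an SSD as L2ARC read cache, and compression on the dataset backing the bricks:

    zpool create tank sda sdb sdc sdd cache nvme0n1
    zfs set compression=lz4 tank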

Re: [Gluster-users] new daemon gluster-ta-volume.service needed?

2020-09-10 Thread peter knezel
Hello Aravinda, thanks for the clarification. So I guessed it correctly - I have disabled it already. Thanks and kind regards, peterk On Thu, 10 Sep 2020 at 16:25, Aravinda VK wrote: > Hi Peter, > > On 10-Sep-2020, at 7:50 PM, peter knezel wrote: > > Hello all, > > I have updated glusterfs

Re: [Gluster-users] new daemon gluster-ta-volume.service needed?

2020-09-10 Thread Aravinda VK
Hi Peter, > On 10-Sep-2020, at 7:50 PM, peter knezel wrote: > > Hello all, > > I have updated glusterfs (-client,-common,-server) packages from 5.13-1 to > 6.10-1 > on 2 servers with Debian stretch (9.x). > Strangely, a new daemon appeared: gluster-ta-volume.service > Is it needed or can be
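The service name suggests it backs thin-arbiter volumes; since Peter reports disabling it with no arbiter configured (see his follow-up above), the standard systemd housekeeping applies:

    # stop the daemon now and keep it from starting at boot
    systemctl disable --now gluster-ta-volume.service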

[Gluster-users] new daemon gluster-ta-volume.service needed?

2020-09-10 Thread peter knezel
Hello all, I have updated glusterfs (-client,-common,-server) packages from 5.13-1 to 6.10-1 on 2 servers with Debian stretch (9.x). Strangely, a new daemon appeared: gluster-ta-volume.service. Is it needed, or can it be safely disabled? I am not using any arbiter, only: Type: Replicate Thanks and kind

Re: [Gluster-users] Fwd: New GlusterFS deployment, doubts on 1 brick per host vs 1 brick per drive.

2020-09-10 Thread Diego Zuccato
On 09/09/20 15:30, Miguel Mascarenhas Filipe wrote: I'm a noob, but IIUC this is the option giving the best performance: > 2. 1 brick per drive, Gluster "distributed replicated" volumes, no > internal redundancy Clients can write to both servers in parallel and read scattered (read
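A sketch of that layout under the thread's assumptions (two servers, 8 drives each; host, volume, and brick names are placeholders): each per-drive brick on srv1 is paired with its counterpart on srv2, giving 8 two-way replica sets that Gluster distributes files across:

    # build the ordered brick list: srv1:/bricks/bN srv2:/bricks/bN for N=0..7
    BRICKS=$(for i in $(seq 0 7); do printf 'srv1:/bricks/b%s srv2:/bricks/b%s ' "$i" "$i"; done)
    # note: the CLI warns about split-brain risk on replica 2, as Gionatan notes above
    gluster volume create gv0 replica 2 $BRICKS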