Excerpts from Gionatan Danti's message of 2020-09-11 00:35:52 +0200:
> The main point was the potentially long heal time
Could you (or anyone else) please elaborate on what long heal times are
to be expected?
We have a 3-node replica cluster running version 3.12.9 (we are building
a new cluster
Hi List,
In my 2-server gluster setup, one server is consistently restarting the
glusterd process. On the first second of every other minute, I get a
shutdown in my glusterd log:
W [glusterfsd.c:1596:cleanup_and_exit]
(-->/lib/x86_64-linux-gnu/libpthread.so.0(+0x7fa3) [0x7f7410fa5fa3]
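A restart on that regular a schedule usually means something outside
glusterd is sending it a signal. A few generic checks, assuming a
systemd-based install (the unit may be called glusterfs-server.service
on older Debian packages):

    # See which unit owns the process and when it last restarted
    systemctl status glusterd.service
    journalctl -u glusterd.service --since "-1 hour"

    # Look for an external job that stops or restarts it on a schedule
    crontab -l
    grep -ri gluster /etc/cron*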
Hi Danti,
the notes are not very verbose, but it looks like the following lines were
removed from their virtualization config:
They also enabled hyperthreading, so they now have 12 "cores" instead of 6.
Guessing that had a lot to do with it...
On 2020-09-04 8:20 a.m., Gionatan Danti
On 2020-09-10 23:13, Miguel Mascarenhas Filipe wrote:
can you explain better how a single disk failing would bring a whole
node out of service?
Oh, I did a bad cut/paste. A single disk failure will not put the entire
node out-of-service. The main point was the potentially long heal time
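If it helps judge those heal times: the pending-heal backlog can be
watched per brick while a replaced disk resyncs. A minimal sketch,
assuming a replica volume named VOLNAME (substitute your own volume
name; the summary form only exists on newer releases):

    # Files still pending heal, listed per brick
    gluster volume heal VOLNAME info
    # Condensed per-brick counters on newer releases
    gluster volume heal VOLNAME info summary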
On Thu, 10 Sep 2020 at 21:53, Gionatan Danti wrote:
> On 2020-09-09 15:30, Miguel Mascarenhas Filipe wrote:
> > I'm setting up GlusterFS on 2 hosts w/ the same configuration, 8 HDDs. This
> > deployment will grow later on.
>
> Hi, I really suggest avoiding a replica 2 cluster unless it is for
>
On 2020-09-09 15:30, Miguel Mascarenhas Filipe wrote:
I'm setting up GlusterFS on 2 hosts w/ the same configuration, 8 HDDs. This
deployment will grow later on.
Hi, I really suggest avoiding a replica 2 cluster unless it is for
testing only. Be sure to add an arbiter at least (using a replica 2
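For illustration, a sketch of that layout with a third, metadata-only
arbiter brick; the host names and brick paths here are hypothetical:

    gluster volume create myvol replica 3 arbiter 1 \
        server1:/data/brick1 server2:/data/brick1 arbiter1:/data/brick1

The arbiter brick stores only file metadata, so it can sit on much
smaller storage than the two data bricks while still breaking ties.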
Hi,
thanks to you both for the replies
On Thu, 10 Sep 2020 at 16:08, Darrell Budic wrote:
> I run ZFS on my servers (with additional RAM for that reason) in my
> replica-3 production cluster. I chose size and ZFS striping of HDDs, with
> easier compression and ZFS-controlled caching using SSDs for my
I run ZFS on my servers (with additional RAM for that reason) in my replica-3
production cluster. I chose size and ZFS striping of HDDs, with easier
compression and ZFS-controlled caching using SSDs for my workload (mainly VMs).
It performs as expected, but I don’t have the resources to
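As a rough sketch of that kind of layout (device, pool, and dataset
names are hypothetical, adjust to your hardware); a plain stripe of
HDDs relies on the Gluster replica-3 layer for redundancy:

    # HDD stripe plus SSD partitions for intent log and read cache
    zpool create tank sda sdb sdc sdd \
        log nvme0n1p1 cache nvme0n1p2
    zfs set compression=lz4 tank
    zfs create tank/brick1   # exported as a Gluster brick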
Hello Aravinda,
thanks for the clarification.
So I guessed it correctly - I have disabled it already.
Thanks and kind regards,
peterk
On Thu, 10 Sep 2020 at 16:25, Aravinda VK wrote:
> Hi Peter,
>
> On 10-Sep-2020, at 7:50 PM, peter knezel wrote:
>
> Hello all,
>
> I have updated glusterfs
Hi Peter,
> On 10-Sep-2020, at 7:50 PM, peter knezel wrote:
>
> Hello all,
>
> I have updated glusterfs (-client,-common,-server) packages from 5.13-1 to
> 6.10-1
> on 2 servers with debian stretch (9.x).
> Strangely, a new daemon appeared: gluster-ta-volume.service
> Is it needed or can be
Hello all,
I have updated glusterfs (-client,-common,-server) packages from 5.13-1 to
6.10-1
on 2 servers with debian stretch (9.x).
Strangely, a new daemon appeared: gluster-ta-volume.service
Is it needed, or can it be safely disabled?
I am not using any arbiter, only: Type: Replicate
Thanks and kind
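gluster-ta-volume is the thin-arbiter daemon; if no volume in the
cluster uses a thin-arbiter brick it has nothing to serve. A sketch of
how one might verify that and then disable it (assuming systemd):

    # Confirm no volume lists an arbiter or thin-arbiter brick
    gluster volume info | grep -iE 'Type|arbiter'

    # Then stop and disable the unused service
    systemctl disable --now gluster-ta-volume.service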
On 09/09/20 15:30, Miguel Mascarenhas Filipe wrote:
I'm a noob, but IIUC this is the option giving the best performance:
> 2. 1 brick per drive, Gluster "distributed replicated" volumes, no
> internal redundancy
Clients can write to both servers in parallel and read scattered (read
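A sketch of what option 2 could look like with two of the drives
(volume name and brick paths are hypothetical; bricks are listed in
pairs so each replica pair spans both servers):

    gluster volume create myvol replica 2 \
        server1:/bricks/d1 server2:/bricks/d1 \
        server1:/bricks/d2 server2:/bricks/d2

Note that gluster warns about split-brain risk when creating a plain
replica 2 volume, which is the same concern raised earlier in the thread.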