Yes, this makes a lot of sense.
It's the behavior that I was experiencing that makes no sense.
When one node was shut down, the whole VM cluster locked up.
However, I managed to find that the culprit was the quorum settings.
I have set the quorum to 2 bricks now, and I am not experienci
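For reference, a 2-of-3 write quorum like the one described above can be expressed with the standard gluster volume options. This is a sketch only; `VOLNAME` is a placeholder for the actual volume name:

```shell
# 'fixed' quorum mode makes cluster.quorum-count take effect;
# with replica 3, requiring 2 bricks keeps writes going when one node is down.
gluster volume set VOLNAME cluster.quorum-type fixed
gluster volume set VOLNAME cluster.quorum-count 2
```

Note that the virt group instead sets `cluster.quorum-type auto`, which computes the majority automatically for replica 3.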
You may be misunderstanding the way the gluster system works in detail here,
but you’ve got the right idea overall. Since gluster is maintaining 3 copies of
your data, you can lose a drive or a whole system and things will keep going
without interruption (well, mostly, if a host node was using
-- Forwarded message --
From: Carl Sirotic
Date: Aug. 23, 2019 7:00 p.m.
Subject: Re: [Gluster-users] Brick Reboot => VMs slowdown, client crashes
To: Joe Julian
Cc:
Okay,
so it means that, at least, I am not getting the expected behavior, and
there is hope.
I applied the quorum settings that I was given a couple of emails ago.
After applying the virt group, they are:
cluster.quorum-type auto
cluster.quorum-count (null)
cluster.server-quorum-type server
cluster.server-qu
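The effective quorum settings on a volume can be listed with the standard `volume get` command; a sketch, with `VOLNAME` as a placeholder for the actual volume name:

```shell
# Show all quorum-related options currently in effect on the volume
gluster volume get VOLNAME all | grep -i quorum
```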
However,
I must have misunderstood the whole concept of gluster.
In a replica 3, for me, it's completely unacceptable, regardless of the
options, that all my VMs go down when I reboot one node.
The whole point of having a full 3-way copy of my data on the fly is
supposed to be exactly this.
I am in t
Yes, you can enable it afterwards, BUT DO NOT STOP it once enabled!
Bad things happen :D
Best Regards,
Strahil Nikolov

On Aug 19, 2019 20:01, Carl Sirotic wrote:
>
> No, I didn't.
>
> I am very interested about these settings.
>
> Also, is it possible to turn on the shard feature AFTER the volume was
> started to be used?
/var/lib/glusterd/groups/virt is a good start for ideas, notably some thread
settings and choose-local=off to improve read performance. If you don’t have at
least 10 cores on your servers, you may want to lower the recommended
shd-max-threads=8 to no more than half your CPU cores to keep healing
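The thread-count advice above maps to a single option. A sketch, assuming 8-core servers and using `VOLNAME` as a placeholder:

```shell
# Cap self-heal daemon threads at half the CPU cores (here: 8 cores / 2 = 4),
# so healing does not starve the VMs of CPU.
gluster volume set VOLNAME cluster.shd-max-threads 4
</imports>
```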
You want sharding for sure: it keeps the entire disk image from being locked while it
heals, so you usually don't notice anything when you reboot a system, say.
It’s fine to enable after the fact, but existing files won’t be sharded. You
can work around this by stopping the VM and copying the file to new lo
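The copy workaround described above might look like the following. The paths are hypothetical, and the VM must be shut down first so the image is quiescent:

```shell
# Copying writes the new file through the shard translator, so the copy
# ends up sharded; the rename then puts it back under the original name.
src=/gluster/vms/vm1.qcow2   # hypothetical path to the unsharded image
cp "$src" "$src.sharded"
mv "$src.sharded" "$src"
```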
On 2019-08-19 12:08 p.m., Darrell Budic wrote:
You also need to make sure your volume is set up properly for best performance.
Did you apply the gluster virt group to your volumes, or at least
features.shard = on on your VM volume?
That's what we did here:
gluster volume set W2K16_Rhenium cl
No, I didn't.
I am very interested about these settings.
Also, is it possible to turn on the shard feature AFTER the volume was
started to be used?
Carl
On 2019-08-19 12:08 p.m., Darrell Budic wrote:
You also need to make sure your volume is set up properly for best performance.
Did you appl
You also need to make sure your volume is set up properly for best performance.
Did you apply the gluster virt group to your volumes, or at least
features.shard = on on your VM volume?
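The two options mentioned above translate to standard `volume set` commands; a sketch, with `VOLNAME` standing in for the VM volume name:

```shell
# Apply the whole virt option group (quorum, sharding, thread settings, ...)
gluster volume set VOLNAME group virt
# ...or, at minimum, enable sharding on its own
gluster volume set VOLNAME features.shard on
```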
> On Aug 19, 2019, at 11:05 AM, Carl Sirotic
> wrote:
>
> Yes, I made sure there was no heal.
> This is why I am suspecting that shutting down a host isn't the right
> way to go.
Yes, I made sure there was no heal. This is why I am suspecting that shutting down a host isn't the right way to go.
Hi Carl,
Did you check for any pending heals before rebooting the gluster server?
Also, it was discussed that shutting down the node does not stop the bricks properly, and thus the