On Fri, 5 Feb 2021, 12:27 Diego Zuccato wrote:
> On 04/02/21 19:28, Nag Pavan Chilakam wrote:
> > > What is the proper procedure to reduce a "replica 3 arbiter 1"
> > > volume?
> > Can you kindly elaborate the volume configuration. Is this a plain
On Wed, 3 Feb 2021, 18:33 Diego Zuccato, wrote:
> Hello all.
>
> What is the proper procedure to reduce a "replica 3 arbiter 1" volume?
>
Can you kindly elaborate on the volume configuration? Is this a plain arbiter
volume or a distributed arbiter volume?
Please share the volume info so that we can take a look.
Hi Michel,
Do you want to increase the distribute count, or do you want to increase the
number of data bricks?
Converting a 1x(3+1) to 1x(4+1) means increasing the data brick count, which is
not supported yet. The distribute count here is 1 and it still remains the same.
Converting a 1x(3+1) to
If you are behind a certain use case for your perf test, kindly elaborate the
intent in detail. Hopefully someone on this list with that knowledge can help
you.
----- Original Message -----
From: "Tahereh Fattahi" <t28.fatt...@gmail.com>
To: "Nag Pavan Chilakam" <nchil...@
----- Original Message -----
From: "lejeczek" <pelj...@yahoo.co.uk>
To: "Nag Pavan Chilakam" <nchil...@redhat.com>
Cc: gluster-users@gluster.org
Sent: Wednesday, 8 February, 2017 7:15:29 PM
Subject: Re: [Gluster-users] Input/output error - would not heal
On 08
You can read more about this in
http://gluster.readthedocs.io/en/latest/Troubleshooting/split-brain/?highlight=gfid
(the "Fixing Directory entry split-brain" section).
(There is already an existing bug to resolve gfid split-brain using the CLI.)
thanks,
nagpavan
----- Original Message -----
From: "lejeczek
You can always go for x3 (3 replica copies) to address the need you have
asked about.
EC volumes can be seen as RAID for understanding purposes, but don't treat it
as an apples-to-apples comparison.
RAID 4/6 (mostly) relies on XOR'ing bits (so basic addition and subtraction),
but EC involves a more
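To make the RAID side of that comparison concrete, here is a minimal Python sketch of XOR parity as used by RAID 4/5: the parity block is the XOR of the data blocks, so any one lost block can be rebuilt by XOR'ing the survivors. The `xor_blocks` helper and the block contents are made up for illustration; EC in Gluster involves more elaborate erasure-coding math than plain XOR.

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

data = [b"abcd", b"efgh", b"ijkl"]   # three data "bricks" (made-up contents)
parity = xor_blocks(data)            # the parity "brick"

# Lose data[1]; rebuild it from the surviving blocks plus parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == b"efgh"
```

Because XOR is its own inverse, the same helper both computes parity and recovers a missing block.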
Hi,
Can you help us with more information on the volume, like the volume status and
volume info?
One reason for a "transport endpoint" error is that the brick could be down.
Also, I see that the syntax used for healing is wrong.
You need to use it as below:
gluster volume heal <VOLNAME> split-brain source-brick <HOSTNAME:BRICKNAME> <FILE>
In your case
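As a worked example of that heal syntax (the volume name "gv0", the brick path, and the file path below are all hypothetical placeholders, not values from this thread):

```shell
# Pick the good copy on server1's brick as the source and heal one file.
# "gv0", the brick path and the file path are placeholders - take the real
# values from "gluster volume info" and "gluster volume heal <vol> info".
gluster volume heal gv0 split-brain source-brick server1:/bricks/brick1 /dir/file.txt
```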
Hi Milos,
You need to follow the steps below:
1) Once you identify server2, install the same version of the gluster packages
as on server1.
2) Peer-probe server2 from server1.
3) Do a "gluster volume add-brick replica 2
The syncing of files in your gluster storage will happen automatically when
self-heal
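The steps above can be sketched as shell commands; the volume name "gv0", the hostnames, and the brick path are hypothetical placeholders:

```shell
# 1) On server2, install the same gluster version as server1 (distro-specific),
#    then start glusterd.

# 2) From server1, peer-probe server2:
gluster peer probe server2

# 3) Add server2's brick back into the volume; names are placeholders.
gluster volume add-brick gv0 replica 2 server2:/bricks/brick1
```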
Hi Atul,
In short: it is due to the client-side quorum behavior.
Detailed info:
I see that there are 3 nodes in the cluster, i.e. master1, master2, compute01.
However, the volume is being hosted only on master1 and master2.
Also, I see that you have enabled server-side quorum, and client-side quorum
from
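The quorum settings in question can be inspected and tuned per volume; a sketch, assuming a hypothetical volume named "gv0":

```shell
# Inspect the current quorum settings ("gv0" is a placeholder).
gluster volume get gv0 cluster.server-quorum-type
gluster volume get gv0 cluster.quorum-type

# With client-side quorum set to "auto", writes need a majority of the bricks
# in each replica set to be up; with only two data bricks, losing one can
# leave the volume read-only until quorum is restored.
gluster volume set gv0 cluster.quorum-type auto
```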