Hi All,
Now that I have geo-replication established between my 3 sites, I want to
test the throughput. Are there any tools or documentation for measuring
the data flow between the master and the various slave volumes?
Also, I understand the replication is unidirectional (master to slave), so
in case of DR,
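Not from the original message, but two common starting points for measuring geo-replication flow are the built-in session status and a plain network monitor; the volume and host names below are placeholders:

# Per-session sync status, including pending entries and last-synced time:
gluster volume geo-replication mastervol slavehost::slavevol status detail

# Raw throughput toward a slave site while a sync runs (iftop is one option):
iftop -i eth0 -f 'host slavehost'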
Adding the Gluster users mailing list. On Apr 5, 2019 06:02, Leo David
wrote:
>
> Hi Everyone,
> Any thoughts on this ?
>
>
> On Wed, Apr 3, 2019, 17:02 Leo David wrote:
>>
>> Hi Everyone,
>> For a hyperconverged setup starting with 3 nodes and growing over time to
>> 12 nodes, I have to choose betwe
On Thu, 4 Apr 2019 at 22:10, Darrell Budic wrote:
> Just the glusterd.log from each node, right?
>
Yes.
>
> On Apr 4, 2019, at 11:25 AM, Atin Mukherjee wrote:
>
> Darrell,
>
> I fully understand that you can't reproduce it and you don't have
> bandwidth to test it again, but would you be able
Hi everyone,
This is my first message on this list; I hope I can help out as much as I can.
I was wondering if someone could point out an existing working solution,
or whether this would be a matter of scripting.
We are using Gluster for a somewhat unusual infrastructure, where we
have, let's say, 2 NODES, 2 bric
I have a gluster 4.1 system with three servers running
Docker/Kubernetes. The pods mount filesystems using gluster.
10.13.112.31 is the primary server [A]; all mounts specify it, with
two other servers [10.13.113.116 [B] and 10.13.114.16 [C]] listed in
backup-volfile-servers.
I'm tes
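For reference (not part of the original message), a FUSE mount matching that layout would look something like this; the volume name "myvol" is assumed:

# Primary volfile server A, with B and C as fallbacks if A is unreachable
# at mount time:
mount -t glusterfs \
  -o backup-volfile-servers=10.13.113.116:10.13.114.16 \
  10.13.112.31:/myvol /mnt/myvol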
Just the glusterd.log from each node, right?
> On Apr 4, 2019, at 11:25 AM, Atin Mukherjee wrote:
>
> Darrell,
>
> I fully understand that you can't reproduce it and you don't have bandwidth
> to test it again, but would you be able to send us the glusterd log from all
> the nodes when this ha
Darrell,
I fully understand that you can't reproduce it and you don't have bandwidth
to test it again, but would you be able to send us the glusterd log from
all the nodes when this happened? We would like to go through the logs and
get back to you. I would particularly like to see if something has gone w
I didn’t follow any specific documents, just a generic rolling upgrade, one node
at a time. Once the first node didn’t reconnect, I tried to follow the
workaround in the bug during the upgrade. The basic procedure was (a generic
sketch of this kind of rolling upgrade follows below):
- take 3 nodes that were initially installed with 3.12.x (I forget which, but low
n
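As a generic sketch of that kind of per-node rolling upgrade (the package manager, unit names, and volume name are assumptions, not the poster's exact steps):

# On each node in turn, once the previous node is fully healed:
systemctl stop glusterd                # stop the management daemon
systemctl stop glusterfsd 2>/dev/null  # stop brick processes, where this unit exists
yum update 'glusterfs*'                # or the distro's equivalent
systemctl start glusterd
gluster volume heal myvol info         # wait until the pending-heal count reaches zero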
Hi,
Currently, thin-arbiter can be set up using GD2 only; the glustercli command is
provided by GD2.
Have you installed and started GD2 first?
Could you please mention at which step you ran into the issue?
---
Ashish
- Original Message -
From: "banda bassotti"
To: gluster-users@gluster.org
Hi all, is there a detailed guide on how to configure a two-node cluster
with a thin arbiter? I tried to follow the guide:
https://docs.gluster.org/en/latest/Administrator%20Guide/Thin-Arbiter-Volumes/#setting-up-thin-arbiter-volume
but it doesn't work. I'm using Debian Stretch and Gluster 6 re
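For reference, the linked guide creates the volume through GD2's glustercli roughly as follows; the host and brick paths are placeholders, and the flag syntax should be double-checked against the page itself, since it has changed between releases:

# Two data bricks plus a lightweight thin-arbiter node that stores only
# the replica-id file:
glustercli volume create testvol --replica 2 \
  server1:/bricks/brick1 server2:/bricks/brick2 \
  --thin-arbiter server3:/bricks/ta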
I just noticed I left the most important parameters out :)
Here's the write command with file size and record size included as well :)
./iozone -i 0 -t 1 -F /mnt/gluster/storage/thread1 -+n -c -C -e -I -w \
-+S 0 -s 200G -r 16384k
I also ran the benchmark without direct_io, which resulted in an even
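(Not from the original message: the matching read pass would presumably reuse the file kept by -w, switching -i 0 for the write test to -i 1 for read/re-read:)

./iozone -i 1 -t 1 -F /mnt/gluster/storage/thread1 -+n -c -C -e -I -w \
-+S 0 -s 200G -r 16384k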
Hi,
The performance hit that quota causes depends on a number of factors,
such as:
1) the number of files,
2) the depth of the directories in the FS,
3) the breadth of the directories in the FS,
4) the number of bricks.
These are the main contributors to the performance hit (basic quota commands
are sketched below).
If the volume is of lesse
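For context (not from the thread itself), the quota under discussion is GlusterFS directory quota, enabled and inspected with the standard CLI; the volume and path names are placeholders:

gluster volume quota myvol enable                  # turn on quota accounting for the volume
gluster volume quota myvol limit-usage /data 10GB  # cap one directory at 10 GB
gluster volume quota myvol list                    # show limits and current usage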
We did not hit https://bugzilla.redhat.com/show_bug.cgi?id=1694010 while
upgrading to glusterfs-6. We tested it in different setups and concluded
that this issue is caused by something specific to the setup.
Regarding the issue you have faced, can you please let us know which
documentation you have follo
Hi Amar,
I would like to test Gluster v6, but as I'm quite new to oVirt, I'm not sure
whether oVirt <-> Gluster will communicate properly.
Did anyone test rollback from v6 to v5.5? If rollback is possible, I would be
happy to give it a try.
Best Regards,
Strahil Nikolov
On Apr 3, 2019 11:35, Ama