On 27 October 2015 at 08:06, Stefan Michael Guenther wrote:
> But I learned from the mailing list that this is a common message and
> that I shouldn't be concerned about it.
>
> But the problem is, when I copy a 1.2 GB file from /root to /mnt (gluster
> volume mounted via
Hi Lindsay,
thanks for your information and comment.
> Hi Stefan, a bit more information on your setup would be useful
> - Brick topology - replica 3? distributed?
>
Yes, replica 3.
The cluster is used for load balancing, so we keep the needed
applications and data on all three nodes.
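For reference, a replica 3 volume like that typically reports something like this (the volume and host names here are assumptions):

    $ gluster volume info gv0
    Volume Name: gv0
    Type: Replicate
    Status: Started
    Number of Bricks: 1 x 3 = 3
    Bricks:
    Brick1: node1:/data/brick1/gv0
    Brick2: node2:/data/brick1/gv0
    Brick3: node3:/data/brick1/gv0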
On Tue, Oct 27, 2015 at 01:56:31AM +0200, Roman wrote:
> Aren't we are talking about this patch?
> https://git.proxmox.com/?p=pve-qemu-kvm.git;a=blob;f=debian/patches/gluster-backupserver.patch;h=ad241ee1154ebbd536d7c2c7987d86a02255aba2;hb=HEAD
No, a backup-volfile-server option is only effective while the volfile is
being fetched at mount time.
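For the FUSE mount case, the option being discussed is passed at mount time like this (hostnames and volume name are assumptions):

    # the extra servers are only tried while fetching the volfile at mount time
    mount -t glusterfs -o backup-volfile-servers=gluster2:gluster3 gluster1:/gv0 /mnt/gv0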
On 27 October 2015 at 18:17, Stefan Michael Guenther wrote:
> 2 x Intel Gigabit
> And ethtool tells me that it is indeed a gigabit link.
>
How are they configured? Are they bonded? What sort of network switch do you
have?
Ideally they would be LACP-bonded to a switch that supports 802.3ad.
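A minimal sketch of such a bond on Debian/Ubuntu (requires the ifenslave package; interface names and the address are assumptions):

    # /etc/network/interfaces -- 802.3ad (LACP) bond; the switch ports
    # must be configured as a matching LACP link aggregation group
    auto bond0
    iface bond0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        bond-slaves eth0 eth1
        bond-mode 802.3ad
        bond-miimon 100
        bond-xmit-hash-policy layer3+4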
Hi Atin,
You’re right in saying that if it’s activated then all nodes should have it activated.
What I find strange is that, when glusterfsd has problems communicating with the
other peers, that single node isn’t considered “not connected”
and thus expelled from the cluster somehow;
Hi,
We're using oVirt 3.5.3.1 with GlusterFS as the storage backend. We
added a Storage Domain with the path "gluster.fqdn1:/volume" and, as
options, we used "backup-volfile-servers=gluster.fqdn2". We now need to
restart both the gluster.fqdn1 and gluster.fqdn2 machines due to system
updates.
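A common approach, sketched here assuming systemd and that the volume is simply named "volume", is to take the servers down one at a time and let self-heal finish before touching the second one:

    # on gluster.fqdn1, before the reboot:
    systemctl stop glusterd           # stop the management daemon
    pkill glusterfsd                  # stop the brick processes
    # ...apply updates and reboot gluster.fqdn1; once it is back up:
    gluster volume heal volume info   # repeat until no unhealed entries are listed
    # only then repeat the same steps on gluster.fqdn2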
Guys, why don't you make builds with working IPv6? This problem was fixed
here: https://bugzilla.redhat.com/show_bug.cgi?id=1117886
But new builds still don't work correctly with IPv6. I tried 3.4, 3.5 and
3.6. So glusterfs is not scalable ;)
23.01.2015, 14:31, "Олег Кузнецов":
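In later GlusterFS releases the transport address family can be selected explicitly; a sketch, with the volume name "gv0" being an assumption:

    # per volume:
    gluster volume set gv0 transport.address-family inet6
    # or globally, in /etc/glusterfs/glusterd.vol:
    #     option transport.address-family inet6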
Hi Nicolas,
Here are my experiences with GlusterFS 3.6 & 3.7 on a dual-replica setup.
First of all, setting "network.ping-timeout" to a low value (3-5 seconds)
helps avoid the 42-second freeze on the clients, as mentioned in other
threads. This value seems to matter even if the
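The setting in question is applied per volume (the volume name "gv0" here is an assumption):

    # the default is 42 seconds; lowering it lets clients fail over faster,
    # at the cost of more false positives on a congested network
    gluster volume set gv0 network.ping-timeout 5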
Hi Niels,
my network.ping-timeout was already set to 5 seconds.
Unfortunately it seems I don't have the timeout setting in Ubuntu 14.04
for my vda disk.
ls -al /sys/block/vda/device/ gives me only:
drwxr-xr-x 4 root root 0 Oct 26 20:21 ./
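That is expected: the sysfs timeout attribute belongs to the SCSI layer, so virtio-blk disks (vda) do not have it. With a virtio-scsi disk (sda) it would look roughly like this (the device name is an assumption):

    # only present for SCSI-backed disks, not virtio-blk:
    cat /sys/block/sda/device/timeout       # typically 30 (seconds)
    echo 30 > /sys/block/sda/device/timeout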
Hi,
We had a 2-node GlusterFS cluster.
We had a 2x2 Distributed-Replicate volume on it.
It was:
Brick1: s20gfs.ovirt:/gluster/VOL/brick1
Brick2: s21gfs.ovirt:/gluster/VOL/brick2
Brick3: s20gfs.ovirt:/gluster/VOL/brick3
Brick4: s21gfs.ovirt:/gluster/VOL/brick4
We added more nodes to the cluster.
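Expanding such a volume is normally done by adding bricks in replica-sized pairs and then rebalancing; a sketch, with the new hostnames being assumptions:

    gluster peer probe s22gfs.ovirt
    gluster peer probe s23gfs.ovirt
    # add one more replica pair (replica count stays 2; a distribute subvolume is added)
    gluster volume add-brick VOL s22gfs.ovirt:/gluster/VOL/brick5 s23gfs.ovirt:/gluster/VOL/brick6
    gluster volume rebalance VOL start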