Well, if you are addressing me, that was the point of my post regarding the
original poster's complaint.
If his chosen test gets lousy or inconsistent results on non-Gluster
setups, then it's hard to complain about Gluster, absent the known Gluster
issues (i.e. network bandwidth, FUSE context switching,
I don't know what you are trying to test, but I'm sure this test doesn't show
anything meaningful.
Have you tested with your apps' workload?
I have done your test and I get approx. 20 MB/s, but I can assure you that the
performance is way better in my VMs.
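For what it is worth, the usual dd one-liner can be made a little more representative of VM-style I/O. A minimal sketch; the mount point below is a placeholder, and the block sizes are only illustrative:

```shell
# MNT is a placeholder; point it at your GlusterFS fuse mount.
MNT=${MNT:-/tmp/gluster-test}
mkdir -p "$MNT"
# Naive test: 4k blocks with oflag=sync magnify per-request FUSE overhead.
dd if=/dev/zero of="$MNT/test-4k" bs=4k count=2560 oflag=sync 2>&1 | tail -1
# Larger blocks amortize the round trips and are closer to sequential
# VM-image I/O; conv=fdatasync still forces the data to stable storage.
dd if=/dev/zero of="$MNT/test-1m" bs=1M count=10 conv=fdatasync 2>&1 | tail -1
```

The gap between the two numbers gives a rough idea of how much of the loss is per-request latency rather than raw bandwidth.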
Best Regards,
Strahil Nikolov
On Jul 5,
On 7/4/2019 2:28 AM, Vladimir Melnik wrote:
So, the disk is OK and the network is OK, I'm 100% sure.
Seems to be a GlusterFS-related issue. Either something needs to be
tweaked, or this is normal performance for a replica-3 cluster.
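If it is tuning, a few commonly discussed client-side caching options can be tried. A sketch only, with a hypothetical volume name "gv0"; these are not a guaranteed fix, so benchmark before and after each change:

```shell
# Allow more dirty data to be buffered before it is flushed to the bricks.
gluster volume set gv0 performance.write-behind-window-size 8MB
# Enlarge the client-side read cache.
gluster volume set gv0 performance.cache-size 256MB
# Use multiple I/O threads on the client side.
gluster volume set gv0 performance.client-io-threads on
```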
There is more to it than Gluster on that particular test.
Thank you for the reply, Kotresh!
I found the root of the issue. I started the geo-rep setup over and erased
geo-replication.indexing on the Master.
Replication worked fine if the gluster volume was mounted natively or via
the nfs-ganesha server.
But when I tried to make a change on a brick locally, it did not go
The session has moved from "history crawl" to "changelog crawl". After this
point, there are no changelogs to be synced, as per the logs.
Please check in the ".processing" directories whether there are any pending
changelogs to be synced at
"/var/lib/misc/gluster/gsyncd///.processing"
If there are no
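A sketch of that check; the session subdirectory is elided in the path above, so the snippet below lists every ".processing" directory under the geo-rep state root rather than guessing the full path (the default GSYNCD_DIR and the output format are my assumptions):

```shell
# Geo-rep working directory root; adjust if your distribution differs.
GSYNCD_DIR=${GSYNCD_DIR:-/var/lib/misc/gluster/gsyncd}
# List every pending-changelog directory and how many entries it holds.
find "$GSYNCD_DIR" -type d -name '.processing' 2>/dev/null | \
  while read -r d; do
    printf '%s: %s entries\n' "$d" "$(ls -A "$d" | wc -l)"
  done
```

A non-empty ".processing" directory means changelogs have been picked up but not yet synced to the slave.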
Hi everyone,
I have a problem with a native geo-replication setup. It successfully starts
and makes the initial sync, but does not send any filesystem data changes
afterward.
I'm using CentOS 7.6.1810 with official glusterfs-6.3-1.el7 build on top of ZFS
on Linux.
It is a single Master node with single
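A first check in this situation is the session status. A sketch, with hypothetical volume and host names (mastervol, slavehost, slavevol):

```shell
# Show which crawl phase the session is in ("History Crawl" vs
# "Changelog Crawl") and the last-synced time per brick.
gluster volume geo-replication mastervol slavehost::slavevol status detail
```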
Hi, I have been doing some testing of GlusterFS. I have 2 servers running
Gluster 6.3 (although the same happens in version 3). One server has 32 GB
RAM, the other 4 GB. The volume is of type Replicated.
On both servers I also have the volume mounted using the FUSE client, and
when I run a small copy of 100 x
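A minimal sketch of such a small-file copy test, with placeholder paths (SRC, DST) and generated 10 KB files standing in for the real data:

```shell
# Placeholder paths; DST would normally be the GlusterFS fuse mount.
SRC=${SRC:-/tmp/smallfiles}
DST=${DST:-/tmp/gluster-mnt}
mkdir -p "$SRC" "$DST"
# Generate 100 x 10 KB files if SRC is empty.
[ -z "$(ls -A "$SRC")" ] && for i in $(seq 1 100); do
  dd if=/dev/zero of="$SRC/f$i" bs=10k count=1 status=none
done
# Each small file costs a create + write + close round trip over FUSE,
# so per-file latency, not bandwidth, dominates this workload.
time cp "$SRC"/* "$DST"/
```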
I compared 4.1.5 and 3.12.15; there's no problem with the 3.12.15 client.
Regards,
Nicolas.
From: "Nithya Balachandran"
To: n...@furyweb.fr
Cc: "gluster-users"
Sent: Friday, 5 July 2019 08:09:52
Subject: Re: [Gluster-users] Parallel process hang on gluster volume
Did you see this behaviour with previous Gluster versions?
Regards,
Nithya
On Wed, 3 Jul 2019 at 21:41, wrote:
> Am I alone in having this problem?
>
> - Original Mail -
> From: n...@furyweb.fr
> To: "gluster-users"
> Sent: Friday, 21 June 2019 09:48:47
> Subject: [Gluster-users] Parallel