On 4 November 2016 at 14:35, Krutika Dhananjay wrote:
There is a compound fops feature coming up which reduces the
number of calls over the network in AFR transactions, thereby
improving performance. It will be available in 3.9 (and latest
upstream master too) if you're interested to try it out, but
DO NOT use it in production yet. It may have some stability
issues as it hasn't been thoroughly tested.
On 4 November 2016 at 03:38, Gambit15 wrote:
> There are lots of factors involved. Can you describe your setup & use case a
> little more?
Replica 3 cluster. Individual bricks are RAIDZ10 (ZFS) and can manage
450 MB/s write, 1.2 GB/s read.
- 2 * 1GB bond, balance-alb
Hi all,
I still need help with this. After adding another set of bricks to the
volume the original problem went away and healing was complete.
Now, after an instance was terminated and replaced, the replacement node
is exhibiting the same issue.
I turned on debug logging on the volume for the
[root@pdsraid13 ~]# gluster volume info
Volume Name: pdsclust
Type: Disperse
Volume ID: 02629f52-cfe1-4542-8581-21d25e254d39
Status: Started
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: pdsraid4-gb:/data/gfs
Brick2: pdsraid8-gb:/data/gfs
Brick3: pdsraid10-gb:/data/gfs
There are lots of factors involved. Can you describe your setup & use case
a little more?
Doug
On 2 November 2016 at 00:09, Lindsay Mathieson
wrote:
> And after having posted about the dangers of premature optimisation ...
> any suggestions for improving IOPS?
Thank you, I'll wait for the new version.
Regards,
Radu
On Thu, Nov 3, 2016 at 3:03 PM, Prasanna Kalever wrote:
Hi,
After our past two days of investigation, this is no longer a new/fresh bug :)
The cause is a double unref of fd, introduced in 3.8.5 with [1].
We have investigated this thoroughly, and the fix [2] is likely to
be coming in the next gluster update.
[1]
Hi,
A similar issue was reported in the nfs-ganesha GitHub tracker [1]. As
mentioned in the link, there is an upcall thread (actively polling in a
loop) spawned for every export, which might be consuming the CPU. There
are a few optimizations needed here -
* Make this behavior optional by checking existing
(I don't have the message ID of the original so this will be a new thread.)
Original Message:
https://www.gluster.org/pipermail/gluster-users.old/2016-October/028892.html
I too am seeing ~10% CPU usage per Gluster export when using Ganesha
NFS. This occurs straight after the process starts and
Hi,
After updating glusterfs server to 3.8.5 (from Centos-gluster-3.8.repo) the
KVM virtual machines (qemu-kvm-ev-2.3.0-31) that access storage using
libgfapi are no longer able to start. The libvirt log file shows:
[2016-11-02 14:26:41.864024] I [MSGID: 104045] [glfs-master.c:91:notify]
On Thu, Nov 3, 2016 at 11:34 AM, Keiviw wrote:
> If GlusterFS does not support POSIX seekdir, what problems will users
> or GlusterFS have?
>
GlusterFS won't have any problem if we don't support seekdir. I am also not
sure whether applications have a real use case for seekdir. But,