I reran the above failed tests on ext4 and below are the ones that failed.
tests/basic/quota-anon-fd-nfs.t
tests/basic/volume-snapshot.t
tests/bugs/bug-1045333.t
tests/bugs/bug-1087198.t
tests/bugs/bug-1113975.t
tests/bugs/bug-1117851.t
tests/bugs/bug-1161886/bug-1161886.t
tests/bugs/bug-1162498.t
From the cmd log history I could see that lots of volume status commands were
triggered in parallel. This is a known issue for 3.6 and it can cause a
memory leak. http://review.gluster.org/#/c/9328/ should solve it.
~Atin
On 02/20/2015 04:36 PM, RASTELLI Alessandro wrote:
10MB log
sorry :)
This is fixed in http://review.gluster.org/9459 and should be available
in 3.7.
As a workaround, you can restart the self-heal daemon process (gluster v
start volname force). This should clear its history.
Thanks,
Ravi
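The workaround above, as a minimal sketch (the volume name `volname` is a placeholder for your actual volume; the `shd` status subcommand is assumed to be available in your release):

```shell
# Force-start the (already started) volume; this respawns the self-heal
# daemon process and clears its in-memory history.
gluster volume start volname force

# Optionally verify that the self-heal daemon is back online:
gluster volume status volname shd
```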
On 02/20/2015 01:43 PM, Félix de Lelelis wrote:
Hi,
I generated a
Hi,
Is there any way to get information about the last changelog that was
applied on the slave and master nodes in geo-replication?
Thanks
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
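For the geo-replication question above, a hedged sketch of the closest thing the CLI offers (the master volume and slave names are placeholders; `status detail` reports checkpoint and crawl information rather than a literal changelog ID):

```shell
# Show detailed status for a geo-replication session, including
# checkpoint completion and crawl status per node.
gluster volume geo-replication mastervol slavehost::slavevol status detail
```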
Hi,
I've noticed that one of our 6 gluster 3.6.2 nodes has a glusterd process using
50% of RAM; on the other nodes usage is about 5%.
Could this be a bug?
Should I restart the glusterd daemon?
Thank you
A
From: Volnei Puttini [mailto:vol...@vcplinux.com.br]
Sent: Monday, 9 February 2015 18:06
To: RASTELLI
Could you please share the cmd_history.log and the glusterd log file so we
can analyze this high memory usage?
~Atin
On 02/20/2015 03:10 PM, RASTELLI Alessandro wrote:
Hi,
I've noticed that one of our 6 gluster 3.6.2 nodes has glusterd process
using 50% of RAM, on the other nodes usage is about 5%
This
I get this:
[root@gluster03-mi glusterfs]# git fetch git://review.gluster.org/glusterfs
refs/changes/28/9328/4 git checkout FETCH_HEAD
fatal: Couldn't find remote ref refs/changes/28/9328/4
What's wrong with that?
A.
-Original Message-
From: Atin Mukherjee
I found out the reason this happens a few days back. Just to let you
know..
It seems it has partly to do with the way we handle reboots on our setup.
When we take down one of our replica servers (for testing/maintenance), to
ensure that the bricks are unmounted correctly, we kill off the
Hi All,
I have a problem restarting the glusterd service (release 3.4.2). Some
of my 14 nodes (CentOS 6.5 and 6.6) have stopped the service, and when I
try to restart it I get this message in etc-glusterfs-glusterd.vol.log:
[root@xstoocky10 glusterfs]# cat etc-glusterfs-glusterd.vol.log
Something's wrong with the configuration data in /var/lib/glusterd. Try
running glusterd in debug mode:
glusterd --debug
It might have more details.
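A sketch of the debug run suggested above (assuming the `service` tool on CentOS 6; `--debug` keeps glusterd in the foreground and logs at DEBUG level to the terminal):

```shell
# Stop the service first, then run glusterd in the foreground with
# debug logging and watch for the first error about /var/lib/glusterd.
service glusterd stop
glusterd --debug
```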
From: Pierre Léonard pleon...@jouy.inra.fr
To: gluster-users@gluster.org gluster-users@gluster.org
Date: 02/20/2015 08:08 PM
Subject:
On Fri, Feb 20, 2015 at 01:50:38PM +, RASTELLI Alessandro wrote:
I get this:
[root@gluster03-mi glusterfs]# git fetch git://review.gluster.org/glusterfs
refs/changes/28/9328/4 git checkout FETCH_HEAD
fatal: Couldn't find remote ref refs/changes/28/9328/4
What's wrong with that?
I
HTTP URL works fine.
Now shall I restart the glusterd daemon?
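The fetch that succeeded over HTTP, as a sketch. Note that the failing paste above joined two commands on one line; they are separate, and the git:// transport may simply not be enabled on review.gluster.org:

```shell
# Fetch the Gerrit change over HTTP (two separate commands),
# then check out the fetched revision.
git fetch http://review.gluster.org/glusterfs refs/changes/28/9328/4
git checkout FETCH_HEAD
```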
-Original Message-
From: Niels de Vos [mailto:nde...@redhat.com]
Sent: Friday, 20 February 2015 15:47
To: RASTELLI Alessandro
Cc: Atin Mukherjee; gluster-users@gluster.org
Subject: Re: [Gluster-users] GlusterD uses 50% of RAM
Thanks Joe,
for the answers!
I was not clear enough about the set up apparently.
The Gluster cluster consists of 3 nodes with 14 bricks each. The bricks
are formatted as xfs, mounted locally as xfs. There is one volume, type:
Distributed-Replicate (replica 2). The configuration is such that
On 02/20/2015 12:21 PM, Olav Peeters wrote:
Let's take one file (3009f448-cf6e-413f-baec-c3b9f0cf9d72.vhd) as an
example...
On the 3 nodes where all bricks are formatted as XFS and mounted in
/export and 272b2366-dfbf-ad47-2a0f-5d5cc40863e3 is the mounting point
of a NFS shared storage
On 02/20/2015 01:47 PM, Olav Peeters wrote:
It looks even worse than I had feared.. :-(
This really is a crazy bug.
If I understand you correctly, the only sane pairing of the xattrs is that of
the two 0-byte files, since this is the full list of bricks:
[root@gluster01 ~]# gluster volume info
Volume Name: sr_vol01
Type: Distributed-Replicate
hi All,
After I rebooted the cluster, it cannot FUSE-mount.
NFS mount still works fine.
$ gluster volume status
Status of volume: w-vol
Gluster process                             Port    Online  Pid
--
Brick
That's funny. What's your glusterd version?
glusterd --version
-Pierre Léonard pleon...@jouy.inra.fr wrote: -
===
To: A Ghoshal a.ghos...@tcs.com, gluster-users@gluster.org
gluster-users@gluster.org
From: Pierre Léonard pleon...@jouy.inra.fr
Date: 02/20/2015
Please find the below gluster regression test summary report.
Test Summary Report
---
./tests/basic/ec/quota.t
(Wstat: 0 Tests: 22 Failed: 2)
Failed tests: 16, 20
./tests/basic/quota-anon-fd-nfs.t
(Wstat: 0 Tests: 21 Failed: 1)
Failed test: 18
./tests/basic/quota.t
(Wstat: 0
hi All,
After I rebooted the cluster, Linux clients are working fine.
But nodes cannot mount the cluster.
16:01 gl0(pts/0):/var/log/glusterfs$ gluster volume status
Status of volume: w-vol
Gluster process                             Port    Online  Pid
Hi Ghoshal,
That's funny. What's your glusterd version?
glusterd --version
glusterfs 3.5.3 built on Nov 13 2014 11:06:04
It seems that I have a different release of glusterfs.
That could be a problem. I also know that I updated that
computer: a new kernel, and new openssl and glibc.
This is a known issue with packages other than the RPM (Fedora/EL) ones. The
DEB packages and packages for other distros don't do a post-upgrade
regeneration of volfiles. So after the upgrade, GlusterD searches for the new
volfiles, which don't exist, and cannot provide the clients with a volfile,
leading to the
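For reference, a sketch of the regeneration step the RPM packages run after an upgrade; treat the exact option as an assumption and check your distro's packaging scripts before running it:

```shell
# Stop glusterd, regenerate volfiles in upgrade mode (runs once and
# exits because of -N with upgrade=on), then restart the service.
service glusterd stop
glusterd --xlator-option *.upgrade=on -N
service glusterd start
```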
Hi,
I have a Gluster volume with 20 servers. The volume is set up with a
replica count of 2.
Each server has 1 brick on it, so in essence I have 20 bricks, 10 of
which are a replica of the other 10.
One of the servers had a bad hard drive and the brick on the server
stopped responding.
This caused
Hi,
after waiting a really long time (nearly two days) for a heal and a
rebalance to finish we are left with the following situation:
- the heal did get rid of some of the empty sticky bit files outside of
.glusterfs dir (on the root of each brick), but not all
- the duplicates are still
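To see what is still pending after a heal and rebalance like the one described above, a minimal sketch (using the volume name `sr_vol01` from earlier in the thread; substitute your own):

```shell
# Files still pending self-heal, listed per brick:
gluster volume heal sr_vol01 info

# Progress/summary of the rebalance operation:
gluster volume rebalance sr_vol01 status
```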