Hi all,
After several days of tracking, we finally pinpointed why glusterfs fails to cleanly detach file locks (flocks) during frequent network disconnections. We are now working on a patch to submit. Here are the details of the issue. Any suggestions would be appreciated!
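For context, the locks in question are POSIX advisory locks taken by clients on files in the mounted volume; on a clean disconnect these should be released for the client. A minimal, self-contained illustration of the acquire/release lifecycle on a local file (not gluster-specific; path and names here are for illustration only):

```python
import fcntl
import os
import tempfile

# Create a scratch file standing in for a file on the glusterfs mount.
path = os.path.join(tempfile.mkdtemp(), "locked_file")
fd = os.open(path, os.O_CREAT | os.O_RDWR)

fcntl.flock(fd, fcntl.LOCK_EX)   # take an exclusive advisory lock
# ... critical section ...
fcntl.flock(fd, fcntl.LOCK_UN)   # explicit release
os.close(fd)                     # closing the fd also drops any held lock

print("lock acquired and released")
```

When the client side disappears without running the release path (e.g. the network drops), the server must detect the dead connection and detach these locks itself; the bug described above is about that cleanup not happening.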
First of all, as I mentioned in
Reminder!!!
The weekly Gluster Community meeting is in 45 minutes, in
#gluster-meeting on IRC.
This is a completely public meeting, everyone is encouraged
to attend and be a part of it. :)
To add Agenda items
***
Add new items under the Other items to discuss point on the
On 17/09/2014, at 12:14 PM, Justin Clift wrote:
Short meeting today. ;)
Meeting Minutes:
Hi all,
Unfortunately, it is impossible to validate non-trusted volfiles using the existing glusterfs options. The semantics and format of values passed via --xlator-option don't allow delivering trusted values without compromising security.
So I have added a new --secure-xlator-option,
Please,
hi,
Until now, the only method I have used to find ref leaks effectively is to figure out which operation is causing the leak and then read the code to see whether there is a ref leak somewhere. Valgrind doesn't solve this problem because the memory is still reachable from the inode table etc. I am just wondering if
- Original Message -
From: Raghavendra Gowdappa rgowd...@redhat.com
To: Pranith Kumar Karampuri pkara...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org
Sent: Thursday, September 18, 2014 10:08:15 AM
Subject: Re: [Gluster-devel] how do you debug ref leaks?
For eg., if a
On 09/18/2014 10:08 AM, Raghavendra Gowdappa wrote:
For eg., if a dictionary is not freed because of a non-zero refcount, information on who holds those references would help narrow down the code path or component.
Yes that is the aim. The implementation I suggested tries
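The idea being discussed can be sketched in miniature: a ref-counted object that, alongside the count, records which caller took each reference, so a leak report can name the holder. This is a hypothetical toy sketch, not the gluster dict_t implementation; all names below are invented for illustration:

```python
import collections

class TrackedDict:
    """Toy ref-counted dictionary that records who holds each reference,
    so a non-zero refcount can be attributed to a code path."""

    def __init__(self):
        self.data = {}
        self.refcount = 0
        self.holders = collections.Counter()  # holder name -> outstanding refs

    def ref(self, holder):
        self.refcount += 1
        self.holders[holder] += 1

    def unref(self, holder):
        self.refcount -= 1
        self.holders[holder] -= 1
        if self.refcount == 0:
            self.data.clear()  # stands in for actually freeing the object

    def leak_report(self):
        """Names every holder with outstanding references."""
        return {h: n for h, n in self.holders.items() if n > 0}

d = TrackedDict()
d.ref("afr_lookup")      # hypothetical caller names
d.ref("dht_lookup")
d.unref("dht_lookup")
# refcount is still non-zero; the report points at the remaining holder
print(d.leak_report())   # {'afr_lookup': 1}
```

In C this would mean threading a caller identifier (or captured backtrace) through ref/unref, which has a cost; presumably that is why it would be a debug-only mode.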
Hi all:
I ran the following test:
I created a glusterfs replica volume (replica count 2) with two server nodes (server A and server B), using XFS as the underlying filesystem, then mounted the volume on a client node.
Then I shut down the network on the server A node; on the client node,