Hi,
As part of the md-cache improvements to cache xattrs as well:
http://review.gluster.org/#/c/13408/
we now need to differentiate between virtual and non-virtual xattrs.
Unfortunately, we have no standard naming convention for virtual and on-disk
xattrs, and we cannot even change it
now, as back
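To make the distinction concrete, here is a minimal sketch of the kind of check md-cache would need, assuming a prefix-based convention; the prefix list below is purely illustrative and hypothetical, since the point above is precisely that no such convention exists today.

#include <stdbool.h>
#include <string.h>

/* Prefixes treated as virtual; placeholders only, not an actual gluster
 * convention. */
static const char *virtual_xattr_prefixes[] = {
    "glusterfs.",
    NULL,
};

static bool
is_virtual_xattr(const char *key)
{
    for (int i = 0; virtual_xattr_prefixes[i] != NULL; i++) {
        size_t len = strlen(virtual_xattr_prefixes[i]);
        if (strncmp(key, virtual_xattr_prefixes[i], len) == 0)
            return true;      /* virtual: generated on the fly, not on disk */
    }
    return false;             /* otherwise assume an on-disk xattr */
}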
Well, that may not be completely correct!
It's "gluster volume status all", unlike volume maintenance operations, which are
rare.
Status can be issued multiple times a day, or might be put in a
script/cron job to check the health of the
cluster.
But anyway, the fix is ready, as the bug says.
Cr
It's not clear to some of us that anyone is using this xlator.
The associated contrib/qemu sources are very old, and there is nobody currently
maintaining it. It would take a substantial effort to update it – to what end,
if nobody actually uses it?
Bundled in the source, the way it is now, is
On 03/04/2016 07:10 AM, Joseph Fernandes wrote:
> Maybe this bug can give some context on the mem-leak (the fix was recently merged
> on master but not on 3.7.x)
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1287517
Yes, this is what we'd be fixing in 3.7.x too, but if you refer to [1]
the hike is
regards
Aravinda
On 03/03/2016 05:58 PM, Kaushal M wrote:
On Thu, Mar 3, 2016 at 2:39 PM, Aravinda wrote:
Thanks.
We can use the shared secret if the https requirement can be completely
avoided. I am not sure how to use the same SSL certificates on all the
nodes of the cluster. (REST API server patch set
Maybe this bug can give some context on the mem-leak (the fix was recently merged on
master but not on 3.7.x)
https://bugzilla.redhat.com/show_bug.cgi?id=1287517
~Joe
- Original Message -
> From: "Atin Mukherjee"
> To: "Joseph Fernandes"
> Cc: "Gluster Devel" , "Ajil Abraham"
>
> Sent:
-Atin
Sent from one plus one
On 04-Mar-2016 6:12 am, "Joseph Fernandes" wrote:
>
> Hi Ajil,
>
> Well, a few things:
>
> 1. Whenever you see a crash, it's better to send across the backtrace (BT)
> using gdb and attach the log files (or share them via some cloud drive).
>
> 2. About the memory leak, what kind
Hi Ajil,
Well, a few things:
1. Whenever you see a crash, it's better to send across the backtrace (BT) using
gdb and attach the log files (or share them via some cloud drive).
2. About the memory leak, what kind of tools are you using for profiling
memory, valgrind? If so, please attach the valgrind r
Hi Atin,
The inputs I use are as per the requirements of a project I am working on
for one of the large finance institutions in Dubai. I will try to handle
the input validation within my code. I uncovered some of the issues while
doing thorough testing of my code.
I tried with 3.7.6 and also m
Hi Ajil,
It's good to see that you are doing thorough testing of gluster. From your
mail it looks like your automation focuses mostly on negative tests. I need a
few additional details to find out whether these are known issues:
1. Version of gluster
2. Backtrace of the crash along with reproducer
3. Amou
For my project, I am trying to do some automation using glusterd. It is
very frustrating to see it crashing frequently. It looks like input validation
is the culprit. I also see a lot of buffer overflow and memory leak issues. I am
making a note of these and will try to fix them. Surprised to see such
basi
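As a generic illustration of the input validation being discussed (not taken from glusterd itself; the function name, character set and limits are arbitrary), the idea is to check length and content before copying into any fixed-size buffer.

#include <ctype.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define NAME_MAX_LEN 255   /* arbitrary limit for this example */

/* Validate an externally supplied name and copy it into a bounded buffer.
 * Rejecting bad input up front avoids the buffer overflows described above. */
static bool
validate_and_copy_name(const char *input, char *out, size_t out_size)
{
    if (input == NULL || out == NULL)
        return false;

    size_t len = strlen(input);
    if (len == 0 || len > NAME_MAX_LEN || len >= out_size)
        return false;                      /* too long: refuse, do not truncate */

    for (size_t i = 0; i < len; i++) {
        unsigned char c = (unsigned char)input[i];
        if (!isalnum(c) && c != '-' && c != '_')
            return false;                  /* unexpected character: refuse */
    }

    snprintf(out, out_size, "%s", input);  /* bounded copy */
    return true;
}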
Hi,
Yes, with this patch we need not set conn->trans to NULL in rpc_clnt_disable
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Soumya Koduri"
> To: "Kotresh Hiremath Ravishankar" , "Raghavendra G"
>
> Cc: "Gluster Devel"
> Sent: Thursday, March 3, 2016 5:06:00 PM
> Su
On Thu, Mar 3, 2016 at 2:39 PM, Aravinda wrote:
> Thanks.
>
> We can use the shared secret if the https requirement can be completely
> avoided. I am not sure how to use the same SSL certificates on all the
> nodes of the cluster. (REST API server patch set 2 was written based on
> the shared secret method based o
- Original Message -
> From: "ABHISHEK PALIWAL"
> To: gluster-us...@gluster.org, gluster-devel@gluster.org
> Sent: Thursday, March 3, 2016 12:10:42 PM
> Subject: [Gluster-users] gluster volume heal info split brain command not
> showing files in split-brain
>
>
> Hello,
>
> In gl
On 03/03/2016 04:58 PM, Kotresh Hiremath Ravishankar wrote:
[Replying on top of my own reply]
Hi,
I have submitted the patch below [1] to avoid the issue of 'rpc_clnt_submit'
getting reconnected. But it won't take care of the memory leak problem you were
trying to fix. That we have to carefully g
[Replying on top of my own reply]
Hi,
I have submitted the patch below [1] to avoid the issue of 'rpc_clnt_submit'
getting reconnected. But it won't take care of the memory leak problem you were
trying to fix; we will have to carefully go through all cases and fix it.
Please have a look at it.
http:
Hi,
On 03/03/2016 11:14 AM, ABHISHEK PALIWAL wrote:
Hi Ravi,
As discussed earlier, I investigated this issue and found
that healing is not triggered because the "gluster volume heal
c_glusterfs info split-brain" command is not showing any entries as the
outcome of this command, even th
Hi Soumya,
I tested the latest patch [2] on master, where your previous patch [1] is merged.
I see crashes at different places.
1. If there are code paths that hold the rpc object without taking a ref on
it, all those
code paths will crash on invoking rpc submit on that object, as the rpc object
w
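To illustrate the ref-counting discipline being described, here is a minimal sketch; it is not gluster's actual rpc-clnt code, and the type and function names are made up. Any path that submits on an rpc object first takes its own reference, so a concurrent disconnect or cleanup cannot free the object underneath it.

#include <stdatomic.h>
#include <stdlib.h>

struct rpc_obj {
    atomic_int refcount;
    /* connection state, transport, etc. */
};

static struct rpc_obj *
rpc_ref(struct rpc_obj *rpc)
{
    if (rpc)
        atomic_fetch_add(&rpc->refcount, 1);
    return rpc;
}

static void
rpc_unref(struct rpc_obj *rpc)
{
    if (rpc && atomic_fetch_sub(&rpc->refcount, 1) == 1)
        free(rpc);                  /* last reference dropped: safe to destroy */
}

static int
rpc_submit(struct rpc_obj *rpc)
{
    /* caller is expected to hold a reference while this runs */
    (void)rpc;
    return 0;
}

static int
caller_path(struct rpc_obj *rpc)
{
    rpc_ref(rpc);                   /* take a ref before using the object */
    int ret = rpc_submit(rpc);
    rpc_unref(rpc);                 /* drop it only when done */
    return ret;
}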
Update on the lock migration design.
For lock migration we are planning to get rid of the fd association with the lock.
Instead we will base our lock operations
on lk-owner (the equivalent of pid), which is the POSIX standard. The fd
association does not suit the needs of lock migration,
as a migrated fd will no
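A minimal sketch of what fd-free, lk-owner-keyed locks could look like; the types below are simplified stand-ins for illustration, not gluster's actual lock structures.

#include <stdint.h>
#include <string.h>

typedef struct {
    uint8_t data[64];
    size_t  len;
} lk_owner_t;                        /* simplified stand-in for the real type */

struct posix_lock {
    lk_owner_t         owner;        /* key: lk-owner, not the fd */
    uint64_t           fl_start;
    uint64_t           fl_end;
    int                fl_type;      /* F_RDLCK / F_WRLCK / F_UNLCK */
    struct posix_lock *next;
};

static int
lk_owner_equal(const lk_owner_t *a, const lk_owner_t *b)
{
    return a->len == b->len && memcmp(a->data, b->data, a->len) == 0;
}

/* Match an existing lock by owner; the fd it was acquired on never enters
 * the comparison, which is what lets the lock survive migration. */
static struct posix_lock *
find_lock_by_owner(struct posix_lock *head, const lk_owner_t *owner)
{
    for (; head; head = head->next)
        if (lk_owner_equal(&head->owner, owner))
            return head;
    return NULL;
}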
Hi all,
This mail initiates a discussion on how to provide QoS in Glusterfs.
The thoughts I've put in this mail are in the context of the Glusterfs architecture
up to (and including) the 3.x series. Discussions and suggestions for 4.0 are
welcome. Please note that this is very much a Work in Pro
Thanks.
We can use the shared secret if the https requirement can be completely
avoided. I am not sure how to use the same SSL certificates on all the
nodes of the cluster. (REST API server patch set 2 was written based on
the shared secret method based on custom HMAC signing:
http://review.gluster.org/#/c/13214/2
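For context, a minimal sketch of what HMAC signing with a shared secret looks like, assuming OpenSSL's libcrypto (build with -lcrypto); the secret and request string are made-up placeholders, and this is not the code from the patch above.

#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>

int
main(void)
{
    const char *secret  = "shared-secret-known-to-all-nodes";  /* placeholder */
    const char *request = "GET /v1/volumes 1457000000";        /* method, path, timestamp */
    unsigned char mac[EVP_MAX_MD_SIZE];
    unsigned int mac_len = 0;

    /* The client signs the request; the server recomputes the HMAC with its
     * own copy of the secret and compares, so no SSL certificate distribution
     * is needed for authentication. */
    HMAC(EVP_sha256(), secret, (int)strlen(secret),
         (const unsigned char *)request, strlen(request), mac, &mac_len);

    for (unsigned int i = 0; i < mac_len; i++)
        printf("%02x", mac[i]);
    printf("\n");
    return 0;
}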