Hello Prasanna,
Thank you for the deep explanations and detailed steps. I'll keep you
posted shortly on the results.
- Francis L.
Sent from my Samsung device
Original message
From: Prasanna Kalever
Date: 16-07-08 08:13 (GMT-05:00)
To:
On Sat, Jul 09, 2016 at 03:37:09AM +0800, Zhengping Zhou wrote:
> Hello all:
>
> I have submitted some patches to glusterfs through Gerrit, but there
> are always spurious regression test errors. I am wondering what to do when
> there is no relevant error message around this patch in
Hello all:
I have submitted some patches to glusterfs through Gerrit, but there
are always spurious regression test errors. I am wondering what to do when
there is no relevant error message around this patch in the Jenkins console
output. It seems I only have two choices.
The first is just
> In either of these situations, one glusterfsd process on whatever peer the
> client is currently talking to will skyrocket to *nproc* CPU usage (800%,
> 1600%) and the storage cluster is essentially useless; all other clients
> will eventually try to read or write data to the overloaded peer
(combining replies to multiple people)
Pranith:
> I agree about encouraging a specific kind of review. At the same time we need
> to make reviewing and helping users in the community as important as sending
> patches in the eyes of everyone. It is very important to know these
> statistics to move in
Hello, users and devs.
TL;DR: One gluster client can essentially cause denial of service /
availability loss to the entire gluster array. There's no way to stop it and
almost no way to find the bad client. Probably all versions (at least 3.6
and 3.7) are affected.
We have two large replicate gluster
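A hedged diagnostic sketch for anyone hitting this: assuming a stock 3.x
CLI and a volume named "myvol" (a placeholder), the following at least
shows which clients are connected and which files are hottest per brick:

    # list the clients connected to each brick, with byte counts
    gluster volume status myvol clients

    # show the files with the most open and read calls
    gluster volume top myvol open
    gluster volume top myvol read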
Does this issue have a fix pending, or is there just a bug report?
On 08.07.2016 15:12, Kaushal M wrote:
On Fri, Jul 8, 2016 at 2:22 PM, Raghavendra Gowdappa
wrote:
There seems to be a major inode leak in fuse-clients:
https://bugzilla.redhat.com/show_bug.cgi?id=1353856
Hi Francis,
The tcmu-runner is not available as a pre-built package in the Ubuntu
distro, so you need to build it on your own. Honestly, I have not tried this
myself, but the suggested solution should work for you.
glfs has been supported since the initial version of tcmu-runner, so you can
take up any
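For reference, here is a rough sketch of a from-source build on Ubuntu.
The dependency package names and build steps below are assumptions on my
side; treat tcmu-runner's own README as authoritative:

    # build dependencies (Ubuntu package names are a best guess; the glfs
    # handler also needs the glusterfs gfapi development headers)
    sudo apt-get install build-essential cmake libnl-3-dev libnl-genl-3-dev \
        libglib2.0-dev zlib1g-dev libkmod-dev glusterfs-common

    git clone https://github.com/open-iscsi/tcmu-runner.git
    cd tcmu-runner
    cmake .
    make
    sudo make install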
On Fri, Jul 8, 2016 at 2:22 PM, Raghavendra Gowdappa
wrote:
> There seems to be a major inode leak in fuse-clients:
> https://bugzilla.redhat.com/show_bug.cgi?id=1353856
>
> We have found an RCA through code reading (though we have high confidence in
> the RCA). Do we want to
Hi,
Snapshots in gluster have a scheduler, which relies heavily on crontab
and the shared storage. I would like people who are already using this
scheduler, or people willing to try it, to provide us feedback on their
experience. We are looking for feedback on ease of use, complexity of
features,
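For anyone who wants to try it, here is a minimal sketch of the workflow,
assuming the snap_scheduler.py CLI shipped with 3.7; the job and volume
names are placeholders:

    # one-time setup: shared storage must be enabled, then initialise the
    # scheduler (run init on every node of the cluster)
    gluster volume set all cluster.enable-shared-storage enable
    snap_scheduler.py init

    # turn scheduling on and add a job: snapshot "myvol" every 30 minutes
    snap_scheduler.py enable
    snap_scheduler.py add "half_hourly" "*/30 * * * *" "myvol"
    snap_scheduler.py list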
There seems to be a major inode leak in fuse-clients:
https://bugzilla.redhat.com/show_bug.cgi?id=1353856
We have found an RCA through code reading (though we have high confidence in
the RCA). Do we want to include this in 3.7.13?
regards,
Raghavendra.
- Original Message -
> From:
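For anyone who wants to check their own fuse mounts for this, a rough
sketch; the dump directory and section names are assumptions and vary by
version and build:

    # send SIGUSR1 to the glusterfs client process to trigger a statedump
    kill -USR1 <pid-of-glusterfs-client>

    # inspect the inode table (itable) counters in the resulting dump
    grep -i itable /var/run/gluster/glusterdump.<pid>.dump.*

An inode count that keeps growing on an otherwise idle mount would be
consistent with the leak described in the bug.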
On Fri, Jul 8, 2016 at 11:23 AM, Poornima Gurusiddaiah
wrote:
>
> Completely agree with your concern here. Keeping aside the regression
> part, a few observations and suggestions:
> As per the Maintainers guidelines (
>
On Fri, Jul 8, 2016 at 11:40 AM, Atin Mukherjee wrote:
> How about having a "review marathon" once a week by every team? In the past
> this has worked well and I don't see any reason why we can't spend 3-4 hours
> in a meeting on a weekly basis to review incoming patches on the
On Fri, Jul 8, 2016 at 9:59 AM, Pranith Kumar Karampuri
wrote:
> Could you take in http://review.gluster.org/#/c/14598/ as well? It is ready
> for merge.
>
> On Thu, Jul 7, 2016 at 3:02 PM, Atin Mukherjee wrote:
>>
>> Can you take in
How about having a "review marathon" once a week by every team? In the past
this has worked well and I don't see any reason why we can't spend 3-4 hours
in a weekly meeting to review incoming patches on the component that the
team owns.
On Fri, Jul 8, 2016 at 11:23 AM, Poornima Gurusiddaiah