On 04/13/2016 09:15 AM, Vijay Bellur wrote:
On 04/08/2016 10:25 PM, Vijay Bellur wrote:
Hey Pranith, Ashish -
We have broken support for group virt after the following commit in
release-3.7:
commit 46920e3bd38d9ae7c1910d0bd83eff309ab20c66
Author: Ashish Pandey
Date: Fri Mar 4 13:05:09 201
On 13 April 2016 at 13:57, Atin Mukherjee wrote:
> Yes, we have a patch which is awaiting regression to pass. Post that
> Kaushal should be able to tag 3.7.11. Thanks for your patience :)
No worries, thanks. It'll be ready when it's ready.
--
Lindsay
On 04/13/2016 08:40 AM, Lindsay Mathieson wrote:
> On 7 April 2016 at 15:42, Kaushal M wrote:
>> This regression was decided as a blocker, and a decision was made to
>> do a quick GlusterFS-3.7.11 release solving it. The 3.7.11 release
>> should be available within the next 2 days.
>
>
> Is
On 04/08/2016 10:25 PM, Vijay Bellur wrote:
Hey Pranith, Ashish -
We have broken support for group virt after the following commit in
release-3.7:
commit 46920e3bd38d9ae7c1910d0bd83eff309ab20c66
Author: Ashish Pandey
Date: Fri Mar 4 13:05:09 2016 +0530
cluster/ec: Provide an option to
On 7 April 2016 at 15:42, Kaushal M wrote:
> This regression was decided as a blocker, and a decision was made to
> do a quick GlusterFS-3.7.11 release solving it. The 3.7.11 release
> should be available within the next 2 days.
Is this still happening?
--
Lindsay
Hi,
Some of you may have noticed that one of the roadmap items for GlusterFS 3.8 is
to change the default for the volume option nfs.disable from 'off' to 'on'.
If you haven't noticed, then this email will serve to call your attention to it.
Changing the nfs.disable volume option from 'off' to '
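For anyone who wants to keep serving a volume over Gluster NFS after the default flips, the option can be pinned explicitly. A minimal sketch using the standard `gluster volume get`/`gluster volume set` CLI; the volume name "myvol" is a placeholder:

```shell
# Inspect the effective value of the option (placeholder volume name "myvol")
gluster volume get myvol nfs.disable

# Keep Gluster NFS enabled for this volume regardless of the new default
gluster volume set myvol nfs.disable off

# Or follow the new default and serve NFS via NFS-Ganesha instead
gluster volume set myvol nfs.disable on
```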
On 04/12/2016 06:40 AM, Manikandan Selvaganesh wrote:
Hi all,
As discussed in the mails[1] we are implementing SELinux translator for
GlusterFS. We have a design doc[2] about the "SELinux Client Support in
GlusterFS". Comments and suggestions are highly appreciated.
[1] http://www.gluster.org/p
Hi all,
thanks for the participation today. In case you have missed the meeting,
remind yourself to join next week Wednesday at 12:00 UTC. More details
in the agenda:
https://public.pad.fsfe.org/p/gluster-community-meetings
#gluster-meeting: gluste
On 04/12/2016 05:54 PM, Jeff Darcy wrote:
tier can lead to parallel lookups in two different epoll threads on
hot/cold tiers. The race-window to hit the common-dictionary in lookup
use-after-free is too low without dict_copy_with_ref() in either ec/afr.
In either afr/ec side one thread should b
Hi all,
The feature page for lock migration can be found here:
https://github.com/gluster/glusterfs-specs/blob/master/accepted/Lock-Migration.md.
Please give your feedback on this mail thread itself or on gerrit
(http://review.gluster.org/#/c/13924/).
Thanks,
Susant
----- Original Message -----
> tier can lead to parallel lookups in two different epoll threads on
> hot/cold tiers. The race-window to hit the common-dictionary in lookup
> use-after-free is too low without dict_copy_with_ref() in either ec/afr.
> In either afr/ec side one thread should be executing dict_serialization
> in cl
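To illustrate why copying the shared dictionary closes this class of race, here is a small Python analogy (not GlusterFS code; `safe_parallel_lookup` and the names in it are made up for the sketch): each "lookup" path takes its own copy of the request dictionary before mutating it, which is the moral equivalent of using dict_copy_with_ref() instead of letting two epoll threads share one dict.

```python
# Python analogy of the tier lookup race: two paths share one request
# dictionary; one mutates it while another serializes it. Copying the
# dict per path removes the shared mutable state.
import threading

def serialize(d):
    # Stand-in for dict_serialization(): walks all entries.
    return ";".join(f"{k}={v}" for k, v in d.items())

def safe_parallel_lookup(xattr_req, n_threads=4):
    results = []
    lock = threading.Lock()

    def worker(i):
        req = dict(xattr_req)        # private copy per "tier" path
        req[f"thread-{i}"] = i       # mutation no longer visible to peers
        out = serialize(req)
        with lock:
            results.append(out)

    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

results = safe_parallel_lookup({"glusterfs.content": "1"})
print(len(results))  # one serialized result per thread
```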
On 04/12/2016 05:31 PM, Jeff Darcy wrote:
This is a memory corruption issue which is already reported and there is a
patch by Pranith in 3.7 [1] waiting to get reviews. Patch [1] will solve the
issue.
[1] : http://review.gluster.org/#/c/13574/
That patch seems to be about making and modifying
> This is a memory corruption issue which is already reported and there is a
> patch by Pranith in 3.7 [1] waiting to get reviews. Patch [1] will solve the
> issue.
> [1] : http://review.gluster.org/#/c/13574/
That patch seems to be about making and modifying a copy of xattr_req,
instead of modi
----- Original Message -----
> From: "Niels de Vos"
> To: "Karthik Subrahmanya"
> Cc: "gluster-devel"
> Sent: Tuesday, April 12, 2016 2:00:57 PM
> Subject: Re: [Gluster-devel] WORM/Retention Feature: 07-04-2016
>
> On Thu, Apr 07, 2016 at 08:18:43AM -0400, Karthik Subrahmanya wrote:
> > Hi al
Hi all,
We are planning to improve the quota enable/disable process. Currently, when
quota is enabled or disabled on a volume, a quota crawl is done from a single
mount point; this is a very slow process if there are a huge number of
files in the volume. This proposed feature will now spawn crawl process
Hi all,
As discussed in the mails[1] we are implementing SELinux translator for
GlusterFS. We have a design doc[2] about the "SELinux Client Support in
GlusterFS". Comments and suggestions are highly appreciated.
[1] http://www.gluster.org/pipermail/gluster-users/2016-March/025919.html
[2]
https
On 04/12/2016 04:01 PM, Nithya Balachandran wrote:
>
>>
>> We're getting close to that point again where nothing can pass,
>> including patches to fix regression tests, because even those can't get
>> past the gauntlet of spurious failures on other tests. Poornima
>> recently sent out a very he
Hi Vijay,
This is a memory corruption issue which is already reported and there is
a patch by Pranith in 3.7 [1] waiting to get reviews. Patch [1] will
solve the issue.
[1] : http://review.gluster.org/#/c/13574/
Regards
Rafi KC
On 04/12/2016 02:59 PM, Mohammed Rafi K C wrote:
> Hi Vijay,
>
> I
>
> We're getting close to that point again where nothing can pass,
> including patches to fix regression tests, because even those can't get
> past the gauntlet of spurious failures on other tests. Poornima
> recently sent out a very helpful list of failures requiring further
> investigation:
>
Hi Vijay,
I will take a look.
Regards
Rafi KC
On 04/12/2016 02:49 PM, Vijaikumar Mallikarjuna wrote:
> Hi,
>
> The test case "./tests/basic/tier/tier-file-create.t" looks hung.
>
> Can someone look at this issue?
>
> Below are the regression runs that are hung:
>
> https://build.gluster.org/job
Hi,
The test case "./tests/basic/tier/tier-file-create.t" looks hung.
Can someone look at this issue?
Below are the regression runs that are hung:
https://build.gluster.org/job/rackspace-regression-2GB-triggered/19659/console
https://build.gluster.org/job/rackspace-regression-2GB-triggered/196
On Thu, Apr 07, 2016 at 08:18:43AM -0400, Karthik Subrahmanya wrote:
> Hi all,
>
> This week's status:
Many thanks for sending these regular updates!
Could you create a bug and have it block the 'glusterfs-3.8.0' one? Just
put that string in the "blocks" field in bugzilla.
The next time you upd
On Tue, Apr 12, 2016 at 12:06:39PM +0530, Raghavendra Talur wrote:
> Hi All,
>
> We have now merged the first patch for Distaf libs and Distaf tests in
> Gluster git repo.[1]
> M S Vishwanath already has merge rights on gerrit for Gluster repo. I would
> like to propose Shwetha and Jonathan as add
+ gluster-devel
On 04/12/2016 11:40 AM, ies...@126.com wrote:
> Hi,
>
> I have a problem: "gluster vol heal test statistics heal-count
> replica" doesn't seem to work.
> I created a replica 2 volume on one node, like this:
>
> Volume Name: test
> Type: Distributed-Replicate
> Volume ID: 7eca4759-
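For reference, the heal-count query can be run for the whole volume or narrowed to one replica pair. A hedged sketch of the usage (the volume name "test" comes from the mail above; the brick path is a placeholder):

```shell
# Pending-heal counts for every brick of the volume
gluster volume heal test statistics heal-count

# Restricted to one replica pair, identified by any brick belonging to it
gluster volume heal test statistics heal-count replica host1:/bricks/test1
```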