Hi All,
Please rebase your patches that failed regression with the test case
bug-1153964.t.
Thanks,
Vijay
On Wednesday 24 June 2015 11:42 AM, Raghavendra Gowdappa wrote:
http://review.gluster.org/#/c/11362/ has been merged.
- Original Message -
From: Atin Mukherjee
Hi All,
The above-mentioned test case failed for me; the failure is not related to my patch.
Could someone look into it?
http://build.gluster.org/job/rackspace-regression-2GB-triggered/11267/consoleFull
Thanks and Regards,
Kotresh H R
Hi All,
As we maintain 3 release branches of GlusterFS (currently 3.5, 3.6 and 3.7)
and average about one release per week, we need more helping hands with
this task.
The responsibility includes building Fedora and EPEL RPMs using the koji
build system and deploying the RPMs to
Hi,
There will be no new or updated RPMs for Fedora 20 after today.
--
Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel
On 24 June 2015 at 14:56, Humble Devassy Chirammal humble.deva...@gmail.com
wrote:
Hi All,
As we maintain 3 release branches of GlusterFS (currently 3.5, 3.6 and 3.7)
and average about one release per week, we need more helping hands with
this task.
The responsibility includes building
On 06/24/2015 02:17 PM, Atin Mukherjee wrote:
On 06/24/2015 02:09 PM, Kotresh Hiremath Ravishankar wrote:
Hi All,
The above-mentioned test case failed for me; the failure is not related to my patch.
Could someone look into it?
IIRC, I am the author of it; let me take a look.
The logs didn't give
Hi All,
I'll be chairing this meeting today.
In about 45 minutes from now we will have the regular weekly Gluster
Community meeting.
Meeting details:
- location: #gluster-meeting on Freenode IRC
- webchat: https://webchat.freenode.net/?channels=gluster-meeting
- date: every Wednesday
- time:
On 06/24/2015 03:17 PM, M S Vishwanath Bhat wrote:
On 24 June 2015 at 14:56, Humble Devassy Chirammal
humble.deva...@gmail.com wrote:
Hi All,
As we maintain 3 release branches of GlusterFS (currently 3.5, 3.6
and 3.7) and average about one
It knows which bricks are up/down, but that information may not be the
latest. Will that matter?
AFAIK it's sufficient at this point to know which are up/down.
humble.deva...@gmail.com wrote:
Hi All,
As we maintain 3 release branches of GlusterFS (currently 3.5, 3.6
and 3.7) and average about one release per week, we need more helping
hands with this task.
The responsibility includes building
On 06/24/2015 07:44 PM, Soumya Koduri wrote:
On 06/24/2015 10:14 AM, Krishnan Parthasarathi wrote:
- Original Message -
I've been looking at the recent patches to redirect GF_FOP_IPC to an
active
subvolume instead of always to the first. Specifically, these:
On 06/24/2015 10:14 AM, Krishnan Parthasarathi wrote:
- Original Message -
I've been looking at the recent patches to redirect GF_FOP_IPC to an active
subvolume instead of always to the first. Specifically, these:
http://review.gluster.org/11346 for DHT
I haven't seen the patches yet. Failures can happen just at the time of
winding, leading to the same failures. It at least needs the logic of
picking the next_active_child. EC needs to lock+xattrop the bricks to
find bricks with good copies. AFR needs to perform a getxattr to find
good copies.
On 06/24/2015 08:26 PM, Jeff Darcy wrote:
I haven't seen the patches yet. Failures can happen just at the time of
winding, leading to the same failures. It at least needs the logic of
picking the next_active_child. EC needs to lock+xattrop the bricks to
find bricks with good copies. AFR needs
As we do every week, we had our Gluster Community Meeting earlier today. The
agenda for next week can be found here:
https://public.pad.fsfe.org/p/gluster-community-meetings
Please add topics to the Open Floor / BYOT item around line 66 of
the etherpad and attend the meeting next week to discuss
Infra guys,
Could you check and get back?
--
~Atin
On Wednesday 24 June 2015 11:20 PM, Atin Mukherjee wrote:
Infra guys,
Could you check and get back?
Seems to be working fine for me. Not certain why we run into transient
issues of this nature.
-Vijay
On 06/25/2015 08:50 AM, Atin Mukherjee wrote:
Infra guys,
Could you check and get back?
It's back to life now!
--
~Atin
On 06/25/2015 02:49 AM, Jeff Darcy wrote:
It knows which bricks are up/down, but that information may not be the
latest. Will that matter?
AFAIK it's sufficient at this point to know which are up/down.
In that case, we need two functions that give the first_active_child and
the next_active_child in case of
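To make the discussion concrete, here is an illustrative shell sketch of the two helpers being proposed (the real xlator code would be C; the names first_active_child and next_active_child come from this thread, and the up/down flag list is an assumption standing in for the xlator's child-up state):

```shell
# Given a list of up(1)/down(0) flags, one per child subvolume, print the
# index of the first active child, or -1 if every child is down.
first_active_child () {
    i=0
    for up in "$@"; do
        [ "$up" -eq 1 ] && { echo "$i"; return 0; }
        i=$((i + 1))
    done
    echo -1
}

# Print the index of the next active child strictly after index $1,
# or -1 if no further child is up.
next_active_child () {
    prev=$1; shift
    i=0
    for up in "$@"; do
        if [ "$i" -gt "$prev" ] && [ "$up" -eq 1 ]; then
            echo "$i"; return 0
        fi
        i=$((i + 1))
    done
    echo -1
}

first_active_child 0 1 0 1    # → 1
next_active_child 1 0 1 0 1   # → 3
```

With these two, the FOP can be wound to the first active child and rewound to the next one when a failure happens at winding time, as discussed above.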
Niels,
snip
Test 22: 81: EXPECT '4' count_lines $M0/rmtab
not ok 22 Got 2 instead of 4
snip
The above failure was observed with http://review.gluster.com/10445.
For more details -
http://build.gluster.org/job/rackspace-regression-2GB-triggered/11321/consoleFull
How should I proceed with
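For anyone reproducing this locally, a minimal sketch of the failing check follows, assuming count_lines is the usual "wc -l" wrapper from the regression test framework (the demo file path is a placeholder, not the real rmtab):

```shell
# Sketch of the helper the EXPECT check relies on (assumption: count_lines
# is a thin "wc -l" wrapper from the regression framework).
count_lines () {
    wc -l < "$1"
}

# The test expects 4 entries in the rmtab file; with only 2 lines present
# the harness reports "Got 2 instead of 4".
printf 'entry1\nentry2\n' > /tmp/rmtab.demo
count_lines /tmp/rmtab.demo   # → 2
```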
- Original Message -
Hi,
Does anyone know why glusterfs hangs with valgrind?
When do you observe the hang? I started a single-brick volume,
enabled valgrind on the bricks, and mounted it via FUSE. I didn't
observe the mount hang. Could you share the set of steps that
lead to the hang?
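In case it helps to compare notes, this is roughly how I ran the client under valgrind; only a sketch, with the volume name and mount point as placeholders:

```shell
# Run the fuse client in the foreground under valgrind. -N (--no-daemon)
# keeps glusterfs from daemonizing, so valgrind can follow the process;
# testvol and /mnt/test are placeholders for your volume and mount point.
valgrind --leak-check=full --log-file=/tmp/glusterfs-valgrind.%p.log \
    glusterfs -N --volfile-server=localhost --volfile-id=testvol /mnt/test
```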
Hi,
Does anyone know why glusterfs hangs with valgrind?
Pranith
On 06/25/2015 09:57 AM, Pranith Kumar Karampuri wrote:
Hi,
Does anyone know why glusterfs hangs with valgrind?
Pranith
Yes, I have faced it too. It used to work before, but recently it's not
working: glusterfs hangs when run with valgrind.
Not sure why it hangs.
Regards,
Raghavendra
Hi,
I see the above test case failing for my patch; the failure is not related to it.
Could someone from the AFR team look into it?
http://build.gluster.org/job/rackspace-regression-2GB-triggered/11332/consoleFull
Thanks and Regards,
Kotresh H R
nbslave7{2,4,5,h} had hung mounts, which led to hung regression jobs.
I've rebooted these vms and retriggered the hung tests.
~kaushal