/home/jenkins/root/workspace/rackspace-regression-2GB/xlators/cluster/dht/src/dht-common.c:
In function ‘dht_lookup_everywhere_done’:
/home/jenkins/root/workspace/rackspace-regression-2GB/xlators/cluster/dht/src/dht-common.c:1229:
warning: implicit declaration of function
Hi Ramon,
Sorry for replying so late.
Our disks are enterprise SATA disks spinning at 7200 RPM, so it is not a
disk issue.
Over the past few days, we traced the code into the fuse kernel module and
found out that it is a problem in the kernel fuse module.
Here is a code fragment from fs/fuse/file.c of
Hi,
Does anyone have some time to review these ec patches?
http://review.gluster.org/8368/ - Fix spurious crash
http://review.gluster.org/8369/ - Improve performance
http://review.gluster.org/8413/ - Remove Intel's SSE2 dependency
http://review.gluster.org/8420/ - Fix spurious crash
Thank you
Hi
I am tracking a bug that appears when running self_heald.t on NetBSD.
The test will hang on:
EXPECT $HEAL_FILES afr_get_pending_heal_count $V0
The problem inside afr_get_pending_heal_count occurs when calling
gluster volume heal $vol info
The command will never return. By adding a lot of
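When a CLI call such as `gluster volume heal $vol info` can block forever, one defensive pattern (a sketch, not something from this thread) is to bound the call with coreutils `timeout(1)` so the test fails fast instead of hanging:

```shell
#!/bin/sh
# Sketch: wrap a command that may never return with timeout(1) so a
# test script fails fast instead of hanging. run_bounded is a made-up
# helper name for this example.
run_bounded() {
    limit="$1"; shift
    timeout "$limit" "$@"
}

# A command that finishes in time succeeds normally:
run_bounded 5 true && echo "fast command: ok"

# A command that hangs is killed; timeout exits with status 124:
run_bounded 1 sleep 10
[ $? -eq 124 ] && echo "hanging command: timed out"
```

This does not fix the underlying hang, but it turns a stuck test run into a diagnosable failure.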
On 09/10/2014 12:22 AM, Justin Clift wrote:
On 09/09/2014, at 7:47 PM, Vijay Bellur wrote:
On 08/06/2014 06:26 PM, Justin Clift wrote:
- Original Message -
Did we get to break the tie? :)
Yep. Latest results are:
* 5:30 PM IST / 12:00 UTC - 47 votes (52%)
* 6:30 PM IST /
Same for slave21 and slave24. They have newly generated ssh keys,
so don't be stressed if you get warnings about that. ;)
+ Justin
On 09/09/2014, at 10:15 PM, Justin Clift wrote:
Just an FYI, slave22 in Rackspace went weird and couldn't
be rebooted. So a new VM has been created and put in
Reminder!!!
The weekly Gluster Community meeting is in 1 hour, in
#gluster-meeting on IRC.
This is a completely public meeting, everyone is encouraged
to attend and be a part of it. :)
To add Agenda items
***
Add them under the "Other items to discuss" point on the Etherpad:
On 09/08/2014 11:02 PM, Gluster Build System wrote:
SRC:
http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.4.6beta1.tar.gz
This release is made off jenkins-release-87
-- Gluster Build System
___
Gluster-users mailing list
yes, I'll take a look,
- Original Message -
From: Xavier Hernandez xhernan...@datalab.es
To: gluster-devel@gluster.org
Sent: Wednesday, September 10, 2014 4:21:17 AM
Subject: [Gluster-devel] Reviewers needed for ec xlator
Hi,
does anyone have some time to review these ec patches
On 10/09/2014, at 12:00 PM, Justin Clift wrote:
Reminder!!!
The weekly Gluster Community meeting is in 1 hour, in
#gluster-meeting on IRC.
This is a completely public meeting, everyone is encouraged
to attend and be a part of it. :)
Another productive meeting, with lots of interesting
On Thu, Sep 4, 2014 at 5:42 PM, Goswin von Brederlow goswin-...@web.de wrote:
On Tue, Sep 02, 2014 at 08:20:35AM -0700, Anand Avati wrote:
On Mon, Sep 1, 2014 at 6:07 AM, Vimal A R arvi...@yahoo.in wrote:
Hello fuse-devel / fs-cache / gluster-devel lists,
I would like to propose the idea
A gentle reminder!!
Regards,
Atin
On 09/01/2014 11:07 AM, Atin Mukherjee wrote:
I would appreciate it if the following patches could get some review attention:
- http://review.gluster.org/#/c/8358/
- http://review.gluster.org/#/c/8375/
- http://review.gluster.org/#/c/8380/
-
I encourage folks to submit talks here - a really great conference with a high
density of intelligent people.
-JM
- Forwarded Message -
From: Erez Zadok erez.za...@usenix.org
To: John Walker johnm...@redhat.com
Sent: Wednesday, September 10, 2014 11:48:31 AM
Subject: FAST '15
Hi
Like everyone, I am getting concerned about the 3.6 release
getting closer while my patches await review.
Can anyone have a look? I have 3 patch sets, two of them
being obvious bug fixes (a NULL dereference, and reliance on
a specific sizeof(size_t)).
Changes on master:
http://review.gluster.org/8441
Hi,
two more patches needing reviews. They are very straightforward though:
http://review.gluster.org/8694/ - Fix spurious test failure
http://review.gluster.org/8695/ - Fix bug in ftruncate
Thank you very much for your time,
Xavi
On 10.09.2014 10:21, Xavier Hernandez wrote:
Hi,
does
Hi guys, I wanted to share my experiences with Go. I have been using it
for the past few months and I have to say I am very impressed. Instead
of writing a massive email I created a blog entry:
http://goo.gl/g9abOi
Hope this helps.
- Luis
On 09/05/2014 11:44 AM, Jeff Darcy wrote:
Does
Hi guys, I wanted to share my experiences with Go. I have been using it
for the past few months and I have to say I am very impressed. Instead
of writing a massive email I created a blog entry:
http://goo.gl/g9abOi
Fantastic. Thanks, Luis!
On 10/09/2014, at 6:19 PM, Vijay Bellur wrote:
snip
We were affected by a bug in gerrit [1] which caused inconsistencies in the
release-3.6 branch of the git repository backing our gerrit instance. I have
attempted resolving the inconsistencies and I believe we are in a sane state
as far as
On 11/09/2014, at 2:51 AM, Luis Pabón wrote:
I think the real question is, Why do we depend on core files? What does it
provide? If we rethink how we may do debugging, we may realize that we only
require core files because we are used to it and it is familiar to us. Now,
I am not saying
On 11/09/2014, at 2:46 AM, Balamurugan Arumugam wrote:
snip
WRT the glusterd problems, I see Salt already resolves most of them at the
infrastructure level. It's worth considering.
Salt used to have (~12 months ago) a reputation for being really
buggy. Any idea if that's still the case?
Apart
- Original Message -
From: Justin Clift jus...@gluster.org
To: Balamurugan Arumugam b...@gluster.com
Cc: Kaushal M kshlms...@gmail.com, gluster-us...@gluster.org, Gluster
Devel gluster-devel@gluster.org
Sent: Thursday, September 11, 2014 7:33:52 AM
Subject: Re: [Gluster-users]
Emmanuel,
The scheduling of a paused task happens when the epoll thread receives a POLLIN
event along with the response from the remote endpoint. This is contingent on
the callback issuing a synctask_wake, which will trigger the resumption of the
task (in one of the threads
Hi Folks,
I have worked on a patch [1] to ensure the glusterd statedump captures some
important in-memory data structures on a per-node basis. This will
definitely help in root-causing issues.
Following are the list of in-memory data members which are targeted in
this patch:
* Supported max/min