Several coredumps are being generated in regression runs now [1]. Anyone
had a chance to look into this?
-Vijay
[1]
http://build.gluster.org/job/rackspace-regression-2GB-triggered/4233/consoleFull
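For anyone picking this up, a typical first step is pulling backtraces out of
the archived cores. A minimal sketch (binary and core paths are placeholders;
the real ones are in the job's console output):

    gdb --batch -ex 'thread apply all bt full' \
        /build/install/sbin/glusterfsd /archived_builds/core.1234 > bt.txt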
On 02/13/2015 12:07 AM, Niels de Vos wrote:
On Thu, Feb 12, 2015 at 11:39:51PM +0530, Pranith Kumar Karampuri wrote:
On 02/12/2015 11:34 PM, Pranith Kumar Karampuri wrote:
On 02/12/2015 08:15 PM, Xavier Hernandez wrote:
I've made some more investigation and the problem seems worse.
It seems
On 12.02.2015 19:09, Pranith Kumar Karampuri wrote:
> On 02/12/2015 11:34 PM, Pranith Kumar Karampuri wrote:
>
>> On 02/12/2015 08:15 PM, Xavier Hernandez wrote:
>>
>>> I've made some more investigation and the problem seems worse. It seems
>>> that NFS sends a huge amount of requests without w
On Thu, Feb 12, 2015 at 11:39:51PM +0530, Pranith Kumar Karampuri wrote:
>
> On 02/12/2015 11:34 PM, Pranith Kumar Karampuri wrote:
> >
> >On 02/12/2015 08:15 PM, Xavier Hernandez wrote:
> >>I've made some more investigation and the problem seems worse.
> >>
> >>It seems that NFS sends a huge amou
On 02/12/2015 01:27 PM, Xavier Hernandez wrote:
On 12.02.2015 19:09, Pranith Kumar Karampuri wrote:
On 02/12/2015 11:34 PM, Pranith Kumar Karampuri wrote:
On 02/12/2015 08:15 PM, Xavier Hernandez wrote:
I've made some more investigation and the problem seems worse. It
seems that NFS sends a
FWIW, starting/stopping a volume that is fast doesn't consistently make
it slow. I just tried it again on an older volume... It doesn't make it
slow. I also went back and re-ran the test on test3brick and it isn't
slow any longer. Maybe there is a time lag after stopping/starting a
volume be
-- Original Message --
From: "Shyam"
To: "David F. Robinson" ; "Pranith Kumar
Karampuri" ; "Justin Clift"
Cc: "Gluster Devel"
Sent: 2/12/2015 11:26:51 AM
Subject: Re: [Gluster-devel] missing files
On 02/12/2015 11:18 AM, David F. Robinson wrote:
Shyam,
You asked me to stop/start
Shyam,
You asked me to stop/start the slow volume to see if it fixed the timing
issue. I stopped/started homegfs_backup (the production volume with 40+
TB) and it didn't make it faster. I didn't stop/start the fast volume
to see if it made it slower. I just did that and sent out an email.
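For reference, the experiment described above boils down to something like
this (archive name and mount point are placeholders, not the actual setup):

    time tar xf archive.tar -C /mnt/homegfs_backup/   # baseline
    gluster volume stop homegfs_backup                # answer 'y' at the prompt
    gluster volume start homegfs_backup
    time tar xf archive.tar -C /mnt/homegfs_backup/   # re-run and compare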
That is very interesting. I tried this test and got a similar result:
stopping/starting the volume causes the timing issue on the blank volume
too. It seems like there is some parameter that gets set when you create a
volume and reset when you stop/start it. Or, something gets set duri
On 12 Feb 2015, at 11:22, Pranith Kumar Karampuri wrote:
> Just to increase confidence, I performed one more test: stopped the volumes
> and re-started. Now, on both volumes, the numbers are almost the same:
Oh. So it's a problem that turns up after a certain amount of
activity has happened on a v
I've made some more investigation and the problem seems worse.
It seems that NFS sends a huge amount of requests without waiting for
answers (I've had more than 1400 requests ongoing). Probably there will
be many factors that can influence the load this causes, and one
of them could be
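If the flood of in-flight requests turns out to be the trigger, one knob to
experiment with on the server side is gNFS's outstanding-RPC cap (option name
and value given from memory, so treat this as an assumption to verify):

    gluster volume set <volname> nfs.outstanding-rpc-limit 64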
+1 for Thursday (anytime).
-1 for Friday.
On Wed, Feb 11, 2015 at 7:22 PM, Niels de Vos wrote:
> On Mon, Feb 09, 2015 at 05:34:04PM -0500, Jeff Darcy wrote:
> > The inaugural GlusterFS 4.0 meeting on Friday was a great success.
> > Thanks to all who attended. Minutes are here:
> >
> >
> http://
On 02/12/2015 09:14 AM, Justin Clift wrote:
On 12 Feb 2015, at 03:02, Shyam wrote:
On 02/11/2015 08:28 AM, David F. Robinson wrote:
My base filesystem has 40-TB and the tar takes 19 minutes. I copied over 10-TB
and that took the tar extraction time from 1 minute to 7 minutes.
My suspicion is that
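A generic way to see where the extra minutes go (not something suggested in
the thread, just gluster's built-in profiler; volume name reused from the
earlier test) would be:

    gluster volume profile test3brick start
    time tar xf archive.tar -C /mnt/test3brick/
    gluster volume profile test3brick info    # per-FOP call counts and latency
    gluster volume profile test3brick stop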
On 02/12/2015 08:32 AM, Rudra Siva wrote:
> Rafi,
>
> I'm preparing the Phi RDMA patch for submission
If you can send a patch to support iWARP, that will be a great addition
to gluster rdma.
> - definitely
> performance is better with the buffer pre-registration fixes. My patch
> will be witho
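For context, the rdma transport discussed here is the one selected when a
volume is created (server and brick names below are hypothetical):

    gluster volume create rdmavol transport rdma server1:/bricks/b1
    gluster volume start rdmavol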
- Original Message -
From: "Emmanuel Dreyfus"
To: "Sachin Pandit"
Cc: gluster-in...@gluster.org, "Gluster Devel"
Sent: Thursday, February 12, 2015 1:24:42 PM
Subject: Re: [Gluster-devel] Skip regression run for "work in progress" patch.
On Thu, Feb 12, 2015 at 02:35:46AM -0500, Sachin