On 02/12/2015 08:15 PM, Xavier Hernandez wrote:
I've made some more investigation and the problem seems worse.
It seems that NFS sends a huge amount of requests without waiting for
answers (I've had more than 1400 requests ongoing). There are probably
many factors that can influence
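Xavi's observation above — the NFS client issuing well over a thousand requests without waiting for replies — suggests bounding the number of in-flight requests. A minimal sketch of such a bounded request window (illustrative only, not GlusterFS's actual code; `MAX_IN_FLIGHT` and the worker count are made-up numbers):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

MAX_IN_FLIGHT = 64   # hypothetical cap; the client observed above had 1400+ outstanding

window = threading.Semaphore(MAX_IN_FLIGHT)
lock = threading.Lock()
in_flight = 0
peak = 0

def send_request(i):
    """Stand-in for one NFS round trip; tracks the high-water mark."""
    global in_flight, peak
    with lock:
        in_flight += 1
        peak = max(peak, in_flight)
    # ... real I/O would happen here ...
    with lock:
        in_flight -= 1

def submit(executor, i):
    window.acquire()                 # blocks once MAX_IN_FLIGHT are outstanding
    fut = executor.submit(send_request, i)
    fut.add_done_callback(lambda _: window.release())
    return fut

with ThreadPoolExecutor(max_workers=16) as ex:
    for i in range(1000):
        submit(ex, i)

print("peak concurrent requests:", peak)
```

The semaphore makes the submitter block instead of queueing unboundedly, so the outstanding-request count can never exceed the cap.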
+1 for Thursday (anytime).
-1 for Friday.
On Wed, Feb 11, 2015 at 7:22 PM, Niels de Vos nde...@redhat.com wrote:
On Mon, Feb 09, 2015 at 05:34:04PM -0500, Jeff Darcy wrote:
The inaugural GlusterFS 4.0 meeting on Friday was a great success.
Thanks to all who attended. Minutes are here:
On 02/12/2015 03:05 PM, Pranith Kumar Karampuri wrote:
On 02/12/2015 09:14 AM, Justin Clift wrote:
On 12 Feb 2015, at 03:02, Shyam srang...@redhat.com wrote:
On 02/11/2015 08:28 AM, David F. Robinson wrote:
My base filesystem has 40-TB and the tar takes 19 minutes. I copied over 10-TB
and it took the tar extraction from 1 minute to 7 minutes.
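David's tar comparison can be scripted so that the same extraction is timed on each volume. A minimal sketch (a small sample tarball is generated on the fly to keep it self-contained; in the real comparison `dest` would point at the mounted gluster volume, and the paths here are hypothetical):

```python
import os
import tarfile
import tempfile
import time
from pathlib import Path

def time_extract(tar_path, dest):
    """Extract tar_path into dest and return elapsed seconds."""
    start = time.monotonic()
    with tarfile.open(tar_path) as tf:
        tf.extractall(dest)
    return time.monotonic() - start

# Build a small sample tarball; the real test used a much larger
# archive on a 40-TB volume.
src = tempfile.mkdtemp()
for i in range(100):
    Path(src, f"file{i}.txt").write_text("x" * 1024)
tar_path = os.path.join(tempfile.mkdtemp(), "sample.tar")
with tarfile.open(tar_path, "w") as tf:
    tf.add(src, arcname="sample")

# In the real comparison, dest would sit under the gluster mount
# (e.g. the homegfs_backup mount point -- hypothetical here).
dest = tempfile.mkdtemp()
elapsed = time_extract(tar_path, dest)
print(f"extraction took {elapsed:.3f}s")
```

Running the same script against the fast and slow volumes gives directly comparable numbers for the two setups.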
On 02/13/2015 12:07 AM, Niels de Vos wrote:
On Thu, Feb 12, 2015 at 11:39:51PM +0530, Pranith Kumar Karampuri wrote:
Several coredumps are being generated in regression runs now [1]. Anyone
had a chance to look into this?
-Vijay
[1]
http://build.gluster.org/job/rackspace-regression-2GB-triggered/4233/consoleFull
Gluster-devel mailing list
Shyam,
You asked me to stop/start the slow volume to see if it fixed the timing
issue. I stopped/started homegfs_backup (the production volume with 40+
TB) and it didn't make it faster. I didn't stop/start the fast volume
to see if it made it slower. I just did that and sent out an email.
On 12 Feb 2015, at 11:22, Pranith Kumar Karampuri pkara...@redhat.com wrote:
snip
Just to increase confidence, I performed one more test: stopped the volumes and
re-started them. Now on both volumes, the numbers are almost the same:
Oh. So it's a problem that turns up after a certain amount of
That is very interesting. I tried this test and received a similar
result. Starting/stopping the volume causes a timing issue on the blank
volume. It seems like some parameter gets set when you create a volume
and reset when you stop/start it. Or, something gets set
-- Original Message --
From: Shyam srang...@redhat.com
To: David F. Robinson david.robin...@corvidtec.com; Pranith Kumar
Karampuri pkara...@redhat.com; Justin Clift jus...@gluster.org
Cc: Gluster Devel gluster-devel@gluster.org
Sent: 2/12/2015 11:26:51 AM
Subject: Re: [Gluster-devel]
FWIW, starting/stopping a volume that is fast doesn't consistently make
it slow. I just tried it again on an older volume... It doesn't make it
slow. I also went back and re-ran the test on test3brick and it isn't
slow any longer. Maybe there is a time lag after stopping/starting a
volume