On Wed, Aug 10, 2016 at 1:58 PM, Serkan Çoban wrote:
> Hi,
>
> Any progress on the patch?
>
hi Serkan,
While testing the patch myself, I am seeing that it takes
more than one crawl to complete heals, even when there are no directory
hierarchies.
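For reference, one way to check whether a single crawl clears everything is
to watch the pending-heal count between crawls (the volume name below is
hypothetical):

    # list entries still pending heal; rerun after a crawl to see if the count drops
    gluster volume heal myvol info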
To be more precise, the hang is clearly seen when there is some I/O (writes) to
the mount point. Even rm -rf takes time to clear the files.
Below, the time command shows the delay. Typically it should take less than a
second, but glusterfs takes more than 5 seconds just to list 32x 2GB files.
I ran strace and it is waiting on I/O.
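A sketch of the measurement described above (the mount path is hypothetical):

    # time the listing on the FUSE mount
    time ls -l /mnt/glustervol

    # trace system calls to see where the listing blocks
    strace -f -T ls -l /mnt/glustervol 2>&1 | tail -n 20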
--
Deepak
-----Original Message-----
From: Vijay Bellur [mailto:vbel...@redhat.com]
Sent: Wednesday, August 10, 2016 2:17 PM
To: Deepak Naidu
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Linux (ls -l) command pauses/slow on GlusterFS
mounts
On 08/10/2016 05:12 PM, Deepak Naidu wrote:
Before we can try physical hardware, we wanted a POC on VMs.
Just a note: the VMs are decently powerful, with 18 CPUs, a 10 Gb NIC, 45 GB RAM,
and 1 TB SSD drives per node.
I don't see the ls -l command hanging when I try to list the files from the
gluster-node VMs themselves, hence the question.
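For reference, a quick comparison of the two cases (mount and brick paths are
hypothetical):

    # on a client: list through the FUSE mount (slow in this report)
    time ls -l /mnt/glustervol

    # on a gluster node: list the same files directly on the brick (fast)
    time ls -l /bricks/brick1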
--
Deepak
On 08/10/2016 04:54 PM, Deepak Naidu wrote:
Has anyone seen this issue in their environment?
--
Deepak
-----Original Message-----
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Deepak Naidu
Sent: Tuesday, August 09, 2016 9:14 PM
To: gluster-users@gluster.org
Subject: [Gluster-users] Linux (ls -l) command pauses/slow on GlusterFS mounts
Thank you very much. I just noticed that even without Ganesha NFS I see this
kind of traffic to the lo address, and the warning message about the health
status only happens when I hit 100% brick utilization, so it should be fine
anyway. I'll keep digging.
Thanks again.
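For reference, one way to inspect that loopback traffic (a sketch; the filter
is an assumption):

    # sample 20 packets on the loopback interface to see which ports are involved
    tcpdump -i lo -nn -c 20 tcp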
--
Respectfully
Mahdi
Hi,
How was the profile data collected? Was it a cumulative profile output
or an incremental profile output?
How did the initial data get written to the 20 bricks, before the read
workload was started?
What I suspect here is that you are collecting cumulative output, which
possibly includes the statistics from the initial write workload as well.
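For reference, a sketch of the two collection modes (the volume name is
hypothetical):

    # enable profiling on the volume
    gluster volume profile myvol start

    # cumulative: statistics accumulated since profiling started
    gluster volume profile myvol info cumulative

    # incremental: statistics since the previous 'info' call
    gluster volume profile myvol info incremental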
On 10 Aug 2016 at 14:17, "ML mail" wrote:
>
> Good point Gandalf! I really don't feel adventurous on a production
cluster...
>
>
This is just about the only point that keeps me away from Gluster for any
production storage.
If there isn't any official, safe, and recommended upgrade procedure …
Good point Gandalf! I really don't feel adventurous on a production cluster...
On Wednesday, August 10, 2016 2:14 PM, Gandalf Corvotempesta
wrote:
On 10 Aug 2016 at 11:59, "ML mail" wrote:
>
> Hi,
>
> The Upgrading to 3.8 guide is missing from:
> http://gluster.readthedocs.io/en/latest/Upgrade-Guide/README/
Il 10 ago 2016 11:59, "ML mail" ha scritto:
>
> Hi,
>
> The Upgrading to 3.8 guide is missing from:
>
>
> http://gluster.readthedocs.io/en/latest/Upgrade-Guide/README/
>
Additionally, all the upgrade guides say the following about a rolling
upgrade: "feel adventurous".
So, the …
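For context, the rolling upgrade the guides describe boils down to roughly
the following per server; this is a sketch assuming a replicated volume,
RPM-based packages, and a hypothetical volume name:

    # stop all gluster processes on this server
    killall glusterfs glusterfsd glusterd

    # upgrade the packages (distribution-specific)
    yum update glusterfs-server

    # restart the management daemon; bricks come back up
    systemctl start glusterd

    # wait until no entries are pending heal before touching the next server
    gluster volume heal myvol info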
Hi,
The Upgrading to 3.8 guide is missing from:
http://gluster.readthedocs.io/en/latest/Upgrade-Guide/README/
Regards,
ML
Hi,
Any progress on the patch?
On Thu, Aug 4, 2016 at 10:16 AM, Pranith Kumar Karampuri
wrote:
>
>
> On Thu, Aug 4, 2016 at 11:30 AM, Serkan Çoban wrote:
>>
>> Thanks Pranith,
>> I am waiting for the RPMs to show up; I will do the tests as soon as