> On 03-Aug-2020, at 13:58, Aravinda VK wrote:
>
> Interesting numbers. Thanks for the effort.
>
> What is the unit of old/new numbers? seconds?
Minutes.
>
>> On 03-Aug-2020, at 12:47 PM, Susant Palai <spa...@redhat.com> wrote:
>>
>
at 11:16 AM Susant Palai wrote:
> Hi,
> Recently, we have pushed some performance improvements for Rebalance
> Crawl, which used to consume a significant amount of time out of the entire
> rebalance process.
>
>
> The patch [1] has recently been merged upstream and may land in an upcoming release.
Would request our community to try out the feature and give us feedback.
More information regarding the same will follow.
Thanks & Regards,
Susant Palai
[1] https://review.gluster.org/#/c/glusterfs/+/24443/
> On Mon, Jul 13, 2020 at 1:35 PM Susant Palai <spa...@redhat.com> wrote:
The log messages are fine. Since you added a new brick, the client is
responding to that by syncing its in-memory layout with the latest server layout.
The performance drop could be because of the locks taken during this layout sync.
> On 02-Jul-2020, at 20:09, Shreyansh Shah
> wrote:
>
> Hi All,
>
On Fri, May 29, 2020 at 1:28 PM jifeng-call <17607319...@163.com> wrote:
> Hi All,
> I have 6 servers that form a glusterfs 2x3 distributed replication volume,
> the details are as follows:
>
> [root@node1 ~]# gluster volume info
> Volume Name: ksvd_vol
> Type: Distributed-Replicate
> Volume ID: c
> …in years - it's been the same 4 bricks.
>
> We need to get to the bottom of this.
>
> Sincerely,
> Artem
>
> --
> Founder, Android Police <http://www.androidpolice.com>, APK Mirror
> <http://www.apkmirror.com/>, Illogical Robot LLC
> beerpla.net
On Tue, May 19, 2020 at 12:15 PM Aravinda VK wrote:
>
>
> On 19-May-2020, at 12:05 PM, Susant Palai wrote:
>
>
>
> On Thu, Apr 30, 2020 at 6:31 AM Artem Russakovskii
> wrote:
>
>> Hi,
>>
>> Every time I ls large dirs in our 1x4 replicate gluster volume, I get a ton of these in the logs.
From the logs it looks like most of the directories need heal, and this
could slow down the ls -R operation. A possible reason for holes=1 in the
message could be that one of the bricks was down while the mkdir was going on, or
you might have added a new brick to the cluster recently.
On Thu, Apr 30, 2020 at 6:31 AM Artem Russakovskii
wrote:
> Hi,
>
> Every time I ls large dirs in our 1x4 replicate gluster volume, I get a
> ton of these in the logs.
>
> If I run the same ls right away again, they won't repeat, but inevitably,
> in a couple of hours or days, they show up again.
> …a bug.
>
Ok, then please file a bug with the details and we can discuss there.
Susant
> Thx.
>
> On 13.03.2019 08:33:35, Susant Palai wrote:
>
>
>
> On Tue, Mar 12, 2019 at 5:16 PM Taste-Of-IT
> wrote:
>
> Hi Susant,
>
> and thanks for your fast reply and
> transport.address-family: inet
>
> Ok, since there is enough disk space on the other bricks and I actually didn't
> complete brick-remove, can I rerun brick-remove to rebalance the last files and
> folders?
>
> Thanks
> Taste
>
>
> On 12.03.2019 10:49:13, Susant Palai wrote:
>
> Would it be possible for you to pass the rebalance log file?

Given there is enough disk space available on the target nodes, you can start
remove-brick again and it should move out the remaining set of files to the
other bricks.
>
Would it be possible for you to pass the rebalance log file on the node
from which you want to remove the brick? (location :
/var/log/glusterfs/)
+ the following information:
1 - gluster volume info
2 - gluster volume status
3 - df -h output on all 3 nodes
Susant
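The items requested above can be collected in one go with a short script (a sketch; the volume name is a placeholder and the commands need a running gluster cluster, so run it on each node):

```shell
# Collect the diagnostics requested above into one file per node.
# VOLNAME is a placeholder; adjust for your volume.
VOLNAME=myvol
{
  gluster volume info "$VOLNAME"
  gluster volume status "$VOLNAME"
  df -h
} > "/tmp/gluster-diag-$(hostname).txt" 2>&1
# The rebalance log itself lives under /var/log/glusterfs/ on the node
# from which the brick is being removed.
```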
On Tue, Mar 12, 2019 at 3:08
This does not restrict tiered migrations.
Susant
On 18 Jan 2018 8:18 pm, "Milind Changire" wrote:
On Tue, Jan 16, 2018 at 2:52 PM, Raghavendra Gowdappa
wrote:
> All,
>
> Patch [1] prevents migration of opened files during rebalance operation.
> If patch [1] affects you, please voice out your concerns.
…this can lead to data inconsistency in the files, owing to successful writes
by more than one client on a file incorrectly.
In this talk, I will present the design of lock migration, its status, and how
it solves the problem of data inconsistency.
Thanks,
Susant
Susant Palai
- Original Message -
> From: "Sergei Gerasenko"
> To: gluster-users@gluster.org
> Sent: Wednesday, 3 August, 2016 6:46:45 PM
> Subject: [Gluster-users] gluster reverting directory ownership?
>
> Hi,
>
> It seems that glusterfsd reverts ownership…
> From: "Wade Holler"
> To: "Susant Palai"
> Cc: gluster-users@gluster.org
> Sent: Thursday, 7 July, 2016 5:39:44 PM
> Subject: Re: [Gluster-users] rebalance immediately fails 3.7.11, 3.7.12, 3.8.0
>
> Ok. Could you please point me to a guide or documentation or instructions on…
Hi Wade,
Would request you to give the rebalance core file for further analysis.
Thanks,
Susant
- Original Message -
> From: "Wade Holler"
> To: gluster-users@gluster.org
> Sent: Wednesday, 6 July, 2016 12:07:05 AM
> Subject: [Gluster-users] rebalance immediately fails 3.7.11, 3.7.12, 3.8.0
Hi,
Please pass on the rebalance log from the 1st server for more analysis which
can be found under /var/log/glusterfs/"$VOL-rebalance.log".
And also we need the current layout xattrs from both the bricks, which can be
extracted by the following command.
"getfattr -m . -de hex <$BRICK_PATH>".
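Assuming a brick path like /bricks/brick1/vol (a placeholder), the command expands to, for example:

```shell
# Dump all extended attributes (hex-encoded) of a brick's root directory.
# The trusted.glusterfs.dht value carries this brick's layout hash range.
# Run as root on each brick node; the path is a placeholder.
getfattr -m . -d -e hex /bricks/brick1/vol
```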
…the newly added brick. The other 2 processes seem normal. If that happens again, I
will send you the state dump.
Thank you.
PuYun
From: Susant Palai
Date: 2015-12-17 14:50
To: PuYun
CC: gluster-users
Subject: Re: [Gluster-users] How to diagnose volume rebalance failure?
…helpful?
- Original Message -
From: "Susant Palai"
To: "PuYun"
Cc: "gluster-users"
Sent: Thursday, 17 December, 2015 12:20:16 PM
Subject: Re: [Gluster-users] How to diagnose volume rebalance failure?
Hi PuYun,
Would you be able to run rebalance again and take state-dumps at intervals
when you see high mem-usage? Here are the details.
##How to generate statedump
We can find the directory where statedump files are created using 'gluster
--print-statedumpdir' command.
Create that directory if it does not exist.
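A minimal sketch of the statedump procedure above, assuming the volume-level `gluster volume statedump` command is available on your release (volume name and interval are placeholders):

```shell
# Take periodic statedumps of the running gluster processes while the
# memory usage is high. VOLNAME and the interval are placeholders.
VOLNAME=myvol
statedir=$(gluster --print-statedumpdir)   # usually /var/run/gluster
mkdir -p "$statedir"                       # create it if missing
for i in 1 2 3; do
  gluster volume statedump "$VOLNAME"      # dump files land in $statedir
  sleep 300                                # wait 5 minutes between dumps
done
```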
Hi PuYun,
We need to figure out some mechanism to get the huge log files. Until then,
here is something I think could be affecting the performance.
Rebalance normally starts at the medium level [performance-wise], which in this
case means it will generate two threads for migration…
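If your release supports it, the number of migration threads can be tuned through the cluster.rebal-throttle option (values lazy, normal, aggressive); a hedged example with a placeholder volume name:

```shell
# Drop rebalance to a single migration thread ("lazy") to reduce load,
# or raise it to "aggressive" for more parallel migrations.
gluster volume set myvol cluster.rebal-throttle lazy
```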
Hi Marco,
Can you send the stat of the files from the removed-brick?
Susant
- Original Message -
From: "Marco Lorenzo Crociani"
To: gluster-users@gluster.org
Sent: Tuesday, 27 October, 2015 6:58:51 PM
Subject: [Gluster-users] Missing files after add new bricks and remove old ones
Hi,
If the file creation hashes to the brick which is down, then it fails with
ENOENT.
Susant
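The failure mode above can be illustrated with a toy sketch of name-based brick selection (the `brick_for` helper is hypothetical; real DHT hashes the name into a 32-bit range and matches it against per-directory layout ranges stored in the trusted.glusterfs.dht xattr, not a simple modulo):

```shell
# Toy model: each file name deterministically maps to one brick, so if
# that brick is down the create has nowhere else to go and fails.
# (Illustrative only; not gluster's actual elastic-hash algorithm.)
brick_for() {
    name=$1
    nbricks=$2
    # cksum gives a stable CRC of the file name
    h=$(printf '%s' "$name" | cksum | cut -d' ' -f1)
    echo $(( h % nbricks ))
}

brick_for report.txt 2   # the same name always selects the same brick
```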
- Original Message -
From: "Leonid Isaev"
To: gluster-users@gluster.org
Sent: Thursday, 8 October, 2015 7:54:07 AM
Subject: [Gluster-users] Writing to distributed (non-replicated) volume with
Comments inline.
- Original Message -
From: "Mohamed Pakkeer"
To: "Susant Palai"
Cc: "Mathieu Chateau" , "gluster-users"
, "Gluster Devel" ,
"Vijay Bellur" , "Pranith Kumar Karampuri"
, "Ashish Pand
comments inline.
++Ccing Pranith and Ashish to detail on disperse behaviour.
- Original Message -
From: "Mohamed Pakkeer"
To: "Susant Palai" , "Vijay Bellur"
Cc: "Mathieu Chateau" , "gluster-users"
, "Gluster Devel"
Sent:
Mohamed,
Will investigate the weighted rebalance behavior.
Susant
- Original Message -
From: "Mohamed Pakkeer"
To: "Susant Palai"
Cc: "Mathieu Chateau" , "gluster-users"
, "Gluster Devel"
Sent: Tuesday, 25 August, 2015 9:40:0
Hi,
cluster.min-free-disk controls new file creation on the bricks. If you happen
to write to existing files on a brick and that is leading to the brick
getting full, then most probably you should run a rebalance.
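For reference, a sketch of the two knobs mentioned above (the volume name is a placeholder):

```shell
# Reserve 10% free space per brick: new files that would hash to a
# near-full brick are created on another subvolume instead.
gluster volume set myvol cluster.min-free-disk 10%

# If existing files have grown and a brick is already full, rebalance:
gluster volume rebalance myvol start
gluster volume rebalance myvol status
```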
Regards,
Susant
- Original Message -
From: "Mathieu Chateau"
To: "Mo
We found a similar crash and the fix for the same is here
http://review.gluster.org/#/c/10389/. You can find the RCA in the commit
message.
Regards,
Susant
- Original Message -
> From: "Dang Zhiqiang"
> To: gluster-users@gluster.org
> Sent: Monday, 25 May, 2015 3:30:16 PM
> Subject: [G
Comments inline.
- Original Message -
> From: "Subrata Ghosh"
> To: gluster-de...@gluster.org, gluster-users@gluster.org
> Cc: "Nobin Mathew" , "Susant Palai"
> , "Vijay Bellur"
>
> Sent: Thursday, 21 May, 2015 4:26:05 PM
I am unaware of the self-heal and rebalance interaction, but the rebalance and
mount logs will be helpful here.
+CCING Ravi
- Original Message -
> From: "Ben Turner"
> To: "Alex" , "Susant Palai"
> Cc: gluster-users@gluster.org
> Sent: Wednesday, May 6
We have addressed a few parts of the rebalance performance, which should be
backported to 3.7 soon.
Regards,
Susant
- Original Message -
> From: "Raghavendra Bhat"
> To: "Alex Crow"
> Cc: gluster-users@gluster.org
> Sent: Thursday, 30 April, 2015 2:30:41 PM
> Subject: Re: [Gluster-users]
> From: "Sharad Shukla"
> To: "Susant Palai"
> Cc: "gluster-users"
> Sent: Thursday, April 23, 2015 6:14:54 PM
> Subject: Re: [Gluster-users] Files not visible under mount point
>
> Hi Susant,
>
> i send you the xattrs of a file from the brick and from
…"getfattr -m . -de hex <$BRICK_PATH>".
Regards,
Susant
- Original Message -
> From: "Sharad Shukla"
> To: "Susant Palai"
> Cc: "gluster-users"
> Sent: Thursday, 23 April, 2015 1:09:48 PM
> Subject: Re: [Gluster-users] Files not visible under mount point
>
> Hi Susa
Can you give the stat of the files from the brick?
- Original Message -
> From: "Sharad Shukla"
> To: Gluster-users@gluster.org
> Sent: Wednesday, 22 April, 2015 10:03:26 PM
> Subject: [Gluster-users] Files not visible under mount point
>
> Hi All,
>
> Somehow due to some hardware replacement…
- Original Message -
From: "Pierre Léonard"
To: gluster-users@gluster.org
Sent: Tuesday, 21 April, 2015 2:08:40 PM
Subject: [Gluster-users] rm: cannot remove `calendar-data': Directory not
empty
Hi All,
I have a list of directories as the following:
rm: cannot remove `calendar-data': Directory not empty
Hi Peter,
I tried your scenario on my setup [deleted the directory on one of the
brick{hashed}]. Hence, I don't see the directory on the mount point.
So what I tried is, created a fresh mount and sent a lookup on the missing
directory name.
e.g. /mnt/fresh is your new mount point. And /mnt/fres…
Hi Peter,
As I mentioned in my previous mail, you need to send fresh lookups on the
missing directories. :)
Susant
- Original Message -
From: "Peter B."
To: gluster-users@gluster.org
Sent: Tuesday, 2 December, 2014 5:29:57 PM
Subject: Re: [Gluster-users] Folder disappeared on volume,
Hi,
In case the missing directory path is known, a fresh lookup on that path will
heal the directory entry across the cluster and it will be shown on the mount
point.
e.g on the mount point: ls .
* The directory may not get a fresh lookup on the existing mount. In that case,
the best thing to do is to create a fresh mount and send a lookup on the missing
directory path from there.
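A sketch of the fresh-mount approach described above (server, volume, and directory names are placeholders):

```shell
# Mount the volume at a fresh mount point and look up the missing path;
# the fresh lookup triggers DHT to heal the directory entry across bricks.
mkdir -p /mnt/fresh
mount -t glusterfs server1:/myvol /mnt/fresh
stat /mnt/fresh/path/to/missing-dir   # fresh lookup heals the entry
ls /mnt/fresh/path/to                 # directory should now be visible
```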
Hi,
Can you upload the logs ?
Susant
- Original Message -
From: "Pranith Kumar Karampuri"
To: "SINCOCK John" , gluster-users@gluster.org
Cc: "Susant Palai"
Sent: Wednesday, 18 June, 2014 7:48:19 AM
Subject: Re: [Gluster-users] Unable to delete fi
Hey, sorry, I didn't notice you had already uploaded the logs. Kaushal is looking
at the issue now.
- Original Message -
From: "Franco Broi"
To: "Susant Palai"
Cc: "Lalatendu Mohanty" , "Niels de Vos"
, "Pranith Kumar Karampuri"
Can you figure out the failure from log and update here ?
- Original Message -
From: "Franco Broi"
To: "Lalatendu Mohanty"
Cc: "Susant Palai" , "Niels de Vos" ,
"Pranith Kumar Karampuri" , gluster-users@gluster.org,
"Raghavendra Gowdappa"
Hi Lala,
Can you provide the steps to downgrade to 3.4 from 3.5 ?
Thanks :)
- Original Message -
From: "Franco Broi"
To: "Susant Palai"
Cc: "Pranith Kumar Karampuri" , gluster-users@gluster.org,
"Raghavendra Gowdappa" , kdhan...@redhat.com,
Pranith, can you send the client and brick logs?
Thanks,
Susant~
- Original Message -
From: "Pranith Kumar Karampuri"
To: "Franco Broi"
Cc: gluster-users@gluster.org, "Raghavendra Gowdappa" ,
spa...@redhat.com, kdhan...@redhat.com, vsomy...@redhat.com, nbala...@redhat.com
Sent: Wednesd
…living brick) and the current issue
looks more similar. Well, I will look at the client logs for more information.
Susant.
- Original Message -
From: "Franco Broi"
To: "Pranith Kumar Karampuri"
Cc: "Susant Palai" , gluster-users@gluster.org, "R