Re: [Gluster-devel] Pull Request review workflow

2020-10-15 Thread Ashish Pandey
I think it's a very good suggestion; I have faced this issue too. I think we should do it now before we get used to the current process :) --- Ashish - Original Message - From: "Xavi Hernandez" To: "gluster-devel" Sent: Thursday, October 15, 2020 6:16:06 PM Subject: Re:

Re: [Gluster-devel] Removing problematic language in geo-replication

2020-07-22 Thread Ashish Pandey
1. Can I replace master:slave with primary:secondary everywhere in the code and the CLI? Are there any suggestions for more appropriate terminology? >> Other options could be - Leader : follower 2. Is it okay to target the changes to a major release (release-9) and *not* provide backward

Re: [Gluster-devel] [Gluster-users] "Transport endpoint is not connected" error + long list of files to be healed

2019-11-13 Thread Ashish Pandey
Hi Mauro, Yes, it will take time to heal these files, and the time depends on the number of files/directories you created and the amount of data you wrote while the bricks were down. You can just run the following command and keep observing whether the count is changing or not - gluster volume
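A minimal sketch of how the heal progress above can be watched; the volume name "myvol" is a placeholder and the grep pattern assumes the usual "Number of entries:" lines printed by heal info:
gluster volume heal myvol info | grep "Number of entries"                # pending-heal count per brick
watch -n 60 'gluster volume heal myvol info | grep "Number of entries"'  # re-check every minute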

[Gluster-devel] Gluster Community Meeting : 2019-07-09

2019-07-09 Thread Ashish Pandey
Hi All, Today we had the Gluster Community Meeting, and the minutes of the meeting can be found at the following link - https://github.com/gluster/community/blob/master/meetings/2019-07-09-Community_meeting.md --- Ashish ___ Community Meeting Calendar:

[Gluster-devel] Gluster Community Meeting (APAC friendly hours)

2019-07-08 Thread Ashish Pandey
Calendar invite: Gluster Community Meeting, 2019-07-09, 11:30-12:30 (Asia/Kolkata). Organizer: Ashish Pandey (aspan...@redhat.com); attendees: the gluster.org list and Ashish Pandey (RSVP requested). Status: confirmed.

Re: [Gluster-devel] Should we enable features.locks-notify.contention by default ?

2019-05-30 Thread Ashish Pandey
- Original Message - From: "Xavi Hernandez" To: "Ashish Pandey" Cc: "Amar Tumballi Suryanarayan" , "gluster-devel" Sent: Thursday, May 30, 2019 2:03:54 PM Subject: Re: [Gluster-devel] Should we enable features.locks-notify.contention by

[Gluster-devel] Meeting Details on footer of the gluster-devel and gluster-user mailing list

2019-05-07 Thread Ashish Pandey
Hi, When we send a mail to the gluster-devel or gluster-users mailing list, the following content gets auto-generated and placed at the end of the mail. Gluster-users mailing list gluster-us...@gluster.org https://lists.gluster.org/mailman/listinfo/gluster-users Gluster-devel mailing list

Re: [Gluster-devel] Should we enable contention notification by default ?

2019-05-02 Thread Ashish Pandey
s value and what should be the default value of this option? --- Ashish - Original Message - From: "Xavi Hernandez" To: "gluster-devel" Cc: "Pranith Kumar Karampuri" , "Ashish Pandey" , "Amar Tumballi" Sent: Thursday, May 2,

Re: [Gluster-devel] [Gluster-users] Gluster : Improvements on "heal info" command

2019-03-06 Thread Ashish Pandey
r : Improvements on "heal info" command Hi , This sounds nice. I would like to ask if the order is starting from the local node's bricks first ? (I am talking about --brick=one) Best Regards, Strahil Nikolov On Mar 5, 2019 10:51, Ashish Pandey wrote: Hi All, We have o

[Gluster-devel] Gluster : Improvements on "heal info" command

2019-03-05 Thread Ashish Pandey
Hi All, We have observed, and heard from gluster users, that the "heal info" command takes a long time. Even when all we want to know is whether a gluster volume is healthy or not, the command has to list all the files from all the bricks before we can be sure.
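For a quick health check without listing every entry, newer Gluster releases also accept a summary form of the command; the volume name is a placeholder and availability depends on the version in use:
gluster volume heal myvol info summary   # per-brick pending/split-brain counts only, no file listing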

Re: [Gluster-devel] Release 6: Kick off!

2019-01-23 Thread Ashish Pandey
Following is the patch I am working on and targeting - https://review.gluster.org/#/c/glusterfs/+/21933/ It is in the review phase and yet to be merged. -- Ashish - Original Message - From: "RAFI KC" To: "Shyam Ranganathan" , "GlusterFS Maintainers" , "Gluster Devel" Sent:

Re: [Gluster-devel] Regression health for release-5.next and release-6

2019-01-14 Thread Ashish Pandey
I downloaded the logs of regression runs 1077 and 1073 and tried to investigate. In both regressions, ec/bug-1236065.t is hanging on TEST 70, which is trying to get the online brick count. I can see in the mount/brick and glusterd logs that it has not moved forward after this test. glusterd.log -

Re: [Gluster-devel] Master branch lock down: RCA for tests (ec-1468261.t)

2018-08-12 Thread Ashish Pandey
for that. --- Ashish - Original Message - From: "Ashish Pandey" To: "Shyam Ranganathan" Cc: "GlusterFS Maintainers" , "Gluster Devel" Sent: Monday, August 13, 2018 10:54:16 AM Subject: Re: [Gluster-devel] Master branch lock down: RCA for tests (ec-14682

Re: [Gluster-devel] Master branch lock down: RCA for tests (ec-1468261.t)

2018-08-12 Thread Ashish Pandey
RCA - https://lists.gluster.org/pipermail/gluster-devel/2018-August/055167.html Patch - https://review.gluster.org/#/c/glusterfs/+/20657/ should also fix this issue. Checking if we can add an extra test to make sure the bricks are connected to shd before heal begins. Will send a patch for that.

Re: [Gluster-devel] Master branch lock down status

2018-08-08 Thread Ashish Pandey
I think the problem with this failure is the same one Shyam suspected for the other EC failure. Connections to the bricks are not being set up after killing the bricks and starting the volume using force. ./tests/basic/ec/ec-1468261.t - Failure reported - 23:03:05 ok 34,

Re: [Gluster-devel] [Gluster-users] Integration of GPU with glusterfs

2018-01-15 Thread Ashish Pandey
It is disappointing to see the limitation imposed by Nvidia on low-cost GPU usage in data centers. https://www.theregister.co.uk/2018/01/03/nvidia_server_gpus/ We thought of providing an option in glusterfs by which we can control whether we want to use the GPU or not. So, the concern of gluster

Re: [Gluster-devel] [Gluster-users] Integration of GPU with glusterfs

2018-01-11 Thread Ashish Pandey
I have updated the comment. Thanks!!! --- Ashish - Original Message - From: "Shyam Ranganathan" <srang...@redhat.com> To: "Ashish Pandey" <aspan...@redhat.com> Cc: "Gluster Devel" <gluster-devel@gluster.org> Sent: Thursday, Jan

[Gluster-devel] Integration of GPU with glusterfs

2018-01-10 Thread Ashish Pandey
Hi, We have been thinking of exploiting GPU capabilities to enhance the performance of glusterfs, and we would like to know others' thoughts on this. In EC, we have been doing CPU-intensive computations to encode and decode data before writing and reading. This requires a lot of CPU cycles and we have

Re: [Gluster-devel] Regression failure : /tests/basic/ec/ec-1468261.t

2017-11-06 Thread Ashish Pandey
gluster.org>, "Xavi Hernandez" <jaher...@redhat.com>, "Ashish Pandey" <aspan...@redhat.com> Sent: Monday, November 6, 2017 6:35:24 PM Subject: Regression failure : /tests/basic/ec/ec-1468261.t Can someone take a look at this? The run was aborted ( https://

Re: [Gluster-devel] Need inputs on patch #17985

2017-08-22 Thread Ashish Pandey
endra Gowdappa" <rgowd...@redhat.com> To: "Ashish Pandey" <aspan...@redhat.com> Cc: "Pranith Kumar Karampuri" <pkara...@redhat.com>, "Xavier Hernandez" <xhernan...@datalab.es>, "Gluster Devel" <gluster-devel@gluster.org> Sent:

Re: [Gluster-devel] High load on CPU due to glusterfsd process

2017-08-02 Thread Ashish Pandey
Hi, The issue you are seeing is a little complex, but the information you have provided is very limited. Please share: - Volume info - Volume status - What kind of IO is going on? - Is any brick down? - A snapshot of the top command. - Anything you are seeing in the glustershd, mount, or brick logs?
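A sketch of the kind of diagnostics being asked for; the volume name and brick PID are placeholders:
gluster volume info myvol
gluster volume status myvol detail
top -H -b -n 1 -p <PID-of-glusterfsd>   # per-thread CPU usage of the busy brick process
# default log locations: /var/log/glusterfs/glustershd.log and /var/log/glusterfs/bricks/*.log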

Re: [Gluster-devel] Glusto failures with dispersed volumes + Samba

2017-07-05 Thread Ashish Pandey
Hi Nigel, As Pranith has already mentioned, we are getting different gfids in loc and loc->inode. It looks like an issue with DHT. If a revalidate fails for the gfid, a fresh lookup should be done. I don't know if it is related or not, but a similar bug was fixed by Pranith

Re: [Gluster-devel] Disperse volume : Sequential Writes

2017-07-04 Thread Ashish Pandey
I think it is a good idea. Maybe we can add more enhancements in this xlator to improve things in the future. - Original Message - From: "Pranith Kumar Karampuri" <pkara...@redhat.com> To: "Ashish Pandey" <aspan...@redhat.com> Cc: "Xavier Hernand

[Gluster-devel] BUG: Code changes in EC as part of Brick Multiplexing

2017-06-22 Thread Ashish Pandey
Hi, There are some code changes in EC which are impacting the response time of gluster v heal info. I have sent the following patch to initiate the discussion on this and to understand why this code change was done. https://review.gluster.org/#/c/17606/1 ec: Increase

Re: [Gluster-devel] Build failed in Jenkins: regression-test-with-multiplex #60

2017-06-12 Thread Ashish Pandey
Ok, I will check if this is catching the data corruption or not after modifying the code in EC. Initially it was not doing so. - Original Message - From: "Atin Mukherjee" <amukh...@redhat.com> To: "Ashish Pandey" <aspan...@redhat.com> Cc

Re: [Gluster-devel] Build failed in Jenkins: regression-test-with-multiplex #60

2017-06-11 Thread Ashish Pandey
are trying to kill a brick and start it using the command line. I think that is what is actually failing. In multiplexing, can we do it? Or is there some other way of doing the same thing? Ashish - Original Message - From: "Atin Mukherjee" <amukh...@redhat.com> To
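An illustrative, non-multiplexed version of the kill-and-restart flow the test relies on; the volume name is a placeholder. Note that with brick multiplexing several bricks share one glusterfsd process, so killing that PID takes down every brick in it, which is why the question above arises:
gluster volume status myvol        # note the PID of the brick to be killed
kill -9 <brick-pid>
gluster volume start myvol force   # bring the killed brick back up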

Re: [Gluster-devel] Performance experiments with io-stats translator

2017-06-08 Thread Ashish Pandey
Please note the bug in fio, https://github.com/axboe/fio/issues/376, which is actually impacting performance in the case of EC volumes. I am not sure if this is relevant in your case, but I thought I would mention it. Ashish - Original Message - From: "Manoj Pillai" To:

Re: [Gluster-devel] EC Healing Algorithm

2017-04-06 Thread Ashish Pandey
If the data is written on the minimum number of bricks, heal will take place on the failed brick only. Data will be read from the good bricks, encoding will happen, and only the fragment on the failed brick will be written. - Original Message - From: "jayakrishnan mm"

Re: [Gluster-devel] [Gluster-users] Proposal to deprecate replace-brick for "distribute only" volumes

2017-03-16 Thread Ashish Pandey
- Original Message - From: "Atin Mukherjee" To: "Raghavendra Talur" , gluster-devel@gluster.org, gluster-us...@gluster.org Sent: Thursday, March 16, 2017 4:22:41 PM Subject: Re: [Gluster-devel] [Gluster-users] Proposal to deprecate

Re: [Gluster-devel] Spurious regression failure? tests/basic/ec/ec-background-heals.t

2017-01-26 Thread Ashish Pandey
Xavi, shd has been disabled in this test on line number 12, and we have also disabled client-side heal. So, nobody is going to try to heal it. Ashish - Original Message - From: "Atin Mukherjee" <amukh...@redhat.com> To: "Ashish Pandey" <aspan

Re: [Gluster-devel] Spurious regression failure? tests/basic/ec/ec-background-heals.t

2017-01-24 Thread Ashish Pandey
ppa" <rgowd...@redhat.com> To: "Nithya Balachandran" <nbala...@redhat.com> Cc: "Gluster Devel" <gluster-devel@gluster.org>, "Pranith Kumar Karampuri" <pkara...@redhat.com>, "Ashish Pandey" <aspan...@redhat.com> Sent:

Re: [Gluster-devel] [Gluster-users] Error being logged in disperse volumes

2016-12-20 Thread Ashish Pandey
That means ec is not getting a correct trusted.ec.config xattr from the minimum number of bricks. 1 - Did you see any error on the client side while accessing any file? 2 - If yes, check the file xattrs from all the bricks for such files. This is too little information to find out the cause. If [1] is
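A sketch of how those xattrs can be inspected on each brick; the brick path and file name are placeholders:
getfattr -d -m . -e hex /bricks/brick1/path/to/file   # run on every brick hosting the file
# compare trusted.ec.config (and trusted.ec.version / trusted.ec.size) across bricks;
# ec needs a consistent value from at least the minimum number of bricks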

[Gluster-devel] 1402538 : Assertion failure during rebalance of symbolic links

2016-12-13 Thread Ashish Pandey
Hi All, We have been seeing an issue where rebalancing symbolic links leads to an assertion failure in an EC volume. The root cause is that while migrating symbolic links to another subvolume, rebalance creates a link file (with the T mode bit set). This file is a regular file. Now,
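For reference, a hedged way to spot such a link file on a brick; the brick path is a placeholder. DHT link files normally show up with mode ---------T and carry the trusted.glusterfs.dht.linkto xattr:
ls -l /bricks/brick1/dir/                                      # link files appear as ---------T regular files
getfattr -n trusted.glusterfs.dht.linkto -e text /bricks/brick1/dir/<entry>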

[Gluster-devel] EC volume: Bug caused by race condition during rmdir and inodelk

2016-11-24 Thread Ashish Pandey
Hi All, On an EC volume, we have been seeing an interesting bug caused by a fine race between rmdir and inodelk which leads to an EIO error. Pranith, Xavi, and I had a discussion on this and have some possible solutions. Your input is required on this bug and its possible solutions. 1 - Consider

[Gluster-devel] Review request for EC - set/unset dirty flag for data/metadata update

2016-09-07 Thread Ashish Pandey
Hi, Please review the following patch for EC- http://review.gluster.org/#/c/13733/ Ashish ___ Gluster-devel mailing list Gluster-devel@gluster.org http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Compilation failed on latest gluster

2016-08-25 Thread Ashish Pandey
As Susant and Atin suggested, I cleaned everything and did the installation from scratch, and it is working now. - Original Message - From: "Nigel Babu" <nig...@redhat.com> To: "Ashish Pandey" <aspan...@redhat.com> Cc: "Manikandan Selvaganesh&q
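A sketch of the kind of clean, from-scratch rebuild referred to here; the configure flag is illustrative and not taken from the thread:
git clean -dxf               # drop all stale generated build artifacts
./autogen.sh
./configure --enable-debug
make -j$(nproc) && sudo make install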

[Gluster-devel] Compilation failed on latest gluster

2016-08-24 Thread Ashish Pandey
Hi, I am trying to build the latest code on my laptop and it is giving a compilation error - CC cli-rl.o CC cli-cmd-global.o CC cli-cmd-volume.o cli-cmd-volume.c: In function ‘cli_cmd_quota_cbk’: cli-cmd-volume.c:1712:35: error: ‘EVENT_QUOTA_ENABLE’ undeclared (first use in this function)

[Gluster-devel] Patch Review

2016-06-06 Thread Ashish Pandey
Hi All, I have modified the code for volume file generation to support the decompounder translator. Please review this patch and provide your comments/suggestions. http://review.gluster.org/#/c/13968/ Ashish ___ Gluster-devel mailing list

Re: [Gluster-devel] Regression-test-burn-in crash in EC test

2016-04-29 Thread Ashish Pandey
Hi Jeff, Where can we find the core dump? --- Ashish - Original Message - From: "Pranith Kumar Karampuri" <pkara...@redhat.com> To: "Jeff Darcy" <jda...@redhat.com> Cc: "Gluster Devel" <gluster-devel@gluster.org>, "Ashish Pan

Re: [Gluster-devel] [Gluster-users] Assertion failed: ec_get_inode_size

2016-04-19 Thread Ashish Pandey
force" will do the same. Regards, Ashish - Original Message - From: "Serkan Çoban" <cobanser...@gmail.com> To: "Ashish Pandey" <aspan...@redhat.com> Cc: "Gluster Users" <gluster-us...@gluster.org>, "Gluster Devel" &

Re: [Gluster-devel] [Gluster-users] Assertion failed: ec_get_inode_size

2016-04-15 Thread Ashish Pandey
d to send statedump of all the bricks.. - Original Message - From: "Serkan Çoban" <cobanser...@gmail.com> To: "Ashish Pandey" <aspan...@redhat.com> Cc: "Gluster Users" <gluster-us...@gluster.org>, "Gluster Devel" <gluster-d

Re: [Gluster-devel] [Gluster-users] Assertion failed: ec_get_inode_size

2016-04-15 Thread Ashish Pandey
Hi Serkan, Could you also provide us with the statedumps of all the brick processes and clients? Commands to generate statedumps for brick processes / NFS server / quotad: for bricks: gluster volume statedump ; for the NFS server: gluster volume statedump nfs. We can find the directory where statedump
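A hedged sketch of the full commands with a placeholder volume name, plus how the dump directory and a fuse-client dump are usually obtained:
gluster volume statedump myvol          # statedumps of all brick processes
gluster volume statedump myvol nfs      # statedump of the gluster NFS server
gluster --print-statedumpdir            # directory the dumps are written to
kill -USR1 <pid-of-fuse-client-glusterfs-process>   # statedump of a fuse client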

Re: [Gluster-devel] [Gluster-users] Assertion failed: ec_get_inode_size

2016-04-15 Thread Ashish Pandey
/statedump.md Ashish - Original Message - From: "Serkan Çoban" <cobanser...@gmail.com> To: "Ashish Pandey" <aspan...@redhat.com> Cc: "Gluster Users" <gluster-us...@gluster.org>, "Gluster Devel" <gluster-devel@gluster.org>

Re: [Gluster-devel] [Gluster-users] Assertion failed: ec_get_inode_size

2016-04-15 Thread Ashish Pandey
I think this is the statedump of only one brick. We would require statedumps from all the bricks, and from the client process in the case of fuse, or from the NFS process if it is mounted through NFS. Ashish - Original Message - From: "Serkan Çoban" <cobanser...@gmail.com> To

[Gluster-devel] Fragment size in Systematic erasure code

2016-03-14 Thread Ashish Pandey
Hi Xavi, I think for the systematic erasure-coded volume you are going to use a fragment size of 512 bytes. Will there be any CLI option to configure this block size? We were having a discussion and Manoj was suggesting having this option, which might improve performance for some workloads. For